325. Looking Forward to 2024

As we enter this spring semester, we take a break from our usual format to discuss what we are looking forward to in 2024.

Show Notes

Transcript

Rebecca: As we enter this spring semester, we thought we’d take a break from our usual format to discuss what we’re looking forward to in 2024.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

John: So what tea are you drinking now, Rebecca?

Rebecca: I’m drinking Ceylon tea.

John: And I have ginger peach black tea from the Republic of Tea.

Rebecca: So it’s a new year, John, and we’re gonna have a nice positive episode. So what are you grateful for?

John: I’m grateful that we survived another year. I’m grateful for the continued return to face-to-face instruction and a continuing return to a more normal classroom environment and college environment after all the disruptions from COVID that lasted for a while. And I’m also grateful for the initiatives that we’re using on our campus, and most campuses, to provide more focus on equity and trying to reduce some of the equity gaps that we’ve been seeing. And I’m seeing a general interest in that across a wide range of faculty and the administration. And we’re just in the process of running some workshops, and we’re getting some really good attendance at workshops that focus on techniques that faculty can use to improve equity and reduce some of those gaps. And I’m looking forward to seeing continued expansion of more equitable practices. And also, we’ve had quite a few people trying to implement the TILT approach that Mary-Ann Winkelmes has talked about on a past podcast episode, and we’re hoping that that, combined with the increased structure that many people are trying to use in their classes, will help provide all students with more equitable outcomes.

Rebecca: I’m really glad that you have brought up equity, John, because I was just reflecting on a couple meetings that I was in just this week, and thinking about how equity oriented many of our colleagues are. And it’s really exciting to see them really advocating for policies, instructional practices, and many other things that are really equity oriented, and thinking about inclusion, and access, and all the things that you and I have talked about for a long time and have cared about and tried to implement in our classes. I’m also really grateful, and I know you are too, for the many guests that we’ve had who’ve shared their expertise with us and with our audience. When we do these weekly episodes, it’s so great to have the opportunity to talk to such experts, to learn from them, to stay fresh with what’s going on, and to be able to share it with everyone else. It’s an experience that I didn’t know that I wanted, and I’m glad that we get to continue doing it.

John: And one other thing I’m grateful for from last fall is I attended my first POD conference, and I got to meet dozens of guests that we’ve talked to before. And we’ve talked to them, we’ve seen them on camera, but it was so nice to meet them and talk to them in more detail and in more depth in person.

Rebecca: I felt that way when I went to EDUCAUSE for the first time this year and connected with a number of colleagues focused on accessibility and growing that network and really connecting beyond just names and emails and other ways that we’ve communicated.

John: What are some of the major things you’re watching in the higher ed landscape? We’ve seen a lot of changes going on in the last few years, what are things that you’re going to be focusing more of your attention on in the next year?

Rebecca: I know that some of the things that we’re working on in grad studies and that I’m personally really involved in are kind of some increased accessibility resources for our colleagues at Oswego as well as SUNY. I’m looking forward to building out some of those resources, sharing those resources, and wrapping up a couple of research projects related to accessibility and getting to share those out. And I’m really excited that the higher ed landscape generally is having a lot more focus in this space because students with disabilities have been often overlooked in our diversity, equity, and inclusion initiatives. And there definitely is a push to be a little more inclusive, and to have that population represented in these efforts and initiatives. We’re also really focused on ideas of belonging for a wide range of students, thinking about how do we get our online students connected to each other and to the larger population of students and to see them as members of our community, extending some of those features and opportunities for international students and really thinking about what some of their needs are to be successful at our institution, especially with our kind of rural location, and what can make things really exciting. So I’m really looking forward to finding ways to support our students not just in the courses that I’m teaching or in my instructional role, but also in policies and procedures that we’re implementing at the institution, in grad studies, but more broadly as well. How about you?

John: One of the things that I’m really following closely is the development of AI. This came about a little over a year ago and it’s been a really disruptive influence. It offers a lot of tremendous possibilities, but it also provides some challenges to traditional assessment, particularly in asynchronous online courses. So I’m looking forward to continued development; it seems like there are new tools coming out almost every week. And it offers some really nice capabilities to narrow some of the equity gaps that we have by providing low-cost assistance for students who may not have come into our institutions quite as well prepared. It offers a possibility of students doing a little bit more retrieval practice for those classes where instructors are not providing those opportunities. It offers students who, again, have a somewhat weaker background the ability to take more complex readings that they may have been assigned and simplify them into a more accessible format, helping them get up to the level they need in their classes. It also provides tools that can help students improve their writing, and so forth. The challenge, of course, is that it can also be used as a substitute for learning in some classes unless assignments are designed in a way, and assessment techniques are designed in a way, that reduces the likelihood of that and that’s one of the things we’ll be working on a lot this year, ways of coming up with more authentic assessment and ways of providing more intrinsic motivation for the work that students are doing, so that students can see the value of the learning rather than focusing entirely on grades. And one of the things we’re doing is we have a reading group coming up early this semester on Grading for Growth by Robert Talbert and David Clark.
And we’re, in general, encouraging faculty to at least consider the adoption of alternative grading systems, which shift the focus away from students trying to maximize grades to maximizing their learning. And there’s a wide variety of tools that could be used for that, ranging from mastery learning quizzing systems, which many faculty have already been doing through specifications grading, contract grading, labor-based grading, and also ungrading.

Rebecca: Yeah, it’s really exciting that we’re going to focus on that this semester, something that I’ve been interested in for a long time and have been using in my classes as well. I’m glad you mentioned AI, there’s certainly a lot of promise, and I’ve been really excited by how many faculty, staff, and administrators have actually really been engaged in the conversation around AI. I think sometimes there are new innovations and things and people kind of brush it aside and don’t always think it applies to them or is relevant to them. But I think this is something that’s relevant to everybody. And most people are seeing that and engaging in the conversation, struggling in the conversation, but at least we’re doing that in community. And I think there’s some power in that as we think through policy, in assignments and all the things that we need to think about to provide an enriching experience for our students, but also engage and use the tools and the power that they offer.

John: One other thing I’m following is the development of a wide variety of new edtech tools. We saw an explosive growth in the development of tools and expansion of their capabilities in response to the COVID pandemic, but that growth and expansion hasn’t dropped. And we’re seeing more and more tools that have often been designed based on research about how students learn. And I think we’re going to see expanded use of many of these tools in the coming year.

Rebecca: And I think a lot of faculty got used to experimenting with these tools during remote education and are continuing to use them in physical and virtual classrooms, which I think is really exciting. Maybe even more exciting to me was attending big conferences like Middle States and actually having a presenter use some of these edtech tools as part of a plenary. So rather than having more of a lecture style session, it was more of an interactive session, which doesn’t always happen at conferences of such scale, or these more leadership conferences. So it’s exciting to see that we’re modeling some of these practices at the highest level so that the wide variety of individuals involved in higher ed are experiencing learning and engaging with these kinds of tools. Along the same lines, I’m also excited that many of these tools are starting to actually attend to accessibility, in part because higher ed institutions are really pushing back on third-party providers and requesting that they provide information about accessibility, and even refusing to adopt tools if they aren’t meeting basic accessibility principles, which I think is really exciting and really important.

John: And I saw something very similar both at the POD conference, where you might expect to see people creating more interactive workshops, but I’ve also been seeing it in the workshops that we’ve had in the last couple of weeks here. We have a record number of faculty presenting in workshops, they’re using polling, they’re using tools like Mentimeter, and they’re doing many more interactive activities than in past years. If we go back a few years, many of the sessions that were presented were essentially straight walkthroughs of PowerPoint slideshows with not a lot of interaction with the participants. And our workshops here have been both in person and remote over Zoom. And people have been working really effectively to bring all the participants into the discussion and into the activities, regardless of whether they are in person in the room or remote. And it’s been nice to see that. Much of that I think did grow out of the experiences of COVID, and people just getting more comfortable trying new tools.

Rebecca: Come to find out, practice helps us learn things.

John: Also, our campus enabled the AI Companion in Zoom, which will provide meeting summaries for people who arrive late or people who come in at the end of a discussion. And I think that’s going to offer some nice opportunities for people who may have missed part of the discussion early on in a session, or in a workshop, or in a meeting, because so many of our meetings now take place over Zoom.

Rebecca: And there are lots of things to be watching that are also highly concerning, but John and I resolved that we weren’t going to focus on those today.

John: So this will be a relatively short episode. [LAUGHTER]

Rebecca: So continuing on this theme of gratefulness and positivity, John, what are you looking forward to trying this year, or focusing on in your own work this year, or committing to this year?

John: One thing, and this partly follows up a couple of podcasts we’ve had in the last year or so. On our campus, many departments are working to build some of the NACE competencies into their classrooms. And there are some really significant advantages to that. If it’s done well, it will help students recognize the intrinsic value of the things they’re learning in class and recognize that these are skills that they’re going to need later, which again, helps provide much more motivation for students to learn than if they just see a series of activities that instructors ask them to do, and they don’t see the value of that. So by making the connections between what we’re doing in the classroom in terms of the development of critical thinking skills, teamwork, and all those other NACE competencies, it offers some really serious benefits for students and for faculty. Because if students are more engaged in the activities and understand the purpose of them, I think they’re going to be much more likely to focus on the learning rather than again, trying just to get the highest grade. And that’s also very consistent with the TILT approach that we mentioned earlier. If students understand why you’re doing things, they’re going to receive the techniques and engage in them more productively than if they didn’t see the value of those tasks.

Rebecca: Yeah, I’m glad that you mentioned TILT as well. We mentioned it earlier, but I was just remembering that one of the things I wanted to mention while we were talking today is a commitment to thinking about TILT, not just in a classroom context, but all the other places that touch a student experience. So thinking about policies and procedures and ways that we can use a TILT approach to really improve transparency and clarity for our students and provide some equity and access by doing so. The other thing that I’m committed to trying to do is get back to more play. We’ve had some episodes on Tea for Teaching focused on play, and they always get me really excited about some of the things that I’ve done in the past in some of my classes and that I’ve done with some of my colleagues… and with the burden of transitioning to remote learning during COVID, some of these things have taken time and maybe attention away from play. And I’m hoping to take some time in 2024 to put some more attention back on being a little more playful.

John: So, you think education could be fun?

Rebecca: Maybe.

John: Okay. [LAUGHTER]

Rebecca: I’ve moved to doing some exercises and activities again a little more recently that get to some of these more playful ways of creating and making and thinking through complex problems. And every time I do that the students appreciate it. I have more fun, they have more fun, and I think a lot more learning gets done.

John: Since we want to focus on the positive, we’ll leave challenges for future episodes.

Rebecca: We’ve got all of 2024 to do that, John.

John: And we really appreciate, as Rebecca said, all of the wonderful guests that we’ve had since the beginning of this podcast, and we appreciate our audience too. So thank you for hanging in there with us.

Rebecca: Have a great 2024.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

[MUSIC]

321. College Students with Disabilities

Sharing student narratives about their experiences can help us to understand how our instructional and policy decisions impact the student experience. In this episode, Amy Fisk joins us to discuss her research project with Rebecca on the perceptions that students with disabilities have of their learning experiences.

Amy is the Assistant Dean for Accessibility at the State University of New York at Geneseo. Amy oversees the Office of Accessibility Services, which coordinates accommodations and support services for students with disabilities. Prior to her role at Geneseo, Amy coordinated a support program for students on the autism spectrum at SUNY Purchase.

Show Notes

Transcript

John: Sharing student narratives about their experiences can help us to understand how our instructional and policy decisions impact the student experience. In this episode, we discuss the perceptions students with disabilities have of their learning experiences.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

John: Our guest today is Amy Fisk. Amy is the Assistant Dean for Accessibility at the State University of New York at Geneseo. Amy oversees the Office of Accessibility Services, which coordinates accommodations and support services for students with disabilities. Prior to her role at Geneseo, Amy coordinated a support program for students on the autism spectrum at SUNY Purchase. Welcome, Amy.

Amy: Thank you. Thank you so much for having me.

Rebecca: Today’s teas are:… Amy, are you drinking any tea?

Amy: I drink tea every morning. So I have Bigelow French Vanilla black tea.

Rebecca: It’s a good way to start the day. How about you, John?

John: In honor of the holiday season, I have Christmas tea today.

Rebecca: I’m drinking Blue Sapphire from my favorite tea shop in Canandaigua.

Amy: Where’s that?

Rebecca: It’s right on Main Street. You should go there.

Amy: I should.

John: We’ve invited you to talk about the article: “Perspectives among college students with disabilities on access and inclusion,” which you co-authored with someone else… Rebecca, I think it was.

Amy: That name sounds familiar.

John: …which was published in College Teaching earlier this year. Before we talk about the article, could you tell us a little bit about your role at SUNY Geneseo.

Amy: So I oversee our Office of Accessibility Services, or OAS. I meet with students to coordinate accommodations and other kinds of support services for our students with disabilities. I monitor policies and procedures within our office. And I often work with faculty and staff on issues related to accessibility and inclusion. So for example, I might do trainings across campus, work with administrators on various committees, and have a voice on issues related to disability, education, awareness, and accessibility.

Rebecca: So prior to this project, Amy, and I didn’t actually know each other. Do you want to share the origin story?

Amy: I started my position here a month before COVID became a thing. So I was kind of thrown into some challenges I did not anticipate. But one of the things I had been thinking about, many of my colleagues were thinking about, was: How are we going to support our students with disabilities? We were really kind of concerned about their trajectory during this challenging time. We wanted to just get some more information about students’ experiences during COVID. I started talking to Nazely about this, and she says, “You know, I know someone who does research who also might be interested in a potential collaboration.” So that’s how I got connected to Rebecca. And ultimately, we shared an interest in learning more about the impact of COVID on our students within our respective roles on our campuses. We knew that this was a really challenging time for all of us, but especially for our students with disabilities who had already been experiencing barriers pre-pandemic. And so we really wanted to hear from our students about their experiences, and what can we learn about access and inclusion moving forward, even when the dust settles and we talk about things post COVID?

John: A lot of the studies that have been done have been quantitative studies. And your study is a qualitative study. Could you talk a little bit about how this qualitative research complements the quantitative research that’s been done?

Amy: Sure. So ultimately, we wanted to gather students’ stories, and many of our findings from our studies are reflective of findings from past studies on challenges and barriers students with disabilities face compared to students without disabilities. But we wanted to identify these specifically within the context of remote learning. And also within the context of navigating this challenging time just in life, we really wanted that student narrative. And we also wanted to assess the positive things that were happening, the practices that were helping our students feel successful, to really help inform tangible takeaways and recommendations to our readers. And we hoped for this information to be relevant, like I said, when the dust settles and regardless of teaching modality. And I think it’s important to highlight that despite the obvious challenges that COVID brought, it has highlighted the importance of accessibility in higher education and really gave us an opportunity to reassess what we’ve been doing, our everyday policies and practices, and we really wanted to highlight that from the student perspective. Beyond that, we also wanted to talk about the needs of our students with disabilities within the context of access and inclusion. So often disability, as an identity, tends to be left out of conversations related to diversity, equity, and inclusion. So it wasn’t just about how we need to provide appropriate classroom accommodations, but what are the ways that we can be more inclusive, promote a sense of belonging, proactively provide equal access? So those were the things that we had in mind as we were designing our study.

Rebecca: One of the things that comes up in a lot of conversations, at least more recently in higher ed., is this growing number of students who are registering for accommodations and also the mental health crisis. Can you talk a little bit about that to provide some context for our discussion today?

Amy: Sure. So the mental health needs of college students with disabilities were becoming really apparent before COVID hit, really significant needs related to depression, anxiety, and other severe psychiatric impairments. And the studies that had been done around the time of COVID really highlighted those issues of more and more students connecting to their disability services offices, self-identifying as a student with a disability where they have clinical levels of depression, anxiety, or other debilitating mental health needs. And that theme came out in our study as well.

John: Did all the discussions of the challenges of COVID help encourage students to become more willing to declare their mental health challenges or their other needs that perhaps they might have been more reluctant to state prior to this time?

Amy: I think so. I think there is a shift in our culture, and it being okay to talk about mental health and mental illness, for students to say, “I’m having a really hard time, I’m struggling,” because mental health is a spectrum. We all experience a variety of emotions throughout the day, throughout an hour, and throughout our lifetime. And I think it’s becoming slowly de-stigmatized in talking about mental illness and the importance of promoting mental health, especially among our college-age population. A lot of college campuses are really taking seriously the wellness of their students on campus just because of the rise in numbers of students needing that extra support, because colleges across our country are noticing a pretty significant increase. And I do think COVID has propelled that de-stigmatization of talking about mental health.

Rebecca: We’ve talked in the past on this podcast with Kat MacFarlane about some of the barriers that students face in just even approaching and asking for accommodations, having to register with an office of disability services, or whatever the equivalent is on the campus, and having to self identify. And then a lot of students don’t actually choose to do that for a wide variety of reasons, some associated with stigma, but we are seeing increased registrations. So does that mean that there’s increased disability?

Amy: Yes, I think there are a variety of factors behind more students connecting with disability services offices. One, I think high schools are better preparing students with disabilities to enter the post-secondary environment. Two, I think our offices are becoming more visible on campus. Again, I think there’s also a de-stigmatization of disability and accessibility services offices, and we’re becoming more visible and relevant on college campuses. And third, I think colleges are starting to talk about disability as an important facet of diversity more and more. I think there’s certainly room for improvement, but I think that conversation is starting to happen. So more students are finding their way into our offices.

Rebecca: So three key themes emerged in our research about the perceptions students with disabilities have of our institutions, of their experiences, and of belonging. Those three themes are accommodations and accessibility; building relationships and community; and course structure and design. Perhaps we can take them one at a time here. Let’s start with accommodations and accessibility. Can you first start with what’s the difference between accommodations and accessibility? Because we know that this is often something that’s confusing to folks.

Amy: Sure. So an accommodation, by definition, is designed to remove some sort of barrier that an individual with a disability is experiencing. So an academic accommodation, for example, might be having extra time to take an exam, because timed tests can be a barrier for some students. Maybe it’s a notetaking accommodation because they need assistance accessing that lecture material. Sometimes it’s ensuring that the course materials themselves are accessible, that they can be read through a screen reader. Sometimes the accommodation is related to a course policy such as attendance for a student with a more severe chronic medical condition. So it is an individualized process to assess what an appropriate accommodation would look like. But the purpose of it is to remove some sort of barrier so that this person has equal access to their environment. And so accommodations, though necessary, are something that we’re legally required to provide under the ADA. It’s really a reactive way of ensuring equal access. It’s a floor, it’s a minimum. Accessibility, on the other hand, is about inclusion from the start, so that individual accommodation may not even be needed. And something I like to highlight is that accessibility is not about lowering standards. It’s sending a message that everyone belongs in this space and that inclusion matters.

John: What were some of the most common barriers that students reported facing related to accommodations and accessibility in your study?

Amy: Some of those barriers were students just not receiving their approved accommodations during remote learning, including extended time on tests, for example, or online course materials just not being accessible, or having to continually remind instructors about their accommodations, explain why they needed the accommodation in the first place, or negotiate the terms of pre-approved accommodations. And this was particularly true among students with what we might call invisible or non-apparent disabilities, such as learning disabilities, ADHD, and mental health disabilities. These students are less likely to be believed and more likely to be questioned about the validity of their disability or their need for accommodation. So those were some pretty significant barriers for students: just not receiving the accommodations that they were approved for.

Rebecca: I think one of the things that was also highlighted as a result of our study taking place whilst COVID was in full force is how many campus resources students with disabilities and other students depend upon every day. So we had students reporting things like, “I didn’t have access to a printer to pull up text when it was more of an image instead of an accessible text that could have been expanded digitally, or having access to a quiet space, like the library.”

Amy: Yeah, that was really significant. And that is where you also saw some other equity disparities. So there were some students who live down near New York City in very populated areas, and there was a lot going on down there at the time of COVID. If we recall, some students did not have quiet spaces at home, whereas other students had quiet home offices and their parents may have been at home with them, helping to support them. And then other students who didn’t have a quiet space whatsoever took on more caretaking responsibilities, or didn’t have access to WiFi. So those equity disparities continued to widen during COVID beyond the disability barrier, so that was something significant, I think, that needed to be highlighted.

Rebecca: What are some of the factors related to accessibility and accommodations that actually resulted in positive perceptions?

Amy: So our students actually reported some very positive interactions with their instructors. So when receiving a student’s letter of accommodation, or like an accommodation notification that would come through our office, some would reach out and ask the student “How can I support you? How can I help provide this accommodation?” One student even noted how they appreciated that the instructor didn’t call them out in class, because that had happened before. So I think just preserving the students’ dignity, reaching out to the student, those were the kinds of things that our students reported as making a significant difference.

John: I know your study focused on the status of students during COVID, but in your role addressing these issues now, have the changes in faculty behavior persisted? Have faculty continued to become more sensitive to some of the accessibility and accommodation needs of students as we move back to more classroom instruction?

Amy: So in conversation with colleagues, other disability service providers across SUNY, but also across the country, I think we’ve seen a mix. I think there are some who just wanted to go back to normal, and didn’t we all. I think COVID, again, was a very challenging time and faculty too didn’t have a ton of support, and also really struggled with having that emergency shift to a remote learning modality and some didn’t have the skills or support to really deliver courses in the way that would have facilitated student success. So they were really looking forward to getting back to that in-person modality, back to the pedagogy that we’re used to, and that may have posed some new barriers for students coming back to college campuses. Conversely, we also saw instructors taking some of those learned lessons from the remote learning period and applying them when we did come back to campus. So I do know a number of instructors who, for example, are still utilizing the lecture videos they created during COVID and post them on their learning management system for students who may not have been able to attend class that day, for example, so they can still get the lecture material or recreating their course materials and documents so that they are accessible, creating videos, captioning their videos, modifying some course policies to be a bit more inclusive for students. So I think there has been a change in realizing we can still have students be successful and meet the learning objectives, but in a different and more inclusive way.

Rebecca: I think one of the things that we can highlight that also came out in the student experience, and students reported this in our study, is that some of them actually experienced better access during COVID. Not all but some, in part because some of the technology caught up. And when we first went remote, Zoom didn’t have captions available by default and now it does. And so a lot of these things have become norms that people with disabilities have fought for for a long time and never got.

Amy: And I would say that’s true as well with regard to course policies that may have been amended, introducing more self-paced work, which is also something that students really appreciated during the remote learning period.

John: I just recently returned from the POD conference where there were many, many discussions of this very issue. And in general, the results there were pretty much the same as what you’ve described: a lot of the changes that faculty made to better accommodate students’ needs persisted, but some faculty have moved back to old practices, and the results are a little bit mixed. But on average, there seemed to be, in a number of studies, some substantial improvement in faculty responses to student needs.

Rebecca: Based on what students have reported, what recommendations do you have for faculty related to accommodations and accessibility to continue the forward movement as opposed to regressing?

Amy: So, actually, I did a talk with faculty in one of our academic departments at the start of the semester, reviewing what our office does and some of the logistical pieces of implementing accommodations, that sort of thing. But before I really got into that, we had a discussion about how accommodations, and the dialogue about accommodations with students, are approached, how they’re discussed, how they’re communicated. Something as simple as taking time to actually review the portion of your course syllabus related to accommodations, maybe an accessibility statement, tells students that this is important. Making sure that your online materials are accessible from the start tells students that accessibility and inclusion are important. And students are more likely to engage in a reciprocal dialogue with you about their needs when they feel like they’re heard, when they feel like they’re a valued member of the class, that their accommodations are important and not burdensome. That’s a term we heard a lot in our study: that they’re not a burden, or some sort of requirement that the faculty member has to fulfill. And I think this is probably true for most students, regardless of disability. But students in our study specifically noted how they appreciated when the instructor showed empathy, understanding, and flexibility, recognizing that students have significant issues outside of the classroom. We all do, between family, finances, and things that are happening in our world today. And I think this is important to acknowledge as well, given that we’re seeing an increase in students from various diverse backgrounds coming into the college environment.

Rebecca: And as we’ve talked about many times on the podcast, flexible doesn’t mean not having standards. [LAUGHTER] And it doesn’t mean a free-for-all. In fact, a lot of our students benefit from structure, which we’ll talk about, I think, in a few minutes, because that ties to one of our other themes. You talked a little bit about faculty workload related to this, and sometimes the perception that faculty give off is that it’s a burden to provide these accommodations. And the reality is that a lot of our students need very similar things. And so if we think about the common requests for accommodations, or digital accessibility strategies, from the start, we often don’t have a lot of one-off things that we need to accommodate, because we’ve already built them into our courses. That’s not to say that there aren’t accommodations that we need to provide additionally, but it may result in less work, ultimately, to really think about these accessibility principles upfront.

Amy: Right. And I think something as simple as making your course lecture materials available to students on the learning management system can help reduce a lot of barriers: for the many students who might struggle to keep up with the pace of the lecture and end up missing material, for a student who may have missed class that one day and just needs that material, or for other students who need to kind of re-teach themselves the material because, perhaps, they had challenges with staying focused during class. I think there’s a variety of reasons why students would benefit from that, but it’s something as simple as that. Often, students come to see me who hadn’t needed accommodations previously, but they encountered a particular course where the policies were such that new barriers arose; if the policy were different, perhaps they wouldn’t need that accommodation. That’s a concrete example of the difference between accommodation and accessibility. Some of our course policies and course design may inadvertently be barriers to students with or without disabilities. This might include the use of pop quizzes, not making lecture materials available to students, not permitting the use of technology, or not allowing students to even take breaks in class. And although the purpose of these policies is probably to keep students engaged and accountable in the course, which are things, of course, we want… again, we’re not lowering standards… students still need to go to class and do the work. But I think some of these policies might actually be having the opposite effect, and they do for students who request accommodations, shifting the focus to navigating the policy rather than to learning in the course.

John: I think that many faculty who had only taught in a face-to-face modality before COVID, were able to avoid issues of accessibility by not creating digital content. When they moved to remote teaching, though, they were forced to begin developing digital materials and often received some training in creating accessible digital content. Do you think that that training received during COVID helped encourage more accessible practices by faculty in general?

Amy: I think so. Again, I think some of these practices have shifted over time, and I think COVID has shed light on the benefits of accessibility, not just for people with disabilities, but for all people. I mean, again, the use of captions and subtitles can be beneficial for a lot of folks, whether you’re sitting in a busy Starbucks, whether you have a lot going on in the background, maybe you’re trying to juggle work and family, or maybe you’re hard of hearing, and so you need access to those captions. Again, accessibility is for all, not just about or for people with disabilities.

Rebecca: The second theme that kind of emerged in our research was building relationships and community. Can you share some insights with faculty about the role that they can play in helping students with disabilities feel connected and included? You highlighted some of those already: providing accommodations and respecting students’ dignity.

Amy: So again, I think engaging with a student, even something as simple as taking the student aside and asking, “How can I make this course more accessible to you?” speaks volumes to the student: that they are valued, they belong, that their needs aren’t burdensome. And they’re more likely to engage in a reciprocal dialogue with the faculty member when they feel like “Oh, they care about me and my success in this course.” I actually knew about a professor who did an anonymous Google form, asking students “How can I make this course more accessible to you? Are there barriers? In reviewing the syllabus, do you have concerns about something within the course?” One of my students actually told me about this, and said how it really made them feel seen and valued. And they were more likely to reach out to the instructor when they needed help, because some students fail to do so out of shame. They’re in a very vulnerable position when talking about their disability-related needs to a faculty member, to an authority figure. And so doing something as simple as asking a student, “How can I make this accessible to you? Are you experiencing barriers right now?” really opens that line of communication with the student and helps them build a positive relationship with that instructor, and maybe with other instructors. It also helps to build a sense of community, so that other students know that this is really important and that inclusion matters. It’s also sending a message to all the students within the classroom that we appreciate and respect diverse learners here. I think that’s a teachable moment for our students as well.

Rebecca: So one of the other things that I think emerged is a desire to be connected with peers, but faculty can play a really important role in facilitating that connection. So I think oftentimes we just assume in a classroom that at the beginning of class students are socializing, getting to know other folks, and making those contacts, but students really reported that having more structured ways of connecting with peers was really beneficial to them outside of class. And that’s something that I think we might take for granted as instructors in the classroom, that it would just kind of organically happen. But that structure, that scaffolding around that, really bubbled up as being pretty important to our students.

Amy: Yeah, that peer-to-peer interaction, even if it was virtual. One of our students said, “Our instructor had a virtual whiteboard where we could all do group work even when it was asynchronous, which is pretty neat.” So that helps set the stage for positive peer interactions, for peers to ask peers for help and mentorship, which is important. Often, students just feel that going to office hours is the only way that they can receive help. And when you provide opportunities to work together and learn together, that really helps, again, open up a line of communication among peers as well, which is a skill that we’re trying to teach our students.

John: And that was especially severe during COVID. But also, when we returned to the classroom, and students were asked to sit at least six feet away from any other student, it certainly reduced the amount of interaction and it has made it a little more challenging for all students to interact with others. That’s been improving, but I think, perhaps, that experience may remind faculty of the importance of building those types of connections. Because even before COVID, there were always some students who may not have felt as much a part of the class community. But I think we’ve all learned the importance of community during that time.

Rebecca: I think that’s just another example of something that students with disabilities have pointed out as being really important to them. But it’s also important to many other students, too.

John: The third theme that emerged from your research was course structure and design. And most of your findings in that particular category align with many other studies involving inclusive pedagogy and Universal Design for Learning. Can you highlight some of the common barriers that students with disabilities faced in terms of course structure and design?

Amy: So one of our students in the study commonly referred to one of their course LMS pages as a scavenger hunt, where they spent more time trying to find the materials and the information in the course than on the assignments themselves. So students in general benefit from an organized LMS and an organized syllabus, where deadlines, instructions, and policies are very clear and concise, but for students with disabilities, this is particularly important. Many of my students with ADHD, health or chronic medical conditions, or a learning disability need to plan ahead, because it might take them double or triple the time to finish a task. So if students don’t know when their next test is, or if instructions aren’t posted a few days before something is due, we’re really not setting them up for success. And I also talked about some of those other policies and course design choices that might be inadvertent barriers to our students. And so some of our students reported that they did benefit from self-paced tasks or untimed assessments, having some autonomy and options for completing assignments in a different format, such as doing a presentation or a podcast instead of a paper, or working in groups or choosing to work individually on a project. Those are some of the specific practices our students highlighted as being really helpful. And again, we’re not lowering standards; they have to meet the same standards and learning objectives as every other student, just perhaps meeting those same standards in a different way. And that’s what Universal Design for Learning is all about.

John: One time in a workshop, a faculty member mentioned that they have students do a scavenger hunt in the LMS, to find various course policies, or to find materials. And I cringed at that and I suggested that it might be better to design your course in a way where the students don’t have to struggle to find things so they can focus their cognitive efforts on learning materials, rather than engaging in scavenger hunts, trying to navigate the course. Has that improved recently?

Amy: I think it has. Again, in conversations with some of my colleagues who do this work and talking with faculty, I think it’s a mixed bag as it relates to how instructors are approaching course design and their policies. But for other faculty, changing their pedagogy, changing their policies, and changing the way they interact with and see students to help meet student needs has evolved, perhaps because they themselves experienced accessibility barriers during COVID as well. And so it’s become more relevant, because they have that lived experience. And they’re seeing that adopting some of these inclusive practices is actually helping to keep their students engaged, that students, even if they’re struggling, are more likely to tell their faculty member “I’m struggling and I need help, but I want to stay in this course, what kind of flexibility could be provided?”… rather than, we’ll use a college student term, ghosting [LAUGHTER] the class. So I think things are changing in a direction that speaks to some degree of flexibility and helping students meet those same standards, where the focus is more on learning, rather than adherence to an arbitrary policy.

Rebecca: I think the students really underscored, maybe without realizing it, things like Transparency in Learning and Teaching, or TILT, where being really clear and explicit about what the expectation is, how to get there, and how you’re going to be assessed really helps and supports students… that structure and those guardrails are what all of us need. How many times have we worked on a paper the second before a deadline? We work on deadlines, and so if we help students with intermediary deadlines, we’re actually helping them. That doesn’t mean that we’re not flexible, and flexibility doesn’t mean not having those deadlines.

Amy: It’s about scaffolding. It’s about recognizing that not all students are coming from the same background, experiences, and privilege. They’re not on the same playing field, and so providing those scaffolded learning opportunities can really help even the playing field.

Rebecca: And it’s really some of this scaffolded accountability, so it’s not all due at once. It’s helpful to remind faculty that there’s feedback throughout the process on a larger assignment, but it’s also helpful for students to hit individual deadlines and evolve their work as well.

John: And that’s something that was found, as you noted earlier, by Mary-Ann Winkelmes in her research on Transparency in Learning and Teaching, and also by Viji Sathy and Kelly Hogan in their research on the importance of structure in reducing equity gaps. While transparency and structure benefit all students, they especially benefit the students who face equity gaps of some form, and it sounds as if that’s also true for students with disabilities.

Rebecca: Yeah, I think none of this is really new, but oftentimes students with disabilities aren’t necessarily included in those studies about equity; they’re not always one of the groups that’s pulled out separately.

Rebecca: So we always wrap up by asking: what’s next?

Amy: I think it’s important to not just put a focus on what individual faculty can be doing in their classrooms to support students with disabilities, but on how we are promoting access and inclusion at the institutional level. Supporting students with disabilities and students from other diverse backgrounds is a whole-campus responsibility, and faculty need support in doing that work as well. So I’m hoping what’s next is working with administration and other campus leaders and identifying ways we can really help move that needle in a meaningful way: building accessibility into larger DEIB (diversity, equity, inclusion, and belonging) campus initiatives, campus-wide policy, strategic planning, campus-wide faculty and staff training and other professional development opportunities, and hiring diverse faculty and staff on our campus. So it’s not just about talking the talk, but walking the walk, when it comes to access and inclusion in higher education. Part of what’s next is also hearing about the experiences of students with disabilities from other diverse backgrounds, including students of color, students from lower SES backgrounds, and students in the LGBTQ+ community. Those experiences are different, and that intersectionality is really key in understanding students’ experiences in the classroom and how we can be more accessible and inclusive, because, again, accessibility is not just about whether we are providing a legally required accommodation, but whether we are creating a sense of belonging in that space and giving students an equal opportunity to demonstrate their knowledge and be successful, which is ultimately why we’re all here, I would hope.

Rebecca: I think that’s definitely a theme that we’ll see throughout all of higher ed. I hope that we’ll all link arms and move in this direction collaboratively.

John: Well, thank you for joining us. It’s been great talking to you and we’re looking forward to hearing more of your future work on this topic.

Amy: Well, thank you so much for having me today. I appreciate it.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

[MUSIC]

319. AI in the Curriculum

In late fall 2022, higher education was disrupted by the arrival of ChatGPT. In this episode, Mohammad Tajvarpour joins us to discuss his strategy for preparing students for an AI-infused future. Mohammad is an Assistant Professor in the Department of Management and Marketing at SUNY Oswego. During the summer of 2023, he developed an MBA course on ChatGPT for business.

Show Notes

Transcript

John: In late fall 2022, higher education was disrupted by the arrival of ChatGPT. In this episode, we discuss one professor’s strategy for preparing students for an AI-infused future.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

Rebecca: Our guest today is Mohammad Tajvarpour. Mohammad is an Assistant Professor in the Department of Management and Marketing at SUNY Oswego. During the summer of 2023, he developed an MBA course on ChatGPT for business. Welcome Mohammad.

Mohammad: Hello, and thank you for having me here.

John: Thanks for joining us. Today’s teas are… Mohammad, are you drinking tea?

Mohammad: Yes. So I love tea. And where I’m coming from, originally Iran, tea is a big thing. We have a big culture around tea. And it’s very interesting, because we go to a coffee shop and we drink tea there. So we call it a coffee shop, but what we mostly get is tea. So I love brewed tea, and it’s kind of a time-consuming process, and it needs devices and tools that I don’t have here. So for a while, I tried tea bags, but I couldn’t connect well with that, so I decided to switch to coffee. But when we drink tea, we have rock candy. We try to sweeten it with rock candy instead of sugar, because I love tea, and I’d love to drink my tea with rock candy. Now I drink coffee with rock candy, [LAUGHTER] which is a very funny mix, but it works for me. And from time to time, when I go to a restaurant that has Middle Eastern food, I get tea there and I really enjoy it. So that is a luxury for me; it happens about once a month that I get brewed tea. But I also like herbal tea, so I like mint tea and other types of herbal tea. I try to get them mostly before bed.

Rebecca: Today I have blue sapphire tea, brewed fresh this morning.

John: And I have an English breakfast tea, but after a conversation we had earlier, I have some rock candy with saffron in it as a sweetener. So it is very good. So thank you for that suggestion.

Mohammad: Good, good. Yeah, I have a big mix of rock candy with different flavors and tastes, so I will bring you some and you can try a different one. Good that you have the saffron one; I will bring a different version to you.

Rebecca: So we invited you here today to discuss the course that you offered last summer on ChatGPT for Business. Can you tell us a little bit about how this course came about?

Mohammad: So this course has a very interesting story. It was the spring semester of 2023, and ChatGPT had been out since, I think, the end of 2022, November or December. I was using it, and I really enjoyed how powerful the system is. I was following AI even before ChatGPT, and I was expecting such a thing to happen, but to be honest, I wasn’t expecting it to happen in 2022. I was thinking 2027 or 2030. But it happened, and I was so fascinated by the technology, by the quality of the answers that it provided. I was using it every day, to be honest, and I was trying different things with it, trying to find biases in it, trying to find how it could help me. And then it was the break, a week of break in the spring semester, the reading break or spring break. And I made the first modules of the course even without discussing it with my department. I was so interested, I said, “Okay, let’s try.” The worst scenario was I’m going to put it online for everyone to enjoy: if the school doesn’t approve this course, then I will put this on YouTube. So I made the first module, and then we had a faculty gathering at this Italian restaurant in Liverpool, New York, called Avicolli. We were there, and the director of our MBA program, Irene, was there as well. So I told Irene I had this idea of ChatGPT for Business, and that I had worked this much on it. And she was so supportive. She said, “That’s a wonderful idea, let’s go for it.” So I sent her a proposal, and everything worked very well. And the school was so open to trying new things, which I was very happy about. And then we made the course and submitted the proposal. It was approved, and we offered it in the summer. That was the story, actually.

John: Could you tell us a bit more about the course? How many students were enrolled in it? What was the modality?

Mohammad: So, for our MBA program, most of our MBA students are professionals. They have a career already, they’re working full time, and then they’re getting their master’s degree, their MBA actually, to move forward with their career. Many of them already have a master’s degree; they may be doctors, they may be nurse supervisors. So the modality that we use for summer courses is mostly asynchronous online, which means we record the sessions, we put them online, they take online exams, and we communicate online. For this course, I designed it in three modules. In the first module, we discuss the ethics and foundations of AI. We discuss how ChatGPT was trained, what data was used, and what biases can happen. How can we use this system ethically? Because there are so many things that we can do with AI which are very good things, and there are so many wrong things that people can use AI for. So we wanted to make sure about the ethics first. And every course that I design on AI, I will start with ethics and foundations, because I think that’s the most important element. So we discussed the biases in AI, for example, gender biases and racial biases that may happen if we solely rely on these systems that are trained on biased data from the internet, let’s say. The second module was on prompt engineering. As we know, a prompt is the query that we send to the AI, that is, to ChatGPT or Bard. The quality of the question that we ask is directly related to the quality of the answer that we get from the system. So we want to make sure we ask questions that give us the best answers. And most of the time it’s not one question or one prompt, it’s a sequence of prompts, so we call it a prompt flow. On the first round, you may not get the best answer, but as you improve it, you will get closer and closer to what you want. And that’s what we did in the second module.
So we designed an eight-step method for prompt engineering, and there are different stages in it. For example, in one step, you have to anonymize the data to make sure that the privacy of your client is protected. You want to set the context for the system, so it understands its role in helping you do the job, etc. So we call it the Kharazmi prompt engineering method, which is named after the person who developed the algorithm, actually. So we made that eight-step method, and it worked very well for my students. In the third module, we went one step further. As you know, these large language models are very efficient and very effective in writing code in different languages. So one of the things that I tested ChatGPT on in late 2022 and early 2023 was writing code. I gave it a task and asked it to write the code for me in R, Python, and Stata, and it was so good at writing efficient code in these languages. I even used it to optimize my code. I intentionally, for example, gave it a for loop in R to see if it could optimize it. And as you know, in R, we can use sapply() or lapply() to optimize those loops, and it was so good at getting it. So I found that it’s very helpful with coding, with programming. And we made the third module on data analytics, which requires a lot of coding. And many of the MBA students, because of their backgrounds, are coming from degrees or fields that have nothing to do with programming or coding. They have to use it from time to time, they have to read the output, but they may not have written their own code. So in my class, I had a student who said the last time they wrote code was 20 years ago; that was the diversity of my class. And I had students who had taken economics, and they did a lot of coding. So we made the third module on data analytics and how we can use ChatGPT to write us the code and help us with data analytics.
And it was wonderful to see that the students with no background in programming in either R or Python were able to write code and to debug code. So I intentionally gave them code that had some intentional errors; I removed a part, or I removed a small comma, and they were able to debug it in a couple of seconds. And that was one of the fascinating parts of this course. And interestingly, I had a student who told me that their company was moving from one software system to another, and they used ChatGPT and what they learned in that class to migrate their code from one language to another. With regards to enrollment, we had a lot of interest. We had so many people who registered for the course, and so many who were on the waitlist, but we had to keep it to small cohorts because we wanted to give very personal attention to each student to make sure that everything went well. So we limited the enrollment to 12, and we promised the rest that we would offer this course again and they would have a chance to take it. So we had a cohort of 12 MBA students, and, as I mentioned, the MBA students are professionals. So in class we had a very high-profile journalist, a three-time Emmy Award-winning journalist; we had a neurosurgeon, we had a CFO, we had an activist who was running for office. They had so many different backgrounds, which helped enrich the learning for everyone. I was learning from how they were using the system for their own specific niches. And that was a wonderful, I would say, learning process for everyone.
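[The loop optimization Mohammad describes was an R exercise, rewriting an explicit for loop with sapply() or lapply(). As a rough, hypothetical sketch of the same kind of refactor in Python, the other language the course used (the function names here are invented for illustration):]

```python
# Illustrative sketch only: the episode's example was in R, where a for
# loop is rewritten with sapply()/lapply(). This shows the analogous
# Python refactor, the kind of small exercise the students could check.

def squares_loop(values):
    """'Before' version: an explicit loop with an accumulator."""
    result = []
    for v in values:
        result.append(v * v)
    return result

def squares_comprehension(values):
    """'After' version: a one-line rewrite, analogous to R's sapply()."""
    return [v * v for v in values]

# Both versions produce the same output, so an LLM-suggested rewrite
# can be verified mechanically before it is adopted.
assert squares_loop([1, 2, 3]) == squares_comprehension([1, 2, 3]) == [1, 4, 9]
```

[A simple equivalence check like the final assert is one practical way for students to confirm that an AI-suggested rewrite preserves behavior before adopting it.]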

Rebecca: With the diversity of students that you had in your class, can you talk about some of the kinds of activities that they did individually or together?

Mohammad: As I mentioned, the course was asynchronous because of the kind of MBA program that we have at SUNY Oswego; most of our MBA students are professionals. So we intentionally try to make courses, especially summer courses, asynchronous online. But the level of enthusiasm in this class was so high that we set up weekly meetings. Most of the time we did it during lunchtime, because everybody was working; that was the best time. In my situation, I think we set the time for 6pm, so at 6pm we were on Zoom discussing the module that they had learned that week. There were a lot of interesting discussions in those sessions. I think one of the best discussions that we had was about the ethics of using AI. People from different areas were talking about how these biases can affect, let’s say, patients, how these AI tools can be used for fake journalism, making fake news, and what the dangers of that are. And then we discussed the inherent biases in the system. ChatGPT was trained on data that was on the internet; data on the internet was created by human beings; human beings are prone to biases; and those biases will be transferred to the system. So we discussed that. And we had a very healthy discussion about the need for diversity in data, and diversity on the teams who work on this data to train the models. Because if the team members are diverse and sensitive to different issues that may happen, they will make an effort to fix them. So I think the most interesting part for me was the discussion of ethics, the wrong and right ways that we can use AI, and how we can mitigate those biases or harmful uses of AI.

John: Many people in academia are talking about AI and the need to train students in the use of AI. Could you talk a little bit about some of the ways in which AI tools are already being used in business applications?

Mohammad: I will go from the academia point of view and how students are using it day to day, and then some of the uses of AI in industry. So in academia, the very basic thing that students use AI for is, let’s say, summarizing a big text. And that’s what I teach them, actually, in any course that I have. I’m teaching marketing research, I’m teaching principles of marketing; in any course that I teach, I remind them that, okay, you have this big article and you want to read it, but you don’t have time for it: ask ChatGPT to summarize it for you. It helps us read more and more articles, more and more books. So that’s one of the things that people can use it for. The other thing that I have seen many of our international students use AI for is to improve their writing skills. You’re an international student, you have wonderful ideas, but you don’t have the best writing skills or writing experience in English. You can write wonderful articles in your own language, but when it comes to English, your vocabulary is limited and you may make grammar errors. So they use it to improve their writing. And in all my courses, I tell them I’m more than happy to see them use AI to improve their grammar, to improve the flow of their writing, and to check for any writing errors in their text. That’s totally fine if they use it for that. And there are many other things that students use it for; for example, they use it to generate individualized examples. Let’s say you’re a student, and you have a small problem with one of your courses, let’s say calculus. There is no good example in the textbook, let’s say. But you can ask AI to generate an example that will help you understand that specific niche problem that you have. So that’s what I see from different areas: using AI for their coursework. When it comes to industry, there’s an abundance of AI use. So many marketing teams are using AI to generate content, especially at startups.
When you’re a startup and you’re a small business, you don’t have a marketing department. You’re one person: you’re the CEO, you’re the CFO, you’re the HR, you’re the marketing manager. You have to do all those jobs, and these LLMs, these large language models, these AI systems, help entrepreneurs do the marketing and many other aspects of their business on their own. If you want to create content for your social media, ChatGPT can do that for you. You want to make a job posting, ChatGPT can take care of that for you. And then you can focus on improving and developing your business.

Rebecca: I want to circle back to some of the ethics questions that you were grappling with in class. I’m hoping that you can share some more details about the kinds of conversations that you had with students around ethics, because this is a topic that I think comes up a lot for faculty, particularly in thinking about how they might want to encourage or discourage students from using tools like ChatGPT.

Mohammad: Definitely. So what we did at SUNY Oswego was we set up an AI committee. I’m talking about the School of Business; I’m sure other schools are doing the same. So we set up an AI committee to make sure that we have a certain policy, or certain plans, on how we want our students to be trained and to use AI. Because it’s the new computer, it’s the new calculator, it’s the new Wikipedia. We cannot stop people from using it. So we want to train them on the use of AI with integrity; we want to make sure that they are using it in an ethical way. So what we did was, we developed three different policies for courses. For some courses, very fundamental courses, we don’t want the students to use AI, because we want them to learn the tool. For example, in calculus, we want them to learn the mathematics behind doing the calculation. Or, let’s say, in marketing, we want them to understand the fundamentals of what the target market is, how we can pick the target market, how we can make a fit between our business offering and what the target market needs and wants. For those fundamental courses, we either ban use of ChatGPT, or we make it very limited to certain purposes; for example, you can use it to fix the grammar in your writing, you can use it to improve the writing of your assignment. Then we have a second level of AI use. In some courses, we are fine if a student uses it to generate some ideas, to help them do assignments, to create examples for them. And then we have a third layer, which is we ask them to use AI. So we tell them in the syllabus that you are not only allowed to use AI, you are expected to use AI: text-to-text AI, text-to-image AI, text-to-voice AI, all of that, to improve the quality of the assignments that you submit, to improve the quality of the projects that you do for this course.
For example, for ChatGPT for Business, the syllabus said that you’re learning text-to-text AI, but you’re expected to use other types of AI when you do your assignments. And many of my ChatGPT for Business students actually did that, and they developed logos and many visuals for their assignments totally generated by AI.

Rebecca: Can you talk a little bit about what came up in those conversations in class about the ethics and how they’re using it in different ways? So if they’re using it for images, or they’re using it to write code, or all these other varieties of uses that you’ve outlined.

Mohammad: So one of the discussions that we had was around biases; we discussed how gender bias may be inherent in those AI systems. And when we talked about it, it’s not just ChatGPT, any AI system can be prone to those biases. For example, our facial recognition systems are mostly trained on Western pictures, faces from Western people. So they may not do well when it comes to, let’s say, African Americans, and they may cause a lot of bias. We actually have cases of that in the news. So that was one of the things that we discussed. And one of the conclusions that we had in those discussions was that it’s not just about the data used to train the model, it’s about the team that is working on it. The team needs to be diverse enough. If you have African Americans, if you have different ethnicities, if you have different genders on the team, then the team will be more sensitive to these biases and will make sure that they are not happening. The other thing was about gender bias. So let’s say the system was trained on data that we had on the internet. Go check the Fortune 500 list: the CEOs on the Fortune 500 list, the majority of them are male CEOs. So if you train the system on that type of data, it will assume that males are better at doing those jobs, which is wrong. We had a very healthy discussion about that, or about different ethnic backgrounds. So if you check the top 100 US companies, only eight of them have African-American CEOs. So when you train your system on that data, you are building inherent biases into the system. The bias is in the DNA of that system, let’s say. So we want to make sure that we at least have those biases in mind, so we are not solely relying on AI for any purpose that we’re using it for. So AI is now being used, and ChatGPT… companies are using that, but sooner or later, governments will start using AI. They will use it for, let’s say, immigration purposes. Just imagine how those biases can affect people’s lives, actually. Health care will start using that.
So there are so many dangerous decisions that doctors can make. There are so many things that can go wrong with solely relying, or blindly relying, on AI. And that was one of the biggest things that we discussed. So we want to use it to be more efficient, and sometimes more effective, but we want to use it with supervision: somebody should check the output, someone should read the output carefully. That person should be aware that these systems are prone to many errors, many biases. So that was one of the discussions that we had. The main things, I think, that we discussed regarding biases and errors were gender biases and ethnic biases in AI. And then we discussed the wrong ways of using AI. One of the main things that we discussed was fake news. So somebody can make fake news, make a fake Twitter account, and keep posting in the same language that a certain politician uses. And, as we know, it’s not just text-to-text AI, you have text-to-voice AI. So we can give it a sample of a person’s voice, and it can generate the same voice. So just type the speech for the AI and it will read it in the same voice. So there are so many things that can go wrong, especially when it comes to disinformation and fake news.

Rebecca: It seemed like one of the other ethical areas that you talked about, based on what you had said previously, is about data: the data inputs that train the systems, and also the data that you’re putting into the system that you might be analyzing. So there are privacy issues, copyright issues, etc. Can you share a little bit about how those conversations unfolded as well?

Mohammad: So, for example, one of the ways that people are using it… many doctors are actually using ChatGPT to ask it questions. For example, what are the side effects of this new medicine that I’m using? So sometimes you’re inserting private information into the system. So in the prompt engineering session that we had, one of the steps was to anonymize: we write the prompt for the system, then we check it for any private information. It can be a name, it can be an address, it can even be a vehicle plate number. All of those should be removed from your prompt before you submit it to the AI, because you never know what happens to that data. So one of the things that we did was to make sure that no personal or private data is being inserted into the system, at least for the systems that we have right now. In the future, we may have private GPTs. So your organization may have an institutional GPT that makes sure that all the data is private; it may change then. But for the systems that are general purpose right now, Bard, ChatGPT, any other system, we want to make sure that the data we insert into the system is totally anonymized, with no private information being sent to the system, not even an email address. We use placeholders for that in our course, to make sure that even emails are not being fed to the system. The other important question that you raised was about copyright. So there are two things with copyright. First, the systems were trained on content that was generated by a person. So what if I ask AI to generate content similar to that? Write me a Harry Potter story, for example, using exactly the same language that JK Rowling was using. What happens then? That’s a big question. The other concern is who owns the copyright for the output that we get from AI? For example, in my courses, I’m redesigning all my PowerPoints, and I’m replacing all the images that I was using before with images that AI has generated.
So when AI generates those images for me, who owns the copyright? Is it ChatGPT? Is it DALL-E? Is it Midjourney? Or is it the person who directed the system to make that content? So at least for ChatGPT, based on what they wrote on their website, they don’t assume any copyright for themselves. The person who is giving the prompts will own the content. So at least we know the answer to that question for one system, but what happens in the future? There should be lots and lots of discussions on copyright: who owns the copyright of the output? And if the system was trained on somebody else’s writing, somebody else’s art, who owns the output? If I prompted it to write a JK Rowling Harry Potter for me, do I own the copyright, or does the original writer get the copyright of something that I’ve prompted ChatGPT to make? So I think one of the biggest questions that we have had is regulation. How do we want the regulations to evolve in a way that accommodates all these questions that we have today? I think the pace of change is very fast. So policymakers, those who are setting the rules, should be very fast in responding. The technology’s not waiting for anyone; they have to be as fast as the changes in the system are, otherwise there will be chaos, there will be a lot of unanswered questions, and it will go in directions that we cannot expect. So one of the big things that should happen, I would say, is regulation. We need to regulate the system in a way that fosters improvement, but at the same time protects people.
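The anonymization step Mohammad describes, checking a prompt for private details and substituting placeholders before it is sent to an AI, could be sketched roughly as follows. This is an illustrative sketch in Python, not the course's actual procedure; the patterns and placeholder labels are assumptions, and a real anonymizer would need far broader coverage (names, addresses, plate numbers, and so on).

```python
import re

# Illustrative patterns only; real private data takes many more forms.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def anonymize(prompt: str) -> str:
    """Replace private identifiers with placeholders before submitting a prompt."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(anonymize("Contact jane.doe@example.com or 555-123-4567."))
# → Contact [EMAIL] or [PHONE].
```

The placeholder approach has the advantage that a response mentioning `[EMAIL]` can be re-filled with the real value locally, so the private data never leaves your machine.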

John: In addition to all the discussions of regulations that are going on globally, there are also quite a few lawsuits going on in terms of potential copyright violations, which could have some really devastating implications for the development of AI. So with a lot of this, I think, we’ll have to just wait and see, because it’s going to be challenging.

Rebecca: There are a number of interesting cases, too, of folks trying to register things with the copyright office that were generated by AI that have been denied. So lots of interesting things to be watching, for sure.

Mohammad: Definitely.

John: Another area of a lot of concern, and a lot of research that’s beginning to take place, is to what extent AI tools will enhance the productivity of workers, and to what extent they may end up replacing workers. And there are some studies now that are finding both of those. Were your students very concerned about the possibility that some of their potential jobs might disappear, or be substantially altered, as a result of AI tools?

Mohammad: So I think the best saying with regard to jobs is that nobody will take your job; let me say it in different words: the CEOs who can use AI will take the place of CEOs who cannot use AI. So it’s not, “you’re going to lose your job to AI,” it’s mostly that those who are not equipped, those who don’t know how to use AI, will be replaced by the ones who know how to use AI. In the short term, there may be some changes in the job market, some of the jobs may be automated, but new jobs will be created. For example, now we have a lot of companies looking for prompt engineers, something that wasn’t there before; like, a year ago we didn’t have such a need in the market. So the other thing that will happen is that we need to train people to use AI. But at the same time, the pace of change is so fast. So we train people for a year to take AI jobs, and by the time they finish their education, the system has changed. Now you have to retrain them. So that’s one of the things that is happening, and educational institutions should find a way. They should keep updating their curriculum, I would say every day, to keep up with the changes in technology. The other thing that I personally expect and hope to happen in the long run is that we will work less. In the Industrial Revolution, our working hours were reduced; we could achieve the same productivity with less work. The same thing may happen ten years from now, five years from now. Instead of working nine hours a day, we may work two hours, three hours a day, and then be even more productive than we are right now, because these systems can make us more efficient. There is a good metaphor that people use for AI: they call it human-algorithm centaurs. In Greek mythology, centaurs are half human, half horse. They can be as fast as a horse and they can have human intelligence and human capabilities.
Now we have half human, half algorithm: we can do so many things much faster, much more effectively than before, and that will increase productivity manyfold. So I’m expecting a better life, actually, for human beings, while at the same time being more productive than before.

Rebecca: It’s interesting, some of the kinds of conversations I’ve had with my students, who are design students, about AI have really been about: is it going to replace a designer? Well, maybe in some contexts people are going to use AI to create designs or visual elements, but it’s not going to have the same thought [LAUGHTER] and strategy behind them that a designer might use. But what they’re mostly discovering is that AI is really helpful in making the process faster: generating more ideas, finding out what they don’t want to design, [LAUGHTER] and getting just a place to start and moving forward and developing their work more rapidly. And so that really gets to that efficiency idea that you were just talking about.

Mohammad: That’s very true. And I agree with you, sometimes you are just thinking and you cannot start. AI can give you an idea to start with, and then you come up with the ideas that you wanted. So regarding design jobs, or any job, I have students who will come to me and say, “Should I change my field to AI?” I say, “No, do whatever you’re interested in. If you’re doing design, keep doing design; if you’re into, let’s say, marketing, keep doing marketing; if you’re in finance, keep doing finance; but use AI in your field. If you’re doing design, see how you can use AI to design better. If you are doing marketing, see how you can use AI to make better content, to make better decisions.” So I think it’s not AI replacing people, it’s AI enhancing people. So in any field, we have to equip ourselves with the skills of using AI to do our jobs better.

Rebecca: From the experience I’ve had with my students, we’ve definitely discovered that if you don’t have the right language around the thing that you’re trying to make, it doesn’t do a good job. [LAUGHTER] So you need some disciplinary background, or some basic knowledge of the thing that you’re trying to do, for it to come out successfully.

Mohammad: That’s very true. So one of the limitations of AI that we discussed in our classes was about different languages. So most of the content that was used to train ChatGPT was written in English. So think of other languages that didn’t have that much content on the internet; AI is not as capable in those languages. So that’s one of the things that we need to think of. So this is a system that is super capable in the English language, but when it comes to languages that don’t have that many speakers, it falls behind. So I tried it, and I learned that sometimes the system tries to think in English and then translate into the other language, and it makes so many mistakes in that process. So that was one of the things that came to my mind from what you mentioned.

John: We’re recording this in the middle of November, and in just the last few weeks we’ve seen a lot of new AI tools come out. We’ve seen ChatGPT expand the size of the input that it allows, and we now see this marketplace they’re offering for GPTs, as they’re calling them. And the pace of change here is more rapid than in pretty much any area that I’ve seen, at least since I’ve been working in various tech fields. It would seem that this would be a challenging course to teach, in that the thing you’re studying is constantly changing. Will you be offering this again? And if so, how will the course be different in your next iteration?

Mohammad: That was a very good question, actually. So yes, the course is being offered in January 2024. And as you mentioned, one of the biggest challenges with this course, I would say the biggest challenge with teaching AI, is to keep the content current. And that’s not just what happened today. When I was teaching this course in summer, I made the second module, and then OpenAI announced the plugins. Now I had to redo the content to make sure that I could use those plugins, because they were so powerful. The plugins that ChatGPT introduced were so powerful, and there were so many companies making different plugins. So I remember, for the second module, I had to start over and re-record my content. I updated my content; I recorded everything at 1am, 2am before the session in the morning, because everything had changed. So I had to incorporate that into my class. The same thing is happening with new developments. So what I learned is that every day I have to update my content, I have to update my course. So the ChatGPT API was one of the things that I was thinking of as the fourth module, and I was working on that. Now, I think GPTs is one of the modules that needs to be there. That’s like the app store of OpenAI. So that’s a big game changer. As you mentioned, it has a larger memory right now; we can provide it a larger context. So that’s another capability that AI has, and it changes the way that we prompt it, the way that we ask it questions. So keeping the curriculum updated, I think, is the biggest challenge, and this is something that we should have in mind. Every week, every day, I see something new. I update my slides, update my content, to make sure that everything is correct. Because if you don’t do that for, let’s say, two months, three months, if you don’t update your content, then you have to redo it, you have to start over. So that’s definitely one of the things that I do, and GPTs is one of the things that I will definitely incorporate into my course for January 2024.

Rebecca: Iterative change definitely seems like a good way to go to manage that, for sure.

Mohammad: We don’t know what will be announced in December. [LAUGHTER] So, I always count on a big change.

Rebecca: But yeah, buckle up and be ready, right?

Mohammad: Yeah.

John: And we welcome our new AI overlords…

Rebecca: Yeah.

John: …in case, by the time this is released, they have taken over.

Rebecca: Can you talk a little bit about how your colleagues in the School of Business have responded, and whether or not more faculty in the School of Business are incorporating AI?

Mohammad: I see that many of my colleagues are super interested in this new technology. So what I like most about SUNY Oswego in general is that everyone is so open to accepting new technology, accepting new things, accepting innovation, and everybody’s trying to absorb the new innovations that we have seen and incorporate them one way or another into their work or into their courses. So as I mentioned, we have the AI committee, and in our meetings we have very good discussions about how we should update our curricula. I know that some of my colleagues are already doing that, are already using AI to generate, let’s say, visuals for their content or teaching, or are talking with the students about the ethical uses of AI. So I think at least the ecosystem that we see at SUNY Oswego is very open to accepting innovation, and is very fast to incorporate it into the curricula and educate the students, or at least have discussions with the students about how to use it and how to equip themselves with the skills that they need for the future.

John: Just a few weeks ago, your department scheduled a symposium on AI, could you talk a little bit about that?

Mohammad: So we wanted to take a lead in AI education at SUNY Oswego. So we’re very focused on teaching the students and equipping them with the skills that they need to take future jobs, and we are making a big move toward AI. So we wanted to make sure that our students are exposed to the new developments in this field and understand the importance of this area. So we set up an AI symposium, Bridging Bytes and Business, to show them how technology, how AI, how computing, is changing the way that we do business. So we set up a hybrid conference, or symposium, with two panels. The first panel was online, with scientists discussing the new technology, discussing how AI is evolving. What are the biases, what are the errors that we have in this AI? And they were discussing what is the next big thing that will happen in AI. So in the first round, we had Soroush Saghafian from Harvard; he has a lab that works on developing AI. We had Diane; Diane is a three-time Emmy Award-winning journalist, and she was one of our MBA students, actually. And she talked about how AI is used in journalism, what are the challenges of, let’s say, disinformation generated by AI, and how journalists need to address those concerns. And we had Saeideh, who is a computer scientist. Saeideh worked for Yahoo, Meta, and Google, and she shared her knowledge, her experience, of what these big companies are working on for the next big thing that is happening. So we had a very healthy discussion about the science part of AI. And then we had the business leaders from upstate. We had Michael Backus from Oswego Health, we had John Griffith from insurance, and we had Mohamed Khan from Constellation Energy. So they were discussing how their companies, how their industries, are using AI, and what they expect students to know about AI before they go to the job market. What are the skills that they need to have? So we had this very successful symposium, and since it was a hybrid symposium, we were broadcasting it online.
It was kind of a webinar, so we had many attendees from all over the country. We had attendees from all over the U.S.: I think we had California, we had Texas, we had Arkansas, New York, obviously. We had people from Canada joining us, Ireland, the United Kingdom, France, Germany, and, interestingly, we had attendees from Australia. It was 2am there, I think, but they joined us, and they stayed to the very last minute of the symposium. And that made us very happy and very proud of SUNY Oswego for taking the lead in providing this type of discussion around AI. And we’ll keep doing that. We’ll keep having more and more symposiums and panel discussions to keep our students current and to encourage our students to learn more and educate themselves more about AI.

Rebecca: So we always wrap up by asking: “What’s next?”

Mohammad: So we have big plans. One of the things that we’re doing is ChatGPT for Business; it will be offered again in January 2024, and hopefully in summer. But aside from that, we are going one step further. We are designing a new course, more advanced than ChatGPT for Business. That course is Prompt Engineering for Artificial Intelligence. So in that course, we’ll focus on different ways that students can use prompt engineering for different purposes: for HR, for marketing, for finance, for different fields. So that course will be at an advanced level relative to ChatGPT for Business. And we are going to offer a degree in our MBA program on strategic analytics and artificial intelligence. So we are incorporating AI into actually all the courses that we offer in that program. And then we will have a micro-credential on prompt engineering, because that’s what industry is looking for. They want somebody who is good at asking the right questions of ChatGPT, Bard, or any other AI that you’re using. So they need somebody who is good at writing good prompts for them. So that’s what we are focusing on right now: to equip our students with those skills, with the knowledge that they need to be effective and efficient prompt engineers. And I believe we will be among the very first institutions in North America to offer those courses and those degrees, actually.

Rebecca: Well, thank you so much for joining us and sharing the work that you’ve been doing.

John: We’re always curious about where this is going, and I’m sure we’ll be back in touch with you again in the future. So thank you.

Mohammad: Thank you very much. I really appreciate the wonderful podcast that you have. From time to time I listen to your podcast, and I actually bought a book on ChatGPT based on one of your podcasts; one of the guests that you had wrote a book on ChatGPT, 80 Ways that ChatGPT Can Help You with Your Courses, I think. And I’m still reading that book and I’m enjoying it. So thank you for the wonderful podcast that you have.

John: And we’ll include a link to that book by Stan Skrabut, and we’ll also include a link to the recording of that symposium as well in the show notes for this episode.

Mohammad: Thanks so much.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

Ganesh: Editing assistance by Ganesh.

[MUSIC]

312. Alice: Finding Wonderland

Many of our disciplines are unfamiliar to students until their first encounter in an introductory course. In this episode, Rameen Mohammadi joins us to discuss his first-year course that introduces students to computer science using an approachable hands-on experience.

Show Notes

Transcript

John: Many of our disciplines are unfamiliar to students until their first encounter in an introductory course. In this episode, we look at a first-year course that introduces students to computer science using an approachable hands-on experience.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

Rebecca: Our guest today is Rameen Mohammadi. Rameen is the Associate Provost at SUNY Oswego and an Associate Professor of Computer Science. Welcome, Rameen.

Rameen: Thank you. Thank you both.

John: It’s good to have you here. Today, for the first time ever, we’re all drinking the same tea, which is Chimps tea, a black tea with honey flavor, fig, and thyme. And this is a gift from one of our listeners, Myriam in France. Thank you, Myriam.

Rebecca: Yeah, so far it’s really tasty.

Rameen: Yeah, really excellent tea. We love it.

Rebecca: So we invited you here today, Rameen, to discuss your First-Year Seminar course that combines animation and storytelling using the Alice 3 programming environment. Before we discuss that, though, could you provide an overview of the goal of first-year seminar courses at Oswego?

Rameen: This is not a standard first-year seminar. First-year seminar courses are designed to extend orientation, familiarize students with resources, and things like that. Our perspective about this type of course, which we call signature courses at SUNY Oswego, is that you are welcoming students to the intellectual community that we have. So for first-year students, we desire a number of outcomes to be met by these courses. One of them is critical thinking: they have to have a significant critical thinking component. Also, these courses need to have both writing and oral communication embedded in them. And one of my favorites is that they have to enhance the cultural competency of our students. We’re a very diverse student body, so there’s quite a bit of opportunity to make sure students experience other perspectives, and I think courses of this type really need to address that. Our provost, Scott Furlong, brought the idea to us, even during his interview at SUNY Oswego, of what they had called, at his previous institution, passion courses. Now, as I said, we call them signature courses here. But those of us who love our discipline certainly can understand when somebody uses the term passion. So what makes you excited about your discipline? That’s what the course should help students experience.

John: So, you’re using the Alice 3 programming environment. Could you talk a little bit about the types of things that students are going to be doing in the class?

Rameen: So Alice 3 is a VR programming environment. So what you do is you build a scene: you can bring avatars of various types, could be people, could be aliens, could be dogs, into a scene, and you have props, trees, mountains, buildings, that you could bring into a scene, and then you learn to program something. So they can talk to each other, they can move from point A to point B. And it actually turns out they’re able to, and they will be, writing reactive programming, which typically is what we do when we design games. So the user acts in some way, and then you program the reaction in the VR world in that context, or things run into each other. And obviously, when you’re designing games, cars or other things may run into each other, and you have the ability to detect that and actually act on that. But at this point, they are already running about a month ahead of where I thought they could be in just about a month of the semester. So I’m really hoping we can get that far.
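The reactive pattern Rameen describes, detecting an event such as a collision and then running a programmed reaction, can be sketched in plain Python (an illustrative analogy, not actual Alice code; the class and function names here are hypothetical). Alice itself expresses this with drag-and-drop event listeners, but the underlying detect-then-react structure is the same.

```python
class Avatar:
    """A scene object with a name and a position along one axis (simplified from 3D)."""
    def __init__(self, name, x):
        self.name = name
        self.x = x

def collided(a, b, threshold=1.0):
    """Event detection: two avatars occupy (nearly) the same spot."""
    return abs(a.x - b.x) < threshold

def on_collision(a, b):
    """The programmed reaction: runs only when the event fires."""
    return f"{a.name} ran into {b.name}!"

car = Avatar("car", 0.0)
tree = Avatar("tree", 5.0)

messages = []
for _ in range(10):          # the world "ticks" forward
    car.x += 1.0             # the car moves toward the tree
    if collided(car, tree):  # the event is detected...
        messages.append(on_collision(car, tree))  # ...and the reaction runs

print(messages[0])
# → car ran into tree!
```

The point of the exercise is the separation: students program *what happens* when the event occurs, while the environment handles *noticing* that it occurred.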

Rebecca: Can you talk a little bit about why you chose the Alice platform, and what you were really hoping to foster with students.

Rameen: So, just a bit of background about Alice. Alice is supported by researchers at Carnegie Mellon. I think Randy Pausch, when he was at the University of Virginia, is really the person that began the innovation with Alice, and he then moved to Carnegie Mellon. Many people in computer science would know who he is because of his work in VR, but what he should universally be known for is The Last Lecture, which is a pretty amazing hour-plus lecture he gave before he died from cancer. But that group has been working on Alice for a very, very long time, and of course has had new actors along the way. Don Slater is one of the people that has been part of that group for a long time, and he’s very much involved, and was at the time when I met him very much involved, in advanced placement. And that’s something I’ve been involved in for a long time, advanced placement for computer science. So one of the things we do in AP readings is we have people do professional development activities, and he gave a talk about Alice, and this is a long time ago. But when I first listened to him talk about it, and he showed the features of the system, I really didn’t have a place for it in anything I taught at the time. So it has been brewing in the back of my head as a thing to build a course around for a long time, but I really couldn’t have done it until the opportunity came along to build a signature course.

John: For those students who do go on to computer science, it seems like a really nice way of introducing the concept of object-oriented programming. Does it work in that way?

Rameen: So the thing to understand about object orientation is that most of us who are software engineers by discipline, database types, are very comfortable thinking of a student’s information as an object, and the fact that we have a container of student objects and so on. But it turns out that’s not necessarily as comfortable for students as it is for those of us who do this for a living. But when you say, here’s an avatar, and you put this on the screen, and you can tell it to go from point A to point B, that seems like a very natural idea to students. And the fact that this particular avatar, suppose it’s a bird, has wings, as opposed to a person who has legs, you don’t have to explain that. It’s a concept that’s inherent in being a human being who is 18 or 19 years old. So some aspects of object orientation that are often difficult for students are really obvious in this context. Any object, like an avatar of a person, dog, cat, whatever, can be moved from point A to point B. So they share a set of expectations and attributes. They have a location in a 3D world, and you can move them from A to B; piece of cake, they understand. But then you say, “Well, this one is a bird, it has wings.” So the fact that you can spread the wings or fold the wings would be a characteristic that exists only because it’s a bird. So inheritance, which is a concept that we like to teach in computer science, is just built into the way the system behaves. And no student will say, “Well, what do you mean, a bird can spread or fold its wings?” People just naturally know what it all means. And believe me, it’s not always natural in some of the other things we try to do with students to teach these topics. So it does lend itself extremely well to understanding that objects have attributes, they have functionality, and it’s all there on the screen, and they can see it.
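The inheritance idea described here maps directly onto class hierarchies in conventional languages. As a rough sketch of the same concept in Python (the class and method names are illustrative, not Alice’s actual API): every avatar shares a location and the ability to move, while a bird adds wing behavior that exists only because it’s a bird.

```python
class Avatar:
    """Anything placed in the scene: person, dog, bird."""
    def __init__(self, x, y, z):
        self.position = (x, y, z)  # every avatar has a location in the 3D world

    def move_to(self, x, y, z):
        # shared behavior: any avatar can go from point A to point B
        self.position = (x, y, z)


class Bird(Avatar):
    """A Bird is an Avatar, plus wing-specific behavior."""
    def __init__(self, x, y, z):
        super().__init__(x, y, z)
        self.wings_spread = False

    def spread_wings(self):
        # exists only because it's a bird
        self.wings_spread = True

    def fold_wings(self):
        self.wings_spread = False


robin = Bird(0, 0, 0)
robin.move_to(2, 5, 1)  # inherited from Avatar: ground to branch
robin.spread_wings()    # specific to Bird
```

The point students absorb without being told: `Bird` gets `move_to` for free because a bird *is an* avatar, and `spread_wings` belongs to birds alone.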

Rebecca: I think Alice is really nice, because it is so visual. And so you get those immediate, “I can see the thing I did,” whereas I remember when I started learning some code, I was building a database for car parts, and it was completely abstract. And I cared nothing about car parts.

Rameen: Yeah.

Rebecca: So it didn’t make it that accessible to me.

Rameen: Exactly. Then the other aspect of the platform that I think we need to think about is that you don’t write a single line of code; you generate thousands and thousands of lines of code, but you don’t write any. So if you have a particular avatar as the object that you’re processing at the time, in building your code on the screen, you can just drag and drop the functionality it has into your code. If you need to loop and repeat steps, you drop a loop into your code and then put the steps you want to repeat inside that loop. So all the typical barriers a student had with syntax in various languages, whether it was Java, Python, or C++, kind of wash away, because you don’t really have to know syntax at all; you need to know “what are you trying to do?” and what will enable you to do it, and then you can execute that. So far, clearly, that’s not a problem for them. Here is the screen, this part of it is dedicated to X, that part of it is dedicated to Y, and they’ve been able to handle it probably from week one. So all the standard things that tend to take a long time don’t take any time. And besides, doing 3D graphics, if you are a computer science person, is in my mind a super senior-level type of activity. You have to teach them an awful lot about data structures and other event-handling elements that they must learn, because that is what we all had to learn. But guess what, you can learn it with Alice in short order. And this course is proving that you can.
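What the drag-and-drop editor builds corresponds to ordinary procedural code. A rough textual equivalent in Python of dropping a loop and dragging two steps inside it (the `Avatar` class and its methods here are illustrative stand-ins, not Alice’s real generated code):

```python
class Avatar:
    """A toy stand-in for an Alice scene object."""
    def __init__(self, name):
        self.name = name
        self.actions = []  # record of what the avatar was told to do

    def move_forward(self, distance):
        self.actions.append(("move", distance))

    def turn(self, direction, amount):
        self.actions.append(("turn", direction, amount))


dog = Avatar("dog")
# a "count" loop dropped into the editor, with two steps dragged inside:
# repeat four times -> the dog walks the outline of a square
for _ in range(4):
    dog.move_forward(1.0)
    dog.turn("left", 0.25)  # a quarter turn at each corner

print(len(dog.actions))  # prints 8
```

The student never types the `for` keyword or worries about a missing colon; the loop and the steps inside it are assembled visually, but the structure is the same.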

John: Now, one of the challenges that I could imagine you might have is that students come in with different levels of prior knowledge or interest or engagement with computer science. Some students may not have written a single line of code in any language, while others may be taking other CS courses at the same time, or have some prior programming experience. How do you address the differences in background?

Rameen: So my sample case here is small, I only have 17 students, but this is not a required computer science course. So this is one that has art students in it, it has biology students, and it does have a few computer science students, and then maybe one with an AP computer science background from high school, and none of them are doing any better than these other kids. So I guess the point is, it levels the playing field in a pretty significant way. If you can think a thought, you can probably write code in Alice. And I’m finding it quite interesting, since I’m not preparing them for another course… not only does this course not have a prerequisite, it’s not a prerequisite for anything else. So the way I designed the course, I went into it thinking, okay, if storytelling and writing a really cool story within groups is the best I can get out of them, great. If I can get them to a point where they can write new functionality for objects, and I can help them write reactive programming so they react to a mouse click or collisions of objects and so on, maybe I’m dreaming, but that would be fantastic. At this point, I’m pretty certain I can get them there. But that was kind of the key coming into the course: I walked in with a mindset of being flexible, that if they are struggling, I’m not going to keep pushing it like I would typically do in a CS course, which is partly why you would also lose students. At least in my experience with these kids, and I can’t say until I teach it again (and hopefully I can) whether it will always work this way, the way it works is that you show them how to do something, and then they go to work, and they start doing it, and then they make mistakes, as we all do, and then you give them a little bit of a hint about a different technique they could have used to accomplish the same task. I’m just going to give you an example.
So you want the bird to go from point A to point B; it’s on the ground and needs to go up on top of the tree. Alice lets you put a marker where you want it to go on the tree, because you can’t go to the tree, you’re going to a branch of the tree. So you need to know how to put a marker there. So you put a marker there, and then it just goes from point A to point B, from the ground to the top of the tree. Then you say, “Wait a minute, that’s really not the way birds fly.” So now you have to figure out, well, how am I going to flap its wings as it goes from point A to point B, from the ground to the branch on that tree? It turns out, and I had come up with a solution to this myself, obviously, you can’t really teach these things if you haven’t thought about how you could possibly solve them, that one of my students, after about three weeks of instruction, figured out how to do what we call in Alice a “do together.” So as the bird is moving from point A to point B, the step that is happening at the same time is the flapping action of spreading and folding the bird’s wings, and she made it very clear that the bird was flying [LAUGHTER] from the ground to the tree with no interruption. Then we needed to talk about, well, do they really need their legs hanging out as they’re flying? I don’t know much about birds, but I think they fold their legs back. So now we have to learn how to address some functionality that is about a part of the body of the bird. So this is the way the learning is happening in the course, kind of naturally: you’re trying to make a realistic action on the screen in the animation. Well, how are we going to do that? We have to now address the joints, like the hip joint or the knee joint or the ankle joint, to make it much more natural in the way it works. And there’s no persuasion here; the student is trying to make an interesting thing. And then I’m there to help them figure out how to make it much more realistic.
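Alice’s “do together” runs steps concurrently rather than one after another, which is what makes the bird flap while it travels instead of gliding and then flapping. A loose analogue in plain Python uses threads (the motion and flapping functions here are illustrative stand-ins for Alice’s built-in actions, and the shared `events` list just records that both happened at once):

```python
import threading
import time

events = []  # record of everything the bird did, in the order it happened

def glide_to_branch():
    # stand-in for the move from the ground to the branch, in small steps
    for step in range(5):
        time.sleep(0.01)
        events.append(("move", step))

def flap_wings():
    # stand-in for repeatedly spreading and folding the wings
    for beat in range(5):
        time.sleep(0.01)
        events.append(("flap", beat))

# the "do together": start both actions at the same time,
# so the wing beats interleave with the motion
movement = threading.Thread(target=glide_to_branch)
flapping = threading.Thread(target=flap_wings)
movement.start()
flapping.start()
movement.join()
flapping.join()
```

After both threads finish, `events` holds all ten steps, interleaved, which is the effect the student achieved: one continuous flight rather than two separate animations.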

Rebecca: What I really love about these courses, and what you’re describing with Alice, as someone who’s also taught code to students, particularly ones that are not in computer science, is that they’re thinking like computer scientists, and you’re really getting them completely within the discipline. You’re hooking them right in, because they’re leading with their curiosity. They’re not satisfied with the way something looks, so they’re digging in and digging in and digging in. And unlike our traditional way of structuring curriculum, where we think this is the foundational information and this is the next thing we build on, it almost turns it totally on its head [LAUGHTER] and does it backwards from what we traditionally do. And it’s really fun.

Rameen: Well, I think the students are at the center of that type of decision. For years, you see human beings who probably could do this kind of work, but shortly after they try and they get errors after errors after errors, they say, “Hey, listen, it’s great that there are geeks like you who like to do this kind of thing. It’s not for me,” and they walk away from the discipline, even though they could have made great contributions in computer science. So for me, it matters that some of these concepts are introduced this way, where syntax and semantics, which is typically what slows people down when they first begin, even the systems we use… like, how do you type your program? How do you run your program? …there’s a whole bunch of instruction around how you do anything. With Alice, you just go to alice.org, you download Alice 3, but once you do, it’s: here you go, you click here, and then you set up your scene; you click here, you begin writing code. Well, how do you write code? The object is on the left side, you drag the command from the left to the right. How far do you want it to go? Well, you choose a certain distance that it needs to travel… really, really easy for students to take to right away. And I just had no idea what I should expect. You watch a lot of YouTube videos… I certainly did when I was preparing this course… of all these different people, young and old, building things and being proud of what they had built. And I thought, if I could bring that to a course for our first-year students, that would be really, really awesome. And I think that’s what has happened.

John: You mentioned that the students are able to interact. Are they all in one virtual shared space for the class? Or do they have to invite the other students into the spaces that they’ve developed?

Rameen: So this is a really good question, John. When I imagined how the course was going to work, I had to think about a number of things. One, I asked our technology people to install Alice on all the lab computers, because I can’t assume or assert that every single student who takes a course like this will necessarily have equipment that can run it. Even the Mac kids, who had trouble at first installing the thing, and I needed people to help them get it installed, even they could continue to work because we had the software on our machines. The type of collaboration that I advocate for in class is a little untraditional, or at least I think you could argue that it may be. The other day, I gave them a 10-question quiz. They answered the 10-question quiz, and then I said, “Find a couple of other people and persuade them why your answers are correct and their answers are wrong.” So now the whole room starts talking about the quiz. I don’t know if they’ve ever had an experience where somebody says, “What I’m asking you to do is not cheating.” Who gives a quiz and says, talk to everybody else to see what they answered for the quiz?

Rebecca: John does. [LAUGHTER]

Rameen: And that’s not surprising, but in my mind, is it about the learning process, or is it about assessing or giving a grade? This is a very low-stakes experience. So why exactly would I care if you talk to someone else about it? Why not persuade someone else that your answer is right? That’s a very different tactic than to say, “Do you know the answer? And are you right or wrong?” Persuading someone requires talking to them, requires thinking for yourself, first of all: why is this answer right? And then, on the other side, you hear the explanation. Are you persuaded that what they said is accurate? Or do you think they’re wrong? In which case now you’re giving them back a different perspective, and then they change their answer. And of course, you could change your answer for the wrong reason. That’s just one example. I really want them to collaborate and work with each other. And every time somebody does something interesting, like the young woman who built the code that I had not been able to write myself, having the bird fly from point A to point B looking very natural, I had her come to the front of the room, plug in with a connector that is in the room, and show everybody how she wrote her code. And we’ve done that at least a dozen times so far, where people just come up, plug in their computer, and show everybody their code. We often worry about students cheating and using other people’s work, but if it is about collaborative learning, then you really have to cultivate the idea that, you know, that was a really good idea, maybe I can do that. And hopefully the course will continue to behave that way, where I’m confident everybody’s learning from it. The concern is that I’m the only one who knows something, whoever I am as a student, and everybody’s just copying me. That is not my experience so far in this course. They’re just trying to do it better, and if you have a better idea, maybe I can take that and move with it.
Then maybe I’ll make it a better idea. And you see that also with students. One of them figures out how to make somebody walk more naturally, and then the other one even enhances that, even makes it even more realistic in the way you would walk. And that’s kind of what I like to see happen and is actually happening.

Rebecca: So how are students responding? Are you cultivating a whole new crop of computer scientists?

Rameen: So, this is an interesting question. I am wondering, for those who are not computer science students, whether or not they decide that this may be something for them. But I’m also… with Rebecca here, it’s good to bring this up… they might become interested in interaction design as a discipline to pursue and become passionate about. For those of us who do this kind of work for a living and have done it for 40 years or whatever, the engagement aspect is the critical aspect. If they are really invested in the learning process, they can overcome an awful lot of barriers. Frankly, I cannot persuade you to put the time in, but if you’re persuaded that you could do something a little bit better, then I’m done. As a teacher, I’ve set up the environment the way it should be, where you are driving the learning process yourself as a student, and it looks like that’s what they’re doing at this point. And we’re only about a month into the course, and they are behaving that way. Now, whether or not they will continue to take more computer science courses and get a degree in computer science, I’m not really sure, but, hopefully, if they do interaction design, they’ll be better interaction design students, because some of the structures that they have to learn here would definitely benefit them in that curriculum too.

Rebecca: Yeah. When can I come recruit?

Rameen: Anytime. That was one of the things I wanted to point out: I’ve already had the Office of Learning Services come in, and they talked about the learning process. And besides what they have done for me in talking about the learning process… and it’s all research-based discussion, which is really critical, for students to hear about things that actually do work and that we can prove work… I talk about the learning process on a regular basis with them. And I’m very interested in them understanding why we’re doing anything that we’re doing. I mean, they may be used to somebody standing in front of them talking for an hour, and I just don’t do that. I may talk for 10 minutes and then have them work on stuff, and then, as we see gaps in what they understand, I talk for 10 more minutes, maybe. I try not to talk a whole lot. I want them to be working. So the time I’ve spent is on building what they are supposed to do to learn, not so much talking to them on a regular basis during the 55 minutes that the class goes on. Unfortunately, I’m not quite sure that that’s the experience they’ll have in other courses that they take, because to me, there is a freedom embedded in the way the course is designed that is hard to replicate if you have to cover from A to Z of some topic. You get to “H” and people are having trouble, well, you just keep going. I don’t know if everybody has the luxury to say, well, maybe we need to pause longer because we can’t get past this point. I mean, what’s the point? If we can’t get past this point, you’ll never catch up to where the end is. So I am hoping some of them will decide to be CS majors as a result, but I’m more interested in seeing how they will do if they take more CS courses. I mean, if they take a CS 1 course, are they going to do better than a typical person taking a CS 1 course? If they move on and take data structures and other courses we require, will it come to them easier?
I think it’s a really interesting question. And I think there’s a lot of research that advocates for the fact that they will do better, but I like to see it firsthand.

John: One of the things I believe was mentioned in the title of the course was storytelling. What types of storytelling take place in the class? Is it the design of the scenes, or is there some other type of communication going on?

Rameen: So the way you do animation in general… and I probably should back up and say I spent an ungodly number of hours trying to learn how to do this, but I went about it backwards, because I went after event-driven programming and interesting things that I knew Alice could do to write games. But then you step back from it and say, well, students can’t start there. I mean, that’s just not a good place. So then you look at the alice.org website, which gives you a tremendous amount of resources, and say, “Oh, designing scenes happens to be the first thing they teach you to do. Maybe I should learn how to design a scene.” So you put the pieces that you want in your story on the screen, and if you don’t want them to appear, you can make them invisible. But the way Alice works, you have to put all the components in on the front end. It’s a little different than the way we do object-oriented programming. When we do object-oriented programming, you create things when you need them; you don’t think about setting up the scene on the front end. So that took a little getting used to, but that’s what you do. And then the characters you put on the screen can move, they can talk, they can fly, they can do whatever you need them to do. My biggest interest, when I was conceiving of the course, was storytelling: I really wanted students, especially those from other cultures and other backgrounds, to tell their story, to find a way to tell their story. And this is probably going to start as a group project for my students in a few weeks, and we’ll see how that actually goes. Obviously it has to have a beginning, a middle, and an end for it to be an actual story. But I’m just excited to see what they will actually decide to do and how they actually do it. Along the way, though, they’re going to need some tools, and that’s kind of what my contribution will be: making sure that they can tell the story the way they want.
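The front-loaded scene setup Rameen contrasts with typical object-oriented style can be sketched like this in Python (a toy model, not Alice’s API): everything in the story is constructed up front, and entrances and exits are staged by toggling visibility rather than by creating objects mid-story.

```python
class Prop:
    """A toy stand-in for an object placed in an Alice scene."""
    def __init__(self, name):
        self.name = name
        self.visible = True  # placed in the scene, shown by default


# Alice style: every piece of the story goes into the scene on the front end...
scene = {name: Prop(name) for name in ["tree", "mountain", "dragon"]}

# ...and a character that appears later is hidden at setup, not created later
scene["dragon"].visible = False

def dragon_enters(scene):
    # the "entrance" is just a visibility toggle on an existing object
    scene["dragon"].visible = True

dragon_enters(scene)
```

In conventional OOP code you would construct the dragon at the moment the story needs it; here the whole cast exists from the start, which is the adjustment Rameen describes getting used to.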

John: It sounds as if developing this course required you to learn quite a few new things that were outside your normal teaching experiences. Do you have any advice for faculty who are working on the development of similar courses?

Rameen: So for those out there who teach for a living, the opportunity to build something from the ground up… especially something that, frankly, when I first thought of it, I thought would be a lot easier than it turned out, because there was so much that I didn’t know how to do… but when you don’t know something, you don’t necessarily know that you don’t know it. People who were doing it, as I watched them do it, made it look very easy to me. But once I began to do it, I discovered how much work there was to come to a point that actually orchestrates a course, I mean something that is meaningful and has a clear direction to it. So if you have that opportunity, even after 40 years of teaching, to start over in some ways and build something that you feel not particularly comfortable about, I really highly recommend people do that. Because that is what your students are experiencing every single day when they are trying to learn this stuff that you know so well. So having a little bit of a taste of what it takes to learn something you know very little about, I think, is critical. So my message to the faculty who are listening is that if that opportunity arises, by all means, take it.

Rebecca: You certainly get a lot more empathy for what it feels like to not be an expert, when you’re learning something brand new again.

Rameen: Well, that’s the thing about most of the things we do. I’ve been programming for 50 years. So it’s one of those things where you’re completely in tune with the idea of: understand the problem, solve the problem, whatever. But where should the camera be in a 3D world in order for it to point at the person talking in just the right way? I had to figure that out. It didn’t come naturally to me. In the first bunch of programs I wrote, the camera was always in the same spot, and then I began learning that, “Oh, I have control over where this camera goes. [LAUGHTER] So maybe it needs to be somewhere else when this person is talking versus this other one.” That’s been a lot of fun, to get a sensibility back into the system here that this stuff is not as obvious as it may seem.

John: We always end with the question: “What’s next?”

Rameen: So I certainly would like to teach the course more, and I also want to do some presentations for the faculty in computer science, and, if there is interest, for the graphic design faculty too, because I think the platform is extremely powerful. It doesn’t cost anything, and the resources that exist have been under development for a very long time and are pretty mature. And again, back to no cost: we all know how much books cost. You really don’t need one; you just use the exercises that they give you at the Carnegie Mellon site for Alice and go with it. So I really want to advocate for faculty to consider using it beyond just this first-year signature course.

John: Well, thank you. This sounds like a really interesting project that can really engage students.

Rebecca: Yeah, it sounds like a lot of fun. I can’t wait to come visit.

Rameen: Yeah. Thank you both. Really, this was fun. Thanks for the tea also.

John: Well, we have Myriam to thank for that.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

Ganesh: Editing assistance by Ganesh.

[MUSIC]

311. Upskilling in AI

With so many demands on faculty time, it can be difficult to prioritize professional development in the area of AI. In this episode, Marc Watkins joins us to discuss a program that incentivizes faculty development in the AI space. Marc is an Academic Innovation Fellow at the University of Mississippi, where he helped found and currently directs the AI Institute for Teachers.

Show Notes

Transcript

John: With so many demands on faculty time, it can be difficult to prioritize professional development in the area of AI. In this episode, we examine a program that incentivizes faculty development in the AI space.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

John: Our guest today is Marc Watkins. Marc is an Academic Innovation Fellow at the University of Mississippi, where he helped found and currently directs the AI Institute for Teachers. Welcome back, Marc.

Marc: Thank you, John. Thank you, Rebecca. It’s great to be back.

Rebecca: We’re glad to have you. Today’s teas are:… Marc, are you drinking tea?

Marc: I am. I have a Cold Brew Hibiscus, which is really great. It’s still very warm down here in Mississippi. So it’s nice to have something that’s a little bit cool. That’d be refreshing.

Rebecca: That sounds yummy. How about you, John?

John: I am drinking a peppermint spearmint tarragon blend today. And it’s not so warm here. In fact, my furnace came on for the first time yesterday.

Rebecca: Yeah, transitions. And, I have English tea time today.

Marc: Well, that’s great.

John: So we have invited you here to discuss your ongoing work related to ChatGPT and other AI tools. Could you first describe what the AI Institute for Teachers is and its origins?

Marc: Sure. I was last a guest here on your show in January of this year, and it seems like 1,000 years ago [LAUGHTER], but during that spring semester, I took a much deeper dive with a lot of the generative AI tools than the original pilot in the fall. And we started noticing that the pace at which big tech was deploying these tools and integrating them with existing software from Microsoft and Google was only accelerating. So in about April or May, I went to my chair, Stephen Monroe, and said, “I think we need to start training some people to get them prepared for the fall,” because we kind of thought that fall was going to be what it is right now, which is a chaotic mash-up of everything you can imagine: some people dive in deeply, some people try to ban it, some people are trying to take some critical approaches with it too. So we worked with the Institute of Data Science here at the University of Mississippi, and we got some money, and we were able to pay 23 faculty members $1,000 apiece to train them for a day and a half on everything we knew about generative AI: AI literacy, ethics, what tools were working in the classroom and which weren’t. And their whole goal was to go back to their home departments over the summer, serve as ambassadors, and help prepare them for the fall semester. So we started that. We’ve had funding for one institute, and now we’re doing workshops and searching, as we all do, for more funding.

Rebecca: How did faculty respond to (A) the incentive, but (B) also [LAUGHTER] the training that went with it?

Marc: Well, not surprisingly, they responded really well to the incentives. When you can pay people for their time, they generally do show up. We had quite a few people wanting to take the training, both internally from the University of Mississippi and then from people who started finding out about it, because I was posting about it on Twitter and writing about it on my Substack. So we had interest from graduate students in Rome, interest from other SEC schools wanting to attend, and even interest from a community college in Hawaii. We’ve definitely seen a lot of interest within our community, both locally and more broadly, nationally.

Rebecca: Did you find that faculty were already somewhat familiar with AI tools? I had an interesting conversation with some first-year students just the other day, and we were talking about AI and copyright. I asked, “Hey, how many of you have used AI?” And I and another faculty member indicated that we had used AI, to make it safe for them to indicate it. And many of them kind of shook their heads like, no, they hadn’t, and they were unsure. And then I started pointing to places where we see snippets of it, in email and in texting and other places where there’s auto-finishing of sentences and that kind of thing. And then they were like, “Oh, yeah, I have seen that. I have engaged with that. I have used that.” What did you find faculty’s knowledge to be?

Marc: Extremely limited. They thought of AI as ChatGPT. And one of the things we did with the session was basically frame it as, “Look, this is not going to remain a single interface anymore.” One of the things that happened during the institute that was completely wild to me was on the last day. I woke up that morning, and I’d signed up through Google Labs, as you can as well, to turn on the features within the Google suite of tools, including in Search, Google Docs, Sheets, and everything else. And they gave me access that last day, right before we began. So I literally just plugged in my laptop and said, “This is what it’s going to look like when you have generative AI activated in Google Docs.” It pops up and immediately greets you with a wand and the phrase “Help me write.” And what I tried to explain to them, and have explained to faculty ever since, is that it makes having a policy against AI very difficult when it shows up in an existing application with no indication whatsoever that this is, in fact, generative AI. It’s just another feature in an application that, from many of our students’ perspectives, they have grown up with their entire lives. So yeah, we need to really work on training faculty, not just in the actual systems themselves, but also on getting them outside of the mindset that the AI we’re talking about is just ChatGPT. It’s a lot more than that.

John: Yeah, in general, when we’ve done workshops, we haven’t had a lot of faculty attendance, partly because we haven’t paid people to participate [LAUGHTER], but what’s been surprising to me is how few faculty have actually explored the use of AI. My experience with first-year students was a little different than Rebecca’s: about half of the students in my large intro class said that they had explored ChatGPT or some other AI tool, and they seemed pretty comfortable with it. But faculty, at least in our local experience, have generally been a bit avoidant of the whole issue. I think they’ve taken the approach that this is something we don’t want to know about, because it may disrupt how we teach in the future. How do you address that issue, and get faculty to recognize that this is going to be a disruptive technology in terms of how we assess student learning, how students are going to demonstrate their learning, and how they will use these tools for the rest of their lives in some way?

Marc: That’s a great question. We trained 23 people, and I’ve also been holding workshops for faculty, and again, the enthusiasm was a little bit different in those contexts. And I agree that faculty feel overwhelmed, and maybe some of them want to ignore this and don’t actually want to deal with it, but it is here, and it is being integrated at phenomenal rates into everything around us. If faculty don’t come to terms with this, and start thinking about engagement with the technology, both for themselves and for their students, then it is going to create incredible disruption that’s going to be lasting; it’s not going to go away. We’re also not going to have AI detection come in and save the day for them, the way plagiarism detection did. Those are all things we’ve been trying to very carefully explain to faculty to get them on board. Some of them, though, just aren’t there yet. I understand that; I empathize, too. This is a huge amount of time to spend thinking about and talking about these things. And we’re just coming out of the pandemic; people are exhausted, and they don’t want to deal with another, quote unquote, crisis, which is another thing that we’re seeing too. So there are a lot of factors at play here that make faculty engagement less than what I’d like to see.

Rebecca: We had a chairs’ workshop over the summer, and I was somewhat surprised, based on our other interactions with faculty, by how many chairs had used AI. It was actually a significant number, and most of them were familiar with it. That, to me, was encouraging [LAUGHTER]; it was like, “Okay, good, the leaders of the ship are aware. That’s good, that’s exciting.” But it’s also interesting to me that there are so many folks who are not that familiar, who haven’t experimented, but seem to have really strong policies around AI use, this idea of banning it or wanting to use detectors, without really being familiar with what those tools can and cannot do.

Marc: Yeah, that’s very much what we’re seeing across the board, too. The first detector that I’m aware of that really came online for everyone was basically GPTZero; there are a few others that existed beforehand, too. IBM was behind one called GLTR, the Giant Language Model Test Room. But those were all based on GPT-2, so you’re going back in time to 2019, and I know how ridiculous it is to go back four years in technology terms… that was a long time ago. Education really seemed to adopt these detectors based on that panic. The problem with putting a system like that in place in an educational setting is that it’s not necessarily very reliable. Turnitin also rolled out its own AI detector, and a lot of different universities began to explore and play around with it. I don’t want to be misquoted here or misrepresent Turnitin, but I believe when they initially came out with it, they were saying there was only a 1% false positive rate for detecting AI. They’ve since raised that to 5%. And that has some really deep implications for teaching and learning. Most recently, Vanderbilt’s Center for Teaching made the decision not to turn on the AI detection feature in Turnitin. Their reasoning was that they had, I think, some 75,000 student papers submitted in 2022. If they had had the detector on, it would have falsely flagged on the order of 3,000 papers. And they just can’t deal with that sort of situation at a university level. No one can. You’d have to investigate each one. You would also have to give students a hearing, because that is part of due process. It’s just too much. And that’s one of my main concerns about these tools: they’re just not reliable enough for education.
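The scale Marc describes follows from simple arithmetic. A minimal sketch in Python (the 75,000-paper figure and the 1% and 5% rates are the ones quoted above; the roughly 3,000 papers mentioned in conversation falls between the two results):

```python
# Expected number of papers falsely flagged as AI-generated,
# given a corpus size and a detector's false-positive rate.
def expected_false_positives(num_papers: int, fp_rate: float) -> int:
    return round(num_papers * fp_rate)

papers = 75_000  # student papers submitted in a year (figure from the transcript)
for rate in (0.01, 0.05):  # the 1% and 5% false-positive rates quoted above
    print(f"{rate:.0%} -> {expected_false_positives(papers, rate):,} flagged papers")
    # 1% -> 750 flagged papers
    # 5% -> 3,750 flagged papers
```

Even at the optimistic 1% rate, that is hundreds of students a year facing an unfounded accusation, each entitled to the hearing Marc mentions.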

John: And it’s unreliable in terms of both false positives and false negatives. Some of us were troubled that we had allowed the Turnitin tool to be active and urged that our campus shut it down for those very reasons. Vanderbilt was one of the biggest campuses to do that, but I think quite a few others are moving in that direction.

Marc: Yes, the University of Pittsburgh also made the decision to turn it off, and I think several others did as well.

Rebecca: It’s interesting: if we don’t have a tool to measure, a tool to catch, if you will, then you can’t really have a strong policy saying you can’t use it at all. [LAUGHTER] There’s no way to follow up on that or take action on it.

Marc: That’s a sort of conundrum for education, I think, and we’re trying to explain it to faculty. Much more broadly in society, though, if you can’t have a tool that works when you’re talking about Twitter… I’m sorry, X now… and understanding whether material is actually real or fake, that becomes a societal problem, too, and that’s what they’re trying to work on with watermarking. I believe the big tech companies have agreed to watermark audio, video, and image outputs, but they’ve not agreed to watermark text outputs, because text is a little bit too fungible: you can go in and copy it, you can change it around a little too much. So it’s definitely going to be a problem when state governments start to look at this and start wondering whether the police officer taking your police report is writing it in their own words, or whether the tax official is using this as well. It’s going to be a problem well outside of education.

Rebecca: And if we’re not really preparing our students for that world, in which they will likely be using AI in their professional fields, then we’re not necessarily doing our jobs in education and preparing our society for the future.

Marc: Yeah, I think training is the best way forward, and again, going back to the idea of intentional engagement with the technology: giving students situations where they can use it, where you, as a faculty member, hopefully have the knowledge and the resources to integrate these tools, talk about the ethical use cases, understand the limitations and the fact that it is going to hallucinate and make things up, and think about what sort of parameters you want to put on your own usage, too.

John: One of the things that came out within the last week or so, I believe,… we’re recording this in late September… was the introduction of AI tools into Blackboard Ultra. Could you talk a little bit about that?

Marc: Oh boy, yes indeed, they announced last week that the tools were available to us in Blackboard Ultra. They turned it on for us here at the University of Mississippi, and I’ve been playing around with it, and it is a little bit problematic. For right now, what it can do is, with a single click, scan your existing materials in your Ultra course and create learning modules. It will create quiz questions based off that material, it will create rubrics, and it will generate images. Now, compared to what we’ve been dealing with in ChatGPT and all these other tools, this is almost a little milquetoast by comparison. But it’s also an inflection event for us in education, because it’s now here, directly in our learning management system; it’s going to be something we’re going to have to contend with every single time we open it up to create an assignment or an assessment. And I’ve played around with it. It’s an older version of GPT. The image generation, I think, is based on DALL-E, so you ask for a picture of college students and you get some people with 14 fingers and weird artifacts all over their faces, which may not be the image that would actually be helpful for your students. And the learning modules it builds are not my thinking, necessarily; they’re just what the algorithm predicts based off the content that exists in my course. We have that discussion with our faculty, and we have them cross that Rubicon, saying, “Okay, I’m worried about my students using this; what happens to me, my teaching, my labor, if I start adopting these tools? There could be some help, definitely. This could really streamline the process of course creation and actually make it align with the learning outcomes my department wants for this particular class.” But it also gets us into a situation where automation is now part of our teaching. And we really haven’t thought about that. We haven’t really gotten to that sort of conversation yet.

Rebecca: It certainly raises many ethical questions, particularly about disclosing to students what has been produced by us as instructors and what has been produced by AI, and about the authorship of what’s there… especially if we’re expecting students to [LAUGHTER] do the same thing.

Marc: The cognitive dissonance is mind-boggling: having a policy that says “No AI in my class,” and then all of a sudden it’s there in my Blackboard course and I could click on something. And, at least in this integration of Blackboard… they may very well change this… once you do this, there’s no way to natively indicate that the content was generated by AI. You have to manually go in there and say it was AI-created. And I value my relationship with my students; it’s based on mutual trust. I think almost everyone in education does. If we want our students to act ethically and use this technology openly, we should expect the same of ourselves. And if we get into a situation where I’m generating content for my students and then telling [LAUGHTER] them that they can’t do the same with their own essays, it is just going to be kind of a big mess.

John: So given the existence of AI tools, what should we do in terms of assessing student learning? How can we assess the work reasonably given the tools that are available to them?

Rebecca: Do you mean we can just use that auto-generated rubric right, that we just learned about? [LAUGHTER]

Marc: You could; you can use the auto-generated rubric separately from Blackboard. One of the tools I’m piloting right now is a feedback assistant called MyEssayFeedback, developed by Eric Kean with Anna Mills, who is very big in the AI space for composition. I consulted with them on it, too. I’ve been piloting it with my students; they know it’s an AI, they understand this, and I did get IRB approval to do so. I’ve just gotten the second round of generated feedback, and it’s thorough, it’s quick, it’s to the point. And it’s literally making me say, “How am I going to compete with that?” Maybe the answer is that I shouldn’t be competing with that; maybe I’m not going to be providing that feedback, and I should be providing my time in different ways, like meeting with students one on one to talk about their experiences. But I think you raise an interesting question. I don’t want to be alarmist; I want to be as level-headed as I can. But from my perspective, all the pieces are now there to automate learning to some degree. They haven’t all been hooked up yet and put together into a cohesive package, but they’re all there in different areas, and we need to be paying attention to this. Our hackles need to be raised just slightly at this point to see what this can do, because I think that is where we are headed with integrating these tools into our daily practice.

Rebecca: AI generally has raised questions about intellectual property rights. If our learning management systems are using our content in ways that we aren’t expecting, how does that violate our rights, or the rights that the institution has over the content that’s already there?

Marc: For a lot of the people I speak with, their course content and syllabi are, from their perspective, their own intellectual property in some ways. We get debates about that, about whether the university actually owns some of the material, but there have been instances in the past where lectures were copyrighted. And if you’re allowing the system to scan your lecture, you are exposing it to generative AI. That gets at one aspect of this. The other aspect, which I think Rebecca is referring to, is that the material used to train these large language models may itself have been stolen or not properly sourced from the internet, and you’re using it while trying to teach your students [LAUGHTER] to cite material correctly, so it’s just a gigantic conundrum of legal and ethical challenges. The one silver lining in all this, and this has been true across the board with everyone in my department: it has been wonderful material to talk about with students. They are actively engaged with it, they want to know about this, they want to talk about it. They are shocked and surprised by all the depths that have gone into the training of these models and the different ethical situations with the data. So if you want to engage your students, just talking to them about AI is a great first step in developing their AI literacy. And it doesn’t matter what you’re teaching; it could be a history course, it could be a course in biology, this tool will have an impact in some way, shape, or form on your students’ lives, and they want to talk about it. Something else worth mentioning is that there are a lot of tools outside of ChatGPT, and a lot of different interfaces as well. I don’t know if I talked about this before in the spring, but one set of tools that has really been effective for a lot of students is the reading assistants; the one we’ve been employing is called ExplainPaper. 
They upload a PDF to it, it calls upon generative AI to scan the paper, and you can actually select whatever reading level you want and have it translate the paper into your reading level. The one problem is that students don’t realize they might be giving up some close reading and critical reading skills to it, just as we do with any sort of relationship with generative AI; there is that handoff and offloading of thinking. But for the most part, they have loved it, and it’s helped them engage with some really critical texts that normally would not be at their reading level and that I would usually not assign to certain students. So those are helpful. There are plenty of new tools coming out, too. One of them, to be precise, is Claude 2, by Anthropic. It just came out, I think, in July for public release; it is about as powerful as GPT-4, and it is free right now if you want to sign up for it. The reason I mention Claude is that the context window… what you can actually upload to it… is so much bigger than ChatGPT’s. I believe its context window is 75,000 words, so you can actually upload four or five documents at a time and synthesize those documents. One of my use cases: I collected tons of reflections from my students this past year about the use of AI. It’s all in a messy Word document, 51 pages single spaced, all anonymized so there’s no data that identifies them. But it’s such a suck on my time just to go through and code those reflections. So I’ve been uploading them to Claude and having it run a sentiment analysis to point out which reflections from these students are positive, and in what way, and it does it within a few seconds. It’s amazing.
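Marc’s workflow, pasting a 51-page document into a model with a large context window, generalizes: if the corpus is bigger than the window, you batch it. A hypothetical Python sketch (the 75,000-word budget is the figure Marc cites for Claude; the helper function and the prompt wording are our own illustration, not part of any Anthropic API):

```python
# Split a list of student reflections into batches that each fit
# within a model's context window, approximated here by a word budget.
def batch_reflections(reflections, word_budget=75_000):
    batches, current, count = [], [], 0
    for text in reflections:
        words = len(text.split())
        # Start a new batch if adding this reflection would exceed the budget.
        if current and count + words > word_budget:
            batches.append(current)
            current, count = [], 0
        current.append(text)
        count += words
    if current:
        batches.append(current)
    return batches

# Each batch would then be sent to the model with a prompt such as:
# "Classify each reflection below as positive, negative, or mixed,
#  and quote the sentence that justifies each label."
```

With Claude 2’s window, Marc’s 51 single-spaced pages fit in one batch; a multi-semester corpus would not, which is where a helper like this earns its keep.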

John: One other nice thing about Claude is that it has a training database that ends in early 2023, so it has much more current information. In some ways that’s a little concerning for those faculty who were trying to ask more recent questions, particularly in online asynchronous courses, so that ChatGPT could not address them. With Claude’s expanded training database, that’s no longer quite the case.

Marc: That’s absolutely correct. And to add to this earlier discussion about AI detection: none of the AI detectors that I’m aware of have had time to actually train on Claude. So if you generate an essay with Claude… and you’re free to try this on your own, your listeners too… and you upload it to one of the AI detectors, very likely you’re going to get zero detection, or a very low detection rate, because it’s, again, a different system. It’s new, and the existing AI detectors haven’t had time to catch up. So the way to translate this is: don’t tell your students about it right now, or, in this case, be very careful about how you introduce this technology to your students, which we should do anyway. But this is one of those tools that is massively capable that a lot of people just haven’t known about, because, again, ChatGPT just takes up all the oxygen in the room when we talk about generative AI.

John: What are some activities where we can have students productively use AI to assist their learning or as part of their educational process?

Marc: That’s a great question. We actually started developing very specific activities to address different pain points in writing classes. One of them was getting students to integrate the technology directly, so we built a very careful assignment which called for very specific moves, both in terms of their writing and their integration of the technology. We also looked at bringing in research questions, building assignments that way. Right now I have assignments from my Digital Media Studies students about how they can use it to create infographics. Using the paid version, ChatGPT Plus, they have access to plugins, and those plugins give them access to Canva and Wikipedia. So they can use Canva to create full-on presentations based on their own natural language, and use actual real sources, by using those two plugins in conjunction with each other. I then make them go through it, edit it in their own words and their own language, and reflect on what this has done to their process. So, lots of different examples; it really is limited only by your imagination at this point, which is exciting, but it’s also kind of the problem we’re dealing with: there’s so much to think about.

Rebecca: From your experience in training faculty, what are some getting started moves that faculty can take to get familiar enough to take this step of integrating AI by the spring?

Marc: Well, one thing they could do is take one of a few really fast courses. I think Ethan Mollick from the Wharton School of Business put out a very effective training course, all through YouTube; it’s four or five videos, very simple to take, to get used to how ChatGPT works, how Microsoft’s Bing works as well, and what sort of activities students and faculty could use them for. Microsoft has also put out a very fast course, I think it takes about 53 minutes to complete, on using generative AI technologies in education. Those are all very fast ways of basically coming up to speed with the actual technology.

John: And Coursera has a MOOC through Vanderbilt University, on Prompt Engineering for ChatGPT, which can also help familiarize faculty with the capabilities of at least ChatGPT. We’ll include links to these in the show notes.

Marc: I really, really hope Microsoft, Google, and the rest of them calm down, because this has gotten a little bit out of control. The integration of these tools often comes without use cases; the companies are often waiting to see how we’re going to use them, and that is concerning. Google has announced that they are committed to releasing their own model, in competition with GPT-4… I think it’s called Gemini… by late November. So it looks like they’re just going to keep heating up this arms race, and you get bigger, more capable models, and I think we do need to ask ourselves more broadly what our capacity is just to keep up with this. My capacity is about negative zero at this point… and going down further.

John: Yeah, we’re seeing new AI tools coming out almost every week or so now in one form or another. And it is getting difficult to keep up with. I believe Apple is also planning to release an AI product.

Marc: They are. They also have a car they’re planning to release, which is the weirdest thing in the world to me… you could have your iPhone charging in your Apple Car.

John: GM has announced that they are not going to support either Android Auto or Apple CarPlay in their electric vehicles, so perhaps this is Apple’s way of getting back at them for that. And we always end with the question: what [LAUGHTER] is next? …which is perhaps a little redundant here, but we do always end with that.

Marc: Yeah, I think what’s next is trying to critically engage the technology and explore it not out of fear, but out of a sense of wonder. I hope we can continue to do that. I do think we are seeing a lot of people starting to dig in. And they’re digging in real deep. So I’m trying to be as empathetic as I can be for those that don’t want to deal with the technology. But it is here and you are going to have to sit down and spend some time with it for sure.

John: One thing I’ve noticed in working with faculty is that they’re very concerned about the impact of AI tools on their students and student work, but they’re really excited about all the possibilities it opens up for them in terms of simplifying their own workflows. So that, I think, is a positive sign.

Rebecca: They could channel that to help them understand how to work with students.

Marc: I hope they find that there’s a positive pathway forward with that, too.

John: Well, thank you. It’s great talking to you and you’ve given us lots more to think about.

Marc: Thank you guys so much.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

Ganesh: Editing assistance by Ganesh.

[MUSIC]

309. Preparing Students for an AI Future

New technology is often seen as a threat to learning when first introduced in an educational setting. In this episode, Michelle Miller joins us to examine the question of when to stick with tools and methods that are familiar and when to investigate the possibilities of the future.

Michelle is a Professor of Psychological Sciences and President’s Distinguished Teaching Fellow at Northern Arizona University.  She is the author of Minds Online: Teaching Effectively with Technology and Remembering and Forgetting in the Age of Technology: Teaching, Learning, and the Science of Memory in a Wired World. Michelle is also a frequent contributor of articles on teaching and learning in higher education to publications such as The Chronicle of Higher Education.

Show Notes

Transcript

John: New technology is often seen as a threat to learning when first introduced in an educational setting. In this episode, we examine the question of when to stick with tools and methods that are familiar and when to investigate the possibilities of the future.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

John: Our guest today is Michelle Miller. Michelle is a Professor of Psychological Sciences and President’s Distinguished Teaching Fellow at Northern Arizona University. She is the author of Minds Online: Teaching Effectively with Technology and Remembering and Forgetting in the Age of Technology: Teaching, Learning, and the Science of Memory in a Wired World. Michelle is also a frequent contributor of articles on teaching and learning in higher education to publications such as The Chronicle of Higher Education. Welcome back, Michelle.

Michelle: Hey, it’s great to be here.

Rebecca: Today’s teas are: ….Michelle, are you drinking tea?

Michelle: I’m actually still sticking with water. So it’s a healthy start so far for the day.

Rebecca: Sounds like a good plan.

John: I have ginger peach black tea today.

Rebecca: And I’ve got some Awake tea. We’re all starting the day [LAUGHTER].

John: So we’ve invited you here to discuss your August 17th Chronicle article on adapting to ChatGPT. You began that article by talking about your experience teaching a research methods course for the first time. Could you share that story? Because I think it’s a nice entree into this.

Michelle: Oh, thank you. I’m glad you agree. You never know when you’re sharing these kinds of personal experiences. But I will say this was triggered by my initial dawning awareness of the recent advances in AI tools, which we’re all talking about now. So initially, like probably a lot of people, I thought, well okay, it’s the latest thing and I don’t know how kind of attentive or concerned I should be about this. And as somebody who does write a lot about technology and education, I have a pretty high bar set for saying, “Oh wow, we actually kind of need to drop everything and look at this,” I’ve heard a lot of like, “Oh, this will change everything.” I know we all have. But as I started to get familiar with it, I thought “Oh my goodness, this really is a change” and it brought back that experience, which was from my very first assignment teaching the Research Methods in Psychology course at a, well, I’ll just say it was a small liberal arts institution, not my graduate institution. So I’m at this new place with this new group of students, very high expectations, and the research methods course… I think all disciplines have a course kind of like this, where we kind of go from, “Oh, we’re consuming and discussing research or scholarship in this area” to “Okay, how are we going to produce this and getting those skills.” So it is challenging, and one of the big challenges was and still is, in different forms, the statistical analysis. So you can’t really design a study and carry it out in psychological sciences without a working knowledge of what numbers are we going to be collecting, what kind of data (and it usually is quantitative data), and what’s our plan? What are we going to do with it once we have it, and getting all that statistical output for the first time and interpreting it, that is a big deal for psychology majors, it always is. So students are coming, probably pretty anxious, to this new class with a teacher they haven’t met before. 
This is my first time out as the instructor of record. And I prepared and prepared and prepared as we do. And one of the things that I worked on was, at the time, our methodology for analyzing quantitative data. We would use a statistics package and you had to feed it command line style input, it was basically like writing small programs to then hand over to the package. And you would have to define the data, you’d have to say, “Okay, here’s what’s in every column and every field of this file,” and there was a lot to it. And I was excited. Here’s all this knowledge I’m going to share with you. I had to work for years to figure out all my tricks of the trade for how to make these programs actually run. And so I’ve got my stack of overheads. I come in, and I have one of those flashbulb memories. I walked into the lab where we were going to be running the analysis portion, and I look over the students’ shoulders, and many of them have opened up and are starting to mess around with and play around with the newest version of this statistics package. And instead of these [LAUGHTER] screens with some commands, what am I looking at? I’m looking at spreadsheets [LAUGHTER]. So the data is going into these predefined boxes. There’s this big, pretty colorful interface with drop down menus… All the commands that I had to memorize [LAUGHTER], you can point and click, and I’m just looking at this and going, “Oh no, what do I do?” And part of my idea for this article was kind of going back and taking apart what that was like and where those reactions were coming from. And as I kind of put in a very condensed form in the article, I think it really was one part just purely sort of anxiety and maybe a little bit of loss and saying, “But I was going to share with you how to do these skills…” partly that “Oh no, what do I do now?” I’m a new instructor. 
I have to draft all this stuff, and then partly, yeah, curiosity and saying, “Well, wait a minute, is this really going to do the same thing as how I was generating these commands and I know you’re still going to need that critical thinking and the top level knowledge of “Okay, which menu item do you want?” Is this going to be more trouble than it’s worth? Are students going to be running all the wrong analyses because it’s just so easy to do, and it’s going to go away.” So all of that complex mix is, of course, not identical to, but I think pretty similar to how I felt… maybe how a lot of folks are feeling… about what is the role of this going to be in my teaching and in my field, and in scholarship in general going forward?

Rebecca: So in your article, you talk a lot about experimenting with AI tools as a way to get started in thinking about how AI relates to your discipline. And we certainly have had lots of conversations with faculty about just getting in there and trying tools like ChatGPT out, to become more familiar with how they work and how they might be integrated into a workflow. Can you share a little bit about how you’d recommend faculty [LAUGHTER] jump in, experiment, and just get started in this space?

Michelle: Well, I think perhaps it also can start with a little bit of that reflection, and I think probably your listenership has a lot of very reflective faculty and instructors. And I think that’s the great first step of “Alright now, if I’m feeling worried, or I’m feeling a very negative reaction, where’s that coming from and why?” But then, of course, when you get in and actually start using it, the way that I had to start using my statistics package in a brand new way, then you do start to see, “Okay, well, what’s great, what’s concerning and not great, and what am I going to do with this in the future?” So experiment with the AI tools, and do so from a really specific perspective. When I started experimenting at first, I think I thrashed around and wasted some time and energy initially, looking at some things that were not really education focused. Something that’s aimed at people who are, say, social media managers, and how this will affect their lives, is very different from me as a faculty member. So make sure you narrow it down and you’re a little planful about what you look at, what resources you’re going to tap into, and so on. And so that’s a good starting point. Now, here’s what I also noticed about my initial learning curve with this. I decided to go with ChatGPT, myself, as the tool I wanted to get the most in depth with. So I did that, and I noticed that, of course, like with any sort of transfer-of-learning situation, and so many of those things we do with our students, I was falling back into a kind of old pattern. My first impulse was really funny: it was just to ask it questions, because I think now that we’ve had several decades of Google under our belts, and other kinds of search engines, we get into these AI tools and we treat them like search engines, which for many reasons they really are not. Now, this is not bad; you can certainly get some interesting answers. 
But I think it’s good to really have at the front of your mind to transition from simply asking questions to what these tools really shine at, which is following directions. One of the best little heuristics I’ve seen out there, just very general advice, is: role, goal, and instructions. So instead of coming in and saying “what is” or “find” or something like that, tell it what perspective it’s coming from. Is it acting as an expert? Is it acting as an editor? Is it going to role play the position of a college student? Tell it what you’re trying to accomplish, and then give it some instructions for what you want it to do. That’s a big step that you can get to pretty quickly once you are experimenting, and that’s, I think, really important to do. So we have that. And of course, we also want to keep in mind that one of the big distinguishing factors is that these tools have memory: your session is going to unfold in a particular and unique way depending not just on the prompts you give it, but on what you’ve already asked it before. So, once you’ve got those two things, you can start experimenting with it. And I do think coming at it from very specific perspectives is important, as I mentioned, because there’s very little super-general, discipline-independent advice that I think is really going to be useful to you. And so doing that, I think a lot of us start in a sort of low-stakes, tentative way with other interests we might have. So for example, one of the first things that I did to test it out myself was I had it work out a tedious little problem in knitting. I had a knitting pattern, and there’s a particular little counting algorithm for where to put increases in your pattern that always trips us up. And I was about to go, “Oh, I gotta go look this up,” and then I thought, “You know what, I’m gonna see if ChatGPT can do this.” And it did that really well. 
And by doing that in an area where I kind of knew what to expect, I could also push its parameters a little bit, make sure: is this plausible? Is what it’s given me… [LAUGHTER] does that map onto reality? And I can fact-check it a little bit better as I go along. So those are some things that I think that we can do, for those who really are starting from scratch or close to it right now.
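The role, goal, and instructions heuristic Michelle describes can be sketched as a small prompt template. This is an illustrative sketch, not code from the episode; the function name and the sample wording are assumptions:

```python
def build_prompt(role: str, goal: str, instructions: str) -> str:
    """Assemble a role/goal/instructions prompt for a chat-based AI tool.

    Follows the heuristic discussed in the episode: state who the model
    should act as, what you are trying to accomplish, and the concrete
    steps you want it to take.
    """
    return (
        f"Act as {role}.\n"
        f"My goal: {goal}\n"
        f"Instructions: {instructions}"
    )

# Example: the kind of prompt a faculty member might paste into ChatGPT.
prompt = build_prompt(
    role="an experienced college writing tutor",
    goal="help me turn a long overview into concise bullet points",
    instructions="Take the text I paste next and rewrite it as 5-7 bullet points.",
)
print(prompt)
```

The point of wrapping this in a function is simply that good prompts are worth saving and reusing, a theme that comes up again later in the conversation.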

John: You’re suggesting that faculty should think about how AI tools such as this… and there’s a growing number of them, it seems more are coming out almost every week…, how they might be useful in your disciplines and in the types of things you’re preparing students for, because as you suggested it’s very different in different contexts. It might be very different if you’re teaching students to write than if you’re teaching them psychology or economics or math. And so it’s always tempting to prepare students for the way we were prepared for the world that we were entering into in our disciplines. And as you suggest in the article, we really need to prepare students for the world that they’re going to be entering. Should people be thinking about how it’s likely that students will be using these tools in the future and then helping prepare them for that world?

Michelle: Yeah, that’s a really good way to start getting our arms around this. In kind of the thinking that I’ve been doing and going through this over the last couple of months, that just absolutely keeps coming up as a recurring thing: this is so big, complicated, and overwhelming, and means very different things for different people in different fields. Being able to kind of divide and break down that problem is so important. So, yeah, I do think that. And, for example, one of the very basic things that I’ve made some baby steps towards using myself is that ChatGPT is really good at kind of reformulating content that you give it, expanding or condensing it in particular. The other day, for example, I was really kind of working to shape a writing piece, and I had sort of a longer overview and I needed to go back and kind of take it back down to basics and give myself some ideas as a writer. So I was not having it write any prose for me. But I said, “Okay, take what I wrote and turn it into bullet points” and it did a great job at that. I had a request recently from somebody who was looking at some workshop content I had and said, “Oh, we really want to add on some questions where people can test their own understanding.” And you know, as the big retrieval practice [LAUGHTER] advocate and fan of all time, I’m like, “Oh, well, that’s a great idea. Oh, my goodness, and I’m gonna have to write this, I’m on a deadline.” And here too, I got not a perfectly configured set of questions, but a really good starting point. So I was able to really quickly dump in some text and some content and say, “Write this many multiple choice and true/false questions.” And it did that really, really well. So those are two very elementary examples of some things that we can get in the habit of doing as faculty and as people who work with information and knowledge in general.

Rebecca: I’ve used ChatGPT quite often to get started on things too, and generate design prompts, all kinds of things, and have it revise and add things and really get me to think through some things, and then I kind of do my own thing. But I use that as a good starting point to not have a blank page.

Michelle: Absolutely. Yeah, the blank page issue. And I think where we will need to develop our own practice is to say, “Okay, make sure we don’t conflate or accidentally commingle our work with ChatGPT’s, as we figure out what those acceptable parameters are.” But that reminds me too, I mean, we all have the arenas where we shine and the arenas where we have difficulty as, again, as faculty, as working professionals. I know graphic design is your background. I’m terrible. I’m great at words, but it reminds me, one of the things that I kind of made myself go and experiment with was creating a graphic, just for my online course that’s running right now, which, for me, would typically be a kind of an ordeal of searching and trying to find something that was legitimate to use and a lot of clipart, and I had it generate something. Now, I do not advise putting in like “exciting psychology image in the style of Salvador Dali,” [LAUGHTER] and seeing what comes out. He was not the right choice. It was quite terrifying. But after a lot of trial and error, I found something that was serviceable, and there too, it’s not like I need to develop those skills. If I did, I would go about that very, very differently. But it’s something that I need in the course of my work, but it’s a little outside of my real realm of expertise. So helpful there too. So yeah, the blank page… I think you really hit on something there.

John: Now did you use DALL-E or Midjourney or one of the other AI design tools to generate that image?

Michelle: Oh my goodness. Well, here again, [LAUGHTER] how far I was out of the proverbial comfort zone is really going to show. I did use DALL-E and I really wrestled with it for a couple of reasons. And so, as a non-graphic person, it did not come easily to me. Midjourney as well: if you’re not a Discord user, you’re really kind of fighting to figure out that interface at the same time, and those that are familiar with the cognitive load concept will recognize that feeling of [LAUGHTER] “I’m trying to focus on this project, but all this other stuff is happening.” And then I had a good friend who’s a computer engineer and designs stained glass as a hobbyist [LAUGHTER] who kind of took my hand and said, “Okay, here’s some things you can do.” It actually came up with something a lot prettier, I have to say.

John: You had just mentioned two ways in which faculty could use this: to summarize their work or to generate some questions. Not all faculty rely on retrieval practice in an optimal manner. Might this be something that students can use to fill in the gaps when they’re not getting enough retrieval practice or when they’re assigned more complex readings than they’re able to handle?

Michelle: Yeah, having the expertise is part of it, and I think we’re going to see a lot of developing understanding of that really cool tradeoff and handoff between our expertise and what the machine can do. I’m kicking around this idea as well, so I’m glad you brought that up. A nice side effect could be a new era for retrieval practice, since something of a limiting factor is getting quality prompts and questions for yourself. It’s funny, I’m taking a little prompt engineering course right now to try to build some of these skills and the facility with it. And one of the things they assigned was a big dense article [LAUGHTER] on prompt engineering, which was really great, but a little out of my field, and so I’m kind of going “Well, did I get that?” And then I thought, I better take my own medicine here and say, “Well, what’s the best way to ensure that you do, and to find out if you don’t have a good grasp of what you were assigned?” And I was able to give it the content. I gave it, again, a role, a goal, and some instructions and said, “Act as a tutor or a college professor, take this article, and give me five questions to test my knowledge.” And then I told it to evaluate my answers [LAUGHTER] and see whether they were correct. So that was about as meta as you can get, I think, in this area right now. So I’ve done it. And here again, it does a pretty good job, actually an excellent job. Do you want to use it for something super high stakes? Probably not, especially without taking that expert eye to it. But wow, here’s something, here’s content that was challenging to me personally. It did not come with built-in retrieval practice, or a live tutor to help me out with it. I read it, and I’m kind of going, “I don’t know, I don’t have a really confident feeling.” So I was able to run through that.
And so yeah, that could be one of the initial steps that we suggest to students as a potentially helpful and not terribly risky way of using these really powerful new tools.
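The tutor-style retrieval-practice prompt Michelle walks through maps naturally onto the role/content message format most chat APIs use. A minimal sketch, assuming a generic chat-style API; the function name and the prompt wording are illustrative guesses, not a transcript of her actual prompt:

```python
def quiz_messages(article_text: str, n_questions: int = 5) -> list[dict]:
    """Build a chat-style message list asking an AI tool to quiz the
    reader on a piece of content, following the role/goal/instructions
    pattern from the episode. Uses the common {"role": ..., "content": ...}
    chat-API message convention.
    """
    system = (
        "Act as a tutor or a college professor. "
        "Your goal is to help a reader check their understanding of an article."
    )
    user = (
        f"Here is the article:\n{article_text}\n\n"
        f"Give me {n_questions} questions to test my knowledge. "
        "After I answer, evaluate my answers and tell me whether they are correct."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Example: quiz yourself on a dense reading you are unsure about.
messages = quiz_messages("A dense article on prompt engineering...", n_questions=5)
```

The returned list would then be sent to whatever chat API you use; because these tools keep session memory, the follow-up answers you type become part of the same conversation, which is what makes the "evaluate my answers" step work.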

Rebecca: One of the things that this conversation is reminding me of, and some of the others that we’ve had about ChatGPT, is that we talk a little bit about how students might use it in an assignment or something, or how we might coach a student to use it. But we don’t often talk a lot about ways that students might just come to a tool like this, and how they’re just going to use it on their own without us having any [LAUGHTER] impact. I think often we jump to conclusions that they’re gonna have a tool write a paper or whatever. What are some other ways that we can imagine or roleplay or experiment in the role of a student, to see how a tool like this might impact our learning?

Michelle: So that is another kind of neat running theme that does come up, I think, with these AI tools: role playing. I mean, this is what it’s essentially doing. And so having us roleplay the position of a student, or having it evaluate our materials from the perspective of a student, I think, could be useful. But it kind of reminds me, let’s not have a total illusion of control over this. I think, as faculty, we have a very individualistic approach to our work. And I think that’s fine. But yeah, there’s a lot happening outside of the classroom that we should always have in mind. So just like with me on that hyper-planned first course that I was going to be teaching, it just happened, and students were already out there experimenting with “Oh, here’s how I can complete this basic statistics assignment with the assistance of this tool I’m going to teach myself.” So that could be going on, almost certainly is going on, out there in the world of students. And it’s another time to do something which I know I have to remind myself to do, which is ask students and really talk to them about it. Early on, I think there was a little bit of like, “Oh, this is a sort of a taboo or a secret and I can’t talk to my professors about it,” and professors didn’t want to broach it with their students, because we don’t want to give anybody ideas or suggest some things are okay where they’re not. But I think we’re at a good point to just kind of level with our students and ask them “How do you think we could bring this in?” I think next semester, I’m going to run maybe an extra credit assignment and say, “Oh, okay, we’re gonna have a contest, you get bragging rights, and maybe a few points, for “What is a good creative use of this tool in a way that relates to this class?
Or can you create something, kind of a creative product or some kind of a demonstration that in some way ties to the class?” And I’ve learned through experience when I’m stumped, and I don’t quite know where to go with a tool or a technique or a problem, take it to the students and see what they can do with it.

Rebecca: I can see this is a real opportunity to just ask the students how they are using it, and then take a look at the results that it’s creating. And then this is where we can provide some information about how expertise in a field [LAUGHTER] could actually make that result better, and whether that result is what they think it is.

Michelle: Absolutely, and some of the best suggestions that I’ve seen out there… I’m kind of eagerly consuming suggestions across a lot of disciplines, as much as I can. The most intriguing ones I’ve seen are kind of things with a media literacy and critical thinking flair, that tell students “Okay, here’s something to elicit from your AI tool that you’re using, and then we, from our human and expert perspectives, are going to critique that and see how we could improve it.” So here too, critical thinking and those kinds of evaluation skills and abilities are some of the most prized things we want students to be getting in higher education. And simultaneously, for many different reasons, they are some of the hardest. So if we can bring that to bear on the problem, I think that can be a big benefit.

John: In the article, you suggested that faculty should consider introducing some AI based activities in their classes. Could you talk a little bit about some that you might be considering or that you might recommend to people?

Michelle: One of the things that I am going to be teaching, actually for the first time in a very long time, is a writing in psychology course, which has the added challenge of being fully online asynchronous, so that’s going to be coming up pretty soon for me. It’s still under construction, as I’m sure a lot of our activities are, and a lot of the things that we’re thinking about in this very fluid and rapidly developing area. I think things like outlining, things like having ChatGPT suggest improvements, and finding ways for students to also kind of track their workflow with that. I do think that, in our different professional [LAUGHTER] lives, as I mentioned in the article, the work we are doing as faculty and as scholars in our particular areas should really lead the way. One of the things we’re going to have to be looking at is: alright, how do I manage any output that I got from this, knowing what belongs to it and what was generated by me? What have I already asked it? If there are particularly good prompts, how do I save those so I can reuse them? …another really good thing about interacting with the tools. But I’m kind of playing around with some different ideas about having students generate maybe structures or suggestions that they can work off of themselves, and having ChatGPT give them some feedback on what they’ve developed so far. So one of the things you can ask it to do is critique what you tell it, so [LAUGHTER] you can say, “Okay, improve on this.” And then you can repeat, you can keep iterating on that, and you can keep fine-tuning in different areas. You can also have it improve on its own work. So once it makes a suggestion you can… I mean, it’s virtually infinite what you can tell it to go back and do: to refocus, expand, condense, add and delete, and so on. So that’s kind of what I am shaping right here.
I think too, at the Introduction to Psychology level, which is the other level that I frequently teach within, I’m not incorporating it quite yet. But I think having students have the opportunity or option to create a dialogue, an example, maybe even a short play or skit that it can produce to illustrate some concepts from the book… and there ChatGPT is going to be filling in kind of all the specifics, the student won’t be doing it, but it’ll be up to them to say, “Well, what really stood out to me in this big, vast [LAUGHTER] landscape of introductory material that I think would be so cool to communicate to another person in a creative way?” And this can help out with that. I’m also going to be teaching my teaching practicum for graduate students coming up as well. And, of course, I want to incorporate just kind of the latest state of the art information about it. But also, and I haven’t tried it myself yet, supposedly it’s pretty good at structuring lesson plans. We don’t do formal lesson plans the way they’re done in K through 12 education, of course, but to give it the basics of an idea and then have a plan that you’re going to take into a course, since one of the things they do in that course is produce plans for courses. And I gotta say, the formatting, and exactly how that’s all going to be laid out on the page, is not a critical skill; it’s not what they’re in the class to do. It’s to really develop their own teaching philosophy, knowledge, and the ability to put those into practice in a classroom. So if it can be an aid to that, great. And I also want them to know what the capabilities are, if they haven’t experimented with them yet, so they can be very aware of that going into the first classes that they teach.
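The iterative “Okay, improve on this” workflow Michelle describes earlier in this answer can be sketched as a simple loop around any chat model. This is a hedged sketch: `ask_model` is a hypothetical stand-in for a real chat-API call, and the toy lambda below exists only so the loop can be demonstrated without network access:

```python
def refine(draft: str, ask_model, rounds: int = 3) -> str:
    """Repeatedly ask a model to improve its own output, as described
    in the episode: start from a draft, request an improvement, and
    feed each result back in. ask_model(prompt) -> str is a
    hypothetical stand-in for a real chat-API call.
    """
    text = draft
    for _ in range(rounds):
        text = ask_model(f"Improve on this:\n{text}")
    return text

# Toy stand-in model: it echoes the draft (the last line of the prompt)
# with a revision marker appended, so each round is visible.
result = refine("first draft", lambda p: p.splitlines()[-1] + "+", rounds=3)
```

In practice you would also vary the instruction each round (refocus, expand, condense, add, delete), which is what makes the iteration useful rather than mechanical.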

Rebecca: When you mentioned the example of a writing intensive class that’s fully asynchronous online, I immediately thought of all of the concerns [LAUGHTER], and barriers that faculty are really struggling with in really highly writing intensive spaces, and then fully online environments, especially around things like academic integrity. Can you talk a little bit about [LAUGHTER] some of the things that you’re thinking about as you’re working through how you’re gonna handle AI in that context?

Michelle: As I’ve been talking with other faculty right now, one of the things that I really settled on is the importance of keeping these kinds of threads of the conversation separate, and so I’m really glad we’re kind of piecing that out from everything [LAUGHTER] else. Because once again, it’s just too much to say: on the one hand, how do I prepare students and give them skills they might need in the future? How do I use it to enhance learning? And oh my gosh, is everybody just going to have AI complete their assignments? It’s kind of too much at once. But once we do piece that out… as you might pick up on, I’m a little enthusiastic about some of the potential, but that does not mean I don’t think this is a pretty important concern. So I think we’re gonna see a lot of claims about “Oh, we’re going to AI-proof assignments,” and I think probably many of your listeners have already run across AI detection tools and the severe problems with those right now. So I think we have to just say right now, for practical purposes, no, you cannot really reliably detect AI-written material. I think that if you’re teaching online especially, I think we should all just say flat out that AI can take your exams. If you have really conventional exams, as I did before [LAUGHTER] this semester in some of my online courses, if you’ve got those, it can take those. And just to kind of drive home to folks, this is not just simple pattern matching, looking up your particular question that floated out into a database; no, it’s processing what you’re putting in. And it’s probably going to do pretty well at that. So for me, I’m kind of thinking about a lot of these, in my own mind, more as speed bumps. I can put speed bumps in the road, and to know what speed bumps are going to at least discourage students from just dumping the class work into ChatGPT.
To know what’s effective, it really helps to go in and know what it does well and what it really stumbles on; that will give you some hints about how to make it less attractive. And that’s kind of what I’m settling on right now myself, and what I’ve shared with students, as I’ve spoken with them really candidly, to say I’m not trying to police or catch people, I am not under an illusion that I can just AI-proof everything. I want to remove obvious temptation. I want to make it so a student who otherwise is inclined to do the right thing, wants to have integrity and wants to learn, doesn’t go in feeling like, “Oh, I’m at a disadvantage if I don’t just do this, it’s sitting right there.” So creating those nudges away from it, I think, is important. And yeah, I took the step of taking out conventional exams from the online class I’m teaching right now. And I have been steadily de-emphasizing them more with every single iteration. I think those who are into online course design might agree: well, maybe that was never really a good fit to begin with. That’s something that we developed for these face-to-face environments, and we just kind of transplanted it into that environment. But I sort of ripped off that [LAUGHTER] bandaid and said, “Okay, we’re just not going to do this.” I’ve put more into the other substance of the course, I put in other kinds of interactions. Because if I ask them Psychology 101 basic test questions, even if I write them fresh every time, it can answer those handily, it really can.

John: Recently, someone ran the micro and macro versions of the Test of Understanding in College Economics through ChatGPT. And I remember on the macro version ChatGPT-4 scored at the 99th percentile on this multiple choice quiz, which basically is the type of thing that people would be putting in their regular tests. So it’s going to be a challenge, because many of the things we use to assess students’ learning can all be completed by ChatGPT. What types of activities are you thinking of using in that online class that will let you assess student learning without assessing ChatGPT’s or other AI tools’ ability to represent learning?

Michelle: Well, I’ll share one that’s pretty simple, but I was doing anyway for other reasons. So just to take one very simple example of something that we do in that class: I really got a big kick out of Kahoot!, especially during the heyday of fully hybrid teaching, where we were charged as faculty, I know at my institution, with having a class that can run synchronously with in-person and remote students at the same time, and run [LAUGHTER] asynchronously for students who need to do their work at a different time. And that was a lot, and Kahoot! was a really good solution to that. It’s got a very K through 12 flavor to it, but most students just really take a shine to it anyway. And it is familiar to many of them from high school or previous classes right now. So it’s a quiz game; it runs a timed gamified quiz. So students are answering test questions in these Kahoot!s that I set up. And because it has that flexibility, they have the option to play the quiz game sort of asynchronously on their own time, or we have those different live sessions that they can drop in on and play against each other and against me. So that’s all great. But here’s the thing: prior to ChatGPT, I said I don’t want to grade this on accuracy, which feels really weird, right, as a faculty member, to say, well, here’s the test and your grade is not based on the points you earn for accuracy. It’s very timed, a little hiccup in the connectivity you have at home can alter your score, and I just didn’t like it. So what students do for their grade is a reflection. So I give the link to the Kahoot!, you play it, and then what you turn in to me is this really informal and hopefully very authentic reflection, say, “Well, how did you do? What surprised you the most?
Were there particular questions that tripped you up?” And also kind of getting them to say, “Well, what are you going to do differently next time?” And for those who are big fans of teaching metacognition, I mean, that comes through loud and clear, I’m sure. So every single module they have this opportunity to come in and say, “Okay, here’s how I’m doing, and here’s what I’m finding challenging in the content.” Is it AI-proof? Absolutely not. No, it really isn’t. But it is, I think, at least at that tipping point where the contortions you’d have to go through to come up with something that is gonna pass the sniff test with me… and I’ve now read thousands of these, so I know what they tend to look like. And Kahoot!s are timed. I mean, could you really quickly transfer them out and type them in? Yes. It’s simply a speed bump. But the time would make that also a real challenge, to kind of toggle back and forth. So I feel good about having that in the class. And so it’s something, again, I’ve been developing for a while; I didn’t just come up with it, fortunately, the minute that ChatGPT really impinged on this class, but it was already in place. And I kind of was able to elevate that and have that be part of it. And so they’re doing that. I do a lot of collaborative annotation; I continue to be really happy with… I use Perusall. I know that’s not the only option there is, but it’s great. They’ve got an open source textbook, and they’re in there commenting and playing off each other in the comments. So that is the kind of engagement I think that we need in force anyway, and it is less of a temptation. And so I feel like that’s probably better than having them try to quickly type out answers to, frankly, pretty generic definitions and so on that we have in that course. Some people are not going to be happy with that, but that’s really truly what I’m doing in that course instead.

John: Might this lead to a bit of a shift to more people using ungrading techniques with those types of reflections as a way of shifting the focus away from grading, which would encourage the use of ChatGPT or other tools to focus on learning, which might discourage it from being used inappropriately?

Michelle: What a fantastic connection. And you know what? When I recently led a discussion with faculty in my own department about this, that is actually something that came up over and over. It’s not ungrading per se, because not everybody is even kind of conversant with that concept, but how there are these trends that have been going on for a while of saying, you know, is a timed multiple choice test really what I need everything to hinge on in this online course? Ungrading… I think there’s this emerging idea, I’ll almost call it both-sides-ism, or collaboration between student and teacher, which I think was also taking root through pandemic teaching, and that came to the forefront with me of just saying, “Okay, we’re not going to just be able to keep running everything the same way traditionally it’s been run,” which sometimes does have that underlying philosophy of, “Okay, I’m going to make you do things and then you owe me this work, and I’m going to judge it and you’re going to try to get the highest points with the least effort.” I mean, that whole dynamic, that is what I think powers this interest in ungrading, which is so exciting, and it’s gonna maybe be pushed ahead by this as well. Ultimately, the reason why you’re going to do these exercises I assign to you is because you want to develop these skills. You are here for a reason, and I am here to help you. So that is, I think, a real positive perspective we can bring to this, and I would love to see those two things wedded together. Especially now that tests can be taken by ChatGPT, we should relook at all of our evaluation and sort of the underlying philosophy that powers it.

John: One of the concerns about ChatGPT is that it sometimes makes mistakes, it sometimes will make stuff up, and it’s also not very good with citations. In many cases, it will just completely fabricate citations, where it will get the names of authors who’ve done research in the field, but will grab other titles or make up other titles for their work. Might that be a way in which we could give students an assignment to use one of these tools to generate a paper or a summary on some topic, but then have them go out and verify the arguments made and look for citations and document it, just as a way of helping prepare them for a world where they have a tool which is really powerful, but is also sometimes going off in strange directions, so that they can develop their critical thinking skills more effectively?

Michelle: Yeah, looping back to that critical thinking idea: could this also be a real way to elevate what we’ve been doing and give us some new options in this really challenging and high value area? And yes, this is another thing that I think faculty hopefully will discover and get a sense of as they experiment themselves. I think probably a lot of us have also experimented with: just ask it about yourself. Ask it, what has Dr. Michelle Miller written? There’s a whole collaborator [LAUGHTER] I have never heard of, and when it goes off the rails, it goes. And it’s one thing to say really kind of super vaguely, “Oh, AI may produce output that can’t be trusted.” And that has that real, like, okay, caution, but not really, feel to it. It’s a whole other thing to actually sit with it and say, alright, have it generate these citations. They sure do look scholarly, don’t they? They really look right. Okay, now go check them out. And say, this came out of pure thin air, didn’t it? Or it was close, but it was way off in some particular way. So as in so many areas, to actually have the opportunity to say, okay, generate it and then look at it, and some of the issues are staring you right there in the face. So I think that we will see a lot of faculty coming up with really dynamic exercises that are finely tuned to their particular area. But yeah, when we talk about writing, all kinds of scholarly writing, and research in general, I think that’s going to be a very rich field for ideas. So I’m looking forward to seeing what students and faculty come up with there.

Rebecca: That’s a nice lead into the way that we always wrap up, Michelle, which is to ask: “what’s next?”

Michelle: Well, gosh, alright. So I’m continuing to write about and disseminate all kinds of exciting research findings. I’ve got my research-based Substack; that’s still going pretty strong. Over the summer, I actually focused it on ChatGPT and AI for a couple of months, but now I’m back to more general topics in psychology, neuroscience, education, and technology. So, articles that pull in at least three out of four of those. I’ve got some other bigger writing projects that are still in the cooker, and so I’ll leave it at that with those. And I’m continuing to really develop what I know about and what I can do with ChatGPT. As I was monitoring this literature, it was really very clear that we are at a very, very early stage of scholarship and applied information that people can actually use. Those are all things that are very much on the horizon for my next couple of months.

Rebecca: Well, thank you so much, Michelle, we always enjoy talking with you. And it’s always good to think through and process this new world with others.

Michelle: Absolutely.

John: It certainly keeps things more interesting and exciting than just doing the same thing in the same way all the time. Well, thank you.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

Ganesh: Editing assistance by Ganesh.

[MUSIC]

305. 80 Ways to Use ChatGPT in the Classroom

Faculty discussions of ChatGPT and other AI tools often focus on how AI might interfere with learning and academic integrity. In this episode, Stan Skrabut joins us to discuss his book that explores how ChatGPT can support student learning.  Stan is the Director of Instructional Technology and Design at Dean College in Franklin, Massachusetts. He is also the author of several books related to teaching and learning. His most recent book is 80 Ways to Use ChatGPT in the Classroom.

Show Notes

Transcript

John: Faculty discussions of ChatGPT and other AI tools often focus on how AI might interfere with learning and academic integrity. In this episode, we discuss a resource that explores how ChatGPT can support student learning.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

John: Our guest today is Stan Skrabut. Stan is the Director of Instructional Technology and Design at Dean College in Franklin, Massachusetts. He is also the author of several books related to teaching and learning. His most recent book is 80 Ways to Use ChatGPT in the Classroom. Welcome, Stan.

Stan: Well, thank you ever so much for having me on. I have been listening to your podcast since the first episode, you guys are crushing it. I recommend it all the time to my faculty. I’m excited to be here.

John: Thank you. And we very much enjoyed your podcast while you were doing it. And I’m hoping that will resume at some point when things settle down.

Rebecca: Yeah, we’re glad to have you here.

Stan: Yeah, thanks.

John: Today’s teas are… Stan, are you drinking any tea?

Stan: A little bit of a story. I went over to the bookstore with the intent of getting tea. They had no tea in stock. I went to the vending machine on the same floor. The vending machine was down. I went to another building. I put in money. It did not give me tea. I’m stuck with Mountain Dew. I’m sorry. [LAUGHTER]

Rebecca: Not for lack of trying. Clearly. [LAUGHTER]

Stan: I tried. I tried.

Rebecca: I have some blue sapphire tea.

John: And I have Lady Grey.

Rebecca: You haven’t drunk that in a while, John.

John: No. [LAUGHTER]

Rebecca: A little caffeine today, huh? [LAUGHTER]

John: Yeah, well, I am back in the office. I’ve returned from Duke and I have more options for tea again.

Rebecca: That’s good. So Stan, we invited you here today to discuss 80 Ways to Use ChatGPT in the Classroom. What inspired you to write the book?

Stan: Well, I’m an instructional technologist, and my responsibility is to help faculty deliver the best courses possible. And in November 2022, ChatGPT came onto the scene, and in December faculty were up in arms: “Oh, my goodness, this is going to be a way that students are going to cheat, and they’ll never learn anything again.” And as an instructional technologist, I see technology as a force multiplier, as a way to help us do better things quicker and easier. And so I didn’t feel threatened by ChatGPT. I’ve been looking at the horizon reports for the last 20 years, and they said, “AI is coming. It’s coming. It’s coming.” Well, it’s here. And so it was just a matter of sitting down in January, writing the book, publishing it, and providing a copy to all the faculty, and we just started having good conversations after that. But the point was that we should not ban it, which was the initial reaction; this is a tool like all the other tools that we bring into the classroom.

Rebecca: Stan, I love how you just sat down in January and just wrote a book as if it was easy peasy and no big deal. [LAUGHTER]

Stan: Well, I will have to be honest that I was using ChatGPT for part of the book. I asked ChatGPT to give me an outline: what would be important for faculty to know about this? So I got a very nice outline. And then it was a matter of creating prompts. I’d write a prompt and then I would get the response back from ChatGPT. It was a lot of back and forth with ChatGPT, and I thought ChatGPT did a wonderful job in moving this forward.

John: Most of the discussion we’ve heard related to ChatGPT is from people who are concerned about the ability to conduct online assessments in the presence of this. But one of the things I really liked about your book is that most of it focuses on productive uses by both faculty and students and classroom uses of ChatGPT because we’re not always hearing that sort of balanced discussion about this. Could you talk a little bit about some of the ways in which faculty could use ChatGPT or other AI tools to support their instruction and to help develop new classes and new curriculum?

Stan: Yeah, absolutely. I guess, first of all, I would like to say that this is not going anywhere. It is going to become more pervasive in our lives. Resume Builder went out and did a survey of a couple thousand new job descriptions that employers were putting out; 90% of them are asking for their employees to have AI experience. In higher education, it’s on us to make sure that the students who are going out there to be employees know how to use this tool. With that said, there has to be a balance. In order to use the tool properly, you have to have foundational knowledge of your discipline. You have to know what you’re talking about in order to create the proper prompt, but also to assess the response. ChatGPT sometimes doesn’t get it right… it’s just how ChatGPT is built: it’s built on probabilities that these word combinations go together. So it’s not pulling full articles that you can go back and verify, kind of like how the human mind works. We have built up knowledge all these years. My memory of what happened when I was three, four, or five years old is a little fuzzy. Who said what? I’m pretty confident what was said, but it’s still a little fuzzy, and I would need to verify that. So I see ChatGPT as an intern; everybody gets an intern now. They do great work at all hours, but you as the supervisor still have to verify that the information is correct. Back to the classroom: students, or anyone using it, should not just hit return on a prompt, rip that off, and hand it in to their supervisor or instructor without verifying it, without making it better, without adding the human element to working with the machine. And that is, I think, where we can do lots of wonderful things in the classroom. From the instructor side: go ahead and use this for your first draft.
Now turn on the review tools that track changes and show me how you made it better as you work towards your final product. Instructors can craft an essay, craft some supposedly accurate information from ChatGPT, throw it in the hands of the students, and say: “Please assess this. Is this right? Where are the fallacies? Where are the biases? Tell me where the gaps are. How can we make this better?” Those are some initial ways to start asking students, or using it in the class. I don’t know if I’m tapping into all the things. There’s just so much that you could do with this thing.

John: And you address many of those things in the book. Among those things that you address was having it generate some assignments, or even at a more basic level, having it develop syllabi, or course outlines and learning objectives and so forth, for when faculty are building courses.

Stan: Oh, absolutely. We have a new dean at our School of Business, and he came over and wanted to know, “Tell me a little bit more about ChatGPT and how we can use this.” They’re looking at creating a new program for the college. And it’s like, “Well, let’s just start right there. What are the courses that you would have for this new program? Provide titles and course descriptions.” Here comes a list of 10 or 12 different courses for that particular program. Okay, let’s take this program: what are the learning outcomes for this particular program? So we just copied and pasted, asked for learning outcomes, and here comes the list of outcomes. Now, for these different outcomes, provide learning objectives. And it starts creating learning objectives. And so you can just continue to drill down. But this moves past the blank page. Normally you’d bring in a group of faculty to work on that program, ask for their ideas, send everybody off, and they would pull ideas together and you would start crafting this. This was done in 30 seconds. And now, okay, here’s the starting point for your faculty. Where are the problems with this? How can we make it better? Now go. Instead of a blank page, starting with nothing. That was one example. But even for your own course, using ChatGPT with a course description, you can ask it to provide a course plan for 16 weeks. What would I address in this? What would be the different activities? Describe those activities. If you want the activities to use transparent assignment design, it’ll craft them in that format. It knows what transparent assignment design is, and it will craft it that way. And then, going back to assessment, you can build content. Looking at OER content, open educational resources, it can give you a jumpstart on that OER content.
What are the gaps I want filled? Or take content that’s there and localize it based on your area: here we are in New England, Massachusetts specifically, and I need an example. Here’s the content that we’re working with. Give me an example, a case study, and it will craft a case study for you. It allows you to go from that zone of drudgery to your zone of genius very rapidly. I’ve been working on a new book, and I got down to the final edits and was like, “Oh, I’m missing conclusions to all these different chapters.” I just fed the whole chapter in and said, “Could you craft me a conclusion to this chapter?” And it just knocked it out. I mean, I could do it, but that’s my zone of drudgery, and I’d rather be doing other things.

Rebecca: It’s interesting that a lot of faculty and chairs and administrators have been engaged in this conversation around ChatGPT quite a bit, but many of them haven’t actually tried ChatGPT. So if you were to sit down with a faculty member who’s never tried it before, what’s the first thing you’d have them do?

Stan: This is an excellent question, because I do it all the time. I have a number of faculty members I’ve sat down with; we look at their courses and I say, “What is the problem that you’re working with? What do you want to do?” And that’s where we start: “What is the problem that you’re trying to fix?” ChatGPT version three was given 45 terabytes of information. They say the human brain holds about 1.25 terabytes. So this is like asking thirty-some people to come sit with you to work on your problem. One class was a sports management class dealing with marketing. They were working with the Kraft organization, which has the Patriots, on specific activities for their students, developing marketing plans and such. We just sat down with ChatGPT and started at a very basic level to see what we could get out of it. And the things we weren’t happy with, we just rephrased; we had it focus on those areas, and it just kept improving what we were doing. But one of the struggles that I hear from faculty all the time, because it’s very time consuming, is creating assessments: creating multiple choice questions, true and false, fill in the blank, all these different things. ChatGPT will do this for you in seconds. You feed in all the content that you want and say, “Please craft 10 questions. Give me 10 more. Give me 10 more. Give me 10 more.” And then you go through, identify the ones you like, and put them into your test bank. It really comes down to the problem that you’re trying to solve.
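The “give me 10 more” loop Stan describes can also be scripted. A minimal Python sketch, where the function name, prompt wording, and batch size are illustrative assumptions rather than anything from the episode; each generated prompt would then be pasted into ChatGPT (or sent through its API), and the instructor keeps the questions they like:

```python
def question_prompts(content: str, total: int, batch: int = 10):
    """Yield prompts that request `total` multiple-choice questions
    in batches of `batch`, all grounded in the same course content."""
    made = 0
    while made < total:
        n = min(batch, total - made)
        yield (
            f"Using only the following course content, write {n} "
            "multiple-choice questions with four options each and mark "
            "the correct answer. Do not repeat earlier questions.\n\n"
            + content
        )
        made += n

# Each prompt is sent to ChatGPT in turn; the responses are reviewed
# by hand before anything goes into the test bank.
prompts = list(question_prompts("Supply and demand shift when...", total=25))
```

The last batch automatically shrinks (here, 25 questions come back as 10 + 10 + 5), so the instructor never has to track the count manually.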

John: And you also note that it can be used to assist with providing students feedback on their writing…

Stan: Absolutely

John: …that you can use it to help generate that. Could you talk a little bit about that?

Stan: We’re right now working with the academic coaches, and this is one of the areas where we sit down together. I’m not only the Director of Instructional Technology and Design; my dotted line is also Director of Library. So I’m trying to help students with their research, and the writing and the research go hand in hand. From the library side, we look at what the students are being assigned, then sit down and start with a couple of key terms or phrases, keywords that we want, and have ChatGPT give us ideas on those different terms. And it’ll provide ten or twenty different exciting ideas to go research. Once again, getting past the blank page. It’s like, “I’ve got to do an assignment. I don’t know what to do.” It could be in economics: “I don’t know what to write about in economics.” Well, pull these two terms together; what does it say about that? So we start at that point. And then, once you have a couple of ideas that you want to work with: what are some keywords that I could use to start searching the databases? And it will provide those. It’ll do other things, too: it’ll draft an outline, it’ll write the thing if you want it to, but we try to take baby steps, getting them to go in and research but pointed in the right direction. On the writing side, for example, I have a class that I’m going to be teaching at the University of Wyoming to grad students, for program development and evaluation, and I’m going to introduce ChatGPT and let them use it to help with this. One of the things that academic writers struggle with is the use of active voice. They’re great at passive; they’ve mastered that. Well, this will take what you’ve written and, if you say “convert this to active voice,” it will rewrite it and work on those issues.
I was working with one grad student, and after playing with ChatGPT a couple of times, she finally figured out what the difference really was and how to overcome that problem, and now she is writing actively, more naturally. But she had struggled with it. With ChatGPT, you can take an essay, push it up into ChatGPT, and say, “How can I make this better?” And it will provide guidance on how you can make it better. You could ask it specifically, “How can I improve the grammar and spelling without changing any of the wording here?” It’ll go and check that. So for our academic coaches, because there’s high volume, this is another tool they could use to say, “Here’s the checklist of things that we’ve identified for you to go work on right away,” not necessarily giving solutions, but giving pointers and guidance on how to move forward. So you can use it at different levels and from different perspectives, not where it does all the work for you, but incrementally: “Here, assess this, and do this.” And it will do that for you.

Rebecca: Your active and passive voice example reminds me of a conversation I had with one of our writing faculty, who was talking about the labor that had previously been involved in making example essays to edit as a way to work on writing skills. She just had ChatGPT write things [LAUGHTER] of different qualities for students to compare and edit as a writing activity in one of her intro classes.

Stan: Absolutely. What I recommend to anyone using ChatGPT is to start collecting your prompts: have a Google document or a Word document, and when you find a great prompt, squirrel it away. In some of the workshops that I’ve been giving on this, I demonstrate high-level prompts that are probably two pages long. You feed this information to ChatGPT, and it spells out everything about the information that you’re going to be collecting, how you want to collect it, how you want it to be output, and what items you’re going to output. You’re basically creating a tool that you can then call up. For example, in developing a course, it will write the course description and give you learning outcomes, recommended readings, activities, and an agenda for 16 weeks, all in one prompt. And all you do is say, “this is the course I want,” and let it go. It’s amazing what we can build with this tool, just like we build spreadsheets; we build these very complex spreadsheets to do these tasks. We can do the same with ChatGPT; we just have to figure out what problems we’re trying to solve.
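Stan’s prompt library can be taken one small step further by storing each saved prompt as a template with slots for the course-specific details. A sketch in Python; the library name, template text, and field names below are invented for illustration:

```python
from string import Template

# A tiny prompt library: each entry is a reusable template with
# placeholders for the details that change from course to course.
PROMPT_LIBRARY = {
    "course_plan": Template(
        "You are an instructional designer. For a $weeks-week course "
        "titled '$title', write a course description, learning outcomes, "
        "recommended readings, and a weekly agenda of activities."
    ),
}

def render(name, **fields):
    """Fill a saved prompt with this course's details."""
    return PROMPT_LIBRARY[name].substitute(**fields)

# All you change per course is the handful of fields.
prompt = render("course_plan", weeks=16, title="Program Evaluation")
```

The rendered string is what gets pasted into ChatGPT; the two-page "high-level prompt" Stan mentions would simply be a longer template with more fields.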

John: Our students come into our classes with very varied prior preparation. In your book, you talk about some ways in which students can use ChatGPT to help fill in some of the gaps in their prior understanding, to allow them to get up to speed more quickly. Could you talk about some ways in which students can use ChatGPT as a personalized tutor?

Stan: I’m going to take you through an example that I think can be applied for students. A student comes to your class. Ideally, they’re taking notes. One of the strategies that I use is to open my notebook and turn on Otter.ai, which is a transcription program. I will go over my notes and basically get a transcription of them. I can then feed that transcription into ChatGPT and say, “Clean it up; make a good set of notes for me.” And it will do that. Then I can build this document, review what we did in class, and have a nice clean set of notes available to me. Over a series of sets of notes, I could do the same thing by reviewing a textbook: highlight and talk through it, transcribe key points of the textbook, or cut and paste. And then I can feed that information into ChatGPT and say, “Build me a study bank” so I can build a Quizlet, for example, or “I need to create some flashcards. What are the key terms and definitions from this content?” Here you go: flashcards from that material. It could be that, no matter how great the instructor is, I still don’t get it. They introduced a term that is just mind-boggling, and I still don’t get it. So I can then ask ChatGPT to explain it at another level. They say that some of the best or most popular non-fiction books, the ones getting on the bestseller lists, are written at a certain grade level. And I know that I typically write above that grade level, so I can ask ChatGPT to rewrite something at a lower grade level. I could, as a student, ask ChatGPT to give an explainer at a level that I do understand. Those are some of the ways you can do this. You can basically build your own study guides, with questions and examples from all the materials; you feed that material in, get something out, and then enhance it.
And I think for faculty, this is also an easy way to create good study guides: you can get the key points and build the study guides a lot more easily. Starting with a blank page and trying to craft one by hand can be very difficult. But if you already have all your material, you feed it in there and say, “Here, let’s build a study guide out of this, with some parameters.” That’s definitely much more useful.
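If you ask ChatGPT to return flashcards in a fixed format, the reply is easy to turn into cards for a tool like Quizlet. The alternating “Q:” / “A:” line format below is an assumption about how you would phrase the request, not something from the episode; a minimal Python sketch of the parsing step:

```python
def parse_flashcards(reply):
    """Parse a reply formatted as alternating 'Q:' / 'A:' lines
    into (question, answer) pairs."""
    cards, question = [], None
    for line in reply.splitlines():
        line = line.strip()
        if line.startswith("Q:"):
            question = line[2:].strip()
        elif line.startswith("A:") and question is not None:
            cards.append((question, line[2:].strip()))
            question = None
    return cards

# A reply in the requested format (invented example content):
reply = (
    "Q: What is GDP?\n"
    "A: The total value of goods and services produced.\n"
    "Q: What is CPI?\n"
    "A: An index of consumer prices."
)
cards = parse_flashcards(reply)
```

The resulting pairs can be exported to CSV for import into most flashcard apps; the student still reviews each card against the source notes, in keeping with Stan’s “verify the intern’s work” rule.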

Rebecca: We’ve talked a lot about how to use ChatGPT as an individual, either as an instructor or as a student. Can you talk a little bit about ways that instructors could use ChatGPT for in class exercises or other activities?

Stan: Absolutely. And I’m sorry, some of these examples other folks actually contributed first, and I saw them and thought they were just brilliant, but I don’t have their names right in front of me, so I apologize ahead of time. But as an instructor, I would invite ChatGPT into the classroom as another student. We call it Chad, Chad GPT, and bring Chad into the classroom. So you could have an exercise in your classroom: ask the students to get into groups, talk about an issue, and then, up on the whiteboard, you start getting their input, you start listing it. And then, once you’re done, you can feed Chad GPT the same prompt, get the list from Chad GPT, and compare it to what you’ve already collected from the students, what their input has been. And from there you can do a comparison: “We talked about that, and that, and that… oh, this is a new one. What do you think about this?” And so you can extend the conversation with what Chad GPT has provided. …and there I go, Chad; I’ll be hooked on that for a while. But you can extend the conversation with this. Or, if students have questions that are coming up in class, you can field them to the rest of the class, get input, and then say, “Okay, let’s also ask Chad; see what Chad has to say about that particular topic.” For those grouping exercises, we typically do the think-pair-share exercise; well, part of that is that each group gets Chad. They have to think about it first, write something down, pair up and discuss it, then add ChatGPT into the mix, talk about it a little bit more, and then share with the rest of the class. There are lots of different ways that you can bring this into the classroom, but I bring it right in as another student.

Rebecca: Think-pair-chat-share. [LAUGHTER]

Stan: Yep. And that’s not mine; somebody was actually clever enough to come up with that. I just happened to glom on to it. But yeah, definitely a great way of using it. It’s a new tool. We’re still figuring out our way, but it’s not going away.

Rebecca: So whenever we introduce new technology into our classes, people are often concerned about assessment of student work using said technologies. So what suggestions do you have to alleviate faculty worry about assessing student work in the age of ChatGPT?

Stan: Well, students have been cheating since the beginning of time. That’s just human nature. Go back to why they’re cheating in the first place: in most cases, they just have too much going on, and it becomes a time issue. They’re finding the quickest way to get things done. So ensuring that assignments are authentic, that they’re real, that they mean something to a student, is certainly very important in building this. The more an assignment is personally tied to the student, the harder it is for ChatGPT to tap into that. ChatGPT is not connected to the internet yet, so requiring current information is always a consideration. But I would go back to transparent assignment design, and the part of transparent assignment design that is often overlooked is the why. Why are we doing this? If you use ChatGPT to do this, this is what you’re not going to get from the assignment. So, when building those assignments, I recommend being very explicit: yes, you can use ChatGPT to work on this assignment, or no, you cannot, but here’s why. Here’s what I’m hoping that you get out of this, why this assignment’s important. Because otherwise, it just doesn’t matter. And then, when I have an employee who simply hits the button and gives me something from ChatGPT, I’m going to ask, “Why do I need you as an employee? Because I could do that. Where’s the human element?” Bring that human element into it: why is this important? What are you shortcutting in your learning if you just rely on the tool and don’t grasp the essence of this particular assignment? But I think it goes back to writing better assignments… at least that’s my two cents on it.

Rebecca: Thankfully, we have ChatGPT for that.

John: For faculty who are concerned about these issues of academic integrity, certainly creating authentic assignments and connecting to individual students and their goals and objectives could be really effective. But it’s not clear that that will work as well when you’re dealing with, say, a large gen-ed class, for example. Are there any other suggestions you might have in getting past this?

Rebecca: John? Are you asking for a friend? [LAUGHTER]

John: [LAUGHTER] Well, I’m gonna have about 250 students in a class where I had shifted all of the assessment outside of the classroom. And I am going to bring some back into the classroom in terms of a midterm and final, but they’re only 10 and 15% of the grade, so much of the assessment is still going to be done online. And I am concerned about students bypassing learning and using this, because it can do pretty well on the types of questions that we often ask in introductory classes in many disciplines.

Stan: That’s a hard question, because there are certainly tools out there that can identify where text is suspected of having been written by AI. ChatGPT produces original text, so you’re not dealing with plagiarism, necessarily; you’re dealing with text that’s not yours, that’s not human-written. There are tools out there, but they’re not necessarily 100% reliable. Originality.AI is a tool that I use, which is quite good, but it tends to skew toward flagging everything as AI-written. Turnitin has incorporated technology to identify AI, but it’s not reliable. This honestly comes down to an ethics issue: folks who do this feel comfortable bypassing the system for the end game, which is to get a diploma. But then they go to the job and they can’t do the job. A recent article that I read in The Wall Street Journal described a lot of concern about employees not having the skill sets they claim to have. And how do you convince students of this? “Why are you here? What’s the whole purpose of doing this? I’m here to guide you, based on my life experience, on how to be successful in this particular discipline, and you don’t care about that.” That’s a hard problem to fix. So I don’t have a good answer for that. I’m always on the fence on that, because it hurts the integrity of the institution that students can bypass it, but it’s hard. Peer review is another tool, to have them go assess it; they seem to be a lot harder [LAUGHTER] on each other. Yes, this is a tough one. I don’t have a good answer. Sorry.

John: I had to try again, [LAUGHTER] because I still don’t have very good answers either. But certainly, there are a lot of things you can do. I’m using clickers. I’m having them do some small group work in class and submit responses. And it’s still a little bit hard to use ChatGPT for that, just because of the timing. But it was convenient to be able to let students work on things outside of class, although Chegg and other places had made most of the solutions to those questions visible pretty much within hours after new sets of questions were released. So this perhaps just continues the trend of making online assessment tools in large classes more problematic.

Stan: Well, I mean, one of the strategies that I recommend is master quizzing. Master quizzing is building quizzes that draw randomly from banks that are thousands of questions large, and students get credit when they ace it. And then the next week, they have another one, but it’s also cumulative, so they get previous questions too. And you have to ace it to get credit. Sorry, that’s how it is: cheat all you want, but it’ll get old after a while.

John: And that is how my course is set up. They are allowed multiple attempts at all those quizzes, and there are random drawings. And there’s some spaced practice built in too, so it’s drawing on earlier questions randomly. But, again, pretty much as soon as I created those problems, they were very quickly showing up in the online tools in Chegg and similar places. Now they can be answered pretty well using ChatGPT and other similar tools. It’s an issue that we’ll have to address, and some of it is an ethics issue. And some of it is, again, reminding students that they are here to develop skills, and if they don’t develop the skills, their degree is not going to be very valuable.
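The master-quizzing setup described here, with a large bank, random draws, and cumulative spaced practice on earlier weeks, can be sketched in a few lines of Python. The bank structure and draw sizes below are assumptions for illustration, not the actual course settings:

```python
import random

def weekly_quiz(bank, week, n_new=8, n_review=4, seed=None):
    """Draw a quiz for `week`: questions from this week's pool plus a
    random spaced-practice sample from all earlier weeks.
    `bank` maps week number -> list of question ids."""
    rng = random.Random(seed)
    quiz = rng.sample(bank[week], n_new)
    # Cumulative review: a small random draw over every prior week.
    earlier = [q for w in range(1, week) for q in bank[w]]
    if earlier:
        quiz += rng.sample(earlier, min(n_review, len(earlier)))
    rng.shuffle(quiz)
    return quiz

# A toy bank with 20 question ids per week.
bank = {w: [f"w{w}q{i}" for i in range(20)] for w in range(1, 4)}
quiz = weekly_quiz(bank, week=3, seed=1)
```

Because each student (and each attempt) gets a different random draw, leaked answers to any one quiz cover only a small slice of the bank, which is the point of the strategy.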

Rebecca: I wonder if putting some of those Honor Code ethics prompts at the beginning or end of bigger assessments would [LAUGHTER] prime their pump or just cause more ChatGPT to be used. [LAUGHTER]

John: That’s been a bit of an issue because the authors of those studies have been accused of faking the data. And those studies have not been replicated. In fact, someone was suspended at Harvard, recently, and is now engaged in a lawsuit about that very issue. So the original research that was published about having people put their names on things before beginning a test hasn’t held up very well. And the data seems to have been… at least some of it seems to have been… manipulated or fabricated. [LAUGHTER] So right now, ChatGPT allows you to do a lot of things, but they’ve been adding more and more features all the time. There’s more integrations, it’s now integrated into Bing on any platform that will run Bing. And it’s amazing how well it works, but the improvements are coming along really rapidly. Where do you see this as going?

Stan: In November 2022, ChatGPT was built on GPT-3; we’re now into GPT-4, and this is only half a year later, basically. I mean, it’s everywhere. For example, in selling books, one of the things that you want to do is try to sell more books. So I went back to Amazon, pulled out all the reviews that I had, fed them into ChatGPT, and said, “Tell me what the top five issues are.” In seconds, it just assessed them, where this would have taken a large amount of time for me, and it did it nice and neatly. Everything is going to have AI in it. Grammarly is having AI built into it. All the Microsoft products are going to have AI built in. We’re not getting away from it. We have to learn how to use this in our professions, in our disciplines. With GPT-4, it was said that somebody had drawn a wireframe diagram of a website, buttons and masthead and text, took a picture of it, gave it to GPT-4, and it wrote the code for that website. It’s gonna be exciting. Buckle up. We had consternation back in January; we’re gonna have a lot more coming up. It’s just part of what we do. We have to figure out how to stay relevant, because this is so disruptive. In the long line of technologies that have come out, this is really disruptive. We can’t fight against it; we have to figure out how to use this tool appropriately.

Rebecca: The idea of really having to learn the tool resonates with me because this is something that we’ve talked about in my discipline for a long time, which is design. But if you don’t really learn how to use the tools well and understand how the tools work, then the tools kind of control what you do versus you controlling what you’re creating and developing. And this is really just another one of those kinds of tools.

Stan: Well, even in the design world, I’ve gone to Shutterstock, and there is something that allows you to create a design with AI. The benefit for a designer is that they have a certain language: tone and texture. Their language is vast, and for them to craft a prompt would look entirely different from mine: a snowman, sticks for arms. It’d be entirely different. Getting the aspect ratio of 16 x 9, everything that you craft into this prompt and feed in: somebody who does design and knows the language would get something very different from a mere mortal like me putting that information in. So for somebody who’s in economics, you have a whole language about economics. Somebody who is trying to craft a prompt related to that discipline has to know the foundations, the language of that discipline, to even get close to being correct in what they’re gonna get back. And students have to understand this: they cannot bypass their learning, because they will not have the language to use the tool effectively.

John: And emphasizing to students the role that these tools will play in their future careers might remind them of the importance of mastering the craft in a way that allows them to do more than AI tools can. At some point, though, I do wonder [LAUGHTER] when AI tools will be able to replace a nontrivial share of our labor force.

Stan: It’ll affect the white-collar workforce a lot quicker. And the way I look at it… a nice analogy for AI is in Marvel: you have Iron Man, Tony Stark. It is the mashup of the human and the machine. He’s using this to allow himself to get further and faster in his design, and to do things that we hadn’t thought about before. And I see this tool being able to do that: we’re bringing so much information and data to it, it’s mind-boggling. Suddenly you see a spark of inspiration that you couldn’t have gotten to by yourself without a lot of labor, and suddenly it’s there. And you can take that and run with it. For me, it’s tremendously exciting.

Rebecca: So we always wrap up by asking, what’s next?

Stan: Great question. Right now, I’m getting edits back from my editor for my next book: Strategies for Success: Scaling your Impact as Solo Instructional Technologists and Designers. I’ve been doing this for about a quarter century, mostly by myself, helping small colleges: how do I do this, how do I keep my head above water and try to provide the best support possible? So I’m sharing what I think I know.

Rebecca: Sounds like another great resource.

John: Well, thank you, Stan. It’s always great talking to you, and it’s good seeing you again.

Stan: Yeah, absolutely. And also, a free book… I’m gonna give it to the first 100 listeners, but I can go more. There’s a link: it’s bit.ly/teaforteachinggpt . It’s in the show notes to share, and the first 100 get a free copy of the book.

John: Thank you.

Rebecca: Thank you.

John: We’ll stop the recording. And we’ll put that in the show notes.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

Ganesh: Editing assistance by Ganesh.

[MUSIC]

278. Google Apps and the LMS

Creating course content in an LMS can be time-consuming and tedious. In this episode, Dave Ghidiu joins us to discuss ways of leveraging Google Apps to simplify content creation, facilitate student collaboration, and to allow students to maintain access to their work after the semester ends.

Dave is an Assistant Professor of Computer Science and Coordinator of the Gladys M. Snyder Center for Teaching and Learning at Finger Lakes Community College. Previous to his time at FLCC, he spent a few years as a Senior Instructional Designer at Open SUNY, where he was a lead designer for the OSCQR rubric software.

Show Notes

Transcript

John: Creating course content in an LMS can be time-consuming and tedious. In this episode, we explore ways to leverage Google Apps to simplify content creation, facilitate student collaboration, and to allow students to maintain access to their work after the semester ends.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

Rebecca: Our guest today is Dave Ghidiu. Dave is an Assistant Professor of Computer Science and Coordinator of the Gladys M. Snyder Center for Teaching and Learning at Finger Lakes Community College. Previous to his time at FLCC, he spent a few years as a Senior Instructional Designer at Open SUNY, where he was a lead designer for the OSCQR rubric software. Welcome, Dave.

Dave: Thank you so much. This is so exciting to talk to you, too. I’m sure you don’t know this, but we have a very rich and robust parasocial relationship.

John: Today’s teas are… Dave, are you drinking tea?

Dave: I am. Halee, over at Saratoga Tea and Honey, blends tea from scratch and has made this Focus Pocus, which is really, really good. I thought I would need my brain-fog-busting blend today.

Rebecca: I think I really needed that earlier in the week. [LAUGHTER] I just have English breakfast today, John.

John: And I have Tea Forte black currant tea with some honey from Saratoga Tea and Honey.

Dave: Their honey selection is out of this world.

John: It’s amazing, and they offer free samples, which is one of the reasons why I end up buying so much because there’s so many different flavors that taste so good.

Rebecca: And just for clarification, they’re not a sponsor. [LAUGHTER] It’s just a common choice lately.

John: That’s right, because that was [LAUGHTER] also in our last podcast with Jim Lang.

Dave: Oh, was it really? Oh, that’s awesome. My wife, Katie, went to high school with Halee, which is how we wound up shopping there.

Rebecca: Oh, that’s funny. So Dave, in January, you presented a workshop at SUNY Oswego, where you described ways to use Google Apps to simplify repetitive tasks. I took many notes and started implementing some of these things. Could you provide a little bit of an overview of the basic strategy that you advocated for, and continue to advocate for?

Dave: Yeah, that was a ball; there was so much engagement and some really, really good questions during the presentation. Thank you for inviting me out. I think we’ll start with some level setting, and I just want to let you know that everything we talk about today is using Google Docs in tandem with the LMS. At FLCC, Finger Lakes Community College, we use Brightspace. So everything that we do and talk about today will be irrespective of what software the campus uses. In fact, at FLCC, we’re a Microsoft house, and I just happen to have a better workflow with Google Docs. So I think it’s important for the audience to know that you don’t need to have Google Classroom, you don’t need to be a Google campus; you can do all this stuff today.

Rebecca: And a lot of it’s documented on… what’s the website?

Dave: As you know, SUNY migrated to Brightspace last year, and we in the computing science department at FLCC kicked the tires quite a bit on it. So much so that we were doing all these really niche, interesting things. So Carrie, who’s one of the professors in the department, says, “Hey, we should have a mini-Summit.” So we all got together and did a show and tell of all the things that we’ve been doing. And I chronicled all those, I wrote them down. And I just started a blog called LEARNBrightspace.com. So that will have a lot of these things that perhaps you didn’t learn in the Brightspace training, and it can potentiate your online classes. And this is also the home of where I’m putting these Google mechanisms, because I think it really blurs the line. I’m using Brightspace, just as much as I’m using Google Drive and Google workspace. So all these ideas and concepts will live at LEARNbrightspace.com.

John: And having noted that this is created for Brightspace, many of the tools that you’re referring to and the basic techniques could work with any LMS, correct?

Dave: Correct. In fact, I started this project, working at MCC, 2010 – 2012. So I was using ANGEL at the time. And then we migrated to Blackboard. And I started ramping it up and now we’re using Brightspace. So this will work in any LMS. But it will also work just in a regular website. This is just all pure HTML.

John: And from what I recall, the basic principle is you try to reduce repetitive tasks within the LMS by leveraging Google Drive and Google Apps.

Dave: Sure, I call that the tyranny of repetition. In software development, we have what’s known as the ground truth, and you want all the information to exist in one place and push it out to separate places. I have a twin brother who’s a software engineer, and he always talks about ground truth. I call it the single point of truth, but it’s the same thing. And a good example is your office hours. Since I’ve been listening to you, I’ve rebranded them as my student time, but your office hours: you want to have them in one spot, but push them out, maybe in your LMS, maybe in your syllabus. Perhaps you have three, four, or five different syllabi, so you have it living in one spot, and change it in that one spot, and you can push it out to all these other arenas. You don’t have to worry about updating it in 5, 6, 7 different spots.

Rebecca: And how do you do that?

Dave: Well, there’s a few different ways to do this. And I think the easiest way to explain it would be for this particular task, and this is going to be a concrete example of how I use this single point of truth. I use Google Sheets; Google Sheets is great for formatting tables. So I format my table, and then I highlight what I want to put, for instance, in my syllabus, and then I copy that, and I go into my syllabus, which lives in Google Docs, and I paste. And when you paste, it says, “Hey, do you want to link this to the Google Sheet so that if the Google Sheet ever changes or updates, we can see the changes here?” So I always click on yes, that’s exactly what I want. And the very first time I do that, I spend a little bit of time formatting it in Google Docs, where I have to, like, bring the margins over; it’s not that hard of a lift, it goes pretty easy. Once that’s done, I’m done. I never have to reformat that table. So when I come back the next semester and I change my student time, or my office hours, I can just push those right to my Google Docs. The other place I do that is within the LMS. So within Brightspace, I have a page that says office hours or student time, and I actually embed a Google Sheet right there. So when the students click on office hours or student time, they will see maybe a little blurb by me that says, “Hey, if you’re meeting in my office, here’s my office number; if you want a link to my virtual room, here it is; and look below, and you can see all my times.” And that’s really just the Google Sheet. Once I update that Google Sheet, I never touch that content in my course, and that’s semester to semester. Once I set up my course, I never touch that content again, because all the changes are live.

Rebecca: Does embedding a Google Doc or a Google sheet in an LMS present any accessibility concerns that we should be aware of?

Dave: As long as you’re using the iframe tag. They actually have a long description attribute, which is not necessarily always used, but I think they’re screen-reader compliant. Iframes use whatever accessibility is in the target site. So for instance, if I framed Tea for Teaching, and if your site’s compliant, then whatever my LMS shows when I embed Tea for Teaching would use that compliance; the screen reader would actually be reading the Tea for Teaching website. So as long as your Google Doc and your Google Sheet, or whatever you’re embedding, is accessible, then this is also accessible.
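The embed mechanism Dave describes can be sketched in a few lines of Python. This is a hypothetical helper, not anything from the episode: the function name is made up, the exact URL path is an assumption (a published sheet may use a different `/pubhtml` form depending on how it is shared), and the `title` attribute is included to give screen readers a label for the frame, per the accessibility point raised here.

```python
def sheet_embed_html(sheet_id: str, height: int = 400) -> str:
    """Build an <iframe> snippet for embedding a Google Sheet in an
    LMS content page. The /preview path and the title text are
    illustrative assumptions, not guaranteed API behavior."""
    src = f"https://docs.google.com/spreadsheets/d/{sheet_id}/preview"
    return (
        f'<iframe src="{src}" title="Office hours schedule" '
        f'width="100%" height="{height}"></iframe>'
    )
```

Pasting the returned snippet into an LMS page's HTML source is all the "embedding" amounts to; the live content then comes from Google, not from the LMS.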

Rebecca: So it’d be important to do things like have header rows and that kind of thing. And if you’re using Google Docs, you might want to know about Grackle, which is a great tool to check the accessibility of those files.

Dave: Oh, that’s interesting, because Microsoft does a really good job, with Word, of making things accessible, and they have their accessibility checker. I haven’t seen that in Google Docs. Are you using Grackle?

Rebecca: Yeah, it’s a third party tool that we have on our campus. And it works across Google Docs, Google Sheets, Google Slides, and will help improve the accessibility of all those documents and any PDFs you might need to export.

Dave: Oh, that’s awesome. I’m glad I came today. I’ll look that up as soon as we’re done. It is worth mentioning, and I’m glad you said that: in Google Docs, and I spend most of my time in Google Docs, not Google Sheets, I’m always using headings; I like the textual hierarchy. And I always do the alt tags. There are a lot of easy things to do that make my life easier and make it more accessible for anyone viewing the content. In fact, and I believe I demoed this in January, most of the assignments that I give my students are a Google Doc, and I have them make a copy of it, and then they paste images in for a lab; we’re doing computer science stuff. And I use those headings. So when they give me their Google Doc, I don’t have to scroll through 20 pages; I can use the document outline and click on the different headings where I know their answers are going to be, so I can just skim it real, real easily. So this is just another example of Universal Design writ large: it is better for the student, it’s better for me, it’s better for everyone.

Rebecca: That’s totally the strategy I use too, Dave.

Dave: Oh, really? Oh, that’s interesting. I’m so glad to hear that.

John: And so if you’re sharing templates with students for assignments, you can set them up to be accessible so that when students submit them, they’ll automatically have the heading structure.

Dave: Yeah, in fact, one of the things that I do, and I demo this in the video of the recording in January, but also at LEARNbrightspace.com, one of the tabs says “Tools,” and in this particular thing, you can paste in the URL of your Google Doc. But one of the things I do is: instead of making a template, or saying to the students, “click in this Google Doc, go to File, save as a copy,” you can just change the URL of the link to your doc and chop off the last four characters that say “edit” and make it “copy.” So when the students click on the link, it’ll actually force a copy in their Google Drive. So that’s just one of the nice things you can do with those links. And one of the really, really cool ways to bend these URLs is: you can make a link that goes to your Google Doc. You can also make a link, by changing “edit” to “copy,” that will force a copy. And instead of “edit,” you can do “export?format=pdf,” and that will take a carbon copy of your Google Doc and download it as a PDF. And that’s a great thing to do, for instance, for my course syllabus, because my course syllabus is embedded. Any change I make to my course syllabus in Google Docs gets pushed out automatically, but students like to print that out. So I just put a link right on my page in my LMS, where the course outline is, or your syllabus, and it says “Click here if you want to download it as a PDF,” and that downloads it as a PDF, and “Click here if you want to download it as a Word file,” and it just downloads as a Word file. And it’s not that I’m hosting a PDF or a Word file; it’s converting my Google Doc at that moment in time to a PDF and downloading it. So that’s really, really slick. And that’s a great way to get your course materials into students’ hands.
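The link "bending" Dave walks through, swapping the trailing "edit" for "copy" or for an export suffix, is mechanical enough to script. Here is a minimal sketch; the function name is hypothetical, while the URL behaviors (a `/copy` link forcing a copy, `export?format=pdf` downloading a snapshot) are the ones described in the conversation.

```python
def bend_doc_link(edit_url: str, mode: str = "copy") -> str:
    """Rewrite a Google Docs '/edit' link into one of the variants
    described above: 'copy' forces the visitor to make their own
    copy, while 'pdf' or 'docx' download a snapshot of the document
    via the export?format=... suffix."""
    base = edit_url.partition("/edit")[0]  # drop '/edit' and anything after it
    if mode == "copy":
        return base + "/copy"
    return base + "/export?format=" + mode
```

So a single "download as PDF" button in the LMS can simply point at `bend_doc_link(url, "pdf")`; there is no hosted PDF to keep in sync.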

Rebecca: And I’m pretty sure I saw the code to that button on your website.

Dave: You did, at LEARNBrightspace.com. You go to LEARNBrightspace.com, you paste in the URL of your Google Doc, and it gives you the code for those buttons, it gives you the code to embed it, it gives you the code to do a thumbnail of your image, which is live, and it gives you the code to embed a QR code, should you want that. You can just do some really crazy things. And I think that’s a perfect example of how to learn what you can do: just go to that website and check out, like, “Oh, I can download it as a Word file.” This is a great example of a single point of truth. I have one course outline, and I might have it in three or four different classes; I have different sections. I only update it in my Google Docs, and all the courses where that embedded file lives get to see all those changes. And you remember, in the good old days, or the Dark Ages, rather, of LMSs, if I had a PowerPoint, for instance, and I changed my PowerPoint, I’d have to go into all those sections and take down that PowerPoint and upload a new one. But I don’t need to do that anymore. If I have Google Slides, I just change the slides. Because once it’s linked in your LMS, or once it’s embedded, you never have to touch that piece of content in your LMS again. You make as many changes as you want to your slideshow and it’s live immediately. And by the way, you can do the same thing with links to Google Slides, where it can download as a PDF, or download as a PowerPoint.

John: The show notes file accompanying this podcast will contain links to the LEARNBrightspace.com website and to the recording of the workshop that you provided in January at Oswego for anyone who wishes to explore these options in more depth.

Dave: Awesome. I just started making this website, so I’ll be building it out. And again, it’s going to have a lot of Brightspace stuff, but also all this Google stuff that I’ve been doing for a few years.

John: One of the other things you demonstrated was the use of Google Drawings to automatically update images in an LMS, such as the ones that you used to signal whether a module was open or not. Could you tell us a little bit about how that might work?

Dave: Yeah, so I actually have a very unhealthy relationship and dependency on Google Drawings. And I thought it was just like a throwaway tool, but once I started using it, I was like, “Oh, this is really, really slick.” So much like a Google Doc, my syllabus, or Google Slides, you can embed a Google drawing. In Brightspace, we have the visual table of contents, which, Rebecca, I’m sure you had like a ball with, given your role in graphic design. In the visual table of contents, if you don’t have an image in the description for your module, then it will just inherit whatever your course image is in the main course. Using Google Drawings, I’ve created thumbnails for each of the chapters. So when you look at the visual table of contents, each chapter has a unique image that somehow intimates what we’re doing in that chapter. And then I put like a big one in a circle, or a big two or big three, whatever chapter it’s on. Because it’s a live image, I can change the look and feel of it. So I have a gray overlay that I put over upcoming chapters, and then I have a banner that stretches across and says “upcoming.” So the students on day one can see all the chapters, and anything that’s upcoming, they can kind of see what the chapter is about with that thumbnail, because it’s behind a somewhat transparent gray rectangle, and they can see the big banner that says “upcoming.” But then when that module opens, I just go into Google Drawings, and that gray rectangle and the banner, I send those to the back, and you can’t see them anymore. So when the students come into my course, that’s how they know what the most recent chapter is. Anything that says “upcoming” and is gray, that’s in the future. Anything that’s bright and exposed, that’s what we’re doing now.

Rebecca: For those that are familiar with Google Draw, because, I don’t know, many of us have thought of it as being kind of a junk app… [LAUGHTER]…

John: …spoken by someone who is used to that Photoshop stuff.

Rebecca: …can you talk to us a little bit about what it’s capable of doing and what it’s not capable of doing?

Dave: Yes, it is not capable of doing Photoshop. So you’re not going to have a lot of those high-end or even mid-range tools, such as content-aware fill or the lasso tool; it just doesn’t have all that. This is more for what I would call graphic design, if you’re doing almost illustrations or almost vector images. And you can put photographs up there, but I use it mostly for text boxes and some colors. And I don’t want to undersell it, because right now I’m convincing myself that it is kind of a puny little web app, but it’s so potent in the ability to change the content of an image. So if you want to embed that image in your course, you can change it, and it does do some high-end things. You can crop in different shapes. But really, if you’re looking for Photoshop, this is not an adequate replacement.

John: But the ability to do layers offers some really nice capabilities as you described, because a lot of basic drawing apps will not allow you to introduce or to have layers.

Dave: That’s a good point. And there’s also the ability, much like Photoshop, it has a canvas, but then… and I don’t know what you call the area outside of the canvas, I call it the staging area. So I can put things that I might be using later in that staging area, and it’s not visible in the image. And that’s also where I put… and Rebecca, you’ve been asking about accessibility, I keep my alt text in the very first text box in this staging area so that if a screen reader is reading it, my alt text is right there. It also makes it very easy when I need to embed it later on. Because I don’t have to keep retyping the alt text, I keep it right there.

Rebecca: That leads exactly to what I was going to ask you about. It’s almost like you read my mind. It’s whether or not, when you’re creating these Google Drawings, it actually maintains text or not, because that indicates an accessibility issue: whether it’s an image of text, or whether it’s actually text.

Dave: For the listeners that want to pull this thread: when you publish the image, it is, for all intents and purposes, a JPG or PNG. So it’s going to be a flattened image…

Rebecca: No SVG, huh?

Dave: Actually, you can download it as an SVG, I believe, I don’t know if you can embed it as an SVG.

Rebecca: If I can’t embed, it’s no good to me. [LAUGHTER]

Dave: One of the ways that I use Google Drawings, and I saw this at a conference, and the professor who was presenting was not presenting this aspect, this was just like a throwaway thing she was doing, and I was blown away. I was like, “Whoa, that is really cool.” So I stole this from someone I saw presenting. She doesn’t embed it as a JPEG or PNG; she actually embeds the Google Drawings website. So if you change the “/edit” to “/preview,” and you can do that in Docs as well, it gives you a more packaged version that doesn’t have the toolbar, and if you do it that way, you do have access to all the text. But one of the things that I really like about it, and I’m glad that we’re going down this road, is I make… think of it like a horizontal rectangle, and I have three squares side by side in that rectangle. So when it comes time to show exemplary work, for instance, in my class right now, students are making infographics, and they might not have ever made an infographic before. So I say, “Hey, here’s some work students have done in the past.” And I take a screenshot of three really good infographics and I make thumbnails, all in one Google drawing that has those three thumbnails. I make those hyperlinks. So now when I embed this Google drawing, not as an image but in preview mode, students can click on these different hyperlinks and it pops up PDFs, which actually live in my Google Drive, of the actual infographics. So to answer your question, you can embed your Google drawing in such a way that text is retained. But my caution is, I think with accessibility your mileage may vary if you’re actually embedding the Google drawing app as opposed to an actual image.
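The "/edit" to "/preview" swap is the same kind of URL trick as the copy and export links earlier. A hypothetical one-liner, with the caveat Dave gives that the accessibility of an embedded app may vary:

```python
def preview_link(edit_url: str) -> str:
    """Swap a trailing '/edit' for '/preview', which serves the Doc
    or Drawing without the editing toolbar while keeping hyperlinks
    inside it clickable, unlike a flattened PNG export."""
    return edit_url.partition("/edit")[0] + "/preview"
```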

Rebecca: Proceed with caution when putting text in images. Yes.

Dave: I did work on OSCQR when I was working with Alex Pickett and Dan Feinberg at Open SUNY, and I think Alex has been a guest on your podcast before, right?

John: She has, a couple of times.

Dave: And by the way, I think working with Alex and Dan really helped me explore all this stuff in Google as we were working on that Google software. But in the OSCQR, I think that it says “Do not ever use text on an image as your primary way of conveying information.”

Rebecca: Indeed.

John: People can refer back to a discussion of an earlier iteration of the OSCQR rubric, because it’s continuing to evolve.

Dave: It sure is, and I’m not involved with that project anymore, but I’m always really impressed when I see the new things that they’re doing.

John: You’re at a Microsoft campus where students all have access to Microsoft Office apps. So why do you choose to have students work in Google Docs?

Dave: That’s an interesting question; oftentimes when I talk to other campuses about this, that question comes up quite a bit. I like Google Docs for a number of reasons. One, more and more students are coming to college and they’re more comfortable with Google Docs. And to be clear, most of the stuff that I do in Google Docs you can do with the Microsoft suite. And when I say Google Docs, that’s just a proxy for Google Workspace or Google Drive; there are a number of ways that people have colloquially referred to it. One of the reasons why I like having students use Google Docs is that version control is so simple, because unlike Microsoft Word, where you might have a version on your desktop, and then you go to another computer and you have to download it, that’s just obviated with Google Docs. But I really enjoy having my students submit, in the LMS, just a link to their Google Doc, and they’re sharing it with me. I actually have them share it so that anyone with the link can comment, because all I really want to do is comment on their work. One of the problems I had with uploading a PDF or Word file is I might spend some time annotating it, and I might spend some time just highlighting it, and those tools in the LMS have been a little wonky and unreliable at times. And by the way, the students, when the semester ends, might not ever get that file. So all the work that we’ve put into commenting and highlighting on their work, they might not ever see, if they didn’t even know they could go back and look at their work. So I like Google Docs because, since they own it, even when the semester is over, they still have access to it, which is not how it works if students upload a file. So that was my primary impetus. But I found recently that I’m really, really happy with the commenting feature.
So when I leave a comment in their work, and I like to leave comments, that’s like footprints showing that I’ve been looking at their work. When I leave a comment, they get an email to their Gmail account, and it says, “Hey, Dave left a comment,” but it also says, “Would you like to reply?” And they can just, right from their phone or wherever they get this email message, they don’t have to go into Brightspace, they can just reply to that comment. And I really think that’s an equilibrium I haven’t seen in LMSs before. It’s really been teacher-centric, where the student uploads a document, the teacher says, “Let me, as the teacher, make some comments,” and the conversation ends there. Whereas in Google Docs, now you can have these conversations that are bilateral. In fact, I was talking to my wife about this the other day. She’s the Director of the Library at MCC, but she teaches a class at FLCC and she uses Google Docs. And she said, “You know, I was in there, and I was just leaving some comments on papers for the students, and one of my students got an email notification. She popped right into the document and we actually had a conversation in the comments.” So I think that’s really, really neat. And I like that the students can leave the semester and still have it. And Google also rolled out this feature a few months ago that allows you to, in addition to commenting, there’s an emoji button, and you hit that emoji button and you can just add an emoji on whatever you’ve highlighted. And it’s really, really slick, because I might historically say, “Hey, I really liked what you did here.” But now I can just leave an emoji thumbs up or smiley face or whatever I’m going to do. So that’s a little bit better for me, because I can cruise through the work and let people know my sentiment without having to be very verbose.

Rebecca: And you can save that language for when you really need it…

Dave: Yeah.

Rebecca: …which may mean it might actually get read.

Dave: Oh, that’s an interesting point, yeah.

John: Another reason for using Google Apps is that they are something that most students have worked with in elementary and secondary school, because Google Classroom is a really commonly used tool, and students are already used to that environment. Another thing that I’ve liked about Google Apps is the ease of collaboration, where students can collaborate in real time on Google Docs, Google Slides, or Google Sheets, and that just doesn’t always work quite as smoothly in other Office applications as it does in Google Apps.

Dave: Yeah, I feel like Microsoft is always like six to twelve months behind Google when it comes to innovation in collaboration. So it’s nice to have that ecosystem. I think it’s also worth noting, for the collaboration, and I’m glad you said that, the LMS doesn’t have collaboration built in, at least at the faculty level. So if there are two or three other people in the department teaching the same class I am, but different sections, it’s a little cumbersome for us to ask to be in each other’s sections. And you can easily screw things up. So if we have a lot of our content in Google Docs, again, we don’t need to bother Jeff Dugan, who’s the Assistant Director of Online Learning at FLCC, and say, “Hey, Jeff, can you add me to the sections over here?” We can just manage that ourselves in Google Drive.

Rebecca: I really appreciate the collaborative nature of Google Docs and the Google Apps generally. I use them as well for peer feedback and evaluation. I like the flattening of the hierarchy between the faculty member and the student, and you can collaborate on things. It works really well when I’m working with upper-level students or graduate students, where the process really is more of a mentorship or more collaborative in nature in the first place, and it’s happening more in real time. But I am always concerned about the ability to document and maintain copies of things, because when the students own the thing, then you have to develop a system to back stuff up, whereas if they’re uploading a document or something to an LMS, that backup is kind of happening. Do you have any strategies for that?

Dave: I’m working on this software right now; I had this working with Blackboard, and then we switched to Brightspace, so I have to change the paradigm. But I actually create my content, all the Google Docs, all the Google Sheets, whatever I’m going to do, in one folder, and then I get a roster of my students. And I effectively make a folder for each student, and I give them read/write access to that folder, but I’m still the document owner, and then I can push out, I can copy, all the documents into each of their folders. That is, I think, the gold standard. And then at the end of the semester, I can turn that work over to the student, and I can make copies if I so choose. So I had all that infrastructure written and it was working great in Blackboard, so I’m back to square one with Brightspace. But that’s not stopping me, because I still think it’s valuable even if I don’t necessarily have access to their work after the semester ends. There is a revisions feature in Google Docs too, which a lot of people don’t know about. And this is why I keep using the same syllabus that I’ve been using since 2019. I make changes whenever I need to, if there’s a typo, or if I change it from fall to spring. And if a student comes back to me and says, “Hey, I’m transferring, I need the syllabus from two semesters ago. Can you give that to me?” I just go to that same document, I go to the file revisions, and I can pick a date and it rolls back to that date. I print it, give it to the student, and then I go back. So it’s not a perfect system, but it’s a good enough system that I think covers all my bases for right now.
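The folder-per-student workflow Dave describes can be sketched as a provisioning plan. Everything here is hypothetical illustration: the names are made up, and the real Google Drive operations (create folder, grant read/write, copy files) are deliberately left out as they depend on the API client and LMS integration in use, so the sketch only computes what would be created for each student on the roster.

```python
def provisioning_plan(roster, master_docs):
    """Plan the per-student setup: one folder per student, seeded
    with copies of the master documents, with the instructor
    remaining the owner. Returns plain data rather than making any
    Drive calls."""
    plan = []
    for student_email in roster:
        plan.append({
            "folder": f"{student_email} - coursework",
            "share_with": student_email,   # read/write access, not ownership
            "copies": [f"{doc} ({student_email})" for doc in master_docs],
        })
    return plan
```

A script built on this plan would then walk each entry and issue the corresponding Drive API calls, which is also the point at which end-of-semester ownership transfer could be handled.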

Rebecca: In the past, I just had a folder that I had students submit their work in. [LAUGHTER] It was like “Put your copy here, please.”

Dave: And I think one of the things we didn’t really talk about and I think you were talking about it, but I hadn’t really considered this, is the group work. It’s just so much easier. You don’t have to make the different groups in your LMS and then articulate who has access to what, it’s all built in. To be clear, I love the LMS because it can have those features, I would never just use Google Docs alone. I need the LMS to distribute the content to say who can see what and when to manage the grades and have the assessments. But the Google docs are kind of the meat of what I use. So the skeleton is the LMS and the Google docs are the meat.

John: For quite a few years, I’ve had students do some open pedagogy projects, where they’re working in groups for their own components, but they also are working on some shared materials. And I just download a spreadsheet from our registration system with all their email addresses in it, and use that to share that class folder with them, and then just create subfolders for each group and let them work in there. And then when they’re ready to share it with the rest of the class for peer review, we just copy it from those sub folders, and they have access to it as long as they don’t remove the access after the semester is done. And it’s worked really nicely, because the basic issue Rebecca was talking about is sometimes when students would share a URL to a file, they forgot to change the access so that other people could view it. But if they start in a document that you already have access to, all those problems just go away and that makes it a lot easier.

Dave: It sure does. And in fact, Rebecca, that problem I have all the time. So my very first assignment in all my classes is: make a copy of this Google Doc, share it appropriately, and send me the link. And because there are these interactive checkboxes that you can do in Google Docs, I have a checklist. So I kind of get rid of that problem immediately. But I’m glad you talked about open pedagogy too. It was either one of your podcasts, or maybe it was Teaching in Higher Ed, but I was listening to some stuff about open pedagogy. And also, I’m married to the director of the library at MCC, so that might have had something to do with it. And we created our Java textbook at FLCC. It is all open pedagogy. The faculty kind of did the bones, but the students filled it in, and this textbook’s been around for like three or four years now. Even today I’m getting email messages that such and such made this change, or someone’s like, “Hey, it would be really cool if you thought about talking about this.” So we get hundreds of comments every semester, because our open textbook is commented on by all the students… we share it so that they can comment. And likewise, in my principles of information security class, we have the students every semester look at a recent cybersecurity issue, debrief what happened, and give an analysis of how this could have been avoided if you’d done a, b, and c. So they create them in Google Docs, I aggregate them all into a PDF, and then that PDF lives in Google Drive, and I actually embed it in the course. So students next semester can see all the work that the students have done this semester. And there is a conversation that students have about Creative Commons, so they know what they’re getting into.

John: One of the things you shared in that workshop presentation was a tool that would allow you to use markup to create documents outside of the LMS that could then be embedded in the LMS. That’s specific to Brightspace, so it may not be as generally applicable, but could you talk a little bit about that tool and why you might want to use it?

Dave: I would love to talk about this tool. And I would love to take credit for it, but I can’t. Aaron Sullivan, who’s a professor in the department, came up with it, and he was suffering a different tyranny, not the tyranny of repetition. His was the tyranny of formatting your text in Blackboard, where only when you hit the submit button would you discover it wouldn’t render how you thought it would. He’s like, “There’s got to be a better way.” So he started this project in Blackboard, and then when we switched to Brightspace, he tweaked it for Brightspace. It takes someone like me, who is very good at math and computer science but has no eye for design, and makes my students think that I’m a graphic design luminary. What his software does is just… I can’t even describe it, you’ll have to go back and watch the video, but it thrives on markdown, which is a very simple language. In Microsoft Word you might highlight text and then hit the bold button to make it bold. Or if you’re savvy, you might hit Ctrl-B, or Command-B if you’re on a Mac, to make it bold. Markdown is even lower level than that. So for bold, I think, you put an asterisk before the text and after the text, and italics would be an underscore. But Aaron’s gone bonkers with this. And he’s come up with all sorts of ways, with very, very easy-to-apprehend markdown codes, to make your course in Brightspace just eye-poppingly delicious. It is unbelievable. I can’t say enough good things about it. And you kind of have to know a little bit about markdown, but it’s the kind of thing you can easily digest: today I’m going to work on bullet points, and a bullet point, by the way, is just an asterisk. I’m going to work on bullet points, and then it converts everything to HTML, and you just paste the HTML into your course. And then maybe the next day it’s like, “Oh, I really want to do the accordions,” because Brightspace has the accordions built in, and also the tabbed interfaces.
But it’s really hard, with the way Brightspace is set up, at least at Finger Lakes, to have both of those on the same page. So Aaron has distilled everything into a very easy language where you can just do something like ^acc, and that creates an accordion. It’s awesome. It just saves so much work. And then we actually save all our text files. We use GitHub and I make it public, but you could use OneDrive or Google Drive. You just host these text files. So when it comes time next semester and you want to tweak things, you just tweak the text file and run it through Aaron’s software, which is at LEARNBrightspace.com. You run it through the software and it translates everything to the HTML, with the JavaScript and the CSS, and you just paste it into Brightspace and it works, and it looks gorgeous, and it’s responsive. It has heightened my aesthetic game by about 1,000,000,000%.
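
As a rough illustration of the idea, and definitely not Aaron’s actual code, a markdown-to-HTML converter of this kind boils down to line-by-line translation. The ^acc handling below is a guess at the shape of such a marker, and the *bold*/_italic_ rules follow the episode’s loose description rather than strict CommonMark:

```javascript
// Minimal sketch of the markdown-to-HTML idea (not the LEARNBrightspace tool):
// plain text with simple markers goes in, LMS-ready HTML comes out.
function miniMarkdownToHtml(src) {
  var html = [];
  src.split("\n").forEach(function (line) {
    if (line.indexOf("^acc ") === 0) {
      // Hypothetical accordion marker: "^acc Title" opens a collapsible block.
      html.push("<details><summary>" + line.slice(5) + "</summary>");
    } else if (line === "^end") {
      html.push("</details>");
    } else if (line.trim() !== "") {
      html.push("<p>" + inline(line) + "</p>");
    }
  });
  return html.join("\n");
}

// Inline markers: *bold* and _italic_, as loosely described in the episode.
function inline(text) {
  return text
    .replace(/\*([^*]+)\*/g, "<strong>$1</strong>")
    .replace(/_([^_]+)_/g, "<em>$1</em>");
}
```

The real tool goes much further (tabs, styling, JavaScript, CSS), but the basic translate-then-paste workflow is the same: run your text file through the converter, then paste the resulting HTML into the LMS editor.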

John: And the way the accordions, for example, work in Brightspace is there’s an accordion template that you can use as a style sheet. The default page has 6 blank accordion templates on it, corresponding to 1-, 2-, 3-, 4-, 5-, or 6-item accordions, and then you delete the ones you’re not using. And then you just paste or type your content into that template. But with this tool, you don’t have to do any of that. And you’re not limited to a six-item accordion, if I recall correctly.

Dave: You can do more than six if you’re really brave with your HTML. But, like, who knows what happens. And the other issue is you can’t do accordions and anything else, because the templates don’t have accordions and tabs, I don’t think. So if you want to have both of those in one piece of content, then you kind of have to maybe open up another tab that’s like your sacrificial tab, and it’s just really, really wonky. So Aaron’s software, we’ve been calling it “Markdown to Brightspace,” that’s the working name, anyhow. You can see it at LearnBrightspace.com. We will be building that out with videos and things to help people understand how to use the tool, because it is more power than one person should wield.

Rebecca: That, at least, reduces high levels of irritation.

Dave: Yeah, Brightspace and Blackboard both have much better text editors than they did two years ago. The one in Brightspace was just so unreliable, which is, I think, what drove Aaron mad enough to make the software. And to your question earlier, John, I think that it did work for Blackboard, and it can work for generic HTML, but we need to add some tweaks to it just to make sure that it’s universal. So right now, the predominant version is for Brightspace.

John: It would be nice if there was a universal translator type application that would generate the code in Blackboard, or Brightspace, or Canvas or any other commonly used LMS.

Dave: And Aaron’s is very, very close; it would really be just some small, small tweaks. But I will tell you, just last night, in my role at the Center for Teaching and Learning, I send out a newsletter every week. And I’ve been doing it in Outlook, and you can do some things there. And last night, I thought, I wonder if I can use Aaron’s software to do this. So I used the markdown and created an email, like a template, and then instead of copying the HTML code, I literally highlighted all the rendered output, not the code, but the actual images and graphics and stuff, and I just pasted it into Outlook, and it is gorgeous. It is absolutely beautiful. So perhaps that might be a way to have it work in other LMSs. I would encourage people to look at that video, because I think that having the how-to of all the things we’ve been talking about might help. And that was the video from your professional development in January. And just keep coming back to LEARNBrightspace.com. By the way, it’s not monetized and I don’t track you or anything… we’re purely putting it out there for the benefit of the world. But we’ll be publishing more stuff on Brightspace, some really cool, wacky things you can do there; we’re going to be putting out some really cool, wacky things you can do with Google Docs; and we’re going to be doing some really cool, wacky things you can do with Aaron’s markdown software.

Rebecca: Sounds exciting, and it’s interesting that you’ve said all these things that are coming. But our last question is always: “What’s next?” And now you have to come up with something else.

Dave: Well, I knew that was going to be your question; I thought I could preempt that. But I would say some of the things I’m working on right now: first of all, accessibility. So if anyone out there is listening to this and is an accessibility expert and wants to team up with me to refine my processes… and I can see you waving at me, so maybe we can chat offline and really just check this and make it more accessible. The other thing is, I’m working on a few other projects that I think would be of benefit to educators and not of benefit to anyone else. And that all uses Google Docs. So for instance, I have a spreadsheet where I keep comments that I might use for different assignments. But then one of the reasons I really like Google Docs, being a computer nerd, is every Google Doc and Google Sheet has a JavaScript component to it. So you can build software off spreadsheets. So some of the software I built is this comment generator. You manage the comments in your Google Sheet, and then you hit a button and it pops up this nice window with all the comments, and it’s tabbed for the different classes you teach. And you can just click on a comment, it copies it into your clipboard, and then you can just paste it. So when you’re grading and assessing students’ work, it just makes it go a little bit faster. And I’m also working on rebuilding that tool I was talking about earlier, where it can spin off Google Docs for all the students in your class, and then you own them until the end of the semester, when you can turn custody over to them. So I’m constantly building tools to help me be faster and better at what I do, and hopefully sharing those with other people along the way.
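
The comment generator Dave describes leans on the Apps Script side of Google Sheets. Here is a minimal sketch of the grouping logic, with hypothetical names and a made-up “Comments” sheet layout (one row per comment: class name, then comment text):

```javascript
// Sketch of the comment-generator idea, assuming Google Apps Script.
// Pure logic: group flat rows into tabs keyed by class name.
function groupCommentsByClass(rows) {
  var tabs = {};
  rows.forEach(function (row) {
    var className = row[0], comment = row[1];
    if (!tabs[className]) tabs[className] = [];
    tabs[className].push(comment);
  });
  return tabs;
}

// Apps Script glue: read the rows from the active spreadsheet.
// (Assumes a sheet named "Comments"; adjust to your own layout.)
function readCommentRows() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Comments");
  return sheet.getDataRange().getValues();
}
```

The actual tool would then render those tabs in a sidebar or dialog; the point here is just that a flat sheet of comments is enough structure to drive a per-class, click-to-copy interface.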

Rebecca: Yeah, if you need a user tester for backing up documents. [LAUGHTER]

Dave: Yeah, I would happily swap that experience with some accessibility knowledge.

Rebecca: That sounds fair. Well, thanks so much for joining us. It’s always a pleasure, and we always take away something new.

John: Thank you. It’s great talking to you, as always.

Dave: Yeah, it’s great seeing you too. And thank you so much. I really appreciate it. I just listened to, I think I told you this, but I listened to the most recent episode about ChatGPT and it blew me away. So keep doing what you’re doing, because every episode is better than the previous, with the possible exception of this one. This one might be one of the low points, but the rest of them I really enjoy listening to.

John: Well, thank you.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

[MUSIC]

276. Teaching at its Best

New faculty often start their faculty roles without training in teaching. In this episode Linda Nilson and Todd Zakrajsek join us to talk about the evolving roles and expectations of faculty and explore the new edition of a classic teaching guide.

Now Director Emeritus, Linda was the Founding Director of the Office of Teaching Effectiveness and Innovation at Clemson University. Todd is an Associate Research Professor and Associate Director of the Faculty Development Fellowship in the Department of Family Medicine at the University of North Carolina at Chapel Hill. Linda and Todd are each individually the authors of many superb books on teaching and learning and now have jointly authored a new edition of a classic guide for faculty.

Show Notes

  • Zakrajsek, T. and Nilson, L. B. (2023). Teaching at its best: A research-based resource for college instructors. 5th edition. Jossey-Bass.
  • Nilson, L. B., & Goodson, L. A. (2021). Online teaching at its best: Merging instructional design with teaching and learning research. John Wiley & Sons.
  • Nilson, Linda (2021). Infusing Critical Thinking Into Your Course: A Concrete, Practical Guide. Stylus.
  • McKeachie, W. J. (1978). Teaching tips: A guidebook for the beginning college teacher. DC Heath.
  • POD
  • Betts, K., Miller, M., Tokuhama-Espinosa, T., Shewokis, P., Anderson, A., Borja, C., Galoyan, T., Delaney, B., Eigenauer, J., & Dekker, S. (2019). International report: Neuromyths and evidence-based practices in higher education. Online Learning Consortium: Newburyport, MA.
  • Padlet
  • Jamboard
  • Eric Mazur
  • Dan Levy
  • Teaching with Zoom – Dan Levy – Tea for Teaching podcast – May 26, 2021

Transcript

John: New faculty often start their faculty roles without training in teaching. In this episode we talk about the evolving roles and expectations of faculty and explore the new edition of a classic teaching guide.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

John: Our guests today are Linda Nilson and Todd Zakrajsek. Now Director Emeritus, Linda was the Founding Director of the Office of Teaching Effectiveness and Innovation at Clemson University. Todd is an Associate Research Professor and Associate Director of the Faculty Development Fellowship in the Department of Family Medicine at the University of North Carolina at Chapel Hill. Linda and Todd are each individually the authors of many superb books on teaching and learning and now jointly have authored another superb book. Welcome back, Linda and Todd.

Linda: Thank you very much.

Todd: Really appreciate the opportunity to be here.

Rebecca: Today’s teas are: … Linda, are you drinking tea?

Linda: I’m drinking a tea called water. It’s rather dull, but I enjoy it.

Rebecca: It’s very pure.

Linda: Yes, very pure. Very pure.

Rebecca: How about you Todd?

Todd: Oh, I’ve got myself a Lemon Detox because I’ve spent most of my day getting all toxed and now I’m getting detoxed. [LAUGHTER] Wait a minute, that sounds bad. [LAUGHTER] But that will be all right. [LAUGHTER]

John: Especially at Family Medicine.

Todd: Well, we can fix it. [LAUGHTER] In general, life is good.

John: I am drinking pineapple green tea.

Rebecca: Oh, that’s a new one for you, John.

John: I’ve had it before, just not recently.

Rebecca: Okay. I’m back to the very old favorite, English afternoon. Because I stopped by the Center for Excellence in Learning and Teaching and grabbed a cup before I came.

John: And we are recording together in the same room, which has been a fairly rare occurrence for the last several years. We’ve invited you here to discuss your joint endeavor on the fifth edition of Teaching at its Best: a Research-Based Resource for College Instructors, that Linda originally developed and now you’ve collaborated on this new edition. How did the collaboration on this edition come about?

Linda: Well, let me talk about that, because it was pretty much my idea. Jossey-Bass contacted me and said “let’s put out a fifth edition” and I said “let’s not.” [LAUGHTER] I was not in the mood to do it. I’ve been retired six and a half years now and I’m loving it. I mean, I’m really loving it. And while retired, I was still writing the second edition of Online Teaching at its Best. And then I was writing a book, Infusing Critical Thinking Into Your Course, and I guess I had had it. I mean, I wanted to really make a change, and I wanted to get specifically into working at an animal shelter. So I was all occupied with that. And I remembered that Wilbert J. McKeachie, when he was doing Teaching Tips, came to a certain point, after I don’t know how many editions, where he brought other people on to really do the revision work. And so I decided I’m going to do that. So Jossey-Bass said, “Okay, fine.” They wanted three names. Okay, I gave them three names, but my first choice was Todd Zakrajsek, because, one, I knew he’d finish it. [LAUGHTER] I knew he’d finish it fast. I knew he’d do a great job. He knows the literature like the back of his hand; I wouldn’t have a worry in the world. And guess what? Todd accepted. Hip hip hooray. I was so happy, I couldn’t tell you.

Todd: Well, this is great because I said no when they asked me. [LAUGHTER]

Rebecca: Like any smart person would, right? [LAUGHTER]

Todd: Well, I did end up doing it, of course. But the reason I said no was I knew that book very well and I know Linda very well. And I said, “There is no way. I don’t know anybody who can step in and pick this thing up. She knows so much about so much that it’s just not possible.” And they said, “But she really wants you to do this.” So I went back and forth a couple times and I finally decided to do it. And I will tell you, Linda, because I haven’t mentioned this to you. The first three chapters, I had to go back and redo those when I got done with it, because I was so scared of the first three chapters [LAUGHTER] that it was really rough. And then finally it’s like, okay, I hit my rhythm and I walked into it with impostor syndrome a little bit, and I finally caught my footing, but it’s a good book to start with.

Linda: Thank you. Thank you very much. [LAUGHTER] Yeah, I know, the plot thickens, right? It becomes more interesting as you go from chapter to chapter, right. And before you know it, there’s a happy ending after all.

Rebecca: So Linda, Teaching at its Best has been around for a long time with a first edition published in 1998. Can you talk a little bit about how that first edition came about?

Linda: Yes, that was… I can’t believe… 1998. That’s 25 years ago. It’s almost scary how time flies. But anyway, the actual seed of the book came about in about 1994… 95. But I need to give you some background, because I had been writing TA training books since, like, the late 1970s, when I was first given the task of putting together a TA training program. So back then, I was putting out weekly mimeos. [LAUGHTER] Remember mimeograph machines? Some of you don’t know… what is she talking about? But anyway, that was the technology then. It smelled great, though… it really did. [LAUGHTER]

Rebecca: That’s the second time today someone has made a reference about the smell of those.

Linda: Yeah, oh yeah.

John: The dittos are what I remember having the stronger smell.

Todd: The ditto did, yeah. Yeah, and I’ll tell you, before we move on, when I was a graduate student, we had a ditto machine. I just have to say this, Linda, because you liked the smell and all there.

Linda: Yeah, Yeah.

Todd: But they had a ditto machine. And below the ditto machine, I noticed that the floor tiles were kind of eaten away by the ditto fluid. [LAUGHTER] And here’s the best part: one day I was rooting around in the closet looking for something and I found the extra tiles in a box, and the side of the box said “reinforced with long-lasting asbestos.” [LAUGHTER] So the ditto fluid was eating through asbestos-lined tile, but that’s how strong that stuff is. So yeah, we all enjoyed the smell of that stuff back in the day.

Linda: Yeah, yeah. I guess it’s a good thing for all of us they invented something else, like copying machines. So anyway, I started doing that at UCLA. And then that turned into a booklet of sorts. And then I was at UC Riverside, and I was writing books there. And I sort of revised it every couple of years. And I was also writing these with my master teaching fellows. So we were doing that. And then I came to Vanderbilt, and I decided, well, I’m going to do this pretty much on my own; I’ll get some help from my master teaching fellows. But anyway, it turned into an actual book. I mean, it turned into a happy monster. And I was very pleased with it. Well, along about ’94-’95, my husband recommended that I turn it into a regular book and talk to a publisher about it. So anyway, I said, “Oh, great idea. Great idea,” and just sort of didn’t think about it much. Then in 1996, he died. And I thought, “Well, how am I going to pull myself through?” I bet it would be a great idea, and a great tribute to him, if I took Teaching at its Best, the Vanderbilt edition, and turned that into a general book. So I decided to do that, and it kept my mind off of bad things. And it turned into Teaching at its Best, the first edition. That’s why I dedicated the book to him, by the way, because it really was his inspiration that got me to do it. And so anyway, tribute to him. So that’s where the first edition came from. I mean, it really grew out of tragedy. But it’s been a comedy ever since, right? [LAUGHTER] So anyway, it’s been a wonderful thing.

John: And it’s been a great resource.

Rebecca: It’s interesting that it pulled you through, but then has pulled many teachers through. [LAUGHTER]

Linda: And I’ve gotten such feedback from faculty members who said, “You saved my lunch,” you know, if they were really in big trouble, and some of them said, “I was in big trouble with my teaching and you got me tenure.” Yeah, right. But anyway, the book helped a lot of people. And I guess maybe something in me when I first published this book said, “Gee, it would really be great to be the next Wilbert McKeachie,” right? …which is a very pretentious thing to think. But then, when they wanted the second edition, I was thinking, “Hey, maybe I’m on the road to something.” And then there was a third, and then there was the fourth. And it didn’t get any easier to write the subsequent editions, really; it was just a matter of keeping up with the literature. And so right now, I’m off into another corner of the world. So I just didn’t want to immerse myself in that again.

John: So that brings us to the question of what is new in the fifth edition?

Todd: Well, that’s my question. I’ve known Linda for the longest time. By the way, I do want to mention, before we go on, I can’t remember, Linda, if it’s been that long ago, but it might have been the second edition when, at POD, I said, “You need to do a second edition of this book” …or second or third. But I was using the book. I mean, I learned so much from it. So for the new edition: number one, of course, the research has been updated, only because the research is always changing, and it had been a few years. So that’s number one. In terms of changing the book, though, we only have a leeway of about 10,000 words. Now, for those out there listening, 10,000 words sounds like a lot of words until you’ve got a 200,000-word book; it was about 190. And they said you can’t go over 200, because the book just gets too big then. So it is 10,000 words longer than it was; in fact, I think it’s 10,003 words longer. So it’s right in there. [LAUGHTER]

Rebecca: So you snuck an extra 3 words in.

Todd: It could have been a squeeze to put three words in there. And it’s always hilarious, because when they say there’s just a few too many words, I just start hyphenating things, so yeah, it kind of all works. [LAUGHTER] You can do “can you” as just a hyphenated word. It works. [LAUGHTER] Also, terminology does change, and I find this fascinating. One of the things I love about writing books is the learning. I mean, Linda, it’s the same thing: as we write, we read a ton of stuff. And as we read stuff, we learn stuff. So this one in particular, for example: I grew up with PBL as problem-based learning. And I had done workshops on it, I had worked on everything else, but I hadn’t looked at it for quite a while. And in this particular book, as I started looking at PBL, I couldn’t find anything on problem-based learning. And it was fascinating, because I was doing some digging, and then I called Claire Major, who was an early person who had a grant on problem-based learning, and everything I ran into was from about 2002; after that it just started to drop off a little bit, and there was some, but it started to tail off. And when I talked to Claire, she said, “Oh, yeah, I used to do quite a bit about that; it was back around 2002-2003.” And the reason I’m saying this is, now, every time I saw the letters PBL, it was project-based learning. And project-based learning sounds a lot like problem-based learning, but they’re different concepts. So anyway, going through and updating some of the terminology so it’s consistent with what’s being done right now, that has changed. There is now a chapter on inclusive teaching, because over the last three or four years, we finally realized that there’s a whole lot of individuals who haven’t been successful in higher education, partly because of the way we teach.
And so I’ve been making an argument for a few years now that teaching and learning, the classroom situation, has always really been built for fast-talking, risk-taking extroverts. And we’ve suddenly realized that if you’re not a fast-talking, risk-taking extrovert, you may not get a chance to participate in the classroom and other settings. So I looked at some different things with inclusive teaching. There’s a whole other chapter on that. And then just the language throughout: we talk a little differently now, even over the last three or four years, than we did five, six years ago. I was pretty surprised by that, but there are some pretty significant changes in language. So the book has a slightly different tone in language, and those are the biggest changes. Oh, I should say, before we move on, one of the biggest other changes, and I did this one: the learning styles section had changed significantly from the previous edition. Linda had pointed out in there that there was no longer a section on learning styles. And I put learning styles right back in there. I told Linda, and she gasped just a little bit. And then I explained that I put it back in there, and then said exactly how terrible it is to teach according to learning styles, because it’s the myth that will not die. So that’s back in there.

Linda: People love it. I know. [LAUGHTER]

John: We have that issue all the time, students come in believing in them and say, “Well, I can’t learn from reading because I’m a visual learner.” And I say “Well, fortunately, you use your eyes to read,” and then I’ll get them some citations.

Todd: Well, I’ll tell you, before we move on, these are the types of things we learned. I couldn’t figure out why the thing refuses to die. What is it that’s really doing this? Because other myths we’ve been able to debunk. And part of the reason is licensing exams: when you are in pre-service and you want to become a teacher, a large portion of the exams you take to become a teacher have learning styles questions on them. So you have to answer about visual learners and auditory learners and kinesthetic. And so until we get those out of teacher education programs, we’re teaching teachers to believe this. So anyway, there you go. Public service announcement. Be careful about meshing. And if you don’t know what meshing is, look it up and then stop it. [LAUGHTER]

John: We have had guests on the podcast who mentioned learning styles, and then we edit them out and explain to them later why we edit out any reference to that. And I think most of them were in education, either as instructors, or they’ve been working as secondary teachers. It is a pretty pervasive myth. In fact, Michelle Miller and Kristen Betts, together with some other people, did a survey, and that was the most commonly believed myth about teaching and learning. It was done through OLC a few years back, about three or four years ago.

Todd: Yeah, I saw that survey. Yes, it’s pretty amazing. Michelle’s an amazing person.

Rebecca: The experience of the pandemic has had a fairly large impact on how our classes are taught. Can you talk a little bit, Todd, about how this is reflected in this new edition?

Todd: Things have changed pretty significantly because of the pandemic. There’s a couple of things going on. Again, the inclusive teaching and learning, which I’ve already commented on, is really different now. And it’s interesting, because it goes back to the 1960s. We’ve known, for instance, that African Americans tend to flunk at twice the rate of Caucasians on large machine-scored multiple-choice exams. So we know it’s not the teaching, and we know it’s not the grades; it has to be something else. And it turns out that when you put students into groups, those differences start to disappear. So, even more so the last couple of years, it’s a lot of engaged learning, active learning. I’m still going to pitch my stuff that I’ve been ranting and raving about for years: there’s no data out there that says that lecturing is bad. What the data says is that if you add active and engaged learning to lecture, then you have much better outcomes than lecture alone. But we’re learning about those types of things in terms of active and engaged learning, how to pair it with and mix it with other strategies that work, looking at distance education in terms of systems and how we can use technology. So a quick example: I used to have a review session before exams. And oftentimes, it’s hard to find a place on campus to have that, so you might be in a room off in one hall or the library or something. And if the exam was on Monday, I’d have the review session at, like, six o’clock, seven o’clock on a Sunday night. And there were students who couldn’t make it. I would simply say, you can get notes from someone else. And we’ve known for the longest time that if a student misses class, getting notes from somebody else doesn’t work. Well, now I do review sessions on Zoom: we don’t have to worry about finding a place to park, we don’t have to worry about some students finding babysitters or being at work, and it’s recorded, so they get the exact same thing.
So things like Zoom have really changed teaching, in the sense that you can capture the essence of the experience of teaching and use it for others, and it has helped with some equity issues. You can’t do it all the time, and teaching over Zoom is different than face to face. But there are now ways of using different technologies and different modalities to help teach in ways that were not really used before the pandemic.

John: Speaking of that, during the pandemic, there was a period of rapid expansion in both the variety of edtech tools available and in teaching modalities themselves. The description of your book indicates that you address useful educational technology and what is a waste of time. Could you give us an example of both some useful technologies and some that are not so useful? And also, perhaps, a reaction to the spread of bichronous and HyFlex instruction?

Linda: Yeah, I’ll take this one. And I’m drawing a lot of stuff from another book that I co-authored with Ludwika Goodson. We were writing Online Teaching at its Best, okay. She was an instructional designer, and I came from teaching and learning, and we put our literatures together. And we were talking about modalities a great deal, especially in the second edition, with the pandemic. Well, one thing I found out, not only from reading but also from watching this happen, was that this HyFlex, or bichronous, whatever you want to call it, is a bust. If there ever was a modality that’s a bad idea, it’s that one, even though administrators love it, because students can choose whether to come to class and do the things they would do in class, or to attend class remotely. Well, yeah, it sounds like, “Oh, yeah, that could be good.” But the technological problems, and then the social problems, especially the in-class social problems, are enormous. In-class social problems like small group work: how do you hear what’s going on in the classroom over this low roar of small groups? So how can you help? The students who are learning remotely, what can they do? Now, the way this was invented, by the way, was for a small graduate class, and then, okay, that makes sense, because you’re only dealing with six students in this room and six students who are remote. But other than that, it’s so bad: the logistics, the sound logistics, the coordination that the instructor has to maintain, the attempt at being fair to both groups, at bringing in both groups, when the groups can’t even hear each other well. Now, if we had Hollywood-level equipment in our classrooms, we might be able to make this work a little better, but we don’t, and we’re never going to have that. So there are just a lot of technological and social reasons why HyFlex, that’s what I called it in Online Teaching at its Best, what it was called at the time, is a complete bust.
Now, not to be confused with hybrid or blended learning, which we found has worked exceedingly well. So bringing in some technology, but into a face-to-face environment, with that being the base of the class. Now, remote’s nice, but you might not want to do remote all the time for all things. It’s not quite the next best thing to being there. But it’s something, and as long as you don’t just stand there and stare at the camera and lecture for an hour. You’ll get complaints about that quickly. And particularly with students today, when they really need to be actively involved, actively engaged. So yeah, sure, fine, talk for three minutes, maybe even push it for five, but then give them something to do. And you really, really must in remote settings, because otherwise, you’re just some talking head on television.

Todd: I agree completely. In fact, it was funny because I happen to have a digital copy of the book here. And so I hit Ctrl-F and I typed in HyFlex, and there’s one comment in the preface that says there are many different formats out there, and then, I will tell the listeners, if you’re expecting to learn about HyFlex, the word never shows up again in the book. [LAUGHTER] So, it’s not in there. I mean, you look at the literature that’s out there, and I think it’s fair to say that maybe there are people who can do it. I haven’t really seen it done well, and I think Linda’s saying she hasn’t either. And it’s so difficult, especially for a book like this, that’s not what we’re all about. I mean, again, even if it works well, which I’d love to hear, it would be a very advanced book, and that’s not what this is. We do have a lot in there about technology in terms of edtech tools, though. There are those in there. I would just say real quickly, for instance, Padlet’s one of my favorites, I’ll throw that out there. I like Padlet a lot. But there are tools out there if you want to do a gallery walk. For instance, if you happen to be in a face-to-face course, you’d set up maybe four stations with big sheets of paper, you put your students into groups, and then they walk from sheet of paper to sheet of paper, and they move around the room. And they can do what’s called a gallery walk. You can do the same thing online with a Jamboard: you can set up Jamboards so that there are different pages, and then each group is on a page. And then when you say it’s time to shift pages, they can shift pages. So I’ve done gallery walks, and it’s worked well. I’ve used Padlet for brainstorming. And one of the things I love about Padlet, I have to say, is if you are doing some digital teaching, you can watch what each group is developing on the page for all groups at the same time. I can’t hear all groups at the same time when I walk around the room.
So there are certainly some technologies coming out that can really do things well. There are also things that don’t work very well, though. And I think one of the things you want to keep in mind is just learning theory. Does the technology you’re using advance students, potentially, through learning theory? Does it help with repetition? Does it help with attention? Linda was just mentioning attention: if you lecture too long, you lose their attention. If you do something ridiculously simple or… I was gonna say stupid, but that sounds rude. But if we do something as a small group that makes no sense, you don’t get their attention either. So, using clickers: I have to say, I watched a faculty member one time because they were touted as a person who was very engaging. And this was at a medical school, so I really wanted to see this. And the person used clickers, but used them in a way that asked the students a question, they responded, and the instructor looked up at the board and said, “Here’s how you responded, let’s move on.” And then moved on to the next thing. And about five minutes later gave another question, said, “How do you respond?” and they clicked the clicker, and then they moved on again. That had no value at all, and in fact, there was no actual interaction there. So afterwards, I said, “Can’t you just ask a rhetorical question and just move on?” We’ve got to be careful not to use technology just because it’s being used; it should advance the learning process.

John: However, clickers can be effective if they’re combined with peer discussion and some feedback and some just-in-time teaching. If they’re just used to get responses that are ignored, it really doesn’t align with any evidence-based practice or anything we know about teaching and learning. But those peer discussions can be useful, and there’s a lot of research that shows that they do result in longer-term knowledge retention when they’re used correctly, but often they’re not.

Todd: Right. And I think that’s a really good point. I’m glad you said that, because Eric Mazur and his concept tests, to a large extent, that’s where active and engaged learning really took off. And those are clicker questions. And they can be used as great tools, but again, only if you’re using them for the right reason, which is what you just said. My comment is, there’s technology out there that is a waste of time and not a good thing to have, because it’s just not being used in a way that’s conducive to learning. So good point, that’s fair.

Rebecca: Can you talk a little bit about who the audience of the book is?

Linda: Sure. It’s actually for anybody who teaches students older than children, I suppose, because it isn’t really designed for teaching children. But other than that, it’s really for people who teach but don’t have the time to read a book. The nice thing about Teaching at its Best is you can go to the table of contents, you can go to the index, and you can find exactly what you need for your next class. And it’s very oriented towards how-to, so it could be for beginners or for experienced people who simply haven’t tried something specific before, or want a twist on it, or just want some inspiration. Because there are a lot of different teaching techniques in there. And they’re all oriented towards student engagement, every single one of them. But I wanted to comment, too, on just how the job of instructor or professor has changed over the past, I don’t know, 40 years, I suppose. I know when I started teaching it was a completely different job. And I started teaching in 1975, when I was 12, of course… no, I was young to start teaching because I was 25, and there I was, 180 students in front of me. So, oops, my goodness, what have I done? But that’s exactly what I wanted to do. But you’d go in there, you’d lecture, and you’d walk out. You were in complete control of everything. Like, you might throw out a question and you might get a discussion going, but it wasn’t considered to be essential. In fact, there were two teaching techniques back then: there was lecture, and there was discussion. And nobody knew how to do discussion. Now, I had to find out a few things about it when I was doing TA training, because TAs were supposed to be running discussions. But there wasn’t a lot out there. Thank God for Wilbert McKeachie’s book Teaching Tips, because that was about the only source out there you could go to. So anyway, but now the job, I mean, oh, it’s mind-boggling what faculty are now expected to do. And they are supposed to… like, learning outcomes.
Okay, I love learning outcomes. They’re wonderful. But I didn’t have to do that when I started, I just had to talk about my subject, which I dearly loved. And so, that was nothing. But you’ve got learning outcomes. So you’ve got to be like, a course designer, you have to deal with a student’s mental health problems, right? It’s part of the job, and you’re expected to respond to them. You’re supposed to give them career counseling in careers that you might not know much about, and possibly for good reason, because you’re in your own career. It’s so time consuming, not to mention fair use, oh, yes, fair use has changed, fair use has changed radically. And when you’re dealing with anything online, the rules are totally different. And you’re highly restricted as to what you can use, what you could do. When you’re in a face-to-face classroom, it’s a little bit easier. So yeah, so you got to be a copyright lawyer to stay out of trouble. And then you get involved in accreditation, you get involved in that kind of assessment. So you have to all of a sudden be totally involved in what your program is doing, what your major is doing, where it’s headed. There’s just too much to do. And there are more and more committees and oh, there’s a lot of time wasted in committees. Of course, you’re supposed to publish at the same time and make presentations at conferences. It was like that back then, too. But now, the expectations are higher, and it’s on top of more time in teaching, and more courses. I was teaching four courses a year, and you can’t find that kind of job anymore.

Rebecca: So Linda, you’re saying the animal shelter is going really well now?

Linda: Yes. Yes. Yes. Yes.

Todd: That’s hilarious. Well, I want to point out too, and I think Linda said it very, very well, that we are expected to do things we never had to do before, never worried about before. And the fair use one is great, because when I first started teaching, and I’ve been teaching for 36 years, when I first started teaching, you’d videotape something off TV and show it in class and then put it on the shelf. And I knew people who showed the same video for 10 years. Right now, you’d better be careful about showing the same video for 10 years. But these are things we need to know. I would say also, by the way, this is a really good book for administrators, anybody who would like to give guidance to faculty members, or better understand teaching and learning so that when promotion and tenure comes along, you get a sense of this. And so if you’re saying to the faculty that they should use a variety of teaching strategies, it’s not a bad idea to know a variety of teaching strategies. And so I think it’s good for administrators as well, and graduate students. But I want to take a second and tell you one of the reviews of the book, which, I guess, came in just yesterday or the day before from Dan Levy. He’s a senior lecturer at Harvard University. And what he put was: “Teaching at its Best is an absolute gem. Whether you are new to teaching in higher education, or have been doing it for a while, you will find this book’s evidence-based advice on a wide range of teaching issues to be very helpful. The style is engaging and the breadth is impressive. If you want to teach at your best, you should read Teaching at its Best.” And I love what he put in there, because it doesn’t matter if you’re a new teacher or you’ve been doing it for a while, this book’s got a lot of stuff in it.

John: And Dan has been a guest on our podcast, and he’s also an economist, which is another thing in his favor. [LAUGHTER]

Todd: That is good.

John: I do want to comment on Linda’s observation about how teaching has changed, because I came in with a very similar type of experience. I was told by the chair of the department not to waste a lot of time on teaching and to focus primarily on research, because that’s what’s most important, and that’s the only thing that’s really ultimately valued here or elsewhere in the job market. But then what happened is a few people started reading the literature on how we learn. And then they started writing these books about it. [LAUGHTER] And these books encouraged us to do things like retrieval practice and low-stakes tests, and to provide lots of feedback to students. So those people… [LAUGHTER]

Rebecca: I don’t know any of them.

John: …but as a result of that many people started changing the way they teach in response to this. So some of it is you brought this on to all of us by sharing… [LAUGHTER]

Linda: I apologize.

Todd: Sorry about that.

Linda: I apologize.

Todd: We apologize. And, you know, I will say too… so yeah, sorry. Sorry about doing that. But I’m glad you said that.

Linda: We made the job harder, didn’t we?

Todd: We did, but, you know, to be fair to Linda and me as well, I still remember a faculty member calling me. It must have been about 20 years ago, and I had just started doing a little bit of faculty development. She was crying; she had given her first assignment, a paper. And she said, “I’m sitting here with a stack of papers, and I don’t know how to grade them.” And it got me thinking a little bit: how many of the aspects of the job that we’re required to do were we trained to do? And that’s the stuff that Linda was mentioning as well, is nobody taught me. I’m an industrial psychologist. And so nobody taught me the strategies for delivering information to a group of 200 people. Nobody taught me how to grade essay tests. Nobody taught me how to grade presentations. I didn’t know about fair use and how I could use things. I mean, you go through and list all of the things that you’re required to do, and then look at all the things you were trained to do. And this is tough. And that changed. So I have one quick one I’ll mention: I was hired as an adjunct faculty member before I got my first tenure-track job. And I was teaching 4-4, so I had four classes in the fall, four classes in the spring. And about halfway through the spring, I ran into the department chair, and I was interested to see if I was going to be able to come back, and I said, “Hey, Mike, how am I doing?” And this was at Central Michigan University, a pretty good-sized school. He said, “You were fantastic.” And I said, “Excellent. What have you heard?” He said, “Absolutely nothing.” So when it comes to teaching, what I learned was: research, you had to do well, and teaching, you had to not do terribly. And that is what you were mentioning has changed: now you’re kind of expected to do teaching well, too.

Rebecca: And there’s a lot more research in the area now too. So sometimes it’s hard to keep up on it. So books like this can be really helpful in providing a lot of that research in one place.

John: And both of you have written many good books that have guided many, many faculty in their careers, and eliminated that gap between what we’re trained to do and what we actually have to do.

Rebecca: So of course, we want to know when we can have this book in our hands.

Todd: Good news for this book, which is exciting because we really cranked away on this thing: it’s listed on Amazon as being due on April 25, but it actually went to press on January 23. So it’s already out, about three months ahead of schedule.

John: Excellent. We’re looking forward to it. I’ve had my copy on preorder since I saw a tweet about this. I think it was your tweet, Todd, a while back. And I’m very much looking forward to receiving a copy of it.

Todd: Excellent. We’re looking forward to people being able to benefit from copies of it.

Rebecca: So we always wrap up by asking what’s next.

Todd: It’s hard to tell what’s next because I’m exhausted from what’s been. [LAUGHTER] Ever moving forward, I’m working on and just finishing a book right now that’s to help faculty in the first year of their teaching. So it’s basically off to a good start: it’s what specifically faculty should do in the first year of getting a teaching position. And aside from that, probably working on my next jigsaw puzzle. I like to do the great big jigsaw puzzles. And so I just finished one that had 33,600 pieces. It is five feet…

Rebecca: Did you say 33,000 pieces?

Todd: No, I said 33,600 pieces. It was the 600 that…

Rebecca: Oh, ok.

Todd: …was difficult. [LAUGHTER]

Rebecca: Yeah.

Todd: When the puzzle is done, it has standard-sized pieces, and it is five feet by 20 feet. So I just enjoy massively putting something together. It’s very challenging. So quite frankly, for those of you listening to this: if you imagine 33,600 puzzle pieces, that’s about as many studies as Linda and I have read to put this book together. [LAUGHTER]

Linda: Nothing to it. [LAUGHTER]

Todd: So that’s it for me. [LAUGHTER] Linda, what are you up to these days?

Linda: Oh, well, I live in la la land. So I’m still doing workshops and webinars and things like that, mostly on my books of various kinds, various teaching topics. But I think what I want to do is take up pastels and charcoals again. My father was a commercial artist, and so he got me into pastels and charcoals when I was in high school. And then I dropped it to go off to college. Well, I want to get back into it, in addition to working at the animal shelter. I know. It’s la la land, and I wish la la land on everybody that I like. [LAUGHTER] I hope you all go to la la land and enjoy being a four-year-old all over again, because that’s the way I feel. I adapted to retirement in about 24 hours. That’s pushing it… you know, it’s more like four. But anyway, I slept on it. [LAUGHTER] That was the end of it. But I know I eased into it. I eased into it. I was still writing. I was still doing, especially before the pandemic, a lot of speaking. So then the pandemic hit and it just turned into online everything. And now I’m back on the road again, to a certain extent. I love it. So anyway, it’s a nice balance. So yeah, I wish you all la la land too.

Todd: That’s great.

Rebecca: That’s something to aspire to.

Todd: Yeah, it is. But you know, since you mentioned the speaking things, I just have to do the quick plug here. Linda, I think you and I, years and years ago, were joking around at POD about who would be the first one to get to the 50 states and have done a presentation in every state. And so I gotta tell you, I’m not even sure where you’re at in the mix, but I am at 49 states. And if any of your listeners are in North Dakota, [LAUGHTER] I could certainly use a phone call from North Dakota.

Linda: Well, I want to go to Vermont. I have not been to Vermont…

Todd: Oh, you haven’t.

Linda: …to give a presentation. So I would enjoy that. But I’ll go to Hawaii. I’ll do anything in Hawaii for you. Absolutely anything. [LAUGHTER] I’ll do gardening, [LAUGHTER] I’ll do dishes, your laundry. I don’t care.

Todd: That is good. Yeah, Linda and I had this gig. It was a long, long time ago. And I don’t know, it must have been 20 years ago we talked about it even. And there were some rules, too. You had to be invited. And there had to be some kind of an honorarium, or just… I mean, it didn’t have to be much, but the concept was you just couldn’t show up at a state and start talking. [LAUGHTER] Otherwise, we’d have both been done a long time ago. But yeah,

Linda: Yeah.

Todd: … it was fun. This is the way nerds have fun. [LAUGHTER]

John: Well, that’s a competition that’s benefited a lot of people over the years.

Rebecca: Well, thanks so much for joining us. It’s great to see both of you again, and we look forward to seeing your new book.

Linda: Thank you for this opportunity. It was a pleasure.

Todd: It was so much fun. Thank you.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

[MUSIC]

274. ChatGPT

Since its release in November 2022, ChatGPT has been the focus of a great deal of discussion and concern in higher ed. In this episode, Robert Cummings and Marc Watkins join us to discuss how to prepare students for a future in which AI tools will become increasingly prevalent in their lives.

Robert is the Executive Director of Academic Innovation, an Associate Professor of Writing and Rhetoric, and the Director of the Interdisciplinary Minor in Digital Media Studies at the University of Mississippi. He is the author of Lazy Virtues: Teaching Writing in the Age of Wikipedia and is the co-editor of Wiki Writing: Collaborative Learning in the College Classroom. Marc Watkins is a Lecturer in the Department of Writing and Rhetoric at the University of Mississippi. He co-chairs an AI working group within his department and is a WOW Fellow, where he leads a faculty learning community about AI’s impact on education. He’s been awarded a Pushcart Prize for his writing and a Blackboard Catalyst Award for teaching and learning.

Show Notes

Transcript

John: Since its release in November 2022, ChatGPT has been the focus of a great deal of discussion and concern in higher ed. In this episode we discuss how to prepare students for a future in which AI tools will become increasingly prevalent in their lives.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.

[MUSIC]

John: Our guests today are Robert Cummings and Marc Watkins. Robert is the Executive Director of Academic Innovation, an Associate Professor of Writing and Rhetoric, and the Director of the Interdisciplinary Minor in Digital Media Studies at the University of Mississippi. He is the author of Lazy Virtues: Teaching Writing in the Age of Wikipedia and is the co-editor of Wiki Writing: Collaborative Learning in the College Classroom. Marc Watkins is a Lecturer in the Department of Writing and Rhetoric at the University of Mississippi. He co-chairs an AI working group within his department and is a WOW Fellow, where he leads a faculty learning community about AI’s impact on education. He’s been awarded a Pushcart Prize for his writing and a Blackboard Catalyst Award for teaching and learning. Welcome, Robert and Mark.

Robert: Thank you.

Marc: Thank you.

Rebecca: Today’s teas are:… Marc, are you drinking tea?

Marc: My hands are shaking from caffeine, there’s so much caffeine inside of me. I started off today with some… I think it’s Twinings Christmas spice, which is really popular around this house since I got that in my stocking. My wife is upset because I’m a two-bag-per-cup person. And she’s saying, “You’ve got to stop that,” so she cuts me off around noon [LAUGHTER] just to let me, for lack of a better word, dry out from caffeine withdrawal.

Rebecca: Well, it’s a great flavored tea. I like that one too.

John: It is.

Rebecca: I could see why you would double bag it.

Marc: I do love it.

Rebecca: How about you, Robert?

Robert: I’m drinking an English black tea. A replacement. Normally my tea is Barry’s tea, which is an Irish tea….

Rebecca: Yeah.

Robert: …but I’m out, so I had to go with the Tetley’s English black tea.

Rebecca: Oh, it’s never fun when you get to go to your second string. [LAUGHTER]

John: And I am drinking a ginger peach black tea from the Republic of Tea.

Rebecca: Oh, an old favorite, John.

John: It is.

Rebecca: I’m back to one of my new favorites, the Hunan Jig, which I can’t say with a straight face. [LAUGHTER]

John: We’ve invited you here today to discuss ChatGPT. We’ve seen lots of tweets, blog posts, and podcasts in which you both discuss this artificial intelligence writing application. Could you tell us a little bit about this tool, where it came from, and what it does?

Marc: I guess I’ll go ahead and start. I am not a computer science person, I’m just a writing faculty member. But we did kind of get a little bit of a heads-up about this in May, when GPT3, which is the precursor to ChatGPT, was made publicly available. It was in a private beta for about a year and a half while it was being developed, and then went public in May. And I kind of logged in, through some friends of mine on social media, to start checking it out and seeing what was going on with it. Bob was really deep into AI with the SouthEast conference. You were at several AI conferences during the summer as well, Bob. It is a text synthesizer; it’s based off of so much text just scraped from the internet, and trained with 175 billion parameters. It’s just sort of shocking to think about the fact that this can now be accessed through your cell phone, if you want to do it on your actual smartphone, or a computer browser. But it is something that’s here. It’s something that functions fairly well, though it makes things up sometimes. Sometimes it can be really very thoughtful, though, in its actual output. It’s very important to keep in mind, though, that AI is more like a marketing term in this case. There’s no thinking, there’s no reasoning behind it. It can’t explain any of its choices. We use the term writing when we talk about it, but really what it is, is just text generating. When you think about writing, that’s the whole process of thinking and going through, being able to explain your choices and that sort of thing. So it’s a very, very big math engine, with a lot of processing power behind it.

Robert: I completely agree with everything Marc’s saying. The way I think about it is, and I believe it’s true, Marc, as far as we know, it’s still using GPT3, so it’s really the same tool as Playground. I think it’s really interesting that when OpenAI shifted from their earlier iteration of this technology, which was Playground, and there were some other spin-offs from that as well, but that was basically a search format, where you had an entry box, and you would enter a piece of text and then you would get a response… when they shifted it to chat, it seemed to really take it to the next level in terms of the attention that it was gathering. And I think it’s rhetorically significant to think about that, because of the personalization, perhaps, the idea that you had an individual conversation partner. I think it’s exceptionally cute the way that they have the text scroll in ChatGPT, so as to make it look like the AI is “thinking,” rather than pushing the text out when it’s immediately available. I think all of that reminds me a little bit of Eliza, which was one of the first sort of AI games you could play, where you play the game to try to guess whether or not there was another person on the other side of the chat box. It reminds me a bit of that. But I can certainly see why placing this technology inside of a chat window makes it so much more accessible and perhaps even more engaging than what we previously had. But the underlying technology, as far as I can see, is still GPT3, and it hasn’t necessarily changed significantly, except for this mode of access.

Rebecca: How long has this tool been learning how to write or gathering content?

Marc: Well, that’s a great question. So it really just descends from GPT3. And again, we don’t really know this, because OpenAI isn’t exactly open, despite their name. The training data for this model, for ChatGPT, cuts off about two years ago. And of course, ChatGPT was launched last year at the end of November. So it’s very recent, pretty up to date with some of that information too. You can always kind of check the language model and see how much it actually, as we say, knows about the world by what recent events it can accurately describe. It’s really interesting how quickly people have freaked out about this. And I think Bob’s very right that this slight rhetorical change in the user interface, to a chat that suddenly people are able to actually interact with, set off this moral panic in education. You guys know this through the state of New York: New York City schools have now tried to ban it in the actual classroom, which I think is not going to work out very well. But it is certainly a theme we’re seeing, not just in K through 12, but also in higher ed too… seeing people talk about going back to blue books, going to AI proctoring services, which are just kind of some of the most regressive things you could possibly imagine. And I don’t want to knock people for doing this, because I know that they’re frightened, and they probably have good reason to be frightened too, because it’s disrupting their practice. It’s also, hopefully, the tail end of COVID, which has left us all completely without our capacity to deal with this.
But I do want everyone to keep in mind too, and Bob’s really a great resource on this from his work with Wikipedia, that your first impression of a tool matters, especially if you’re a young person using this and you have someone in authority telling you what a tool is. If you tell them that that tool is there to cheat, or is there to destroy their writing process or learning process, that is going to be cemented in them for a very long time. And it’s going to be very hard to dissuade people of that too. So really, what I’ve just tried to do is caution people about the fact that we need to be not so panicked about that. That’s much easier said than done.

Robert: Marc and I started giving a talk on our campus through our Center for Teaching and Learning and our academic innovations group in August. And we’ve just sort of updated it as we’re invited to continue to give the talk. But in it, we offer a couple of different ways for the faculty to think about how this is going to impact their teaching. And one of the things that I offered back in August, at least I think it still holds true, is to think about writing to learn and or writing to report learning. And so writing to learn is going to mean now writing alongside AI tools. And writing to report learning is going to be a lot trickier, depending on what types of questions you ask. So I think it’s going to be a situation where, and I’ve already seen some of this work in the POD community, it’s going to be a situation where writing to report learning has to maybe change gears a bit and think about different types of questions to ask. And the types of questions will be those that are not easily replicated, or answered in a general knowledge sort of way, but they’re going to lean on specific things that you, as instructor, think are going to be valuable in demonstrating learning, but also not necessarily part of a general knowledge base. So, for instance, if you’re a student in my class, and we’ve had lots of discussions about… I don’t know… quantum computing, and in the certain discussion sessions, Marc threw out an idea about quantum computing that was specific. So what I might do on my test is I might cite that as a specific example and remind students that we discussed that in class and then ask them to write a question in response to parts of that class discussion. So that way, I could be touching base with something that’s not generally replicable and easily accessible to AI. But I can also ask a question that’s going to ask my students to demonstrate knowledge about general concepts. 
And so, if both elements are there, then I probably know that my short answer question is authentically answered by my students. If some are not, then I might have questions. So I think it’s gonna be about tweaking what we’re doing and not abandoning what we’re doing. But it’s really a tough moment right now. Because, as soon as we say one thing about these technologies, well then they iterate and they evolve. It’s just a really competitive landscape for these tool developers. And they’re all trying to figure out a way to develop competitive advantage. And so they have to distinguish themselves from their competitors. And we can’t predict what ways that they will do that. So it’s going to be a while before, I think, this calms down for writing faculty specifically and for higher education faculty generally, because, of course, writing is central to every discipline and what we do, or at least that’s my bias.

Rebecca: So I’m not a writing faculty member. I’m a designer and a new media artist. And to me, it seems like something that could be fun to play with, which is maybe a counter to how some folks might respond to something like this. Are there ways that you can see a tool like this being useful in helping or advancing learning?

Robert: So, we’ve talked about this a bit, and I really think that the general shape of the response, in writing classes specifically, is about identifying specific tools for specific writing purposes in specific stages. So if we’re in the invention stage, and we’re engaging a topic and you’re trying to decide what to write about, maybe dialoguing with an OpenAI tool through some general questions is going to trigger some things that you’re going to think about and follow up on. It could be great. You know, Marc was one of the first people to point out that for folks who have writer’s block, this is a real godsend, or could be. It really helps get the wheels turning. So we could use it in invention, we can use it in revision, we can use it to find sources once we already have our ideas. So identify specific AI iterations for specific purposes inside of a larger project. I think that’s a method that’s going to work and is going to be something that gets toward that goal that we like to state in our AI Task Force on campus here, which is helping students learn to work alongside AI.

Marc: Yeah, that’s definitely how I feel about it too, and to kind of echo what Bob’s saying, there’s a lot more you can do with a tool like this than just generate text. And I think that kind of gets lost in this hype that you see around ChatGPT and everything else. I mentioned before that Whisper was another neural network that they launched quietly at the end of September or start of October of last year, which works by actually uploading speech. It’s multilingual, so you can use it almost like a universal translator in some ways. But the thing that’s outstanding about it is when you use it with the old GPT-3 Playground… I say the old GPT Playground like it’s not something that’s still useful right now… it uploads the entire transcript of a recording into the Playground, so you can actually feed it into the AI. If you think about this from a teaching perspective, especially for students who have to deal with lecture and want a way to organize their notes in some way, shape, or form, they’re able to do that by simply issuing a simple command to summarize their notes or organize them. You can synthesize it with your past notes, even come up with test questions for an essay you need to write or an exam you’re going to have. Now from a teaching perspective, as someone who tries to be as student-centric as possible, that’s great, that’s wonderful. I also realize those people who are still wedded to lecture are probably going to look at this like another moral panic: I don’t want my students to have access to this, because it’s not going to help them with their note-taking skills. I don’t want them to be falling asleep in my class, as if they were staying awake to begin with. So I’m going to ban this technology. 
So we’re going to see lots of little areas of this pop up throughout education. It’s not just going to be within writing, it’s going to be in all different forms and all different ways. I’m right there with you on using this tool to really help you begin to think and to design your own thought process, as you’re going through a writing project; some people are using it for art, some people use it for coding. It’s really up to your imagination how you’d like to do it. The actual area that we’re looking at has a name; I didn’t even know it had a name until we started working with the developers at Fermat. There’s an article from a German university that calls this “beyond generation”: using your own text as the input to an AI and then getting brainstorming ideas, automatic summaries, counterarguments to your own revision notes. They use it also for images and all different other types of generations too. So it’s really out there, and I think ChatGPT is just kind of sucking all the air out of the room, and rightly so: it’s the new thing. It’s what everyone is talking about, but so much has gone on, it really has, in these past few months. The entire fall semester I was emailing Bob like two or three times a week and poor Bob was just like “Just stop emailing me. Okay, we understand. I can’t look at this either. We don’t have time.” But it really was just crazy. It really is.
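The lecture-notes workflow Marc describes, transcribe a recording with Whisper and then prompt a text model over the transcript, can be sketched in a few lines. This is a rough illustration, not the exact Playground workflow from the episode; the client calls and the `whisper-1` model name follow OpenAI’s current Python package, and the prompt wording is our own:

```python
# Sketch of the transcribe-then-prompt workflow Marc describes.
# Assumes the openai package and an API key; details are illustrative.

def build_study_prompt(transcript: str, task: str = "summarize") -> str:
    """Compose the instruction a student might pair with a lecture transcript."""
    tasks = {
        "summarize": "Summarize the key points of this lecture transcript:",
        "organize": "Organize this lecture transcript into an outline:",
        "quiz": "Write five practice exam questions based on this transcript:",
    }
    return f"{tasks[task]}\n\n{transcript}"

def transcribe_lecture(audio_path: str) -> str:
    """Upload a recording to the Whisper endpoint and return its transcript."""
    from openai import OpenAI  # requires OPENAI_API_KEY in the environment
    client = OpenAI()
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text
```

A student would then send `build_study_prompt(transcribe_lecture("lecture.mp3"), "quiz")` to a text model to get practice questions from their own lecture notes.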

John: What are some other ways that this could be used in helping students become more productive in their writing or in their learning?

Marc: It really is going to be up to whatever the individual instructor and the individual student come up with. If your process is already set in stone, like my process is set in stone as a writer, and I think that’s true for most of us as we’ve matured, it’s very difficult to integrate AI into that process. But if you’re young, and you’re just starting out, you’re maturing, that is a very different story. So we’re going to start seeing ways our students use this within their own writing process, their own creative process too, that we haven’t really imagined. And I know that’s one of the reasons why this is so anxiety-producing: because we say that there is a process, and we don’t want to talk about the fact that this new technology can also disrupt that a little bit. I’ll go ahead and segue to Bob, too, because I think he’s talked a little bit about this as well.

Robert: Yeah, one of the things we’ve come together to say in the group that Marc’s co-leading is that we want to encourage our students to use the tools, full stop. Now, we want to help them interpret the usage of those tools. So really being above board and transparent about engaging the tools, using our systems of citation, struggling to cope as they are, but just saying at the beginning: use AI generators in my class. I need to know what writing is yours and what writing is not. But then designing assignments so you encourage limited engagements, which are quickly followed with reflection. So, oh gosh, who was it, Marc, that colleague, I think at NC State, in the business class, where last spring he had students quote-unquote “cheat” with AI?

Marc: Paul Fyfe, yes.

Robert: Yes, thank you. And, in so many words, he basically designed the assignment so that students would have AI write their paper, and almost uniformly they said, “Please, let me just write my paper, because it’d be a lot simpler, and I would like the writing a lot more.” That type of engagement is really helpful, I think, because they were able to fully utilize the AI that they could access, try a bunch of different purposes with it, a bunch of different applications, and then form an opinion about what its strengths and weaknesses were. And they pretty quickly saw its limitations. So, to specifically answer your question, John, I do think it can be helpful with a wide range of tasks. Again, in the invention stage, if I just have an idea, I can pop it in there and ask for more information, and I’ll get more information. Hopefully it will be reliable. But sometimes I’ll get a good deal of information, and it’ll encourage me to keep writing. There are AI tools that are good at finding sources; there are AI tools that will help you shift voice. We’ve seen a lot of people do some fun things with shifting voice, and I can think of a lot of different types of writing assignments where I might try to insert voice, and people would be invited to think about the impact of voice on the message and on the purpose. And let’s not forget… one of the things that irks Marc and myself is that a lot of our friends in the computer science world think of writing as a problem to solve. And we don’t think of writing that way. But, as I said to Marc the other day when we were talking about this, if I’m trying to write an email to my boss in a second language, writing is a problem for me to solve. 
And so Grammarly has proven to us that there are a large number of people in our world who need different levels of literacy in different applications with different purposes, and they’re willing to pay for some additional expertise. So I had tried to design a course to teach in the fall where we were to engage AI tools, specifically in a composition class, and I had to pull the plug on my own proposal because the tools were evolving too quickly. Marc and Marc’s team solved the riddle, because they decided that they could identify the tools on an assignment basis. So it would be a unit within the course. And when they shrank that timeline, they had a better chance that the tools they identified at the beginning of the unit would still be relatively the same by the time they got to the end of the unit. So get a menu or a suite of different AI tools that you want to explore, explore them with your students, give them spaces to reflect, always make sure that you’re validating whatever is being said if you’re going to use it, and then always cite it. Those are the ground rules that we’re thinking about when we’re engaging the different tools. And then, I don’t know, it can be fun.

Marc: You mean writing can be fun? I’ve never heard such things.

Rebecca: It would be incredible. One of the things that I hear you underscoring, related to citations, made me think about the ways I already have students using third-party materials in a design class, much as we use third-party materials when we’re writing a research paper, where we use citations. So we have methods for documenting these things and making it clear to an audience what’s ours and what’s not. It’s not like it’s some brand new kind of thing that we’re trying to do in terms of documenting that or communicating that to someone else. It’s just adapting it a bit, because it’s a slightly different thing that we’re using, a different third-party tool or third-party material. I have my students write copyright documentation for things that they’re doing: what’s the license for the images they’re using that don’t require attribution? I go through the whole list: the fonts they’re using and the licenses those come under. So for me, this seems like an obvious next step, a way that that same process of providing attribution or documentation would work well in this atmosphere.

Robert: I think the challenge, and Marc and I’ve talked about this before, is when you shift from a writing support tool to a writing generation tool. Most of us aren’t thinking about documenting the spell checker in Microsoft Word, because we don’t see that as content that is original in some way, right? But it definitely affects our writing. Nor do we cite Smart Compose, Google’s sentence-completion tool. But how do you know when you’ve gone from Smart Compose providing just a correct way to finish your own thought to Smart Compose giving you a new thought? And that’s an interesting dilemma. If we can just take a wee nip of schadenfreude, it was interesting to see that a machine learning conference recently had to amend its own paper submission guidelines, Marc was pointing this out to me, to say: “If you use AI tools, you can’t submit.” And then they had to try to distinguish between writing generators and writing assistants. And that’s just not an easy thing to do. But it’s just going to involve trust between writers and audiences.

Marc: Yeah, I don’t envy the task of any of our disciplinary conventions trying to do this. We could invest some time in doing this with ChatGPT, or thinking about this, but then it’s not even clear if ChatGPT is going to be the end of the road here. We’re talking about this as just another version of AI and how we would deal with that. I’ve seen some people arguing on social media that a student, or anyone who is using an AI, should then track down the idea that the AI is spitting out. And I think that’s incredibly futile, because it’s trained on the internet; you don’t know how the idea came about. And that’s one of the really big challenges with this type of technology: it breaks the chain of citations that was used to actually, for lack of a better word, generate text. I was going to say to show knowledge, but it can’t really show knowledge; it’s just basically generated an idea, or mimicked an idea. So that really is going to be a huge challenge that we’re going to have to face and think about. It’s going to require a lot of dialogue between ourselves and our students, and also thinking about where we want them to use this technology. I think for right now, if you want to use a language model with your students, or invite them to use it, tell them to reflect on that process, as Bob mentioned earlier. There are some tools out there, LEX is one of them, where you can actually track what was being built in your document with the AI, which will sort of glow and be highlighted. So there are going to be some tools on the market that will do this. 
It is going to be a challenge, though, especially when people start going wild with it, because when you’re working with AI, it just takes a few seconds to generate a thing, and keeping track of that is going to require a great deal of trust with our students. But you really are also going to have to sit down and tell them, “Look, you’re going to have to slow down a little bit, and not let the text generations take over your thinking process and your actual writing process.”

Robert: Speaking a little bit of process, right now I’m working on a project with a colleague in computer science. And we’re looking at that ancient technology, Google Smart Compose. And much to my surprise, I couldn’t find literature where anyone had really spent time looking at the impact of the suggestions on open-ended writing. I did find some research that had been done on shorter writing. So, for instance, there was a project that asked writers to compose captions for images, but I didn’t see anything longer than that. So that’s what we did in the fall: we got 119 participants, and we asked them to write an open-ended response, an essay essentially, a short essay in response to a common prompt. Half of the writers had Google Smart Compose enabled, and half didn’t. And we’re going through the data now to see how the suggestions actually affect writers’ process and product. So we’re looking at the product right now. One of our hypotheses is that the Google Smart Compose participants will have writing that is more similar, because essentially they will be given similar suggestions about how to complete their sentences. And we expect that in the non-Smart-Compose-enabled population we’ll find more lexical and syntactic diversity in the writing products. On the writing process side, we’re creating, as far as I know, new measures to determine whether writers accept suggestions, edit suggestions, or reject suggestions (and we all do some of all three of those usually), and the time spent on each. And so we’re trying to see if there are correlations between the amount of time spent and, again, the length of text and the complexity of text, because if you’re editing something else, you’re probably not thinking about your own ideas and how to bring those forward. But overall, what we’re hoping to suggest… and again, because we’re not able to really see what’s happening in Smart Compose, we’re having to operate with it as a black box. 
What we’re hoping to suggest is that our colleagues in software development start inviting writers into the process of articulating our writing profile. So let’s say, for instance, you might see an iteration in the future of Google smart compose that says, “Hey, I noticed that you’re rejecting absolutely everything we’re sending to you. Do you want to turn this off?” [LAUGHTER]
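The product-side hypothesis Robert describes, that Smart Compose essays will be more lexically similar, can be operationalized with measures as simple as a type-token ratio. A minimal sketch (illustrative only, not the team’s actual analysis code):

```python
import statistics

def type_token_ratio(text: str) -> float:
    """Unique words over total words: a crude lexical-diversity measure."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def diversity_gap(essays_with: list[str], essays_without: list[str]) -> float:
    """Mean diversity of the non-enabled group minus the enabled group.

    A positive gap is consistent with the hypothesis that autocomplete
    suggestions homogenize vocabulary across writers.
    """
    without = statistics.fmean(type_token_ratio(e) for e in essays_without)
    with_ai = statistics.fmean(type_token_ratio(e) for e in essays_with)
    return without - with_ai
```

Real studies would of course use richer measures (syntactic diversity, pairwise essay similarity), but this shows the shape of the comparison.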

Rebecca: Yes. [LAUGHTER]

Robert: Or “I noticed that you’re accepting things very quickly. Would you like for us to increase the amplitude and give you more, more quickly?” Understanding those types of interactions and preferences can help them build profiles, and the profiles can then hopefully make the tools more useful. So, I know that they, of course, do customize suggestions over time. So I know that the tool does grow. I think, John, as you might have said: how long has it been learning to write? Well, these tools learn to write with us. In fact, those are features that Grammarly competes with its competitors on; it’s like, our tool will train up more quickly. At any rate, what does it mean to help students learn to work alongside AI? When it comes to writing, I believe part of what it’s going to mean is helping them understand more quickly what the tool is giving them, what they want, and how they can harness the tool to their purposes. And until the tools are somewhat stable, and until writers are invited into the process of understanding the affordances of the tools and their feature sets, that’s just not possible.

John: Where do you see this moral panic as going? Is this something that’s likely to fade in the near future? And we’ve seen similar things in the past. I’ve been around for a while. I remember reactions to calculators and whether they should be used to allow people to take square roots instead of going through that elaborate tedious process. I remember using card catalogs and using printed indexes for journals to try to find things. And the tools that we have available have allowed us to be a lot more productive. Is it likely that we’ll move to a position where people will accept these tools as being useful productivity tools soon? Or is this something different than those past cases?

Marc: Well, I think the panic has definitely set in right now. And I think we’re going to be in for some waves of hype and panic. We’ve already seen it from last year; I think everyone kind of got a huge dose of it with ChatGPT. But we were already in panic-and-hype mode when we first came across this in May, wondering what this technology was, how it would actually impact our teaching, how it would impact our students too. There’s a lot of talk right now about trying to do AI detection. Most of the software out there is trying to use some form of AI to detect AI. They’re trying to use an older version of GPT called GPT-2 that was open source and openly released before OpenAI decided to sort of lock everything down. Sometimes it will pick up AI-generated text; sometimes it’ll mislabel it. I obviously don’t want to see a faculty member bring a student up on academic dishonesty charges based on a tool that may or may not be correct, based off of that sort of framework. Turnitin is working on a process where they’re going to try to capture more data from students than they already have. If they can capture big enough writing samples, they can then use those to compare your version of your work to an AI’s, or to someone who’s bought a paper from a paper mill or through contract cheating… because, of course, a student’s writing never changes over the course of their academic career. And our writing never changes either. It’s completely silly. We’ve been conditioned, though, when we see a new technology come along, to expect its sort of equivalent to mitigate its impact on our lives. We have this new thing, it’s disruptive. All right, well, give me the other thing that gets rid of it so I don’t have to deal with it. I don’t think we’re going to have that with this. I’m empathetic to people. I know that that’s a really hard thing for them to hear. 
Again, I made the joke too about the New York City school district banning this, but, from their perspective, those people are terrified. I don’t blame them. When we deal with higher education, for the most part, students already have the skill sets that they’re going to be using for the rest of their lives. We’re just refining them and preparing students to go into professional fields. If you’re talking K through 12, where a student doesn’t yet have all the reading or grammatical knowledge they need to be successful and they start using AI, that could be a problem. So I think talking to our students is the best way to establish healthy boundaries, and getting them to understand how they want to use this tool for themselves. Students, as Bob mentioned, and as Paul Fyfe found in his research, are setting their own boundaries with this; they’re figuring out that this is not working the way the marketing hype is telling them it is. So we just have to be conscious of that and keep these conversations going.
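Detectors in the vein Marc describes often score text on statistics such as perplexity under an older model, or “burstiness,” the variation in sentence structure. The toy measure below is not any real detector; it is only a sketch of why such heuristics can mislabel an even-handed human writer as a machine:

```python
import statistics

def burstiness(text: str) -> float:
    """Population std. dev. of sentence lengths in words.

    Detector folklore says human prose varies more than generated prose,
    but plenty of legitimate human writing (abstracts, lab reports)
    scores low on a measure like this, which is the false-positive risk.
    """
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
```

A threshold on a score like this would flag uniformly written human text just as readily as machine output, which is exactly the unreliability Marc warns about.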

Robert: Writing with Wikipedia was my panic moment, or my cultural panic moment. And my response then was much the same as it is now: cool, let’s check it out. And Yochai Benkler has a quote, and I don’t have it exactly right in front of me, but he says something like: all other things being equal, things that are easier to do are more likely to get done. And the second part, he says, is that all other things are never equal. So that was just like the point of Wikipedia, right? People really worried about commons-based peer production and collaborative knowledge building, and inaccuracies and biases, which are still there, creeping their way in and displacing Encyclopedia Britannica and peer-reviewed resources. And they were right to be worried, because Benkler is right: it’s a lot easier to get your information from Wikipedia, and if it’s easier, that’s the way it’s going to come. You can’t do a Google search without pulling up a tile that’s been accessed through Wikipedia. But the good news is that Wikipedia is now known as the good grown-up of the internet, which is funny because the community seemed so fractious and sharp-elbowed at first about who was right in producing a Wikipedia page about Battlestar Galactica. Well, that grew over time, and more and more folks in higher education and more and more experts got involved, and the system improved. It’s uneven, but it is still the world’s largest free resource of knowledge. And because it’s free, because it’s open and very accessible, it enters into our universe of what we know. I think the same thing holds here, right? It’s as easy to use as it is now, and the developers are working on ways to make it easier still. So we’re not going to stop this; we just have to think about ways that we can better understand it and indicate, where we need to, that we’re using it and how we’re using it, for what ends and what purposes. 
And your question, John, I think, was around productivity, or at least you used that word. So, I don’t agree with his essay, and I certainly don’t agree with a lot that he’s done, but Sam Altman, one of the OpenAI co-founders, has this essay whose basic argument is that, in the long run, what AI is doing is reducing the cost of labor. That will affect every aspect of life; it’s just a matter of time before AI is applied to every aspect of life. And so then we’re dropping costs for everyone, and his argument is that we are therefore improving the lives and living standards of everyone. I’m not there. But I think it’s a really interesting argument if you take it that far. Now, you mentioned earlier technologies… the calculator moment, for folks in mathematics. My personal preference would be to have someone else’s ox get gored before mine is, but we’re up, so we have to deal with it. And our friends in art are dealing with it as well. It’s just a matter of time for our friends in music; obviously our friends in motion capture are dealing with it, and I think you’re handling it in design as well. So it’s just a matter of time before we all figure it out, and we have to sort of learn from each other in terms of what our responses were. And I think there’ll be some general trends: we might as well explore these tools, because this is the world where our students will be graduating. And so, helping them understand the implications, the ethical usage, the citation systems and purposes… it’d be great if we had partners on the other side that would telegraph to us a little bit more about what the scope and the purpose and the origins of these tools are. But we don’t have that just yet.

Marc: I agree completely with what Bob said, too.

Rebecca: One of the things that’s been interesting in the arts is the conversation around copyright and what’s being input into the data sets initially, that it’s often copyright-protected material, and that, therefore, what’s getting spit out is derivative of it. And so there become some interesting conversations around whether or not that’s fair use, whether or not that’s copyright violation, whether or not that’s plagiarism. So I’m curious to hear your thoughts on whether similar concerns are being raised over ChatGPT or other systems that you’ve been interacting with.

Marc: Writing’s a little bit different. I think there are some pretty intense anti-AI people out there who basically say that this is just a plagiarism generator. I see what they’re saying, but plagiarism terminology doesn’t really make sense here, because it’s not stealing from one source; it’s using vast chunks of data from the internet, and some of that data doesn’t even have a clear source. So it’s not even really clear how output traces back to anything. But that is definitely part of the debate. Thank God I’m not a graphic artist, because I’ve talked to a few friends of mine who are in the graphic arts, and they’re not dealing with this as well as we are, to say the least. And you can kind of follow along with some of the discourse on social media too; it’s been getting intense. But I do think that we will see some movement within all these fields about how they’re going to treat generative text, generative images, generative code, and so on. In fact, OpenAI is being sued now in the coding business too, because their Copilot product was supposedly capable of reproducing an entire string of code, not just generating it, but reproducing it from what it was trained on. So I think it is an evolving field, and we’re going to see where our feet land, but for right now, the technology is definitely moving underneath us as we’re talking about all this, in terms of both plagiarism and copyright and all these things. And I’m with Bob: I want to be able to cite these tools and be able to understand them. I’m also aware that if we start bringing really hardcore citation into this, we don’t want to treat the technology as a person, right? 
You don’t want to treat the ideas as necessarily coming from the machine; we want to treat this as “I used this tool to help me with this process.” And that becomes complicated, too, because then you have to understand the nuance of how it was used and in what sort of context. So yeah, it’s going to be the wild west for a while.

Robert: I wanted to turn it back on our hosts for a second, if I can, and ask Rebecca and John a question. So, I finally remembered the title of Sam Altman’s essay: it’s “Moore’s Law for Everything.” That really, I think, encapsulates his point. What do y’all think, as people in higher education? Do you think this is unleashing a technology that’s going to make our graduates more productive in meaningful ways? Or is it unleashing a technology that questions what productivity means?

Rebecca: I think it depends on who is using it.

John: …and how it’s being used.

Rebecca: Yeah, the intent behind it… I think it can be used in both ways: it can be a really great tool to support work and things that we’re exploring and doing, and it also presents challenges. And people are definitely going to use it to shortcut things in ways that maybe don’t make sense to shortcut, or that undermine their learning or undermine contributions to our knowledge.

John: And I’d agree pretty much with all of that: it has the potential for making people more productive in their writing by helping them get past writer’s block and other issues. And it gives people a variety of ways of perhaps phrasing something that they can then mix together in a way that better reflects what they’re trying to say. And I think it’s a logical extension of many of those other tools we have. But it is also going to be very disruptive for those people who have very formulaic or very open-ended types of assignments; those are not going to be very meaningful in a world in which we have such tools. But on the other hand, we’re living in a world in which we have such tools, and those tools are not going to go away, and they’re not going to become less powerful over time. And I think we’ll have to see. Whenever there’s a new technology, we have some people who really praise it because it’s opening up wonderful possibilities, the way television was going to make education universal in all sorts of wonderful ways, and the internet was going to do the same thing. Both have provided some really big benefits. But there are often costs that are unanticipated, and often benefits that are unanticipated, and we have to try to use these tools most effectively.

Robert: So one of the things I’ve appreciated about this conversation is that you guys have made me think even more, so I want to follow up on what you’re saying and maybe articulate my anxiety a little better. So Emad Mostaque, I think is his name, the developer or the CEO of Stability AI, was on Hard Fork. And I listened to the interview, and he basically said, “Creativity is too hard and we’re going to make it easy. We’re going to make people poop rainbows.” He did use the phrase poop rainbows [LAUGHTER] but I don’t remember if that was exactly the setup. And I’m not an art teacher, but I’m screaming at the podcast: no, it’s not just about who can draw the most accurate version of a banana in a bowl, it’s the process of learning to engage the world around you through visual representation. And I’m not an art teacher. So that’s my fear for writing. I guess my question for everybody here is: do you think these tools will serve as a barrier, because they’ll provide a fake substitute for the real thing that we then have to help people get past? Or will that engagement with the fake thing get their wheels turning and serve as a stepping stone to the deeper engagement with literacy or visual representation?

Rebecca: I think we already have examples of things that constrain the scope of what someone might do so that it appears, looks, and feels really similar to something someone already created. Templates do that; any sort of common code set that people might use to build a website, for example, means the sites all have similar layouts and designs. These things already exist, and that may work in a particular area. But then there are also examples in that same space where people are doing really innovative things. So there is still creativity. In fact, maybe it motivates people to be more creative, because they’re sick of thinking the same thing over and over again. [LAUGHTER]

John: And going back to issues of copyright, that’s a recent historical phenomenon. There was a time when people recognized that all the work that was being done built on earlier work, that artists explicitly copied other artists to become better and to develop their own creativity. And I think this is just a more rapid way of doing much of the same thing, that it’s building on past work. And while we cite people in our studies, those people cited other people who cited other people who learned from lots of people who were never cited, and this is already taking place, it’s just going to be a little bit harder to track the origin of some of the materials.

Marc: Yeah, I completely agree. I also think that one thing is that we get caught up in our own sort of disciplinary world of higher education, and this tool may not be really that disruptive to us, or may not be as beneficial to us as it would be somewhere else, in some other sorts of contexts. You think about the global South, which is lacking resources; a tool like this, that is multilingual, can actually help under-resourced districts or under-resourced entire countries, in some cases. That could have an immense impact on equity, in ways that we haven’t seen. That said, there are also going to be these bad actors that are going to be using the technology to really do lots of weird, crazy things. And you can kind of follow along with this live on Twitter, which is what I’ve been doing. And every day, there’s another thing that they’re doing. In fact, one guy today offered anyone who’s going to argue a case before the Supreme Court a million dollars if they put in their Apple AirPods and let the AI argue the case for them. And my response is, if you ever want the federal government to ban a technology at lightning speed, that is the methodology to go through and do so. But there’s going to be stunts, there are already stunts. And Annette Vee was writing about GPT-4chan, where a developer used an old version of GPT-2 on 4chan, the horrible toxic message board, and deployed that bot for about three days, where it posted 30,000 times. In 2016, we had the election issues with the Russians coming through; now you’re going to have people with chatbots do this. So it can help with education, definitely. I think that we’re kind of small potatoes compared to the way the rest of the world is going to probably be looking at this technology. I hope it’s not in that way, necessarily. I hope that they can kind of get some safety guardrails put in place. But it’s definitely gonna be a wild ride, for sure.

John: Being an economist, one of the things I have to mention in response to that is that there are a lot of studies that found that a major determinant of the level of economic growth and development in many countries is the degree of ethno-linguistic fractionalization: the more languages there are and the more separate cultures you have within the society, the harder it is to expand. So tools like this can help break those things down and can unleash a lot of potential growth and improvement in countries where there are some significant barriers to that.

Marc: Absolutely. I just really want to re-emphasize the point that I brought up at the beginning too, especially now in the wake of what Bob said. I was not introduced to Wikipedia in a way that made it interesting or anything else. I was introduced to it as a college student with a professor saying to me, “This is a bad thing. This is not going to be helpful to you. Do not use this.” Keep that in mind, the power that you have as an educator when you’re talking about this with your students, that you are informing their decisions about the world, about what this tool actually is, when you’re introducing and talking about this with them, when you’re actually putting a policy in place yourself saying “This is banned.” And I just kind of want to make sure that everyone is really thinking about that now with this, because we do actually have a lot of power here. I know we feel completely powerless in some ways. It’s a little odd that the discussions have been about this. But we actually have a lot of power in how we shape the discussion of this, especially with our students.

Robert: Yeah, that’s a great point and I’m glad you raised it. My question is, I wonder, John, as an economist, and also what you think, Rebecca, as well: do you guys buy the Moore’s Law for Everything argument? So 20, 30 years from now, does generative AI increase the standard of living for people globally?

John: Well, I think it goes back to your point that if we make things easier to do, it frees up time to allow us to do other things and to be more creative. So I think there is something to that.

Rebecca: Yeah. And sometimes creativity is the long game. It’s something that you want to do over a period of time and you have to have the time to put into it. I think it’s an interesting argument.

John: I have been waiting for those flying cars for a long time, but at least now we’re getting closer to self-driving cars.

Robert: I was about to say they gave you a driverless car instead. [LAUGHTER]

John: But, you know, a driverless car frees up time where you could do other things during that time, which could be having conversations or could be reading, it could be many things that might be more enjoyable than driving, especially if there’s a lot of traffic congestion.

Rebecca: …or you could take a train, in which case, you’re also not driving, John.

John: …and you’re probably not in the US, [LAUGHTER] or at least not in most parts of the US, unfortunately.

Rebecca: Well, we always wrap up by asking what’s next?

Marc: What’s next? Oh, goodness. Well, again, like I said, there are going to be waves of hype and panic; we’re in the “my students are going to cheat” phase. The next wave is when educators actually realize they can use this to grade essays, grade writing, and grade tests. That’s going to be the next “Oh, wait” moment that we’re going to have to see, and that will be both hype and panic too. And to me, it’s going to be the next conversation we need to have, because we’re gonna have to establish these boundaries, kind of in real time, about what we want to actually do with this. They are talking about GPT-4, the next version of this. It’s going to be supposedly bigger than ChatGPT and more capable. We know all the hype that you can kind of repeat about this sort of thing. But 2023 is probably going to be a pretty wild year. I don’t know what’s gonna go beyond that, but I just know that we’re going to be talking about this for at least the next 12 months, for sure.

Robert: I agree with Marc. I think, in my discipline at least, the next panic, or, I don’t know, jubilee, will be around automated writing evaluators, which exist and are commercially available. But the big problem is the research area known as explainable AI, which is to me tremendously fascinating: that you can build neural nets that will find answers to how to play Go, that after I don’t know how many hundreds or even thousands of years that humans have played Go, find winning strategies that no one has ever found before, but then not be able to tell you how they were found. That’s the central paradox. I would like to say I hope explainable AI is next. But I think, before we get explainable AI, we’re gonna have a lot more disruptions, a lot more ripples, when unexplainable AI is deployed without a lot of context.

John: One of the things I’ve seen popping up on Twitter is that, with those AI detectors, apparently if you ask ChatGPT to rewrite a document so it cannot be detected by the detectors, it will rewrite it in a way where it comes back with a really low score. So it could very well be an issue where we’re gonna see some escalation. But that may not be the most productive channel for this type of research or progress.

Rebecca: Sounds like many more conversations of ethics to come. Thank you so much for your time and joining us.

Marc: Well, thank you both.

John: Well, thank you. Everyone has been talking about this and I’m really glad we were able to meet with you and talk about this a bit.

Robert: Yes. Thank you for the invitation. It’s been fun to talk. If there’s any way that we can add to the conversation as you go forward, we’d be happy to be in touch again. So thank you.

John: I’m sure we’ll be in touch.

Marc: The next panic, we’re always available. [LAUGHTER]

John: The day’s not over yet. [LAUGHTER]

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

[MUSIC]