319. AI in the Curriculum

In late fall 2022, higher education was disrupted by the arrival of ChatGPT. In this episode, Mohammad Tajvarpour joins us to discuss his strategy for preparing students for an AI-infused future. Mohammad is an Assistant Professor in the Department of Management and Marketing at SUNY Oswego. During the summer of 2023, he developed an MBA course on ChatGPT for business.

Show Notes


John: In late fall 2022, higher education was disrupted by the arrival of ChatGPT. In this episode, we discuss one professor’s strategy for preparing students for an AI-infused future.


John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer…

Rebecca: …and features guests doing important research and advocacy work to make higher education more inclusive and supportive of all learners.


Rebecca: Our guest today is Mohammad Tajvarpour. Mohammad is an Assistant Professor in the Department of Management and Marketing at SUNY Oswego. During the summer of 2023, he developed an MBA course on ChatGPT for business. Welcome Mohammad.

Mohammad: Hello, and thank you for having me here.

John: Thanks for joining us. Today’s teas are… Mohammad, are you drinking tea?

Mohammad: Yes. So I love tea. And where I’m coming from, originally from Iran, tea is a big thing. So we have a big culture around tea. And it’s very interesting, because we go to a coffee shop and we drink tea there. So we call it a coffee shop, but what we mostly get is tea. So I love brewed tea, and it’s kind of a time-consuming process, and it needs devices, tools, that we don’t have here. So for a while, I tried tea bags, but I couldn’t connect well with that, so I decided to switch to coffee. But when we drink tea, we have rock candy. So we try to sweeten it with rock candy instead of sugar, because I love tea, and I’d love to drink my tea with rock candy. Now I drink coffee with rock candy, [LAUGHTER] which is a very funny mix, but it works for me. And from time to time, when I go to a restaurant that has Middle Eastern food, I get tea there and I really enjoy it. So that is a luxury for me. So it happens once a month that I get brewed tea, but I also like herbal tea, so I like mint tea and other types of herbal tea. I try to have them mostly before bed.

Rebecca: Today I have blue sapphire tea, brewed fresh this morning.

John: And I have an English breakfast tea, but after a conversation we had earlier, I have some rock candy with saffron in it as a sweetener. So it is very good. So thank you for that suggestion.

Mohammad: Good. Good. Yeah, I have a big mix of rock candy with different flavors and different tastes, so I will bring you some, so you can try a different one. Good that you have the saffron one. I will bring a different version to you.

Rebecca: So we invited you here today to discuss the course that you offered last summer on ChatGPT for Business. Can you tell us a little bit about how this course came about?

Mohammad: So this course had a very interesting story. It was the spring semester of 2023, and ChatGPT had been out from, I think, the end of 2022, November, December. I was using that, and I really enjoyed how powerful the system is. I was following AI even before ChatGPT, and I was expecting such a thing to happen, but to be honest, I wasn’t expecting it to happen in 2022. I was thinking like 2027, 2030. But it happened, and I was so fascinated by the technology, by the quality of the answers that it provided. I was using it every day, to be honest, and I was trying different things with it, trying to find biases in it, trying to find how it can help me. And then it was the break, we had a week of break in the spring semester, a reading break or spring break. And I made the first modules of the course even without discussing it with my department. I was so interested, I said, “Okay, let’s try,” and I said, “Worst scenario is I’m going to put it online for everyone to enjoy. If the school doesn’t approve this course, then I will put this on YouTube.” So I made the first module, and then we had a faculty gathering at this Italian restaurant in Liverpool, New York, called Avicolli. We were there, and the director of our MBA program was there as well, Irene. So I told Irene, I have this idea of ChatGPT for Business, and I have worked this much on it. And she was so supportive, and said: “That’s a wonderful idea, let’s go for it.” So I sent her a proposal, and everything worked very well. And the school was so open to trying new things, which I was very happy about. And then we made the course and submitted the proposal. It was approved, and we offered it in the summer. That was the story, actually.

John: Could you tell us a bit more about the course? How many students were enrolled in it? What was the modality?

Mohammad: So, for our MBA program, most of our MBA students are professionals. They have a career already, they’re working full time, and then they’re getting their master’s degree, their MBA, actually, to move forward with their career. Many of them already have master’s degrees, they may be doctors, they may be nurse supervisors. So the modality that we use for summer courses is mostly asynchronous online, which means we record the sessions, we put them online, they take online exams, and we go that way, we communicate online. For this course, I designed it in three modules. In the first module, we discuss the ethics and foundations of AI. We discuss how ChatGPT was trained, what data they used, and what biases can happen. How can we use this system ethically? Because there are so many things that we can do with AI which are very good things. And there are so many not-so-right things that people can use AI for. So we wanted to make sure about the ethics first. And every course that I design on AI, I will start with ethics and foundations, because I think that’s the most important element. So we discussed the biases in AI, for example, gender biases, racial biases that may happen if we solely rely on these systems that are trained on biased data from the internet, let’s say. So we discussed that. The second module was on prompt engineering. So as we know, a prompt is the query that we send to the AI, whether that’s ChatGPT or Bard. So the quality of the question that we ask is directly related to the quality of the answer that we get from the system. So we want to make sure we ask questions that give us the best answers. And most of the time it’s not one question or one prompt, it’s a sequence of prompts. So we call it a prompt flow. So, in the first round, you may not get the best answer. But as you improve it, you will get closer and closer to what you want. And that’s what we did in the second module.
So we designed an eight-step method for prompt engineering. And there are different stages, actually, in it. So for example, in one step, you have to anonymize the data to make sure that the privacy of your client is considered. You want to set the context for the system, so it understands its role in helping you do the job, etc., etc. So we call it the Kharazmi prompt engineering method, which is named after the person who developed the concept of the algorithm, actually. So we made that eight-step method, and it worked very well for my students. In the third module, we went one step further. So as you know, these large language models are very efficient and very effective at writing code in different languages. So one of the things that I tested ChatGPT for in 2022, early 2023, was writing code with it. So I gave it a task and asked it to write the code for me in R, Python, Stata. And it was so good at writing efficient code in these languages. I even used it to optimize my code. So I intentionally, for example, gave it a for loop in R, to see if it could optimize it. And as you know, in R, we can use sapply() or lapply() to replace those loops. And it was so good at getting it. So I found that it’s very helpful with coding, with programming. And we made the third module, actually, on data analytics, which requires a lot of coding. And many of the MBA students, because of their backgrounds, are coming from degrees or fields that have nothing to do with programming or coding. They have to use it from time to time, they have to read the output, but they may not have written their own code. So in my class, I had a student who said the last time they wrote code was 20 years ago; that was the diversity of my class. And I had students who had taken economics, and they did a lot of coding. So we made the third module on data analytics and how we can use ChatGPT to write the code for us and help us with data analytics.
And it was wonderful to see that the students with no background in programming in either R or Python were able to write code and to debug code. So I intentionally gave them code that had some intentional errors. So I removed a part, or I removed a small comma there, and they were able to debug it in a couple of seconds. And that was one of the fascinating parts of this course. And interestingly, I had a student who told me that their company was moving, actually, from one software to another. And they used ChatGPT and what they learned in that class to migrate their code from one language to another. So with regards to enrollment, we had a lot of interest. So we had so many people who registered for the course and so many who were on the waitlist, but we had to keep it to small cohorts because we wanted to give very personal attention to each student to make sure that everything went well. So we limited the enrollment to 12. And we promised the rest that we would offer this course again, and they would have a chance to take it. So we had a cohort of 12 MBA students, and, as I mentioned, the MBA students are professionals. So in class we had a very high-profile journalist, a three-time Emmy Award-winning journalist, we had a neurosurgeon, we had a CFO, we had an activist who was running for office. They had so many different backgrounds that helped, actually, enrich the learning for everyone. I was learning from how they were using the system for their own specific niche. And that was a wonderful, I would say, learning process for everyone.
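The loop optimization Mohammad describes, asking the model to rewrite an R for loop with sapply(), has a close analogue in Python. The sketch below uses made-up data and is not material from the course; it just shows the kind of before/after rewrite a model is typically asked to produce.

```python
# Before: an explicit for loop that accumulates results one at a time,
# the style of code a model is often asked to optimize.
values = [1.5, 2.0, 3.25, 4.0]

squared_loop = []
for v in values:
    squared_loop.append(v * v)

# After: the idiomatic rewrite, a list comprehension, analogous to
# replacing an R for loop with sapply().
squared_comp = [v * v for v in values]

# Both forms produce the same result.
assert squared_loop == squared_comp
print(squared_comp)  # [2.25, 4.0, 10.5625, 16.0]
```

This is also the kind of task that makes a good debugging exercise: delete a comma or a bracket from either version and ask the model to find it.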

Rebecca: With the diversity of students that you had in your class, can you talk about some of the kinds of activities that they did individually or together?

Mohammad: As I mentioned, the course was asynchronous, because of the cohort that we have at SUNY Oswego; most of our MBAs are professionals. So we intentionally try to make courses, especially summer courses, asynchronous online. But the level of enthusiasm in this class was so high. So we set up weekly meetings. And most of the time we did it during lunchtime, because everybody was working, that was the best time. In our case, I think we set the time for 6pm, so at 6pm we were on Zoom discussing the module that they had learned that week. So there were a lot of interesting discussions in those sessions. I think one of the best discussions that we had was about the ethics of using AI. People from different areas were talking about how these biases can affect, let’s say, patients, how these AI tools can be used for fake journalism, making fake news, and what the dangers of that are. And then we discussed the inherent biases in the system. So ChatGPT was trained on data that was on the internet, data on the internet was created by human beings, human beings are prone to biases, and those biases will be transferred to the system. So we discussed that. And we had a very healthy discussion about the need for diversity in data, and diversity on the teams who work on this data to train the models. Because if the team members are diverse and sensitive to different issues that may happen, they will make an effort to fix them. So I think the most interesting part for me was the discussion of ethics, and the wrong and right ways that we can use AI, and how we can mitigate those biases or harmful uses of AI.

John: Many people in academia are talking about AI and the need to train students in the use of AI. Could you talk a little bit about some of the ways in which AI tools are already being used in business applications?

Mohammad: I will go from the academia point of view and how students are using it day to day, and then some of the uses of AI in industry. So in academia, the very basic things that students use AI for are, let’s say, summarizing a big text. And that’s what I teach them, actually, in any course that I have. I’m teaching marketing research, I’m teaching principles of marketing; in any course that I teach, I remind them that, okay, you have this big article, and you want to read that, but you don’t have time for it; ask ChatGPT to summarize it for you. It helps us read more and more articles, more and more books. So that’s one of the things that people can use it for. The other thing that I have seen many of our international students actually use AI for is to improve their writing skills. So you’re an international student, you have wonderful ideas, but you don’t have the best writing skills, writing experience in English. You can write wonderful articles in your own language, but when it gets to English, your vocabulary is limited, you may make grammar errors. So they use it to improve their writing. And in all my courses, I tell them, I’m more than happy to see you use AI to improve your grammar, to improve the flow of your writing, and to check for any writing errors in your text. So that’s totally fine, if that’s what they use it for. And there are many other things that the students use it for; for example, they use it to generate individualized examples. So let’s say you’re a student, you have a small problem with one of your courses, let’s say calculus. There is no good example in the textbook, let’s say. But you can ask AI to generate an example that will help you understand that specific niche problem that you have. So that’s how I see students from different areas use AI for their coursework. When it comes to industry, there’s an abundance of AI use. So many marketing teams are using AI to generate content, especially at startups.
Because when you’re a startup and you’re a small business, you don’t have a marketing department. You’re one person: you’re the CEO, you’re the CFO, you’re the HR, you’re the marketing manager; you have to do all those jobs, and these LLMs, these large language models, these AI systems, help entrepreneurs do the marketing and many other aspects of their business on their own. If you want to create content for your social media, ChatGPT can do that for you. You want to make a job posting, ChatGPT can take care of that for you. And then you can focus on improving and developing your business.

Rebecca: I want to circle back to some of the ethics questions that you were grappling with in class. I’m hoping that you can share some more details about the kinds of conversations that you had with students around ethics, because this is a topic that I think comes up a lot for faculty, in particular in thinking about how they might want to encourage or discourage students from using tools like ChatGPT.

Mohammad: Definitely. So what we did at SUNY Oswego was we set up an AI committee, I’m talking about the School of Business; I’m sure other schools are doing the same. So we set up an AI committee to make sure that we have a certain policy, certain plans, on how we want our students to be trained in and use AI. Because it’s the new computer, it’s the new calculator, it’s the new Wikipedia. We cannot stop people from using it. So we want to train them in the use of AI with integrity; we want to make sure that they are using it in an ethical way. So what we did was develop three different policies for courses. For some courses, very fundamental courses, we don’t want the students to use AI, because we want them to learn the tool. For example, in calculus, we want them to learn the mathematics behind doing the calculation. Or let’s say in marketing, we want them to understand the fundamentals: what’s the target market, how we can pick the target market, how we can make a fit between our business offering and what the target market needs and wants. For those fundamental courses, we either ban the use of ChatGPT, or we make it very limited to certain purposes; for example, you can use it to fix the grammar in your writing, you can use it to improve the writing of your assignment. Then we have a second level of AI use. In some courses, we are fine if a student uses it to generate some ideas, to help them do assignments, to create examples for them. And then we have a third layer, in which we ask them to use AI. So we tell them in the syllabus that you not only are allowed to use AI, you are expected to use AI: text-to-text AI, text-to-image AI, text-to-voice AI, all of that, to improve the quality of the assignments that you submit, to improve the quality of the projects that you do for this course.
For example, for ChatGPT for Business, the syllabus said that you’re learning text-to-text AI, but you’re expected to use other types of AI when you do your assignments. And many of my ChatGPT for Business students actually did that, and they developed logos and many visuals for their assignments totally generated by AI.

Rebecca: Can you talk a little bit about what came up in those conversations in class about the ethics and how they’re using it in different ways? So if they’re using it for images, or they’re using it to write code, or all these other varieties of uses that you’ve outlined.

Mohammad: So in one of the discussions that we had around biases, we discussed how gender bias may be inherent in those AI systems. And when we talked about it, it’s not just ChatGPT; any AI system can be prone to those biases. For example, our facial recognition systems are mostly trained on Western pictures, faces of Western people. So they may not do well when it comes to, let’s say, African Americans. And they may cause a lot of bias. We have cases of that in the news, actually. So that was one of the things that we discussed. And one of the conclusions that we had in those discussions was that it’s not just about the data used to train the model, it’s about the team that is working on that. The team needs to be diverse enough. If you have African Americans, if you have different ethnicities, if you have different genders on the team, then the team will be more sensitive to these biases and will make sure that these are not happening. The other thing was about gender bias. So let’s say the system was trained on data that we had on the internet; go check the Fortune 500 list, the CEOs of the Fortune 500, the majority of them are male CEOs. So if you train the system on that type of data, it will assume that males are better at doing those jobs, which is wrong. We had a very healthy discussion about that, or about different ethnic backgrounds. So if you check the top 100 US companies, only eight of them have African-American CEOs. So when you train your system on that data, you are building inherent biases into the system. The bias is in the DNA of that system, let’s say. So we want to make sure that we at least have those biases in mind, so we are not solely relying on AI for whatever purpose we’re using it for. So AI is now being used, and ChatGPT… companies are using that, but sooner or later, governments will start using AI. They will use it for, let’s say, immigration purposes. Just imagine how those biases can affect people’s lives, actually. Health care will start using it.
So there are so many dangerous decisions that doctors could make. There are so many things that can go wrong with solely relying, or blindly relying, on AI. And that was one of the biggest things that we discussed. So we want to use it to be more efficient, and sometimes more effective. But we want to use it with supervision: somebody should check the output, someone should read the output carefully. That person should be aware that these systems are prone to many errors, many biases. So that was one of the discussions that we had. The main things, I think, that we discussed regarding biases and errors were gender biases and ethnic biases in AI. And then we discussed the wrong ways of using AI. One of the main things that we discussed was fake news. So somebody can make fake news, make a fake Twitter account, and keep posting in the same language that a certain politician is using. And, as we know, it’s not just text-to-text AI; you have text-to-voice AI. So we can give it a sample of a person’s voice, and it can generate the same voice. So just type the speech for the AI and it will read it in the same voice. So there are so many things that can go wrong, especially when it comes to disinformation and fake news.

Rebecca: It seemed like one of the other ethical areas that you talked about, based on what you had said previously, is data: the data inputs that train the systems, and also the data that you’re putting into the system that you might be analyzing. So there are privacy issues, copyright issues, etc. Can you share a little bit about how those conversations unfolded as well?

Mohammad: So, for example, one of the ways that people are using it: many doctors are actually using ChatGPT to ask it questions. For example, what are the side effects of this new medicine that I’m using? So sometimes you’re inserting private information into the system. So in the prompt engineering session that we had, one of the steps was to anonymize: we write the prompt for the system, then we check it for any private information. It can be a name, it can be an address, it can be even a vehicle plate number. All of those should be removed from your prompt before you submit it to the AI, because you never know what happens to that data. So one of the things that we did was to make sure that no personal or private data was being inserted into the system, at least for the systems that we have right now. In the future, we may have private GPTs. So your organization may have an institutional GPT that makes sure that all the data is private; it may change then. But for the systems that are general purpose right now, Bard, ChatGPT, any other system, we want to make sure that the data that we insert into the system is totally anonymized, and that no private information is being sent to the system, not even an email address. We used placeholders for that in our course, to make sure that even emails were not being fed to the system. The other important question that you raised was about copyright. So there are two things with copyright. First, the systems were trained on content that was generated by a person. So what if I ask AI to generate content similar to that? Write me a Harry Potter story, for example, using exactly the same language that J.K. Rowling was using? What happens then? That’s a big question. The other concern is who owns the copyright for the output that we get from AI? For example, in my courses, I’m redesigning all my PowerPoints. And I’m replacing all the images that I was using before with images that AI has generated.
So when AI generates those images for me, who owns the copyright? Is it ChatGPT? Is it DALL-E? Is it Midjourney? Or is it the person who directed the system to make that content? So at least for ChatGPT, based on what they wrote on their website, they don’t claim any copyright for themselves. The person who’s generating, or giving, the prompts will own the content. So at least we know the answer to that question for one system, but what happens in the future? There should be lots and lots of discussion on copyright: who owns the copyright of the output? And if the system was trained on somebody else’s writing, somebody else’s art, who owns the output? If I prompted it to write a J.K. Rowling Harry Potter story for me, do I own the copyright, or does the original writer get the copyright of something that I’ve prompted ChatGPT to produce? So I think one of the biggest questions that we have had is regulation. How do we want the regulations to evolve in a way that accommodates all these questions that we have today? I think the pace of change is very fast. So policymakers, those who are setting the rules, should be very fast in responding. The technology is not waiting for anyone; they have to be as fast as these changes in the system are, otherwise there will be chaos, there will be a lot of unanswered questions, and it will go in directions that we cannot expect. So one of the big things that should happen, I would say, is regulation. We need to regulate the system in a way that fosters improvement, but at the same time protects people.
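The "anonymize" step of the prompt flow Mohammad describes, stripping names, emails, and plate numbers before a prompt is submitted, can be sketched in a few lines of Python. The regex patterns, placeholder labels, and plate format below are illustrative assumptions, not the course's actual method.

```python
import re

# A minimal sketch of an "anonymize" step in a prompt flow: replace
# obviously private tokens with placeholders before the prompt is sent
# to a general-purpose model. The patterns and labels are illustrative,
# not exhaustive; the plate format is a made-up example.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[A-Z]{3}-\d{4}\b"), "[PLATE]"),  # hypothetical plate format
]

def anonymize(prompt: str) -> str:
    """Return the prompt with known private patterns replaced by placeholders."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

# Note: personal names and addresses cannot be caught reliably with
# regexes alone; a real pipeline would still need a human check.
text = "Contact Jane at jane.doe@example.com or 555-123-4567; car ABC-1234."
print(anonymize(text))
# Contact Jane at [EMAIL] or [PHONE]; car [PLATE].
```

The point of the sketch is the workflow, not the patterns: the prompt is checked and scrubbed locally, and only the placeholder version ever reaches the model.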

John: In addition to all the discussions of regulations that are going on globally, there are also quite a few lawsuits going on in terms of potential copyright violations, which could have some really devastating implications for the development of AI. So a lot of this, I think, we’ll have to just wait and see, because it’s going to be challenging.

Rebecca: A number of interesting cases, too, of folks trying to register things with the copyright office that were generated by AI and have been denied. So lots of interesting things to be watching, for sure.

Mohammad: Definitely.

John: Another area of a lot of concern, and of a lot of research that’s beginning to take place, is to what extent AI tools will enhance the productivity of workers, and to what extent they may end up replacing workers. And there are some studies now that are finding both of those. Were your students very concerned about the possibility that some of their potential jobs might disappear, or be substantially altered, as a result of AI tools?

Mohammad: So I think the best saying with regard to jobs is that AI itself will not take your job; let me say it in different words: the CEOs who can use AI will take the place of CEOs who cannot use AI. So it’s not “you’re going to lose your job to AI”; it’s mostly that those who are not equipped, those who don’t know how to use AI, will be replaced by the ones who know how to use AI. In the short term, there may be some changes in the job market, some jobs may be automated, but new jobs will be created. For example, now we have a lot of companies looking for prompt engineers, something that wasn’t there before; like a year ago we didn’t have such a need in the market. So the other thing that will happen is that we need to train people to use AI. But at the same time, the pace of change is so fast. So we train people for a year to take AI jobs, and by the time they finish their education, the system has changed. Now you have to retrain them. So that’s one of the things that is happening, and educational institutions should find a way. They should keep updating their curriculum, I would say every day, to keep up with the changes in technology. The other thing that I personally expect and hope to happen in the long run is that we will work less. In the Industrial Revolution, our working hours were reduced; we could get the same amount of productivity with less work. The same thing may happen 10 years from now, five years from now. Instead of working nine hours a day, we may work two hours, three hours a day, and then be even more productive than we are right now. Because this system can make us more efficient. There is a good metaphor that people use for AI: they call it human-algorithm centaurs. So in Greek mythology, centaurs are half human, half horse. They can be as fast as a horse, and they can have human intelligence and human capabilities.
Now we have half human, half algorithm; we can do so many things much faster, much more effectively than before, and it will increase productivity manyfold. So I’m expecting a better life, actually, for human beings, while at the same time being more productive than before.

Rebecca: It’s interesting, some of the kinds of conversations I’ve had with my students, who are design students, about AI have really been about: is it going to replace the designer? Well, maybe in some contexts people are going to use AI to create designs or visual elements, but it’s not going to have the same thought [LAUGHTER] and strategy behind them that a designer might use. But what they’re mostly discovering is that AI is really helpful in making the process faster: generating more ideas, finding out what they don’t want to design [LAUGHTER], and getting just a place to start, moving forward, and developing their work more rapidly. And so that really gets at that efficiency idea that you were just talking about.

Mohammad: That’s very true. And I agree with you, sometimes you are just thinking and you cannot start. AI can give you an idea to start with. And then you come up with the ideas that you wanted. So regarding design jobs, or any job: I have students who will come to me and say, “Should I change my field to AI?” I say, “No, do whatever you’re interested in. If you’re doing design, keep doing design; if you’re into, let’s say, marketing, keep doing marketing; if you’re in finance, keep doing finance; but use AI in your field. If you’re doing design, see how you can use AI to design better. If you are doing marketing, see how you can use AI to make better content, to make better decisions.” So I think it’s not AI replacing people, it’s AI enhancing people. So in any field, we have to equip ourselves with the skills of using AI to do our jobs better.

Rebecca: From an experience I’ve had with my students, we’ve definitely discovered that if you don’t have the right language around the thing that you’re trying to make, it doesn’t do a good job. [LAUGHTER] So you need some disciplinary background or some basic knowledge of the thing that you’re trying to do for it to come out successful.

Mohammad: That’s very true. So one of the limitations of AI that we discussed in our classes was about different languages. So most of the content that was used to train ChatGPT was written in English. So think of other languages that didn’t have that much content on the internet; AI is not as capable in those languages. So that’s one of the things that we need to think of. So this is a system that is super capable in the English language, but when it comes to languages that don’t have that many speakers, it falls behind. So I tried it, and I learned that sometimes the system tries to think in English and then translate into the other language, and it makes so many mistakes in that process. So that was one of the things that came to my mind from what you mentioned.

John: We’re recording this in the middle of November. And in just the last few weeks, we’ve seen a lot of new AI tools come out; we’ve seen ChatGPT expand the size of the input that it allows, and we now see this marketplace they’re offering for GPTs, as they’re calling them. And the pace of change here is more rapid than in pretty much any area that I’ve seen, at least since I’ve been working in various tech fields. It would seem that this would be a challenging course to teach, in that the thing you’re studying is constantly changing. Will you be offering this again? And if so, how will the course be different in its next iteration?

Mohammad: That was a very good question, actually. So yes, the course is being offered in January 2024. And as you mentioned, one of the biggest challenges with this course, I would say the biggest challenge with teaching AI, is to keep the content current. And that’s not just something that happened today. When I was teaching this course in the summer, I made the second module, and then OpenAI announced the plugins. Now I had to redo the content to make sure that I could use those plugins, because they were so powerful. The plugins that ChatGPT introduced were so powerful, and there were so many companies making different plugins. So I remember, for the second module, I had to start over and re-record my content. I updated my content. I recorded everything at 1am, 2am before the session in the morning, because everything had changed. So I had to incorporate that into my class. The same thing is happening with new developments. So what I learned is that every day I have to update my content, I have to update my course. So the ChatGPT API was one of the things that I was thinking of as the fourth module, and I was working on that. Now I think GPTs is one of the modules that needs to be there. That’s like the app store of OpenAI. So that’s a big game changer. As you mentioned, it has a larger memory right now; we can provide it larger context. So that’s another capability that AI has, and it changes the way that we prompt it, the way that we ask it questions. So keeping the curriculum updated, I think, is the biggest challenge. And this is something that we should have in mind. Every week, every day, I see something new. I update my slides, update my content, to make sure that everything is correct. Because if you don’t do that for, let’s say, two months, three months, if you don’t update your content, then you have to redo it, you have to start over. So that’s definitely one of the things that I do, and GPTs is one of the things that I will definitely incorporate into my course for January 2024.

Rebecca: Iterative change definitely seems like a good way to go to manage that, for sure.

Mohammad: We don’t know what will be announced in December. [LAUGHTER] So, I always count on a big change.

Rebecca: But yeah, buckle up and be ready, right?

Mohammad: Yeah.

John: And we welcome our new AI overlords…

Rebecca: Yeah.

John: …in case, by the time this is released, they have taken over.

Rebecca: Can you talk a little bit about how your colleagues in the School of Business have responded, and whether or not more faculty in the School of Business are incorporating AI?

Mohammad: I see that many of my colleagues are super interested in this new technology. So what I like most about SUNY Oswego in general is that everyone is so open to accepting new technology, accepting new things, accepting innovation, and everybody’s trying to absorb the new innovations that we have seen and incorporate them one way or another into their work or into their courses. So as I mentioned, we have the AI committee, and in our meetings we have very good discussions about how we should update our curricula. I know that some of my colleagues are already doing that, are already using AI to generate, let’s say, visuals for their content, or teaching or talking with the students about the ethical uses of AI. So I think the ecosystem that I see at SUNY Oswego is very open to accepting innovation, and is very fast to incorporate it into the curricula and educate the students, or at least have discussions with the students about how to use it and how to equip themselves with the skills that they need for the future.

John: Just a few weeks ago, your department scheduled a symposium on AI. Could you talk a little bit about that?

Mohammad: So we wanted to take a lead in AI education at SUNY Oswego. We’re very focused on teaching the students and equipping them with the skills that they need to take future jobs, and we are making a big move toward AI. So we wanted to make sure that our students are exposed to the new developments in this field and understand the importance of this area. So we set up an AI symposium, Bridging Bytes and Business, to show them how technology, how AI, how computing, is changing the way that we do business. We set up a hybrid symposium with two panels. The first panel was online, with scientists discussing the new technology, discussing how AI is evolving, what are the biases, what are the errors that we have in this AI, and what is the next big thing that will happen in AI. So in the first round, we had Soroush Saghafian from Harvard. He has a lab that works on developing AI. We had Diane. Diane is a three-time Emmy Award-winning journalist, and she was one of our MBA students, actually. And she talked about how AI is used in journalism, what are the challenges of, let’s say, disinformation generated by AI, and how journalists need to address those concerns. And we had Saeideh, who is a computer scientist. Saeideh worked for Yahoo, Meta, and Google. And she shared her knowledge, her experience with what these big companies are working on for the next big thing that is happening. So we had a very healthy discussion about the science part of AI. And then we had the business leaders from upstate. We had Michael Backus from Oswego Health, we had John Griffith from insurance, and we had Mohamed Khan from Constellation Energy. So they were discussing how their companies, how their industry, is using AI, and what they expect students to know about AI before they go to the job market, what are the skills that they need to have. So we had this very successful symposium, and since it was a hybrid symposium, we were broadcasting it online.
It was kind of a webinar, so we had many attendees from all over the country and beyond. I think we had California, we had Texas, we had Arkansas, New York, obviously. We had people from Canada joining us, Ireland, the United Kingdom, France, Germany, and interestingly, we had attendees from Australia. It was 2 am there, I think, but they joined us, and they stayed until the very last minute of the symposium. And that made us very happy and very proud of SUNY Oswego for taking the lead in providing this type of discussion around AI. And we’ll keep doing that. We’ll keep having more and more symposiums and panel discussions to keep our students current and to encourage our students to learn more and educate themselves more about AI.

Rebecca: So we always wrap up by asking: “What’s next?”

Mohammad: So we have big plans. One of the things that we’re doing is ChatGPT for Business; it will be offered again in January 2024, and hopefully in the summer. But aside from that, we are going one step further. We are designing a new course, more advanced than ChatGPT for Business. That course is Prompt Engineering for Artificial Intelligence. In that course, we’ll focus on different ways that students can use prompt engineering for different purposes: for HR, for marketing, for finance, for different fields. So that course will be an advanced follow-up to ChatGPT for Business. And we are going to offer a degree in our MBA program on strategic analytics and artificial intelligence, so we are incorporating AI into actually all courses that we offer in that program. And then we will have a micro-credential on prompt engineering, because that’s what industry is looking for. They want somebody who is good at asking the right questions of ChatGPT, Bard, and any other AI that you’re using. So they need somebody who is good at writing good prompts for them. That’s what we are focusing on right now: to equip our students with those skills, with the knowledge that they need to be effective and efficient prompt engineers. And I believe we will be among the very first institutions in North America to offer those courses and those degrees, actually.

Rebecca: Well, thank you so much for joining us and sharing the work that you’ve been doing.

John: We’re always curious about where this is going, and I’m sure we’ll be back in touch with you again in the future. So thank you.

Mohammad: Thank you very much. I really appreciate the wonderful podcast that you have. I listen to your podcast from time to time, and I actually bought a book on ChatGPT based on one of your episodes. One of the guests that you had wrote a book on ChatGPT, 80 Ways that ChatGPT Can Help You with Your Courses, I think. And I’m still reading that book and enjoying it. So thank you for the wonderful podcast that you have.

John: And we’ll include a link to that book by Stan Skrabut, and we’ll also include a link to the recording of that symposium as well in the show notes for this episode.

Mohammad: Thanks so much.


John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

Ganesh: Editing assistance by Ganesh.