110. Fostering a Growth Mindset

Some students with fixed mindsets enter our classes expecting to be unsuccessful while others believe that they have a natural talent in the discipline. In either case, these students often get discouraged when they experience challenging tasks. In this episode, Sarah Hanusch and John Myers join us to discuss how they have revised their classes and used metacognitive exercises to help students develop a growth mindset and to recognize the benefit of learning from mistakes. Sarah and John are both Assistant Professors in the Department of Mathematics at SUNY Oswego.

Show Notes

Transcript

John K.: Some students with fixed mindsets enter our classes expecting to be unsuccessful while others believe that they have a natural talent in the discipline. In either case, these students often get discouraged when they experience challenging tasks. In this episode, we examine how two faculty members have revised their classes and used metacognitive exercises to help students develop a growth mindset and to recognize the benefit of learning from mistakes.

[MUSIC]

John K.: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John K.: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[MUSIC]

Rebecca: Our guests today are Sarah Hanusch and John Myers. Sarah and John are both Assistant Professors in the Department of Mathematics at SUNY-Oswego. Welcome, John and welcome back, Sarah.

Sarah: Thank you.

John M.: Thank you.

John K.: Our teas today are?

Sarah: None today

John M.: Yeah, imaginary tea. No tea for me.

Rebecca: The imaginary tea…that’s what my daughter likes to drink. That kind.

John M.: Yeah, I’m in good company there…

Rebecca: I have English afternoon.

John K.: And I have a ginger tea.

Rebecca: We invited you here today to talk a little bit about how you’ve introduced a project on metacognition in some of your mathematics courses. Can you tell us a little bit about the project?

John M.: Sure, this began, I believe, in the spring of 2018 in a Calculus I course. And the idea was that Calculus I is known across basically the entire country…every school in the country…as being a very difficult course. So, you have a lot of students who are coming in, especially in the spring semester, who have had bad experiences with calculus in the past. And in particular, I’ve been told by some colleagues that there are going to be some students in there that need more support than I suppose you would imagine. The situation was that on the very first day of class, I had students coming in who have had bad experiences with it in the past. And then at the same time, I have the students that are typically high performing. And they also have a difficult time with perfection, you know, being obsessed with 4.0s and grades and that type of stuff. So the idea was that I wanted to simultaneously address failure and perfection with the students. And I was sort of led to think about this metacognition project, actually, funnily enough, on a flight back from San Diego. I was at what are called the joint meetings for mathematicians, and a lot of progressive, newer teaching techniques are talked about at this conference. And I’m flying back from the conference on the airplane and I’m getting really introspective and I’m thinking, like, I really need to do something to talk to my kids about failure and perfection. And then it occurred to me that there was this blog post that I had just read a couple weeks before by a mathematician by the name of Matt Boelkins at Grand Valley State University. And he had this idea for a metacognitive project that addressed all sorts of things like growth mindset, fixed mindset, productive failure, and all these different things. And I decided about a week before classes started that this is what I was going to do.

Rebecca: That’s when all the best ideas happen.

John M.: I know…right before class and on an airplane. I get really introspective when I’m on airplanes and staring out the window and thinking of all the big things in life and stuff.

Sarah: And essentially, John came to me and said, “I’m thinking about doing this project.” And I said, “Well, that sounds cool. Let’s see if we can measure if it has any positive effect or not.” So, I sort of came in on the research side of it…of “let’s see if this is effective for changing attitudes towards mathematics.” And since then, I’ve stolen the project to use in my own classes. But it really started with me coming in more on the research side of things.

John M.: I think stolen might have been a strong word, but…

Sarah: I didn’t ask…I just took it. [LAUGHTER]

John K.: For the research project did you do pre- and post-tests on attitudes?

Sarah: We did a pre- and post-test. We used an assessment called MAPS, which is the Mathematics Attitudes and Perceptions Survey. It’s a 31-item survey that assesses seven different dimensions: growth mindset, the applicability of mathematics to the real world, their confidence in mathematics, their interest in mathematics, their persistence in mathematics, their ability to make sense of mathematics, and whether they view mathematics as being answer focused or process focused.

John K.: Sounds like a good instrument. Before we talk about the results, let’s talk a little bit more about how you implemented it. How was the project structured in terms of what activities did the students do during the class?

John M.: So the idea was that over the entire semester, they would have a selection of articles online to read and a selection of YouTube videos to watch; it was essentially experts addressing these various topics. So, for example, there is a clip by Carol Dweck, one of the originators of the theory of growth and fixed mindsets, and they were to watch these clips and read these articles across the semester. And then, I think it was with two or three weeks left in the semester, they had to write a reflective essay. It was an attempt to sort of shift the culture in the classroom towards viewing mistakes and failure as productive and as opportunities for learning. Because I think in the wider culture, everybody believes that math is just about the right answer, and that if you can’t get the right answer, then there’s no worth in whatever effort you put in to get to that point. And I wanted to provide sort of a counterpoint to that, a counter-narrative: being honest about how many times per day mathematicians actually do fail, you know, that type of thing. So yeah, the main component was this essay reflecting on the stuff that they read and watched over the semester, and then there were sort of daily conversations.

John K.: Were the conversations online or were they in class conversations?

John M.: In class…in office hours, just kind of whenever they popped up. I remember a couple conversations that happened after I gave back exams, for example, or rather right before I gave back exams. So, for example, I would say, you know, I’m about to hand back exams. And when you turn the paper over and see your score, I want you to immediately think about how you are going to frame this result in your mind. Are you going to look at that score and be happy with it and chalk it up to just your natural talents? Or are you going to say, “Oh, this is a result of hard work”? And then if you’re not happy with your score, are you going to put it away and never look at it again, or are you going to engage with your mistakes and make them productive mistakes? It was sort of intervention through conversation that happened on an almost daily basis.

Rebecca: Did you notice a difference in the kinds of conversations you were having in class because they were doing these readings and watching these videos, maybe conversations you hadn’t experienced before in the classroom?

John M.: Yes. In particular, I had students come into office hours and they were relentless with trying to understand the material, because they knew that they were going to have another shot to get it right. And I had never experienced that before. In fact, in one of my student’s essays, I had a student tell me that when she hasn’t done well on exams in the past, she would just take the exam and stuff it into her book bag and never look at it again. And she told me that just because of how I was structuring the course, she doesn’t do that anymore. She actually pulls it out and engages with the mistakes and the comments that I put on the exam and comes and talks to me about the exam and everything. So I did see a change in the students.

John K.: Was some of it based on the reflections or was it also partly based on a restructuring of a course to give students more opportunities to redo things or to try things again?

John M.: I believe the latter had something to do with it. Because the idea was that I could say these things out loud to them. But I wanted to actually build components into the course in addition to the essay that sort of reflect the themes that I’m trying to communicate to them.

John K.: Telling them that they can learn from mistakes, if you don’t give them the opportunity…

John M.: Right.

John K.: …to learn from mistakes might not be as productive. I think both components are really valuable. I just want to make sure we were clear on that, too.

John M.: I think that you risk sounding like a cliche motivational poster, if you don’t actually put some meat on the bones with it.

Rebecca: Can you talk about some ways that you actually built that into the course?

John M.: I did test corrections. I don’t remember exactly, but I think it was that they could get back half the credit they missed, or something like that. So, the idea was that they had to engage with the mistakes on their exams and correct them. And it had to be perfect. So they had a week to turn in their test corrections, and then I would re-grade them. This was very time consuming, as you might imagine, but the students, I believe, really responded to it. It really sort of hooked in with the theme that I was trying to send.

Sarah: And since then, we’ve both moved to more mastery based grading. John before I did, but a system where students keep trying things until they get it right. And that really helps sort of drive that “learn from your mistakes” message home.

John K.: Are you able to do some of that in an automated way? Or is this all involving more grading on your part?

Sarah: The way I’m doing it, unfortunately, it’s more grading on my part. Although I will say this semester I’m doing these mastery based quizzes, but I’m not collecting homework. So, it’s kind of a toss up in terms of how much…it isn’t really extra grading. I’m just grading more things in another category.

John M.: Right, I would not do test corrections again. Not only was it a lot of time to grade, but then I had issues with academic honesty. The mastery based thing I have found is, I believe, much more effective.

John K.: Another thing you may want to consider that we’ve talked about in a couple of past podcasts is having a two-stage exam, where in the first stage, they do it themselves. And then you have them break up into groups and do either all the questions or a subset of those as a group. So, you’ve got some peer instruction going on as well…and that way it’s done right in class; if the exam is short enough or the class period is long enough, you can do both stages in one session. A common practice is to spend, say, two-thirds of the time on the individual stage and one-third on the group activity, which has many of the same benefits. They don’t know what they’ve gotten wrong, but when they’re sharing with their peers, they’re talking it over, and it means you only have to grade the group exams on the second stage, which makes it a whole lot easier than grading individual ones.

John M.: Right. Yeah, I have a friend who I believe has done stuff like that. So yeah…

John K.: The Carl Wieman Science Education Initiative, I believe, has a lot of information on that. I’ve been doing it the last couple of years, and it’s been working really well. Doug McKee was a guest on an earlier podcast; we talked about that as well. Are there other things we want to talk about in terms of what you’ve done in the courses?

Sarah: One thing that we’ve both done since this initial project is we’ve taken some of the ideas of the project but interspersed them more throughout the course. One thing I know John observed at the time was that he felt like a lot of the students started the project in the last week, right? And so what I’ve done, instead of doing a big project on these topics, is I’ve taken these articles and spread them out: the second week of class, you have to read one of them and respond to it. And then the fourth week, you have to do another one, and so on. So it’s a little bit of it throughout the whole course instead of all loaded at the end. I think it helps having some of those conversations with the students as well, because they’re not just seeing the ideas in the conversations or just in the paper. They’re kind of seeing both, and it just helps intersperse it a little bit throughout the semester. I know I’ve done that a couple times now. I think you’ve done that since as well.

John M.: I did a pre-semester sort of essay and then I did a post-semester essay. But it was in response to the first time we did this, which is referred to in the paper, and one of my students actually told me in their essay, “Hey, I wish I had this at the beginning of the semester.” So yeah, it’s definitely like a “duh” moment. Like, I probably should have done something earlier in the semester, instead of waiting until the end. But, you learn as you do these things. But the essays that the students wrote… I provided them with prompts just to alleviate any sort of writer’s block that they may have. But the essays I still remember are essentially the ones from students who basically ignored my prompts and told me their personal stories. I had students that were straight A students that were telling me exactly what I thought was going to happen: that they’ve been the smart person their entire life, and they kind of feel trapped by being a smart person. They don’t want to take any risks, because if they risk something and fail, then that’s their identity as a smart person, right? They’re not smart anymore. I’ve had students from the other end of the grading spectrum who basically told me that the first day they walked into the class, before I even said anything, they were already convinced that they were going to fail the class. I had students tell me about mental health problems. I had adult learners talking about balancing life and school issues. I mean, it’s just absolutely amazing what they told me; they opened up, basically. That made a big impression on me.

John K.: Tying into an earlier podcast, Judie Littlejohn and I had introduced something really similar where we have weekly discussion forums. And I also noticed the same sort of thing, that I got to know the students much better because when they were talking about some of the barriers or the issues they face, they were sharing a lot of details about their life. And you get to know them better and they also seem to form a little bit more of a tighter classroom community because they also got to know each other a little bit more.

Rebecca: It is kind of interesting how students talking about their process, or who they are as learners, is very different from talking about the subject matter. And it does get them to open up and maybe engage with faculty in a way that they wouldn’t otherwise.

John M.: And I have found that being honest about my own failures in the past has been a catalyst for conversation, right? Because they view us as professors, they view us as the authority figures, the experts who never fail. And basically telling them how many times I fail on a daily basis in my own mathematical research goes a long way, I think… finding common ground with them. And acknowledging how difficult the subject material is. I mean, there’s a reason that calculus has a high failure rate: because it’s a hard course, among other reasons. Yeah, just having humility with the students and kind of stepping down off of the pedestal in front of them, I think that it helps.

Rebecca: So do you want to share some of the results that you got from your study?

Sarah: We saw some very significant quantitative results. I mentioned the MAPS instrument is what we used. It’s a 31-item scale. Its reliability and validity have been established pretty well, especially in calculus classes. One of the things that they did was they looked to see if the items were consistent with expert consensus… that is, with how mathematicians view it, and all of the items aligned with the attitudes of mathematicians except some of the growth mindset scales. Research says that that’s an important scale as well. And on this 31-item scale, we saw an almost 4-point improvement from pre-test to post-test…of the students becoming more aligned with the expert opinions, which is a really significant amount…I mean, almost a 10% improvement, which is even more remarkable, because when this assessment was first validated, they found that there was usually a negative result from taking a Calculus I class. So, the attitudes get worse pre to post in a calculus class, and ours had a statistically significant improvement. In addition, we saw statistically significant improvement on all of the subscales. Now, some of them were better than others. Some were just barely below .05 in terms of significance and others were much more significant. I mean, we really saw that over the course of the semester, they really did change their attitudes. We also had some evidence, as John’s already talked about, from their essays…where they said how they started to view mistakes as productive, and they started to feel like there was value in making mistakes and learning from them.

John K.: You mentioned alignment with an expert scale, can you explain that for our listeners?

Sarah: Essentially, the original authors (it was Code et al. who did this paper and developed this instrument) gave this survey to students and they gave it to mathematicians and looked for alignment. Particularly, they were looking for whether or not the mathematicians agreed on the items. And the idea was that our goal is to get math students to have attitudes more like mathematicians, because that’s our goal, right? …to develop future mathematicians. And so we would like those attitudes to get closer to how mathematicians view mathematics. They had high agreement among the mathematicians on every item, like I said, except one or two of the growth mindset questions. So, in other words, this survey reflects how mathematicians view mathematics. And that was how they determined the right answers on the survey, whether a particular item is something you should agree with or something you should disagree with. They went with the expert consensus.

John K.: So now, I may be misconstruing this, but are you suggesting that perhaps a lot of mathematicians had adopted a fixed mindset? So, there was a bit more variance there on that?

Sarah: I will say that was what the results of their validation showed.

John K.: Okay.

Sarah: And leave it at that. [LAUGHTER]

John K.: It does remind me of that study a few months ago that found that when instructors had a growth mindset, the achievement gap narrowed and the drop-fail-withdrawal rate was much lower in their courses than for those instructors who had a fixed mindset. I think that may be even more of an issue in the STEM fields than it is in the humanities and social sciences, but I think it’s not uncommon anywhere.

Rebecca: I’d say it’s a common problem everywhere.

John M.: I’ll say it…mathematicians suffer from fixed mindsets. I’ll just say it, right? [LAUGHTER]

John K.: Many academics do.

Sarah: Yeah.

John M.: Yes, of course.

Sarah: I mean, the people who choose to become academics are often the people that were successful in school and they decide to continue with it. I mean, it is less likely that people who felt unsuccessful decide to keep going and to go into academia.

John K.: There’s a selectivity bias there, and that reinforces a belief in a fixed mindset, perhaps.

Sarah: Precisely.

Rebecca: What kind of response have you seen from students? I mean, it sounds to me like this one study led to good results, and that changed many of the classes that you’ve taught or the way that you’re teaching. How have students responded?

Sarah: Generally positively. I think doing the project at the end of the semester wasn’t the best idea, because they just feel so overwhelmed at the end of the semester with exams and projects and everything coming due. So, I did get some responses of “Why do I have to do this now?” But generally, I think they appreciated learning about learning.

John M.: I think that, given the opportunity to talk about their past experiences, they appreciated that. For the most part, I’ll agree with Sarah. I think that the message landed with an awful lot of students like I wanted it to. Some of my favorite essays were from students who told me that they thought I was crazy on the first day. I mean, you go into a math class to learn math; you don’t go into a math class to study metacognition, or whatever it may be. I had one student the first time around who basically told me it was all a load of crap, like, “Why? This is not working at all.” And I had a student the last time that I did this who was very skeptical, even towards the end. Basically, she likened it to just some cheesy self-help stuff. But I think that most students responded positively.

Rebecca: Have you seen the response impact other faculty in your area? For example, if students really liked having those techniques and things introduced in your class, have they asked other math faculty to do that in future classes, or are you finding that it’s not many math students who were actually in that particular class?

Sarah: We haven’t done any tracking, so I don’t know where his students have gone. I mean, I’m sure some of them went on to Calc II…I’m sure some of them did not. Right. I mean, I guess most of them would have had Jess the following semester, right? Did she say anything?

John M.: No, she didn’t say anything. I’m teaching Calc III right now, and I have some of my former calculus students that were in this and they’re doing well. [LAUGHTER] Small sample size, but yeah, they’re doing well.

John K.: That could be an interesting follow up though to see how successful they were in the subsequent classes.

Sarah: Yeah.

Rebecca: Sometimes we’ve heard anecdotes from departments, when there’s been change, that if students really respond well to whatever the techniques are, they will demand it of other faculty members, and John’s talked about this before in economics.

John K.: Yeah, when you can show results…

Rebecca: Yeah.

John K.: …that there’s been some gain, and especially if it comes from students at the same time, it often puts pressure on other people in the department. Because if you’re able to show people that your technique has been successful, and students are coming in and saying, “Gee, I wish you would consider doing this. I did this in my intro classes, and it was really helpful,” that sometimes helps make change much easier.

Sarah: Yeah, so one of the things that we did look at was we compared the final exam scores of John’s sections to the other sections of calculus that semester. Now, there were some other issues that clouded that data a little bit. His scores were a little bit lower than the other instructors’. But what was really surprising, essentially…and I don’t remember if it was just the final exams or the semester grades…the DF rates were the same among the sections, but the withdrawal rates were significantly different. Almost no one withdrew from John’s sections. I think there were two, if I remember the data correctly, whereas there were like five or six on average from the other sections. And so the DFW rates were different, but the DF rates weren’t. I just thought that was an unusual circumstance. So, it seems like the students were sticking with his class… and pushing through.

John K.: And if there is a larger portion of students staying with the class, then perhaps a slightly lower average grade is not necessarily a bad sign…

Sarah: Exactly.

John K.: …because student success is partly measured by persistence to completing the course.

Sarah: Exactly. I think because there were more students who stuck it through to the final exam, his final exam scores ended up being a little bit lower. But again, if you looked at overall course grades, they ended up being pretty consistent, other than the W rates. I wanted to make sure that there weren’t significant differences in the rates, and I think it was just shy of being statistically significant. Like, if you had one more student, it would’ve been significant. But I wanted to make sure that, especially, adding the test corrections in wasn’t substantially making the class too easy, right? Because that’s often a critique: “Well, you make these changes, but is that just making the class too easy, and are people who aren’t really prepared passing?” And so I did this analysis…like I said, it was really just a t-test analysis…just to see whether or not it was significantly lower, and it wasn’t significant. It was lower, right, just not significantly. And then, like I said, I looked at retention rates more as an explanation for why the average was lower.
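The kind of check Sarah describes, comparing one section’s final exam scores against another’s with a t-test, can be sketched in a few lines. This is a minimal illustration using Welch’s two-sample t-statistic; the section names and scores below are made up for the example, not data from the study.

```python
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2  # sample variances (n-1 denominator)
    return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

# Hypothetical final-exam scores: a section with test corrections vs. a comparison section
section_a = [62, 70, 75, 68, 80, 66, 72, 74]
section_b = [65, 78, 82, 70, 85, 76, 79, 81]

t = welch_t(section_a, section_b)
# t is negative here because section A's mean is lower; whether that difference is
# "significant" requires comparing |t| against the t-distribution for the given df
print(round(t, 2))
```

In practice one would also compute degrees of freedom and a p-value (e.g., with `scipy.stats.ttest_ind(a, b, equal_var=False)`), and, as Sarah notes, a lower mean alone is not the whole story when withdrawal rates differ between sections.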

John K.: In a lot of studies of interventions, the dependent variable is the drop-fail-withdrawal rates, because that’s a measure of success in completing the course. That by itself could be an interesting focus of a study. I’ve been running this metacognitive cafe in my online classes for a while and I did have a student in the class who wrote a few times about the metacognitive development that was introduced in one of your classes. They didn’t specify who but they said, we’re also doing some work on metacognition in the math class, and they said it was really useful and it was nice to see it in two classes.

Sarah: Yay!!

John M.: Good.

John K.: So there’s at least one positive data point there or one additional data point there. So are you going to continue this in the future? And if so, what might you do differently?

Sarah: Well, I think we’ve mentioned already that we’ve worked on including some of the ideas at the beginning of the semester and throughout the semester, rather than one project at the end, because it really benefits them most at the beginning of the semester when things are getting started. I think we’ve also both changed different things about our grading systems to incorporate more opportunities for growth.

John M.: The last time I did this, I introduced some articles that were a little bit more rigorous with the data and the science, because I sort of wanted to counter that kind of criticism that all this “Oh this is just a bunch of TED Talks…” that kind of thing. So, I really wanted the students to see some of the science behind it, the science of learning, because I really wanted to send that message that “No, this is not me just standing up here saying, ‘Oh, this is going to help you or anything, right?’ This is actually stuff that researchers have thought about before.”

John K.: I had a very similar response the first time I did this. I had posted a video, a TED talk by a cognitive scientist who talked about research showing that learning styles are a myth. And some students had come to believe in the existence of learning styles because they’ve heard of them and have often been tested, multiple times in multiple years, on their learning styles, sometimes even through college, and that’s rather troubling. The students said, “Well, this is just one researcher. I’m sure there’s lots of other studies. I don’t believe it because it’s not consistent with what I’ve always been told or what I’ve heard.” So I decided to modify it, and I added five or six research studies to that discussion. In case you don’t believe this TED talk by someone who’s done a lot of research on this, here’s a number of studies, including some meta-analyses of several hundred studies of this issue, and that has cut out much of that discussion. They’re less likely to argue against it when it’s not just a talking head or just a video, when they can actually see a study, even if they don’t understand all the aspects of it.

Sarah: Yeah. So I think that’s one thing we’ve tweaked: what articles and what videos we are showing. I know this semester I gave my students an article that had just come out this September, showing that students perceive active learning as being less efficient, even when they’re learning more. In some physics classes at Harvard, they gave two weeks to each approach… two weeks of active learning and two weeks of lecture, and then they had them switch. And the students learned more with the active learning, but felt they learned less. And my students have been feeling frustrated because they feel like they’re not learning enough and that I’m not telling them what to do.

Rebecca: You’re not “teaching” them.

Sarah: I’m not teaching them. And we spent the class period letting them vent, so all their feelings were out in the open. But then I sort of countered with this article, saying, “Look, I promise you really are learning things. You just don’t feel like you are. But you really, really are. And you’re actually learning it better than if I were using a different style.” So, that’s one way that we’re tweaking the articles, because sometimes research comes out that’s pertinent.

John K.: We’ve referred to that Harvard study in a few past podcasts. We touched on it in a podcast that will be released on October 9th. I haven’t shared it with my class yet, but I’ve been tempted to.

Rebecca: What was the discussion like talking about that particular article? Given that they were frustrated?

Sarah: I mostly was just trying to acknowledge that I understand their frustrations…and that, yes, the way I’m teaching this class can be frustrating. I agree. Sometimes I get frustrated about it. But I know that ultimately they are learning things, and that they are going to be stronger writers and stronger students of mathematics by using this structure. And so I kind of use it as evidence for why I’m not changing.

Rebecca: So I hear you…

Sarah: Yeah.

Rebecca: …but…

Sarah: I hear you, but…

John K.: I had this very conversation with my class today. They’re coming up on an exam very shortly. And I asked them how they review before an exam, and the most common answer was that they like to reread the material over and over again. And I mentioned some of the research on that. And I said the best way to review is to work on problems. And I gave them several ways in which they could do that, that are built into the course structure. And I said, “But that doesn’t feel as effective. Why?” And one of the students said, “Well, I get things wrong.” And I said, “And when would you rather get things wrong: when you’re reviewing for an exam, or when you’re taking the exam?” And I think some of them got that message. So I’m hoping… we’ll see when they take the test next week.

John M.: Right? It seems like anytime you do anything that’s not just a standard straight lecture, there’s a certain amount of buy-in that you need to get from the students. And sometimes that can be very difficult. There’s almost a salesmanship that you have to do throughout the semester to make sure that everybody’s on the same page and to kind of fight those feelings where the students give you a lot of pushback. Yeah, the great fear is that when you innovate or you experiment, it’s going to go horribly wrong. And sometimes it does, but, you know, we still keep going.

John K.: Because students are creatures of habit. They’ve learned certain things and they want to keep doing things the same way. And anything new can seem troubling, especially if they’re getting feedback along the way that says they need to work more on things…that’s not as pleasant as rereading things and having everything look familiar.

John M.: Right

Rebecca: Passively sitting in a lecture where everything seems to make perfect sense, because an expert who knows what they’re talking about is describing it, always feels easier than trying to apply it yourself. And I think that even though the lecture might feel better, and learning is hard… over time, at the end, when they’ve seen how much they’ve accomplished, and you do have them reflect… many of the students come to appreciate it, or come around. Sometimes it’s not in that same semester; sometimes it’s emails months or years later.

John K.: Yes.

John M.: Right. Right, right.

Sarah: If only we could do course evals, you know, a whole year later.

John K.: Or five years later. That may not work too well in my tenure process, though.

Rebecca: We always wrap up asking what’s next?

Sarah: Well, the first thing is we’re hoping our article gets published. It’s been submitted. We’re waiting for reviewers. I’m going on maternity leave next semester…that’s really what’s next.

Rebecca: Sounds like a new adventure.

Sarah: It is a brand new adventure.

John M.: Wow, I don’t think that far ahead, I guess. Yeah, I guess I’m that unoriginal, huh. But, yeah, no I’m just trying to…

Sarah: We’re moving to a new building.

John M.: Yeah, moving to a new building, and getting a new department chair. Yeah, that’s right.

John K.: A new desk to go with the chair?

John M.: No. Ah… Yeah, funny, funny, funny.

Sarah: if only…

Rebecca: Well, thanks so much for joining us, this has been really interesting.

[MUSIC]

John K.: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.

John K.: Editing assistance provided by Brittany Jones and Kiara Montero.

108. Neuromyths

Faculty design their classes based on their perceptions of how students learn. These perceptions, though, are not always consistent with the science of learning. In this episode, Dr. Kristen Betts and Dr. Michelle Miller join us to discuss the prevalence of neuromyths and awareness of evidence-based practices in higher ed.

Kristen is a clinical professor in the online Ed.D. program in Educational Leadership and Management in the School of Education at Drexel University. Michelle is the Director of the First-Year Learning Initiative, Professor of Psychological Sciences and the President’s Distinguished Teaching Fellow at Northern Arizona University. She’s also the author of Minds Online: Teaching Effectively with Technology and a frequent guest on this podcast.

Show Notes

  • Miller, M. D. (2014). Minds online: Teaching effectively with technology. Harvard University Press.
  • Online Learning Consortium
  • Betts, K., Miller, M., Tokuhama-Espinosa, T., Shewokis, P., Anderson, A., Borja, C., Galoyan, T., Delaney, B., Eigenauer, J., & Dekker, S. (2019). International report: Neuromyths and evidence-based practices in higher education. Online Learning Consortium: Newburyport, MA.
  • Mariale Hardiman
  • Tracey Noel Tokuhama-Espinosa
  • Dekker, S., Lee, N. C., Howard-Jones, P., & Jolles, J. (2012). Neuromyths in education: Prevalence and predictors of misconceptions among teachers. Frontiers in Psychology, 3, 429.
  • Alida Anderson
  • Macdonald, K., Germine, L., Anderson, A., Christodoulou, J., & McGrath, L. M. (2017). Dispelling the myth: Training in education or neuroscience decreases but does not eliminate beliefs in neuromyths. Frontiers in Psychology, 8, 1314.
  • “Universal Design for Learning,” CAST website
  • Michelle Miller, “65. Retrieval Practice” – Tea for Teaching podcast, January 23, 2019.
  • Vygotsky, L. (1987). Zone of proximal development. Mind in society: The development of higher psychological processes, 5291, 157.
  • Michelle Miller, “86. Attention Matters” – Tea for Teaching podcast, June 19, 2019.

Transcript

John: Faculty design their classes based on their perceptions of how students learn. These perceptions, though, are not always consistent with the science of learning. In this episode, we examine the prevalence of neuromyths and awareness of evidence-based practices in higher ed.

[MUSIC]

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

[MUSIC]

Rebecca: Our guests today are Dr. Kristen Betts and Dr. Michelle Miller. Kristen is a clinical professor in the online Ed.D. program in Educational Leadership and Management in the School of Education at Drexel University. Michelle is the Director of the First-Year Learning Initiative, Professor of Psychological Sciences and the President’s Distinguished Teaching Fellow at Northern Arizona University. She’s also the author of Minds Online: Teaching Effectively with Technology and a frequent guest on this podcast. Welcome, Kristen and welcome back, Michelle.

Kristen: Thank you so much for having us.

Michelle: Hi, it’s great to be here again.

John: We’re really pleased to talk to you. Our teas today are…

Kristen: I’m drinking Apricot Oolong, a green tea. Nice for the afternoon.

Michelle: And, I have a wonderful hibiscus tea.

Rebecca: And, I have… big surprise… English Afternoon tea.

John: And, I have ginger peach black tea.

John: We invited you here to talk about the study that you both worked on together on neuromyths and evidence-based practices in higher education. Could you tell us what prompted this study?

Kristen: Sure. As a lifelong learner, I decided I would enroll in a wonderful program being offered at Johns Hopkins University several years ago in mind, brain, and teaching, led by Dr. Mariale Hardiman. In one of the courses, I read several articles that looked at the high prevalence of neuromyths in K through 12 education. And, one of the things that caught me by surprise was: one, I was a K through 12 teacher early in my career. I was, at the time, a professor in the School of Education, and in looking at some of the neuromyths, they actually looked like things that I had studied as part of professional development. And, I had not assumed they would be neuromyths. And, so it really intrigued me in terms of: Why is there this high prevalence, and why are we not more aware of some of the evidence-based practices that are out there? Not just in the United States; clearly these were studies that were taking place internationally. So, I decided to start looking at this through the lens of higher education, because that’s where I work and it’s my area of expertise, and I reached out to Dr. Michelle Miller. I was at the Online Learning Consortium conference. Her focus is on cognitive psychology. So, I approached her after the session and told her about this interest in looking at neuromyths within the field of education… really, across disciplines, in trying to see whether the findings were similar to those in K through 12 education, and what was really being done to integrate evidence-based practices into pedagogy or even andragogy. So, we decided to connect and start looking at this. I had a wonderful PhD student who I was working with at the time as well, who is from Armenia and very interested in this topic, and we quickly grew our small group to include a total of ten researchers from seven different institutions, nationally and internationally, across three countries.
And, everybody brought different expertise: everyone from two-year colleges, four-year colleges, public, private. And, we also were very fortunate because we were able to find some of the seminal researchers in the area of mind-brain education science, such as Tracey Tokuhama-Espinosa. And, we reached out to the researchers who actually conducted the studies looking at neuromyths, like Sanne Dekker, and we reached out to Alida Anderson, who worked with Macdonald et al. on their 2017 publication. So, it quickly grew from a point of interest in trying to identify what was happening in higher education, to really a much broader international study.

Michelle: Oh, and just echoing what Kristen has said here: we first met through the Online Learning Consortium, first at a conference, and then they set up calls where we got to talk to each other and realized that even though we came from somewhat different academic backgrounds and published in some different areas, we really had this common ground of interest in how we bring more evidence-based teaching to faculty in higher education and really throughout the world. And, to me, as a cognitive psychologist, it’s just an inherently fascinating question: even though we live in our own minds, why do we sometimes not understand some basic principles of how the mind and the brain work? So, that’s just an intellectually interesting question to me. But then it takes on tremendous practical importance when we start to look at teaching practices throughout the world and at bringing really high-quality, evidence-based design of teaching and learning experiences to our students.

Rebecca: Can you talk a little bit about how, once all of these researchers were together, you put the study together and how it was conducted?

Kristen: I have to say it was not easy. Thank goodness we reached out to some of the original authors of the survey instruments that looked at neuromyths and general knowledge about the brain. And, what was so interesting is that almost all the studies were truly K through 12 focused, so the questions were very different. Even looking at lexicon: “girl and boy,” where we would want to look at male/female. So, we had to look at absolutely every question and make sure that we were able to revise it for the lens of higher education. So, it was not an easy process, just in terms of time, because we had to go through so many iterations. And, I think that really helps with the integrity of the research. We had two pilot studies, even down to looking at the Likert scales that we used. One of the things that really stood out was the primary study that we looked at, which was a 2012 study by Sanne Dekker and several other researchers. They had a Likert scale that offered “correct,” “incorrect,” and “I don’t know.” There was a study by Macdonald and colleagues in 2017, and they changed it to true and false. So, we decided early on we would go with true and false. And, when we did that pilot, we ended up with half the participants stopping midway and simply putting, “I’m not sure if it’s true or false…” and they just didn’t complete the survey. And, I think, just looking at how we phrased the questions, it really affected the participation of our respondents. So, we went back, we modified some of the questions based on that, and we changed the Likert scale. And, I think having the ability to say whether a statement was correct or incorrect, or that you didn’t know, worked better than saying it was true or false, because you can base it on knowledge or what you perhaps had been exposed to. And, we ended up having a wonderful pilot, making some additional changes.
And the feedback that we got, even after sending out the survey… we had a flood of emails saying, “Can you please send us a copy of the study? We’re really interested.” So, we really looked at everything. And, I would say one thing that stood out most (and again, I go back to the time we spent, over two years on this study, from point of inception to where we actually sent out the survey, collected the data, and then published it) was that when we looked at the neuromyths, we quickly realized we needed to examine evidence-based practices as well. And, we looked at all of this from a metacognitive perspective. The prior studies that were done looked at what they called “endorsing neuromyths,” and we weren’t so much looking at endorsing; we wanted to look at awareness, because all of us were involved in teaching… professional development. And, so it was a matter of trying to identify what the gaps were: what were instructors, instructional designers, and professional development administrators aware of, and, if there is that gap, how could we develop a study where people would say, “Wow, I also thought that was correct, but it’s incorrect… but I would love to find out what the response is and how I can change my knowledge or understanding.” And, so we looked at absolutely everything and wanted to create a study that people would pick up and say, “This is where I am now. Gosh, after going through this and reading the report, this is where I am, and my circle of knowledge needs to continue to expand, as things continue to expand through mind-brain education science.”

Michelle: As a collaborative effort, I haven’t really been involved in a study of this scale and scope before; it’s simply the level of collaboration. You just heard about the iterations of the survey instrument that we put together and how that piece of the study came about, but that carried all the way through the analyses and the writing. It was such an opportunity, even apart from what we were able to share with the rest of the world, just for my own niche piece of the study as well: the opportunity, as a cognitive psychologist, to start infusing what I feel is more attention that needs to be paid to cognitive psychology and the learning sciences. The opportunity to infuse that into this field, into this area of thinking, was really exciting as well.

Kristen: So, in terms of how it was conducted, we sent the survey out through the Online Learning Consortium. When we originally started, we were just going to look at instructors; we were looking at neuromyth prevalence in instructors because all of the other studies that had been done were primarily of K through 12 teachers and pre-service teachers (although the Macdonald study looked at a wider range). Once we started to bring together our team, then we started thinking, “Gosh, well, it’s not simply the instructors. It’s going to be the instructional designers, it’ll be anybody conducting some type of professional development as well, because no course is truly an island.” There are so many people today involved in course design and course development, and so the Online Learning Consortium was such an amazing partner for us, and they touch on absolutely every part of that population. So, we reached out to them early on and said, “We’d love to collaborate with you. You’ve got an extensive membership and listserv. Would we be able to develop this survey instrument, send it out through your membership, and ask them through snowball sampling to share it with others who may actually be involved in higher education in one of these roles?” And, they could not have been a better partner. They’re just incredible to work with. So, that’s how it was conducted.

John: And, we were actually part of that snowball. I sent it out to a list of about 1200 faculty, staff, and professional development people on my campus alone. How large was your ultimate sample?

Kristen: We ended up with approximately 1300 respondents. And, then, when we actually looked at the full study, we ended up with 929 who met the criteria for inclusion. So, one of the things we wanted to make sure of when we looked at the criteria for inclusion was that they worked in higher education. You’d be surprised. So many people complete surveys, but they don’t necessarily meet the criteria, even when you explicitly state you have to be within higher education, teaching or in one of these areas. So, we had a total of 929 who met the criteria, and of those, they also had to complete 95% of the questions for the neuromyths and also for the evidence-based practices, because we didn’t want to have any gaps. I would say it was an incredible response rate, especially for those completing the survey; they filled out the majority of everything within the survey itself. The respondents were just incredible as well, because you talked about the cross section of participants, but we ended up with really an incredible number of instructors, and that was broken down into full-time, part-time, instructional designers, and professional development administrators, and it allowed us to run a lot of different tests that we’ll talk about when we look at the findings.

Rebecca: I think one of the things that’s really interesting about how you describe the setup of the study is thinking about how many different individuals play a role in perpetuating myths, or even perpetuating good evidence-based practices too. Administrators are where funding comes from, so you have to have everybody in the institution on board with what you actually want to institute.

Kristen: Well, what’s interesting, and you bring up such a great point: one of the top neuromyths out there is learning styles. And, so when you’re looking at learning styles, this is something that almost seems to permeate. It doesn’t matter when you started teaching, whether it’s K through 12 or higher education; at some point, if you’ve been involved in education, you’ve come across learning styles. Now, there are learning preferences, and there’s lots of wonderful research on that. But this concept of teaching to learning styles, I think, unfortunately (we talk about this in section seven of our report) kind of got mixed in with multiple intelligences. And, that is not at all what multiple intelligences was about, but it was almost the timing of it. And so, having been a K through 12 teacher, I remember going through a professional development where we learned about learning styles and how it was something to look at in terms of teaching to learning preferences. And, even to this day when I do presentations, and I know Michelle has run into this as well, especially when we co-teach some of the OLC workshops, somebody will inevitably raise their hand or type in the chat area, “Are you kidding? Learning styles is a neuromyth? We just had somebody on our campus six months ago who taught us how to do an assessment to teach to learning styles.” So, it’s still out there, even though there’s so much in the literature saying it’s a neuromyth. It’s still prevalent within education across all areas.

John: So, you mentioned the issue of learning styles. And, that’s something we see a lot on our campus as well. We’ve even had a couple of podcast guests whose mention of learning styles we edited out, and then had a chat with them later about it. I won’t mention any names, because they had some really good things to say, but it is a really prevalent myth and it’s difficult to deal with. So, you mentioned learning styles. What are the most prevalent myths that you found in terms of neuromyths?

Kristen: When you look at the report, the first part of our survey had 23 statements. We had eight statements that were neuromyths. If you look at the K through 12 studies, they had many more neuromyths, but we had eight. And, I will tell you, the top five neuromyths in higher education very closely parallel what you find in K through 12. Now, our prevalence is not as high, but it still shows that instructors, instructional designers, and administrators are susceptible to them, and that goes back to awareness. So, the top one: listening to classical music increases reasoning ability, and that’s really that Mozart Effect. Another one: individuals learn better when they receive information in their preferred learning styles. Another: some of us are left brained and some of us are right brained due to hemispheric dominance, and this helps explain differences in how we learn. So, that’s really that concept of “Oh, I’m right brained. I’m left brained.” And, this again is something that goes across higher ed and K through 12. Two other really big ones: we only use 10% of our brain. And, if you look at section seven of the report, you will find all of the responses, literally evidence-based practices or research-supported responses, to make sure that people aren’t simply saying, “Oh, it’s incorrect.” We want people to know why it’s incorrect, so they can reflect on that and change their understanding: really, the rationale and the research behind it. And, then lastly, it is best for children to learn their native language before a second language is learned. This, again, is a big neuromyth. And I think one of the things I’m hoping will come out of this study, because we talk about this really when we go into evidence-based practices, is this concept of neuroplasticity: the fact that the brain changes every time you learn something new. When you’re engaged in an experience, the brain is changing.
And, sometimes the brain is changing at a cellular level before you might even see that change in behavior, and so we’re able to see now, through technology, through fMRI and fNIRS, so much more than we were able to see before. So, really keeping abreast of what’s happening in the research should be informing our practice, because we have more information available than ever before. But, somehow, we need to get that into our professional development trainings, seminars, and workshops, or into the classes that we’re teaching in our schools of education, or into our onboarding. But yeah, these are the top five neuromyths in terms of susceptibility, and they cut across higher ed and K through 12.

John: In your paper, you also provide some crosstabs on the prevalence by the role of individuals, whether they’re instructors, instructional designers, or administrators. Could you tell us a bit about how the different groups did in terms of the prevalence of these neuromyths?

Kristen: Well, the one thing I will say is, everybody is susceptible to neuromyths, so it wasn’t as if there was one group, and I know that’s always in the back of someone’s mind: “Gosh, who’s the most susceptible?” Well, we didn’t find any significant differences, and one of the things that we wanted to do as well was to really break the participants down and look at other factors. So, when we look at full-time versus part-time faculty, is one group more susceptible to neuromyths? We found no significant difference, nor in terms of gender, in terms of age, or in terms of working at a two-year institution or a four-year institution. And I really think that speaks to the amazing opportunity to integrate professional development looking at the learning sciences and mind-brain education science, and the opportunity to decrease that gap. So, it wasn’t one group over another; it’s everybody who has this opportunity to increase this awareness across all of these areas.

John: Didn’t you also find that some of these myths were less common among instructional designers relative to faculty?

Kristen: We found, with evidence-based practices, when we looked at significant differences, that instructional designers actually had, in terms of percent correct, a higher awareness of evidence-based practices. It wasn’t a large difference, but it was a significant difference, and Michelle can certainly speak to this point as well. But, this is really the importance of having an incredible team when you’re looking at course design and course development. And part of that may have to do with the fact that, when you look at instructional design, there is so much new literature and research that’s getting infused into that area, so that may have something to do with it. But, I think there are lots of additional studies that we could do to follow up.

Michelle: Kind of circling back to that point: the design and delivery of instruction in a contemporary university or college is fundamentally more collaborative than it was in prior eras. And, so I think we definitely need to have everybody involved start to really break out of that old-school mold where a class is identified with the teacher who teaches it, and that’s what a course is. No, courses today reflect everything from the philosophy and the support that comes down from the top, to the people the students may never meet but who put their stamp on instruction, such as instructional designers. And, this is something that I get pretty fired up about in my practical work as a program director and just being involved in these things in the university: there are still faculty who, when you say, “Hey, do we have any instructional designers who are working with us on this project to redesign? Is anybody assigned to help us as we develop this new online degree program?” …you sometimes still get blank looks. Or you get, “Oh, aren’t those the people who you call when the learning management system breaks down? Isn’t that their specialty?” I mean, this report, I think, really hammers home the idea that instructional designers are a key part of the collaborative team that goes into really good quality higher education instruction today. And it isn’t just about the technology. I think they’re getting exposure to, and staying abreast of, what’s going on in research that relates to teaching and learning. And, what a great opportunity for faculty to not just rely on them for technology, but to learn from them and to learn with them as we build better courses together.

Rebecca: Can you talk a little bit about the awareness that you found in general about evidence-based practices? So, we focused a lot on the neuromyths, but what shook out when you started looking at the evidence-based practices?

Kristen: Well, one thing that stood out was that awareness was much higher. And, that’s really exciting. I think that’s a huge testament to the professional development that we are offering. But, there were still gaps in areas where there certainly could be a lot of improvement. So, a couple of examples that I’ll give, because we literally spent months looking at evidence-based practices, and we wanted to make sure that we could support them. So, for example, when we look at percent correct, there were statements where most individuals across all three groups were not as aware, like “differentiated instruction is individualized instruction.” We know that this is incorrect. But most of the respondents did not recognize that it was an incorrect statement. So, they either stated it was correct, or they didn’t know. So, again, this is an area that we certainly want to explore, because differentiated instruction is something that really, I think, adds to the classroom. And, there are other ones. For example, we’ll look at Universal Design for Learning. So, one of the statements we had in there actually comes directly from the CAST website, and it says “Universal Design for Learning is a framework to improve and optimize teaching and learning for all people based on scientific insights into how humans learn.” Well, the instructional designers were the most aware: 87% of them got that correct. Of the professional development administrators, 74% got that answer correct. For the instructors, 58% got that correct. So, you can see the difference in the responses, and when we share this nationally or internationally… when we talk about the study, you’ll have a lot of individuals who’ll say, “No, Universal Design for Learning, that’s about accessibility.” Well, it certainly is about accessibility. But, most importantly, it’s about learning and how humans learn. It is probably the most dynamic and the most powerful aspect that we can add into pedagogy or into andragogy.
But just by looking at the data here, it may not be something that everybody’s aware of, and that’s again a great opportunity to integrate that into professional development. So, there are a number of things. I mean, it’s exciting because when you look at it, there are 28 statements. And, as I mentioned, overall, the awareness was much higher across all three groups, compared to neuromyths or general knowledge about the brain.

Michelle: Just to jump in here, again, from my cognitive psychology perspective: those evidence-based practices that we’re talking about also include, specifically, some items that are related to memory, a topic that’s really close to my heart. So, I think those are just fascinating as well. For example, we asked a variation on a classic question that many cognitive psychologists have looked at: whether human memory works a lot like a digital recording device or a video camera. So, is your memory basically taking in information that’s in front of you? And, here again, we’ve got 69% of our instructors saying, “Oh, yeah, that’s right. That’s how it works.” And, that is not how it works. 79% of our instructional designers identified this as an incorrect statement, and 74% of our administrators. And we have a few other related items, such as asking people whether testing detracts from learning. And, as Tea for Teaching listeners know, that goes to retrieval practice. Testing doesn’t detract from learning; testing builds up learning. So, these are some as well where I think it’s very interesting to tap into what people know and really think about. While these may seem like inside baseball, or very metaphorical or philosophical questions, if I’m an instructor and I believe these things, that students are basically just running video cameras in their heads… well, that is going to lead to some different practices. I might be very puzzled as to why I got up and gave this lecture, and the students’ eyes were pointed at me, and yet it didn’t end up in memory. So, those are some of the items that I was particularly interested to see when we got all the numbers in.

Kristen: You know, I would say one thing: when anybody reads the report, what we want them to do is look at how it’s presented in terms of the tables, because everything is looking at the percent of correct or accurate responses. So, as Michelle said, when we look at “human memory works like a digital recording device,” 69% of the instructors got that correct. 79% of the instructional designers got that correct. And, 74% of the administrators got that correct. So, that means we still have a fairly large percentage, basically 20 to 30% that either got the answer incorrect, or they didn’t know. And, even looking at these responses, do they actually know why they knew it? Or did they guess or did they make that assumption like, “Oh, that’s got to be right.” And so, really, the intentionality of this study was awareness, really bringing out statements from the literature to help anybody who’s involved in teaching, course design, professional development to look at these questions, and really think “Do I know this?” And, “If I know it, how do I know this? Is it based on some type of research or literature? Could I defend that? If I don’t know with certainty, where do I find that answer? And how can I learn that? And, how can I integrate those practices?”

John: On the day your report came out, we shared it with everyone on our campus mailing list. One of the nice things about the report is that it includes all the questions and also provides references for the answers, explaining why each specific answer is true or false. It’s a really great resource, and we’ll share a link to it in the show notes. It is long. When I shared it, two people sent back emails saying, “Maybe we should use this as a reading group for next semester.” And it’s not a bad idea, actually. But, much of the length is appendices and so forth. It’s a really informative document. I believe in your survey you asked people about their participation in professional development, and you looked at the relationship between participation in professional development and the prevalence of these myths. Is that correct?

Kristen: We did. So, one of the things that we wanted to look at was trying to find out if educators were involved in professional development, whether it be in neuroscience, psychology, or in mind-brain education science, did that actually increase their awareness of neuromyths, general information about the brain, and evidence-based practices? And it did. We found that it was a significant predictor, and so, for us, again, it was a wonderful opportunity to be able to say that training does have a positive impact. And, that was really the crux of the study… and it’s interesting, you talked about the length of the report, because originally we had thought about doing two or three different studies. So, we’d do one on neuromyths, one on evidence-based practices, one on professional development. Then when we brought the data in, the question was: “Do we separate them out into three different long articles or three different reports?” And, we collectively, across all disciplines, said, “No, we need to bring them together.” Because first and foremost, it’s about awareness. You can’t really talk about evidence-based practices until you’re aware of what the neuromyths might be. What are some of the fallacies that you might actually believe? What are things about the brain that you may or may not know? And, once you’re there, and you have that understanding, you can then move into the evidence-based practices, because it’s all really connected. So, when Michelle talks about memory, you can’t really talk about memory without having some understanding of the mind or the brain. And, so we decided collectively, we would bring it together as hopefully a seminal piece that would really present anyone with a continuum as to: “Where am I? What am I possibly doing in my classroom?” …being able to really do that self assessment and then find the answers, as you said, in that section seven, and realize that they’re not an outlier. I mean, chances are anybody that goes through this is going to fall within that span in terms of their understanding and knowledge.

Michelle: And, what I hope is coming out here is that this study is unusual, not just in its scale, its scope, and its focus on higher education, but in that it is so explicitly geared to not just identifying gaps in knowledge or awareness, but addressing them. It’s not like we came along six months later and said, “Oh, by the way, here’s a really nice resource we put together.” It is one stop, it’s right there. And, what an exercise that was, as well. Kristen, I think you’ll remember us saying, “Okay, in a paragraph… this item, all of us look at this and go ‘oh my gosh, that’s wrong’ or ‘that’s right.’ Why is that, and what are the very best empirical sources that we can trace back to, to demonstrate that?” So, we are trying to provide that, and also to really be a model. The next time you get that handout or that workshop that says, “Oh, here’s some great stuff about the brain,” ask: What are they backing that up with? Can you trace it back to the solid research sources that make some of these really powerful principles for learning, and make other things just misconceptions?

Kristen: One of the things that I would say was probably the most exciting and the most challenging was that we had 10 researchers from different fields: people from nursing, biomedical engineering, psychology; people who work in the areas of neuroscience and education (as I mentioned), and we needed to come out with a collective voice, writing a report that would be understood across disciplines. And, so when we wrote section seven, all of us had to be reviewers and we vetted it multiple times. Not just within our group, but outside, to make sure that when you read about neural pathways, it actually made sense. Because to write something where somebody would not understand or not be able to connect would be a challenge. I know one of the things that we were looking at: Why neuromyths? Well, a lot of the research out there looks at the fact that when you teach, your teaching and your pedagogy are based on your knowledge and on your understanding of how people learn, and so we wanted to really look at this area in terms of awareness, because it may impact pedagogy. Our study did not do that. And, I want to make sure it’s really clear. Our study was not designed to say, “Oh gosh, the awareness of neuromyths wasn’t very high in this area, therefore, you must be integrating neuromyths into your teaching.” That was not the intentionality of our study and that’s not something that we’ve ever said. There are certainly recommendations we put in the study to look at, if there is a high prevalence of neuromyths, how does that affect pedagogy? But ours was simply looking at awareness and whether professional development could address the gaps, so we could do this across all the different groups that would be involved in course design and delivery.

John: That’s one of the things I really like about it: you address all these things well, you provide the evidence, and it’s going to be a great go-to reference for those of us faced with neuromyths or with questions about evidence-based practices. We can just go and grab some of the citations and share them back out, or refer people to the whole document, as I’ve done several times already. These things are really common, even in professional development. I was at a session not too long ago where there were two neuromyths presented during the session. One was the learning styles thing. But the nice thing is, unlike other times when I’ve seen that done, there were two of us who went up and waited until everyone else talked to the presenter. And, we were both ready to do it after other people had gone so we didn’t embarrass her, but it’s starting to get out there. And, I know on our campus, we’ve got a growing number of people who are aware of this, partly because of the reading groups we’ve had, where we’ve had a growing number of participants… and that all started with Michelle’s book about five years ago now, when we first did the group. You came out, you visited, people wanted to do more, so we started a reading group. We’ve done four additional reading groups since then. We’ve had many of the same participants, but it’s spreading out wider. I’m hoping we’re making a difference through these reading groups.

Michelle: And, that’s so gratifying as an author and as a researcher, and I remember well working with your group in Oswego and the great ideas I took away as well. So, I’m a big believer in virtuous cycles. So, maybe we’ve started one.

Kristen: I think what really came out of this study is the passion that everybody has for student success: everybody from those that are offering the professional development, to the instructional designers that want to make sure that the students are successful even though they might not be teaching the course, to the instructors themselves. And, so it was wonderful to be able to work with that many individuals who are not only subject matter experts across their disciplines, but so passionate about making a difference. I think being able to integrate all of this new research relating to neuroscience, psychology, and education is going to transform not only how we teach, but pedagogy, andragogy, and this whole concept of learning.

Rebecca: I really appreciate that you brought it all together and decided not to make three separate reports. I think it’s actually really important to understand how these are all connected and related. And, I think that’s one of the most unique things about the report. I think the community is probably very grateful that we have this resource available now.

Kristen: Oh, thank you.

John: One of the things I’ve often been concerned about is how some of these neuromyths, particularly the left brain – right brain thing and the learning styles belief, often serve as a message to students that they can only learn in certain ways or that they only have certain types of skills and aren’t able to make progress in other ways. That can act as a barrier and can lead, perhaps, to the development of a fixed mindset in students.

Rebecca: …or not even allow those students to feel like they can enter particular disciplines.

John: If people become more aware of this, perhaps it could lead to more opportunities for our students or fewer barriers placed in the way of students.

Rebecca: …or maybe even just more inclusive pedagogy in general.

Kristen: You bring up such a great point. So, if you believe in learning styles, and you believe that you are truly a visual learner (Michelle and I have talked about this a lot), it almost becomes a self-fulfilling prophecy. You probably are an incredible visual learner because you’ve been told you learn better in this learning style, so you’re going to seek materials in that learning style. The challenge with that, especially when you’re looking at younger students, or anybody during their education, is that you’re really precluding other ways to enhance your learning. So, when you look at Universal Design for Learning, it’s so important because you’re looking at multiple means of engagement, representation, action, and expression. With learning styles, if a student believes they’re a visual learner and is suddenly asked to go in and take a Spanish oral exam, it could trigger, all of a sudden, stress. Well, what do we know about stress? Michelle can talk more about that. But, when you’re stressed, it affects working memory. And, so just that thought of, “Oh my gosh, it’s an oral exam. I’m a visual learner. How can I perform well on that?” …it’s really creating, as you talked about, a barrier, or it may possibly decrease performance. I know that Dr. Tracey Tokuhama-Espinosa is very passionate about this. And, you’ll see in her presentations, she’ll come out and say, “Neuromyths do harm.” And so, I think it’s certainly something that needs to be explored. And, Michelle, from a psychological point of view, I’d be curious to find out what you have to say as well.

Michelle: When you say “self-fulfilling prophecy” and things like that, it also kind of reminds me of a placebo effect, in a way… and learning styles, and continuing that as an example, yeah, I might go: “Oh, visual learning. It is absolutely me,” like “Now I feel like I can tailor all this to myself. I’ll just find teachers, opportunities, and disciplines that are right there in visual learning.” And, I might have some subjective impression that that’s helping me, or from the teacher’s perspective, I might feel like “Well, I brought in some different materials and engaged different modalities and, what do you know, because of learning styles, we’re doing better.” Well, there’s lots of different reasons why that might be happening. An individual may walk away, and maybe they weren’t individually harmed. I just feel like… just like in modern medicine, there’s sort of a promise that we can do better than mere placebos. I think that ought to be the promise of modern pedagogy as well, that we can do better than simply trying to build up expectations or giving people a false sense that they have something based on science that’s going to help them individually do better. And, I hear so many kind of missed opportunities that really kind of get me activated as well. I think about, for example, the energy that goes into faculty professional development. These things come from good impulses. I really believe that. I believe that people who really pursue something like learning styles or things like that, they want to do better and they want to be more inclusive, but that effort is directed down the wrong path simply because of this gap in knowledge and gap in information in getting the right information to the right people at the right time. 
And, I can’t stand the thought of faculty, especially as limited as faculty time is and as spread as thin as faculty are, to think that they might try to pick up on some better information about teaching and learning and go down the wrong path. I never want that to happen again. And, maybe our report will be a step in the right direction.

Kristen: I’ll say one thing that we’re trying to do with the report is really to align it with best practices and evidence-based practices. So, when you look at the concept of neuromyths, there’s a wonderful 2017 study by Macdonald and her colleagues, titled “Dispelling the myth: Training in education or neuroscience decreases but does not eliminate beliefs in neuromyths,” and so professional development is not a silver bullet. Simply offering one workshop that addresses neuromyths is not necessarily going to get rid of them. So, we have to do what? We have to look at spacing. We have to look at interleaving. So, with professional development, how do you take information related to evidence-based practices and integrate spaced practice into our own professional development? How do we integrate interleaving? How do we integrate low-stakes assessment? So, maybe when faculty or instructional designers come in, you do a quick self assessment and find out what that baseline knowledge is, and then at the end say, “Okay, at the end of professional development, we need to get to 95% or higher.” But, they’re able to actually test their own knowledge. So, we need to kind of turn professional development upside down and make it active learning and really engage everybody in what we’re looking at within pedagogy and andragogy.

Rebecca: Yeah, I always find it really ironic that a lot of training on evidence-based practices is not using evidence-based practices… it uses really traditional formats: lecturing, getting lectured at, and not really engaging with the material. And, it’s no different when we’re working with our students. If they’re practicing in a way that’s not going to be effective for them, they’re not successful. They could spend tons of time on something and just not really make progress. The same thing can happen with our faculty and staff who are designing curricula and what have you. They can be really invested.

Michelle: Absolutely.

John: We do have an excellent podcast on retrieval practice. In fact, it’s one of our most popular episodes. We’ll share a link to that in our show notes. We don’t yet have any podcasts on interleaved and spaced practice, but I’m sure we’ll be asking Michelle to come back and talk about these things at some point in the future, if she’s willing. So far, we’ve been focusing on the types of neuromyths that are common. What can we do to reduce the prevalence of these neuromyths?

Kristen: Professional development is certainly key. But, I would also look at things such as onboarding, making sure that when people are getting hired, they’re really introduced to evidence-based practices from the very beginning. Even individuals that would say, “Gosh, I’ve been in instructional design for 20 years, I’m familiar” …there may still be those gaps. And, it’s almost like adaptive learning: much like Vygotsky’s work on the zone of proximal development, people may have all been teaching for 20 years, but it doesn’t mean that we don’t have diversity in terms of experience and knowledge about different practices. So, it’s important from the very onset of when people get hired to make sure it’s understood that we’re committed to best practices and evidence-based practices, and that what we do builds upon the literature and the research. Not only do we introduce it there, but we move it forward and integrate it into our pedagogy and what we’re doing in our classrooms.

Rebecca: So, we always wrap up by asking: What’s next?

Michelle: Conference season is upon us. We’re recording this in fall of 2019. I’m gearing up to go to the Online Learning Consortium’s Accelerate conference in November. And so, I will just personally say come find me if you’re there and you want to talk more about this. I will be presenting on a related but different topic having to do with our ongoing Attention Matters project, which is also the subject of another Tea for Teaching episode. So, I’m really working on getting ready for that, and also the upcoming POD Network conference. So, for those educational developers who will be attending that, I’ll be speaking there and hopefully having lots and lots of sidebar conversations with plenty of other people who are interested and fired up about these very topics. So, I’m working on those. I’m also working on what I will now call a forthcoming book. It’s under contract with West Virginia University Press, tentatively titled Remembering and Forgetting in the Age of Technology. So, maybe someday in the not too far off future, we’ll be talking about that project as well.

John: We should note that this podcast will be released during the OLC conference. In particular, it’s coming out on Wednesday of the conference.

Kristen: Oh, that’s exciting.

John: And, I should also note that we’ll be presenting there as well. I’m hoping we’ll get some people to listen to this podcast because we’re presenting the next day. So, we might get some new listeners. [LAUGHTER]

Kristen: Oh, that’s exciting. In terms of projects that I’m engaged in and working on: we’ve just launched a new lab in our School of Education at Drexel University. So, we’re bringing everything together and trying to align projects coming up for 2020. It’s a lab called ELABS, the Education, Learning, and Brain Sciences Research Collaborative. So, we’ll be looking at different studies related to the learning sciences and mind-brain education science. I am wrapping up an article with several researchers at Drexel University, some of our PhD students, that looks at immersive virtual reality and practice, as well as transfer of learning. We also have a report that I’m working on. It’s an update to research that I conducted earlier on online human touch. So, I’m wrapping up that study and putting together an article there. And, then I’m also looking at two book publications on neuroplasticity and optimal learning. One would be for students, to really understand neurodiversity, neuroplasticity, and how you can optimize the stress response; the other looks at neuroplasticity and optimal learning from the instructor or instructional design perspective: How do you integrate this into your practice? So, those are the initiatives that I’m working on.

Rebecca: Sounds like lots of things for all of us to look forward to.

John: Thank you very much for joining us. This was a fascinating conversation. And, we’ve been looking forward to this report since I first heard a bit about it when you initially did the survey, and when I saw a preliminary presentation at OLC last year.

Kristen: Well, thank you so much for having us. It’s such a pleasure to discuss this topic with you. And, I’m looking forward to listening to many of your upcoming podcasts that are clearly connected to this report.

Michelle: Thank you so much. It makes all the hard work worthwhile and we love the opportunity to get the work out to exactly the people with the power to spread it to faculty and instructional designers and leaders in universities today.

Rebecca: Thank you.

[MUSIC]

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts and other materials on teaforteaching.com. Music by Michael Gary Brewer.