26. Assessment

Dr. David Eubanks created a bit of a stir in the higher ed assessment community with a November 2017 Intersection article critiquing common higher education assessment practices. This prompted a discussion that moved beyond the assessment community to a broader audience as a result of articles in The New York Times, The Chronicle of Higher Education, and Inside Higher Ed. In today's podcast, Dr. Eubanks joins us to discuss how assessment can help improve student learning and how we can be more efficient and productive in our assessment activities.

Dr. Eubanks is the Assistant Vice President for Assessment and Institutional Effectiveness at Furman University and a board member of the Association for the Assessment of Learning in Higher Education.

Show Notes

  • Association for the Assessment of Learning in Higher Education (AALHE)
  • Eubanks, David (2017). “A Guide for the Perplexed.” Intersection. (Fall), pp. 14-13.
  • Eubanks, David (2009). “Authentic Assessment.” In Schreiner, C. S. (Ed.), Handbook of research on assessment technologies, methods, and applications in higher education. IGI Global.
  • Eubanks, David (2008). “Assessing the General Education Elephant.” Assessment Update. (July/August)
  • Eubanks, David (2007). “An Overview of General Education and Coker College.” In Bresciani, M. J. (2007), Assessing student learning in general education: Good practice case studies (Vol. 105). Jossey-Bass.
  • Eubanks, David (2012). “Some Uncertainties Exist.” In Maki, P. (Ed.), Coming to terms with student outcomes assessment: Faculty and administrators’ journeys to integrating assessment in their work and institutional culture. Stylus Publishing, LLC.
  • Gilbert, Erik (2018). “An Insider’s Take on Assessment.” The Chronicle of Higher Education. January 12.
  • Email address for David Eubanks: david.eubanks@furman.edu

Transcript

Rebecca: When faculty hear the word “assessment,” do they: (a) cheer, (b) volunteer, (c) cry, or (d) run away?

In this episode, we’ll review the range of assessment activities from busy work to valuable research.

John: Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective practices in teaching and learning.

Rebecca: This podcast series is hosted by John Kane, an economist…

John: …and Rebecca Mushtare, a graphic designer.

Rebecca: Together we run the Center for Excellence in Learning and Teaching at the State University of New York at Oswego.

Rebecca: Today’s guest is David Eubanks, the Assistant Vice President for Assessment and Institutional Effectiveness at Furman and a board member of the Association for the Assessment of Learning in Higher Education. Welcome, David.

John: Welcome.

David: Thank you. It’s great to be here. Thanks for inviting me.

John: Today’s teas are… Are you drinking tea?

David: No, I’ve been drinking coffee all day.

John: Ok, that’s viable.

Rebecca: We’ll go with that. We’ve stopped fighting it at this point.

David: Was I supposed to?

John: Well, it’s the name of the podcast…

David: Oh, oh, of course! No, I’m sorry. I’ve been drinking coffee all day… did not do my homework.

Rebecca: I’m having a mix of Jasmine green tea and black tea.

John: I’m drinking blackberry green tea.

David: I do have some spearmint tea waiting for me at home if that counts.

John: Okay. That works.

Rebecca: That sounds good. It’s a good way to end the day.

John: How did you get interested in and involved with assessment?

David: I wasn’t interested, I wanted nothing to do with it. So I was in the Math department at Coker College… started in 1991… and then the accreditation cycle rolls around every 10 years. So, I got involved in sort of the department level version of it, and I remember being read the rules of assessment as they existed then… and we wrote up these plans…. and I could sort of get the idea… but I really didn’t want much to do with it. This is probably my own character flaw. I’m not advocating this, I’m just saying this is the way it was. So I wrote this really nice report, and the last line of the report was something like: “it’s not clear who’s going to do all this work.” [LAUGHS] Because it sure wasn’t gonna be me… at least that was my attitude. But as the time went on …

Rebecca: I think that’s an attitude that many people share.

David: Right, yeah. As time went on, and I began to imbibe from the atmosphere of the faculty and began to complain about things, I got more involved in the data work of the university. Because some of the things I was wanting to complain about had to do with numbers, like financial aid awards and stuff like that. So I ended up getting into institutional research, which was kind of a natural match for my training in Math… and I found that work really interesting… gathering numbers and trying to prognosticate about the future. But the thing is… at a small college, institutional research is strongly associated with assessment, just because of the way things work… and so the next time accreditation rolls around, guess who got put in charge of accreditation and assessment. [LAUGHS] So, I remember taking the manual home with all these policies that we were supposed to be adhering to… and spreading everything out and taking notes and reading through this stuff and becoming more and more horrified. If it was a cartoon, my hair would have been standing up… and writing to the President saying: “You know… we’re not doing a lot of this… or if we are, I don’t know about it.” So that was sort of my introduction to assessment. And then, it was really at that point that I had to take on some responsibility for the administration of the whole college and for making sure we were trying to follow the rules. So, it evolved from being a faculty member and not wanting anything to do with it, to turning to the dark side and being an administrator and suddenly having to convince other faculty that they really needed to get things done. So that’s sort of the origin myth.

Rebecca: So, sort of a panic attack followed by…. [LAUGHTER]

David: Well yeah… multiple panic attacks. [LAUGHTER]

Rebecca: Yeah.

David: And then, over the years, as I got more involved with the assessment community, I started going to conferences and doing presentations and writing papers, and eventually I got on the board of the AALHE, which is the national professional association for people who work in assessment… and started up a quarterly publication for them, which is still going… and so I think I have a pretty good network now within the assessment world… and have a reasonably good understanding of what goes on nationwide, but a particularly good understanding in the South, because I also participate in accreditation reviews and so forth.

Rebecca: So like you, I think many other faculty cringe when assessment is introduced to them. Why do you think assessment has such a bad rap?

David: Yeah, that’s the thing I’d like to talk about most. Well, part of the problem when we talk about it, and I think you’ll see this when you look at the articles in The Chronicle, The New York Times, and Inside Higher Ed, is that it means different things, and people can very easily start talking across each other, rather than to each other… and I think in sort of a big picture… imagine the Venn diagram from high school math class, and there’s three circles. One circle is kind of the teaching and learning stuff that individual faculty members get interested in at the course level, or maybe a short course-sequence level… their cluster of stuff… and then another one of those circles is the curriculum level, where we want to make sure that the curriculum makes sense and sort of adds up to something… that the courses, if they’re calculus one, two, three, actually act like a cohesive set of stuff… and then there’s the third circle in the diagram, and that’s where the problem is, I think. In the best world, we can do research… we can do real educational research on how students develop over time and how we affect them with teaching. But if we dilute that too much… if we back off of actual research standards and water it down to the point where it’s just very, very casual data collection… it’s still okay if we treat it like that… but I think where the rub comes in… because of some expectations for many of us in accreditation… is that we collect this really informal data and then have to treat it as if it’s really meaningful, rather than using our innate intuition and experience as teachers and our experience with students. So I think the particular problem… the rock in the shoe, if you will… is the sort of forced and artificial piece of assessment that is most associated with the accreditation exercises.

John: Why does it break down that way? Why do we end up getting such informal data?

David: Well, educational research is hard, for one thing. It’s a big fuzzy blob. If you think about what happens in order for a student to become a senior and write that senior thesis… just imagine that scenario for a minute… and we’re gonna try to imagine that the quality of that senior thesis tells us something about the program the student’s in. Well, the student had different sequences of courses than other students, and in many cases… this wouldn’t apply to a highly structured program… but for many of us, the students could have taken any number of courses… could have maybe double majored in something else… even within the course selections could have had different professors at different times of day… in different combinations… and so forth. So it’s very unstandardized… and on top of that, the student has his or her own characteristics… like interests and just time limitations, for example. Maybe the student’s got a job, or maybe the student’s not a native English speaker, or something. There’s all sorts of traits of the individual student. Anyway, the point is that none of this is standardized. So when we just look at that final paper that the student’s written, there are so many factors involved that we can’t really say, especially with very small amounts of data, what actually caused what. And my argument is that the professors in that discipline, if they put their heads together and talk about what the product is we’re getting out and what its likely limitations or strengths are, are in a really good position to make some informed subjective judgments… judgments that are probably much higher quality than some of the forced, limited assessments that usually have to be on a numerical scale, like rubric ratings or maybe test scores or something like that. So I’m giving you kind of a long-winded answer, but I think the ambition of the assessment program is fine. It’s just that the execution within many, many programs doesn’t allow that philosophy to be actually realized.

Rebecca: If our accreditation requirements require us to do certain kinds of assessment and we do the fluffy version, what’s the solution? Is it more rigorous assessment, or is it that we treat fluffy data as fluffy data and do what we can with that?

David: Right, well as always, it’s easier, I think, to point out a problem than it is to solve it necessarily. But I do have some ideas… some thoughts about what we could do that would give us better results than what we’re getting now. One of those is, if we’re going to do research, let’s do research. Let’s make sure that we have large enough samples… that we understand the variables and really make a good effort to try to make this thing work as research… and even when we do that, probably the majority of the time it’s going to fail somehow or another, because it’s difficult. But at least we’ll learn stuff that way.

Rebecca: Right.

David: Another way to think of it is if I’ve got a hundred projects with ten students in each one and we’re trying to learn something in these hundred projects, that’s not the same thing as one project with a thousand students in it, right?
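As a rough illustration of that point, here is a minimal simulation sketch; the 1-to-4 rubric scale, the “true” program mean, and the spread are all assumed numbers, and the only thing it shows is how much more ten-student estimates wobble than a thousand-student one.

```python
# A minimal simulation of the contrast above: 100 assessments of 10 students each
# versus one assessment of 1,000 students. The rating scale, "true" mean, and
# spread are hypothetical; fixing the true mean lets us see pure sampling noise.
import numpy as np

rng = np.random.default_rng(42)
true_mean, sd = 2.8, 0.8          # assumed average rubric rating and spread

# 100 small projects: each estimates the program mean from only 10 students
small_means = rng.normal(true_mean, sd, size=(100, 10)).mean(axis=1)

# One large project: estimates the mean from 1,000 students
large_mean = rng.normal(true_mean, sd, size=1000).mean()

print(f"true mean      : {true_mean}")
print(f"small projects : estimates range {small_means.min():.2f} to {small_means.max():.2f}")
print(f"large project  : estimate {large_mean:.2f}")
# The 10-student estimates swing by roughly +/- 0.5 around the truth (SE ~ 0.25),
# while the 1,000-student estimate sits within a few hundredths (SE ~ 0.025).
```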

Rebecca: Right.

David: It’s why we don’t all try to invent our own pharmaceuticals in our backyards. We let the pharmaceutical companies do that. It’s the same kind of principle. And so we can learn from people… maybe institutions who have the resources and the numbers… we could learn things about how students learn in the curriculum that are generalizable. So that’s one idea… if we’re going to do research, let’s actually do it. Let’s not pretend that something that isn’t research actually is. Another is a real oddity… that is, somehow, way back when, somebody decided that grades don’t measure learning. And this has become a dogmatic item of belief within much of the assessment community, in my experience. It’s not a hundred percent true, but it is at least in action… and for example, I think there’s some standard advice you would get if you were preparing for your accreditation report: “Oh, don’t use grades as the assessment data, because you’ll just be marked down for that.” But in fact, we can learn an awful lot from just using the grades that we automatically generate. We can learn a lot about who completes courses and when they complete them. A real example that’s in that “Perplexed” paper is… looking at the data, it became obvious that waiting to study a foreign language is a bad idea. The students who don’t take the foreign language requirement the first year they arrive at Furman look, from the data, like they’re disadvantaged. They get lower scores if they wait even a year. And this is exacerbated, I believe, by students who are weaker to begin with waiting. So those two things in combination are sort of the kiss of death. And this has really nothing to do with how the course is being taught; it’s really an advising process problem… and if we misconstrue it as a teaching problem, we could actually do harm, right? If we took two weeks to do remedial Spanish or whatever when we don’t really need to be doing that, we’re sort of going backwards.
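A sketch of the kind of grade-based look described here, under an assumed registrar extract; the file and column names (student_id, hs_gpa, course_dept, term_index, grade_points) are hypothetical. Grouping by incoming high-school GPA band is there so that weaker students waiting longer isn't misread as a teaching problem.

```python
# Sketch: do students who delay the language requirement earn lower grades in it,
# once we compare within bands of incoming ability? All names are illustrative.
import pandas as pd

records = pd.read_csv("course_records.csv")   # hypothetical one-row-per-course extract

lang = records[records["course_dept"] == "LANG"].copy()

# Flag students whose first language course came after their first year (terms 1-2)
lang["delayed"] = lang.groupby("student_id")["term_index"].transform("min") > 2

# Compare language grades for early vs. delayed starters within incoming-GPA bands
lang["hs_band"] = pd.qcut(lang["hs_gpa"], 3, labels=["low", "mid", "high"])
summary = lang.groupby(["hs_band", "delayed"], observed=True)["grade_points"].agg(["mean", "count"])
print(summary)
```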

Rebecca: So we’re blaming faculty members for things that aren’t necessarily a faculty member’s fault.

David: Exactly, right. What you just said is a huge problem, because much of the assessment… these little pots of data that are then analyzed… are very often analyzed in a very superficial way… where, for example, they don’t take into account the expressed academic ability of the students who are in that class, or whatever it is you’re measuring. So if one year you just happen to have students who were C students in high school, instead of A students in high school, you’re going to notice a big dip in all the assessment ratings just because of that. It has nothing to do with teaching necessarily. And at the very least, we should be taking that into account, because it explains a huge amount of the variance that we’re going to get in the assessment ratings. Better students get better assessment ratings; it’s not a mystery.
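One minimal way to “take that into account,” sketched with a hypothetical ratings table (year, hs_gpa, and rating are assumed column names): adjust each rating for incoming high-school GPA before comparing year-to-year averages.

```python
# Sketch: compare ability-adjusted year means instead of raw means, so a weak
# incoming cohort doesn't read as a drop in teaching quality. Names are hypothetical.
import pandas as pd
import numpy as np

df = pd.read_csv("program_ratings.csv")               # hypothetical file

# Simple linear adjustment: remove the part of the rating predicted by HS GPA
slope, intercept = np.polyfit(df["hs_gpa"], df["rating"], 1)
df["adjusted"] = df["rating"] - (intercept + slope * df["hs_gpa"])

print(df.groupby("year")["rating"].mean())    # raw means: dip when a weaker cohort arrives
print(df.groupby("year")["adjusted"].mean())  # adjusted means: closer to the program's real signal
```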

John: So, should there be more controls for student quality and studies over time of student performance? Or should there be some value-added type approaches used for assessment, where you give students pre-tests and then you measure the post-test scores later? Would that help?

David: Right, so I think there’s two things going on that are really nice in combination. One is the kind of information we get from grades, which mostly tells us how hard the student worked, how well they were prepared, how intelligent they are, or whatever… however you want to describe it. It’s kind of persistent. At my university, the first-year grade average of students correlates with their subsequent years’ grade average at 0.79. So it’s a pretty persistent trait. But one disadvantage is that, let’s say Tatiana comes in as an A+ student as a freshman, she’s probably going to be an A+ student as a senior. So we don’t see any growth, right? If we’re trying to understand how students develop, the grades aren’t going to tell us that.

John: Right.

David: So we need some other kind of information that tells us about development. And I’ve got some thoughts on that and some data on that if you want to talk about it, but it’s a more specialized conversation maybe then you want to have here.

John: Well, if you can give us an overview on that argument.

Rebecca: That sounds really interesting, and I’d like to hear.

David: Okay. Well, the basic idea is a “wisdom of the crowds” approach, in that when things are really simple… if we want to know if the nursing student can take a blood pressure reading… then (I assume, I’m not an expert on this, but I assume) that’s fairly cut and dried, and we could have the student do it in front of us and watch them and check the box and say, “Yeah, Sally can do that.” But for many of the things we care about, like textual analysis or quantitative literacy or something, it’s much more complicated and very difficult to reduce to a set of checkboxes and rubrics. So, my argument is that for these more complex skills and things we care about, the subjective judgment of the faculty is a really valuable piece of information. So what I do is, I ask the faculty at the end of the semester, for something like student writing (because there’s a lot of writing across the curriculum): “How well is your student writing?” and I ask them to respond on a scale that’s developmental. At the bottom of the scale is “not really doing college-level work yet.” That’s the lowest rating… the student’s not writing at a college level yet. We hope not to see any of that. And then at the upper end of the scale is “student’s ready to graduate.” “I’m the professor. According to my internal metric of what a college student ought to be able to do, this student has achieved that.” The professors in practice are kind of stingy with that rating… but what it does is create another data set that does show growth over time. In fact, I had a faculty meeting yesterday… showed them the growth over time in the average ratings of that writing effectiveness scale over four years. If I break it up by the students’ entering high school grades, those are three parallel lines, stacked with high grades, medium grades, and low grades. So the combination of those two pieces… grade-earning ability and professional subjective judgment after a semester of observation… seems to be a pretty powerful combination. I can send you the paper on that if you’re interested.
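A sketch of how that growth picture could be tabulated, assuming a hypothetical long table of end-of-semester ratings with columns like year_in_college, hs_gpa, and writing_rating; the scale is assumed to run from “not college-level yet” at the bottom to “ready to graduate” at the top.

```python
# Sketch: average end-of-semester writing ratings by year in college, broken out
# by entering high-school grade band. Table and column names are hypothetical.
import pandas as pd

ratings = pd.read_csv("writing_ratings.csv")          # hypothetical file

ratings["hs_band"] = pd.qcut(ratings["hs_gpa"], 3, labels=["low", "mid", "high"])
growth = ratings.pivot_table(index="year_in_college",
                             columns="hs_band",
                             values="writing_rating",
                             aggfunc="mean",
                             observed=True)
print(growth)   # three columns rising over four years: the "parallel lines" described above
```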

John: Yes.

Rebecca: Yeah, that will be good. Do you do anything to kind of norm how faculty are interpreting that scale?

John: Inter-rater reliability.

David: Right, exactly. That’s a really good question, and reliability is one of the first things I look at… and that question by itself turns out to be really interesting. I think when I read research papers, it seems like a lot of people think of reliability as this checkbox that I have to get through in order to talk about the stuff I really want to talk about… because if it’s not reliable, then I don’t have anything I need to talk about… and I think that’s unfortunate, because just the question of “what’s reliable and what’s not” generates lots of interesting questions by itself. So, I can send you some stuff on this too, if you like. But, for example, I got this wine rating data set where these judges blind taste flights of wine and then have to give each one a rating on a 1-to-4 scale. And this guy published a paper on it, and I asked for his data. And so I was able to replicate his findings, which were that what the wine tasters most agreed on was when wine tastes bad. If it’s yucky, we all know it’s yucky. It’s at the upper level, when it starts to become an aesthetic judgment, that we have trouble agreeing. The reason this is interesting is because usually reliability is just one number: you ask how reliable the judges’ ratings are and you get .5. That’s it. That’s all the information you get… it’s .5. So what this does is break it down into more detail. So when I do that with the writing ratings, what I find is that our faculty, at this moment in time, are agreeing more about what’s “ready to graduate” and not so much about that crucial distinction between not doing college-level writing and doing intro college-level writing.
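A sketch of breaking agreement down by rating category rather than reporting one overall reliability number, assuming a hypothetical long-format table of ratings (item_id, rater, rating); the same idea applies whether the items are wines or writing samples.

```python
# Sketch: per-category agreement among raters. For each rating category, among
# pairs of ratings of the same item where at least one rater chose that category,
# how often did both raters choose it? Table and column names are hypothetical.
from itertools import combinations
import pandas as pd

ratings = pd.read_csv("ratings_long.csv")        # hypothetical file

# All pairs of ratings given to the same item
pairs = []
for _, grp in ratings.groupby("item_id"):
    for a, b in combinations(grp["rating"].tolist(), 2):
        pairs.append((a, b))

for category in sorted(ratings["rating"].unique()):
    relevant = [(a, b) for a, b in pairs if a == category or b == category]
    if not relevant:
        continue
    agree = sum(a == b for a, b in relevant) / len(relevant)
    print(f"rating {category}: {agree:.2f} agreement when at least one rater chose it")
# If the low end of the scale shows high agreement and the top does not (or the
# reverse), that tells you where the rubric conversation needs to happen.
```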

Rebecca: That’s really fascinating. You would almost think it’d be the opposite.

David: I was astounded by this, yes. And so I got some faculty members together and asked some other faculty members to contribute writing samples, some that they thought were good and some that were bad, so that I’d have a clean set to test this with, and I watched them do it.

Rebecca: Right.

David: So yeah, we got in the room and we talked about this, and what I discovered was not at all what I expected. I expected that students would get marked down on the writing if they had lots of grammar and spelling errors and stuff like that. But we didn’t have any papers like that… even the ones that were submitted as the bad papers didn’t have a lot of grammatical errors. So I think that the standard for what the professors expect from entry-level writers is really high. And because it’s high, we’re not necessarily agreeing on where those lines are… and that’s where the conversation needs to be, for the students’ sake, right? It’s never going to be completely uniform, but just knowing that this disagreement exists is really advantageous, because now we can have more conversations about it.

Rebecca: Yeah, it seems like a great way to involve a teaching and learning center… to have conversations with faculty about what is good writing… what should students come in with… and what those expectations are… so that they start to generate a consensus, so that the assessment tool helps generate the opportunity for developing consensus.

David: Yes, exactly, and I think that’s the best use for assessment is when it can generate really substantive conversations among the faculty who are doing the work of giving the assignments and giving the grades and talking to students.

Rebecca: So, how do we get the rest of the accreditation crowd to be on board with this idea?

David: That’s a really interesting question. I’ve spent some time thinking about that. I think it’s possible. I’m optimistic that we can get some movement in that direction. I don’t think a lot of people are really happy with the current system, because there are so many citations for non-compliance that it’s a big headache for everybody. There are these standards saying every academic program is supposed to set goals… assess whether or not those are being achieved… and then make improvements based on the data you get back. That all seems very reasonable, except that when you get into it and you approach it with this really reductive, positivist approach, it implies that the data is really meaningful when in many cases it’s not, so you get stuck. And that’s where the frustration is. So I think one approach is to get people to reconsider the value of grades, first of all. If you can imagine the architecture we’ve set up, it’s ridiculous. So imagine these two parallel lines. On the top we’ve got grades, and then there’s an arrow that leads into course completion… because you have to get at least a D usually… and then another arrow that leads into retention (because if you fail out of enough classes you can’t come back or you get discouraged), and that leads to graduation, which leads to outcomes after graduation, like grad school or a career or something. So, that’s one whole line, and that’s been there for a long time. Then under that, what we’ve done is constructed this parallel grading system with the assessment stuff that explicitly disavows any association with any of the stuff on the first line. That seems crazy. What we should have done to begin with is said, “Oh, we want to make assessment about understanding how we can assign better grades and give better feedback to students, so they’ll be more successful, so they’ll graduate and have outcomes,” right? That all makes sense. So I think the argument there is to turn the kind of work we’re doing now into a more productive way to feed into the natural epistemology of the institution, rather than trying to create this parallel system that doesn’t really work very well in a lot of cases.

Rebecca: It sounds to me like what you’re describing is… right now a lot of assessment is decentralized into individual departments… but what you’re advocating for is that it become a little more centralized, so that you can start looking at these big-picture issues rather than these minuscule little things that you don’t have enough of a data set to study. Is that true?

David: Absolutely, yes, absolutely. Some things we just can’t know without more data, partly because the data that we do get is going to be so noisy that it takes a lot of samples to average out the noise. So yes, in fact that’s what I try to do here…. Generate reports based on the data that I have that are going to be useful for the whole University as well as reports that are individualized to particular programs.

Rebecca: Do you work with individual faculty members on the scholarship of teaching and learning? Maybe there’s something in particular that they’re interested in studying, and given your role in institutional research and assessment, do you help them develop studies and help collect the data that they would need to find those answers?

David: Yes, I do when they request it or I discover it. It’s not something that I go around and have an easy way to inventory, because there’s a lot of it going on I don’t know about.

Rebecca: Right.

David: I’d say more of my work is really at the department level, and this part of assessment is really easy. If you’re in an academic department, so much of the time that the faculty meet together gets sucked up with stuff like hiring people, scheduling courses, setting the budget for next year and figuring out how to spend it, selecting your award students… all that stuff can easily consume all the time of all the faculty meetings. So really, just carving out a couple of hours a semester, or even a year, to talk about what it is we’re all trying to achieve, and here’s the information… however imperfect it is… that we know about it, can pay big dividends. I think a lot of times that’s not what assessment is seen as. It’s seen as, “Oh, it’s Joe’s job this year to go take those papers and regrade them with a rubric, and then stare at it long enough until he has an epiphany about how to change the syllabus.” That’s a bit of a caricature, but there is a lot of that that goes on.

Rebecca: I think it’s my job this year to… [LAUGHS]

David: Oh, really?

John: In the Art department, yeah. [LAUGHS]

Rebecca: I’m liking what you’re saying because there’s a lot of things that I’m hearing you say that would be so much more productive than some of the things that we’re doing, but I’m not sure how to implement them in a situation that doesn’t necessarily structurally buy into the same philosophy.

John: And I think faculty tend to see assessment as something imposed on them that they have to do and they don’t have a lot of incentives to improve the process of data collection or data analysis and to close the loop and so forth. But perhaps if this was more closely integrated into the coursework and more closely integrated into the program so it wasn’t seen as (as you mentioned) this parallel track, it might be much more productive.

David: Right, and one thing I think we could do is ask for reports on grades. Grade completions… there’s all sorts of interesting things that are latent in grades and also in course registrations. For example, I created these reports… imagine a graph that’s got 100, 200, 300, 400 along the bottom axis… and those are the course levels. I wanted to find out when students are taking these courses. So what you’d expect is that the freshmen are taking 100-level courses and the sophomores are taking 200 on average, and so forth, right? But when I created these reports for each major program, I discovered that there were some oddities… that there were cases where 400-level courses were being taken by students who were nowhere near seniors. So I followed up and asked this faculty member what was going on, and it turned out to just be a sort of weird registration situation that doesn’t normally happen, but there were students in that class who probably shouldn’t have been in there. And she said, “Thanks for looking into this, because I’m not sure what to do.” So that sort of thing could be routinely done with the computing power we have now. I think there’s a lot you could ask for that would be meaningful without having to do any extra work, if somebody in the IR or assessment offices is willing to do that.
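A sketch of that kind of registration report, assuming a hypothetical registrar extract with columns like major, class_year, and course_number; it asks for no extra work from faculty, it is just a different cut of data the institution already has.

```python
# Sketch: for each major program, when (by class year) are students taking
# 100-, 200-, 300-, and 400-level courses? Extract and column names are hypothetical.
import pandas as pd

reg = pd.read_csv("registrations.csv")             # hypothetical file

reg["course_level"] = (reg["course_number"] // 100) * 100   # e.g. 347 -> 300

report = pd.crosstab(index=[reg["major"], reg["class_year"]],
                     columns=reg["course_level"])
print(report)
# Oddities jump out: a cluster of first-year students in a 400-level course
# is worth a follow-up question, not an assessment finding.
```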

Rebecca: That’s a good suggestion.

David: And so in the big picture, how do we actually change the accreditors’ minds? It’s not so much really the accreditors… the accreditors do us a great service, I think, by creating this peer-review system. In my experience it works pretty well. The issue, I think, within the assessment community is that there are a lot of misunderstandings about how this kind of data, these little small pools of data, can be used and what they’re good for. And so what I’ve seen is a lot of attention to the language around assessment during an accreditation review: are the goals clearly stated… it’s almost like, did you use the right verb tense?… but I’ve never seen that literally. [LAUGHTER] No, there’s pages of words: are there rubrics? Do the rubrics look right? And all this stuff, and then there’s a few numbers, and then there’s supposed to be some grand conclusion to that. It’s not all like that, but there’s an awful lot of it like this, so if you’re a faculty member stuck in the middle of it, you’re probably the one grading the papers with a rubric that you already graded once. And you tally up those things, and then you’re supposed to figure out something to do with those numbers. So, this culture persists because the reviewers have that mindset that all these boxes have to be checked off. There’s a box for everything except data quality. [LAUGHS] No, literally… if there were a box for data quality, everything would fall apart immediately. So we have to change that culture. We have to change the reviewer culture, and I think one step in doing that is to create a professional organization, or use one that exists. In accounting and librarianship, they have professional organizations that set their standards, right? We don’t have anything like that in assessment. We have professional organizations, but they don’t set the standards. The accreditors have grown (accidentally, I think) into the role of being like a professional organization for assessment. They’re not really very well suited for that. And so, if we had a professional organization setting standards for review that acknowledge that the central limit theorem exists, for example, then I think we could have a more rational, self-sustaining, self-governing system… and hopefully get away from causing faculty members to do work that’s unnecessary.

John: I don’t think any faculty members would object to that.

David: Well, of course not. I mean, you know, everybody’s busy… you want to do your research… you’ve got students knocking on the door… you’ve got to prepare for class. And really, it’s not just that we’re wasting faculty members’ time if these assessment numbers that result aren’t good for anything. It’s also the opportunity cost. What could we have done, researching course completion, that would have, by now, in the twenty years we’ve been doing this, saved how many thousands of students? There’s a real impact to this, so I think we need to fix it.

John: How have other people in the assessment community reacted to your paper and talks?

David: Yeah, that’s a very interesting question. What has not happened is anybody writing me to say, “No, Dave, you’re wrong. Those samples, those numbers we get from rating our students, are actually really high-quality data.” Now, in fact, probably every institution has some great examples where they’re doing really excellent work trying to measure student learning. Maybe they’re doing a general education study with thousands of students or something. But down at the department level, if you’ve only got ten students, like some of our majors might have, you really can’t do that kind of work. So I haven’t had anybody, even in the response articles, address the question by saying, “No, you’re wrong, because the data is really good”… because if you believe the data is good, the other conclusion is that the faculty are just not using it, right? Or somebody’s not using it. So I guess the rest of the answer to the question is that the assessment community, I think, is naturally rallying around because they feel threatened by this, and undoubtedly there are faculty members making their lives harder in some cases. That’s unfortunate. It wasn’t my intention. The assessment director is caught in the middle, because they are ultimately responsible for what happens when the accreditor comes and reviews them… the peer review team, right? So it’s like a very public job performance evaluation when that happens, and depending on what region you’re in, there are different levels of severity, but it can be a very, very unpleasant experience to have one of those reviews done by somebody who’s got a very checkboxy sort of attitude… not really looking at the big picture and what’s possible, but looking instead at the status of idealistic requirements.

Rebecca: So the way to get the culture shift, in part, requires the accreditation process to see a different perspective around assessment… otherwise the culture shift probably won’t really happen.

David: Right, we have to change the reviewers’ mindset, and that’s going to have to involve the accreditors to the extent that they’re training those reviewers. That’s my opinion.

Rebecca: What role, if any, do you see teaching and learning centers having in assessment in the research around assessment?

David: Well, that’s one of those circles in my Venn diagram, you recall, and I think it’s absolutely critical for the kind of work that has an impact on students, because it’s more focused than, say, program assessment, which is very often trying to assess the whole program… which, as I noted, has many dimensions to it. Whereas a project that’s like a scholarship of teaching and learning project, or just a course-based project, may have a much more limited scope and therefore has a higher chance of seeing a result that seems meaningful. I don’t think our goal in assessment in that case is to try to prove mathematically that something happened, but to reach a level of belief on the part of those involved that “yes, this is probably a good program that we want to keep doing.” So I think the assessment office can help by producing generalizable information, or just background information that would be useful in that context, like “here’s the kind of students we are recruiting” or “here’s how they perform in the classroom” or some other characteristic. For example, we have very few women going into economics. Why is that? Is that interesting to you economists? So those kinds of questions can probably be brought down from the bigger data set to that level.

Rebecca: You got my wheels turning, for sure.

David: [LAUGHS] Great!

Rebecca: Well, thank you so much for spending some of your afternoon with us, David. I really appreciate the time that you spent and all the great ideas that you’re sharing.

John: Thank you.

David: Well, it was delightful to talk to you both. I really appreciate this invitation, and I’ll send you a couple of things that I mentioned. And if you have any other follow-up questions don’t hesitate to be in touch.

Rebecca: Great. I hope your revolution expands.

David: [LAUGHS] Thank you. I appreciate that. A revolution is not a tea party, right?

John: If you’ve enjoyed this podcast, please subscribe and leave a review on iTunes or your favorite podcast service. To continue the conversation, join us on our Tea for Teaching Facebook page.

Rebecca: You can find show notes, transcripts, and other materials on teaforteaching.com. Music by Michael Gary Brewer.