The Undergraduate Psychology Association has very kindly featured me on their website as "Professor of the Month" for March. (Yeah, so there's only one more day of March. That's OK; the UPA is a volunteer organization--what have you volunteered for lately?) It's not an award or anything; it's an in-depth all-you-ever-wanted-to-know interview. Or at least as much as I could spew out in 18 minutes. (Thanks to Dan L. for interviewing me--and sorry I was in such a rush!)
The Professor of the Month
The Textbook Change
I don't like changing textbooks. It's a huge hassle for a number of reasons. I like my lecture notes to be organized around the textbook I'm using. (Not to say that I repeat what's in the textbook, but minimally, I try to lecture on things in the same order as they are presented in the book.) So changing books means rearranging all my lectures. Plus, I'll have to rewrite the exams to reflect the material presented in the new text and take out questions based on the old one.
The Question on Grade Inflation
In a recent open comment, Anonymous (A studious student) had some pretty serious accusations (sorry, sorry, “questions”!) about grades and evaluations. I’d like to address those questions--not just in another comment, but in full postings. In my last post, I discussed the possible link between grades and teaching evaluations. This post addresses the third of several claims/questions/concerns.
Question: “What is your thought on grade inflation?”
To the extent it exists, it sucks. But I don’t know how prevalent it is. In talking to my colleagues, I’ve found sentiment is universally against it. But then, like I wrote in my last post, maybe we’re all doing it subconsciously anyway.
Claim: “it is getting more and more difficult for me to set myself apart from other students...By 4th year, ~20% of the class is expected to receive an A/A+”
According to GFC policy on approved grade distributions, in 4th year courses 37% of students are expected to obtain a letter grade of A- or higher, and 20% are indeed expected to receive either an A or A+.
What is this, officially prescribed grade inflation? I can’t speak for GFC, but I’ll give you my view. By the 4th year, there has been some weeding out. Students who have not been able to handle the material have changed majors, or maybe have even left university. So the students who are left are, in general, more capable than those in, say, first year. Also, class sizes at the 400 level are smaller, giving you more access to the instructor, which (I would hope) impacts grades.
So yes, it is literally harder for you to set yourself apart from other students--in terms of grades. But there are other things you can do to differentiate yourself. Talk to your instructors; show interest in what they’re teaching. I’ve formed great relationships with students over the years in part because they did more than just show up to class. In fact, I’ve been privileged to be able to help some of them advance their academic careers, too. (It’s been great watching people go from being undergrads to being practicing psychologists, or holding other positions of importance in the real world!)
Concern: “salary is partially determined by these evaluations (I think), so professors/lecturers have greater incentive to give higher grades.”
Yes, you are correct. Even though evals are not supposed to be the sole measure of teaching, sadly, those numbers may be the only representation of teaching on my yearly review. I am not a number! There is an incentive to give higher grades only if there is a belief that doing so will result in better evals and thus performance increments. I can’t give you any statistics on this one, and I wouldn’t want to. I don’t want to imply that my colleagues are so shallow. Rather, in working with them on the AASUA Teaching and Learning Committee and in other groups, going to teaching seminars put on by University Teaching Services, and in talking with them one-on-one, I find them--to a person--to be hardworking, dedicated, and committed to doing the best teaching job they possibly can. This is not puffery; I am not stoking anyone’s ego. If the University of Alberta were not seriously interested in the importance of teaching, I would have thrown in the towel and left.
Claim: “Even to this day, most believe that Harvard grades are meaningless.” I have no data on this, and cannot speak to this. Even if the grades are meaningless, I know for a fact that a degree from Harvard is not meaningless. In fact, it can be a ticket to more money than I’ll ever see. I know your University of Alberta degree has value; there are too many people working too hard to let the reputation of the whole university go down the drain. And I don’t think that’s going to change.
To be sure, the issues you have raised are important ones, and they are being discussed and considered on campus (and on other campuses, too). I hope I have not dismissed your valid comments, concerns, and criticism. Instead, I’ve tried to pull the curtain aside and let you hear my thoughts and ideas. I’m impressed that you have been considering these issues, and have brought them forward for discussion. That’s what I wanted in this blog, and boy did I get it--thanks!
Why aren’t you studying?
Update 3/21/2009: Just found out about the website GradeInflation.com. Are instructors inflating grades, are students getting better, are teaching techniques improving, or is it something else?
The Question on Higher Grades and Teaching Evaluations
In a recent open comment, Anonymous had some pretty serious accusations (sorry, sorry, “questions”!) about grades and evaluations. I’d like to address those questions--not just in another comment, but in full postings. In my last post, I tried to make the case that I don’t always give out “higher grades,” although there is a tendency for higher grades to appear in some of my classes. This post addresses the second of several claims/questions/concerns.
Question: “Do you think your preoccupation about evaluations are one reason why you are willing to give out relatively higher grades?”
You know what people’s greatest fear is? Public speaking. So how would you like to teach a course? That’s what my graduate supervisor asked me one day. Now, it wasn’t a question. No, it was more of a prediction about the future: You’re going to teach a course. Gulp.
Back in my day, there weren’t any how-to seminars for graduate students to learn about teaching like they have now. The preferred method was to throw you in the deep end and walk away, leaving you to thrash around, coughing and sputtering, waving your arms frantically. This is, of course, terrifying. How are you supposed to improve? How do you know what you are doing wrong, or maybe even, doing right? The answer came a month after my first course ended: teaching evaluations.
The students I taught were very understanding, and gave me some really good constructive feedback on improving my teaching. Incredibly, the students who I had been trying to teach had ended up teaching me some valuable lessons of my own. (I know, this sounds like every Hollywood movie set in a classroom that’s ever been filmed. If anyone is interested in buying my script, please get in touch with my agent.)
So, obviously, reading evals is like eating chips--bet you can’t eat just one! Am I addicted? Am I so desperate to hear nice things about myself that I will pander to students by pumping up their grades?
Answers: 1) I dunno. 2) Geez, I sure hope not.
As it turns out, the AASUA’s Teaching and Learning Committee has been looking at the issue of the validity of teaching evaluations for the past couple of years. As it also turns out, I know this because I’m on this committee. So I have some actual (but general) answers--not just facetious ones.
Is there a statistical link between higher expected grades and evaluations of teaching? Yes. “Class-average grades are correlated with class-average student’s evaluations of teaching, but the interpretation depends on whether grades represent grading leniency, superior learning, or pre-existing differences” (Marsh & Roche, 1997, p. 1194). So what is the correlation? Unsurprisingly, there is a range, which usually goes from 0.10 up to 0.30; the “best estimate” is taken to be probably about 0.20 (Marsh & Roche, 1997). That’s a correlation, but it’s a pretty weak one. In terms of the overall variance in evaluation scores, grade expectations account for only about 4% (that’s the square of r = 0.20)--and even at the top of the range, under 10%. If an instructor decides to pump up his or her ratings by inflating grades (and risking his or her career), the payoff isn’t there--there are too many other things that influence the ratings.
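To see why a correlation that size is so weak, square it: r² is the proportion of variance in the ratings that grades can account for. A quick sketch in plain Python, using the correlation range from Marsh and Roche (1997):

```python
# Shared variance between class-average grades and teaching evaluations,
# over the correlation range reported by Marsh & Roche (1997).
for r in (0.10, 0.20, 0.30):
    print(f"r = {r:.2f} -> variance explained = {r * r:.0%}")
```

Even at the top of the range (r = 0.30), grades account for just 9% of the variance in evaluation scores; at the "best estimate" of 0.20, it's only 4%.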
Coming back to me, though (because it is all about me). Am I inflating grades to get good evaluations?
Answers: 1) I dunno. 2) Geez, I sure hope not.
I don’t want to know. I do a lot of statistics on the performance of my classes. (I’ll eventually post about how I use point-biserial correlations to analyze exam performance. Your eyes are guaranteed to glaze over! Woot!) I know, for instance, that student evaluations of me are positively correlated with their evaluations of the textbook (r = 0.498 last time I calculated). This is useful information: it’s really important to find a textbook students like. But, duh, I know that anyway. Why is this correlated with evaluations of me, though? Is it just spurious?
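For the curious, a point-biserial correlation is just a Pearson correlation between a 0/1 variable (did the student get item X right?) and a continuous one (the student's total exam score). A generic sketch with made-up data--not my actual spreadsheet--looks like this:

```python
from statistics import mean, pstdev

def point_biserial(item_correct, total_scores):
    """Correlation between a dichotomous item (1 = correct, 0 = wrong)
    and students' total exam scores. A high value means the item
    discriminates well: stronger students tend to get it right."""
    right = [t for i, t in zip(item_correct, total_scores) if i == 1]
    wrong = [t for i, t in zip(item_correct, total_scores) if i == 0]
    p = len(right) / len(total_scores)   # proportion who got the item right
    s = pstdev(total_scores)             # SD of all total scores
    return (mean(right) - mean(wrong)) / s * (p * (1 - p)) ** 0.5

# Hypothetical exam: five students, one multiple-choice item.
r_pb = point_biserial([1, 1, 1, 0, 0], [90, 85, 80, 60, 55])
```

Items with a point-biserial near zero (or, worse, negative) are the ones worth rewriting: they tell you nothing about who knows the material.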
But I have not, do not, and will not calculate or correlate my evaluations with student performance. I’m not going to take any student rating of my teaching and compare it with the median, mean, or mode of the marks in any of my classes. Why? I don’t want to know. If I’m unconsciously, unknowingly giving out higher marks to get better evals, that’s one thing. But if I’m doing that willfully, consciously, that’s unethical--it’s just wrong.
It may look like I’m preoccupied with evaluations. I do mention them in class, and even put the evaluation date on the syllabus (we are supposed to tell students when evals are going to take place, ya know--I read the fine print). But telling you about how good my evals are, I think, makes my job harder: your expectations increase. In contrast, if I tell you that I really suck and then I kinda don’t, maybe you’ll be happy and give me a good rating.
So maybe it works against me. You think you’ve got this instructor who thinks he’s hot as snot, but turns out to be awful. So you burn me on the evaluations. I welcome that. As long as you tell me what I did wrong, I can still learn and improve. I can try harder next time. Maybe someday I can be as good as the instructors who inspired me to go into this psychology business in the first place, like my graduate supervisor.
I’ll address the remaining concerns expressed by Anonymous in my next post.
Why aren’t you studying?
References:
Marsh, H. W., & Roche, L. A. (1997). Making students’ evaluations of teaching effectiveness effective: The critical issues of validity, bias, and utility. American Psychologist, 52, 1187-1197.
The Question on Higher Grades
In a recent open comment, Anonymous made some pretty serious accusations (sorry, sorry, “questions”!) about grades and evaluations. I’d like to address those questions--not just in another comment, but in full postings. This post addresses the first of several claims/questions/concerns.
Claim: “...you also give relatively higher grades.”
So, relative to other instructors, I take it? The University, specifically GFC, has approved grade distributions for different undergraduate courses. The fine print says this: “These distributions are provided for guidance in your grading. It is not necessary for the grades in a particular class to follow any of the distributions exactly.” (Unless an instructor is grading on the curve with the help of a spreadsheet, it’s impossible to get these exactly anyway. And I don’t grade on a curve.)
Instead, I focus on the expected medians for each course level:
1st year = B-
2nd year = B
3rd year = B
4th year = B+
If my classes don’t match these, well, I don’t know what happens. So far, nothing yet.
Anyway, here are the actual medians for the last 12 courses I’ve taught:
1st year: B, B
2nd year: B+, B+, B, B, B-, B+, B+
3rd year: B
4th year: A-, B
Any patterns? Am I consistently giving higher grades? It looks like the 100-level courses are a bit higher than expected. Why? Major components of that course (20% of the overall mark) consist of easy marks (Information Literacy, Research Participation) that boost students’ grades. These components are out of my hands; I don’t do any marking, I just accept the results as they are. So should I make my exams harder to compensate for these “free” marks? Of course not. Class means on my exams in that course are around 65%, and I don’t want them any lower than that.
Let’s skip to my 400-level course. Yup, I recently had a class earn a median of A-. They all deserved it. It was the best bunch of students I’ve ever had in that course, and I was really happy to give the marks I did. Their term papers were great, and their exams were outstanding. Didn’t even know the median was so high until it popped out of my spreadsheet when I was filling in the final grade forms. (And look, another 400-level class only got a median of B.)
It looks like there’s something funny going on in my 200-level courses. Yup, the grades are a bit high, tending to a median of B+, whereas GFC expects a B. That’s not a huge difference--in terms of the percent cutoffs I use, 72% is right in the middle of my “B”, whereas 76.5% is the middle of my “B+”. That’s a difference of 4.5%. Still, for a class of over 200 to have a grade that’s almost 5% “too high” is significant.
So, why the high marks? In my 200-level perception course, the textbook I used was extensively revised a few years ago, and the testbank of multiple choice questions that comes with it was really--how should I put it?--simplified. Because these questions make up about half of the exam, the marks went up by a few points. In my 200-level cognition course, the textbook I use is written by the same person who wrote my perception textbook. This textbook was also recently revised. Guess what the testbank is like?
It’s tough to rewrite dozens of exam questions, but I’m slowly working on it. Realizing that the marks have been increasing, I’ve also been slowly changing the percentage cutoffs for each letter grade. I don’t want to make huge changes all in one term--that’s not fair to those students. But it’s also not fair to give them inflated grades compared to other terms.
I’d like to think that my teaching improves over time--but is this reflected in students’ grades? If that were universally the case, wouldn’t instructors near retirement have sky-high marks in their classes, and wouldn’t graduate students teaching their first class have rock-bottom marks? Hmm, unless those sneaky novice instructors are inflating their students’ marks.
But that’s the topic of my next post.
Why aren’t you studying?
The Spacing Effect
There is a finding in the scientific literature that's been known for over 100 years. Recent research has continued to support its existence, and it turns out to be one of the most robust effects ever discovered about memory and learning. The problem is, few people know about it. It's called: The Spacing Effect.
This effect was first described by Hermann Ebbinghaus, who discovered it while doing a series of experiments on his own memory. In short, he found that, although repetition helped him remember things, spacing the repetitions over time led to a big improvement in his ability to remember.
When you're studying, the spacing effect says that it's counterproductive to repeat what you've studied immediately after you've studied it once. Instead, you should actually let some time go by before you refresh your memories. How much time? That depends--when do you need to know the information? Here's the deal, taken from a study published in 2008:
"When the gap between initial learning and test date was a week, the optimum review took place a day after initial learning... With a month gap, the ideal review occurred after about a week; with a year, the prime review came three weeks after learning."

So, midterms are roughly a month apart, right? That means you should be reviewing and repeating the material about every week.
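To make that rule of thumb concrete, here's a toy sketch of the schedule. The day counts are my rough reading of the findings quoted above, not exact values from the paper:

```python
# Optimal first-review gap (in days) for a given study-to-test interval,
# roughly following the spacing-effect findings quoted above:
# test in a week -> review after a day; a month -> a week; a year -> 3 weeks.
OPTIMAL_GAP = {7: 1, 30: 7, 365: 21}

def review_gap(days_until_test):
    """Pick the closest studied interval and return its optimal review gap."""
    nearest = min(OPTIMAL_GAP, key=lambda known: abs(known - days_until_test))
    return OPTIMAL_GAP[nearest]
```

So with a midterm a month away, review_gap(30) suggests revisiting the material after about a week--which is exactly the weekly-review advice above.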
The research shows that not only is spaced repetition a benefit to your remembering, but also that cramming is bad. Bad, bad, bad. Really bad. Did I mention it's bad? Optimally spaced repetition beat cramming by 77% to 111%. Note that a 100% improvement would mean a doubling of performance. Yeah, it's that good.
The bottom line: everyone should space out.
(For more, read the article, "Will that be on the test?" from the APS Observer.)
Why aren't you studying?
The High Cost of Textbooks
I've recently been exploring the cost of the textbook for one of my courses. By itself, it goes for $152.35 at the bookstore. However, I require students to do online labs, so they have to buy an access code. Previously, bundling the code with the (new) textbook added exactly $2.95 to the cost. The cost of the textbook + code bundle this term? A whopping $187.40. This means the code adds $35.05 to the price of the book. Yikes!
Just wait--it gets worse. If you buy this code online, it will cost you $33.26. Whuh? The reason why I've gone with the bundle is that it has saved students money--not cost them more. What's the advantage of bundling?
And it's even worse. According to the bookstore, the bundle increased in price since last semester by $17.75. Whuh? Worse, compared to a year ago, it now costs $41.19 more. Double-whuh? This is for the same textbook, mind you; not a new edition or anything.
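For anyone checking my math, here's the arithmetic behind those numbers, working only from the prices reported above:

```python
# Reconstructing the bundle pricing from the figures in this post.
book_alone  = 152.35   # new textbook by itself
bundle_now  = 187.40   # textbook + access code, this term
code_online = 33.26    # access code bought separately online

code_premium = round(bundle_now - book_alone, 2)   # 35.05: what the code adds
assert code_premium > code_online                  # the "bundle" costs MORE

# Working backward from the reported increases:
last_term = round(bundle_now - 17.75, 2)           # 169.65 last semester
year_ago  = round(bundle_now - 41.19, 2)           # 146.21 a year ago
```

Same book, same code--and a $41.19 jump in a year.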
Back in 2005, an error was made (either by the bookstore or by the publisher) and the price of the textbook + code bundle in another course of mine was too high. In that case, students were able to get a refund of the difference, as long as they still had their receipt from the bookstore. That time, the error was caught pretty early on, less than a month after the term ended. Now I don't know if students will be getting a refund this time, but even if that is the case, who hangs on to their receipts for everything for months and months? (And how come no mistake ever happens where the cost of something is accidentally too low?)
It may come as a surprise, but instructors don't know the price of textbooks beforehand. I kind of have the implicit assumption that prices will be stable--at least from one term to another within the same academic year. I see now that's not the case, which is pretty sneaky.
Asking students to spend almost $190 is just too much. I hate to dump a (very good) textbook just because of its price. I would also hate to get rid of a highly regarded and useful online lab because of price. But too much is too much.
What price do you think is reasonable for a textbook? (No, $0 is not considered reasonable...)
Why aren't you studying?
- - - - - - - - - - - -
Update: The publishing company rep got back to me with this on 3/20/2009:

"Last year we had to bring our prices more in line with their prices to prevent cross border sales of lower priced product into their market (it is illegal but happens nonetheless). That is largely why the prices went up last year. We are under an agreement to maintain a certain level of compliance with this. We realize the price for the 7th edition is high but it isn’t a mistake."
In a wacky twist, the new edition of the textbook (the 8th edition) is going to be cheaper than the 7th edition.
- - - - - - - - - - - -
Update on 9/3/2009: The new edition of the textbook is in--but the news is not good. The publisher told me the net price of the package is $121; the list price was supposed to be $151.95. The list price is the publisher's suggested retail price. So why is the package selling for $166.95 in the bookstore? According to the bookstore manager, that price reflects the bookstore's actual markup, which is 5% above list price. Argh!
About Me
- Karsten A. Loepelmann
- Edmonton, Alberta, Canada
- Faculty Lecturer in Psychology at the University of Alberta