The Gay Bisanz Memorial Turkey Drive

Dr Gay Bisanz taught me developmental psychology as an undergraduate. It wasn't a required course, but I took it anyway--I wanted to take as many psych courses as I could. It was a good choice. Sure, she taught me specific things about how people grow and develop, and general things about science, psychology, critical thinking--but she also showed me the importance of giving back to your community. Sadly, Gay Bisanz died of cancer on June 1, 2005.

Gay started the Department of Psychology's now-famous Turkey Drive. People in the Department--academic staff, support staff, post-docs, students, and more--give money that is directed to CBC Edmonton's Turkey Drive for the Food Bank. But beyond that, some people bake cookies for sale, make jewelry, and donate items for raffle--there are a lot of very creative ways to part people from their money. Some instructors have volunteered to catch a pie with their face if their class contributes more money than any other class.

Last year, a total of $6,531.91 was collected. This year, the Turkey Drive goes from November 24 to December 8. Stop by the Psychology General Office (BS P-217) and buy a cookie, or a raffle ticket for one of the really nice gift baskets up for grabs. (The 2-for-$1 white chocolate and macadamia nut cookies are my personal weakness.)

Why aren't you studying?

The Exam Statistics: The Q Score

This is my final post on the topic of exam statistics. Previously, I described my use of the mean, difficulty scores, and point-biserial correlation. This time: the dreaded Q score. (Just to clarify: I'm not describing the other "Q Score", which represents the public's familiarity with--and appeal of--a person, product, company, or television show. That's not dreaded at all.)

The dreaded Q score is not a statistic that I regularly receive with all my other exam stats. I have to put in a special request. It's extra work for the people over at TS&QS, which means there's an additional cost that must be paid by the Department of Psychology. TS&QS has to go into their database of exam results for my class and perform a statistical comparison between two (or more) given exams.

Here's where the dread comes in: Why would I want to statistically compare two (or more) students' exams? If I suspect them of cheating, that's why. Sometimes, the cheating is blatantly obvious. The cameras in the classrooms (you know about those, right?) may clearly show one person peering over at the exam of another. Other times, it's not so obvious. Why is that guy jittering in his seat, looking everywhere except at his exam? Maybe he's nervous, or has exam anxiety. Why is that girl acting squirrelly, shifting her eyes back and forth? Maybe she drank too much coffee, has caffeine overload, and now really, really has to pee. Whatever the case, the exam proctors will not interrupt any student taking the exam. Nope. We'll just let you do what you do. If that's cheating, so be it.

However, at the end of the exam, the answer sheets from any suspicious students are set aside. (Think you can fool us by not leaving at the same time, or handing in your exams to different proctors? Tsk. You don't know how many eyes are watching, do you?) Those answer sheets will be analyzed, and I will get the dreaded Q score. I don't want to say too much about how it works, so suffice it to say that it gives a probability that cheating has occurred, compared to chance. Maybe I'll write more about how I have to deal with cheating in another post, but for now, let's just say it involves a lot of dread.

Although I have caught several cheaters over the years, I'm glad I've never had to deal with anything like what happened in Professor Richard Quinn's class recently. Yeesh.

(Cartoon by Frank Cammuso. It's important to give credit where credit is due. Otherwise, it's like, um...cheating.)

Why aren't you studying?

The Mouse

That's right, a mouse. I've got a mouse in my office. Well, it's probably not here full-time, but it does drop in and visit. Speaking of dropping, that's what's in the photo: droppings. See the two little black sesame-seed looking things? Mouse poop. On my desk. (Describing it with a food metaphor is kinda making me queasy. Bleah.)

A couple of weeks ago, there was a knock at my door. A couple of jolly fellows were putting mouse traps in everyone's offices in the Psychology wing. I told them I hadn't seen any mice, but they were quick to point out two little black grains of rice (bleah) on the floor. They didn't clean up the poop.

At this point, the lightbulb went off in my head. Oh, yeah. The chocolate bar that I left on my side table the other day. I came in to find it half eaten. I wasn't pleased as I threw the remainder away (Swiss dark chocolate!)--I figured the cleaning staff had seen it and gotten a bit hungry. Nope. Those must've been mouse teeth marks.

OK, so now: mouse trap. The problem is that it hasn't been working. I come into my office in the morning and regularly find more poops. On my desk. Of course you know, this means war! I don't want to get hantavirus. So yesterday I went out and got a couple of better mousetraps, put some cheese in them (this is what cartoons have taught me: mice love cheese), turned out the lights and left for the day. Heh-heh-heh, I laughed menacingly.

Today I opened the door to my office hesitantly. What would I find? Answer: nothing. OK, not exactly nothing. No mouse. No cheese. Yup, the l'il sucker ripped off my cheese. But at least the mouse traps were still there. This means I'm now helpfully feeding the mouse that's running around on the second floor. Dr Snyder, whose office is just down the hall, recently saw it looking at him from his bookshelf, but he wasn't able to catch it. In my office, however, the mouse prefers my desk. Evidence? Another poop. Probably left right after polishing off those two bits of cheese. Can a mouse be impertinent?

I suppose there's some joke in here somewhere about a psychologist and a mouse, but I'm drawing a blank. Do you know any good ones?

Why aren't you studying?

The Exam Statistics: The Point Biserial Correlation

I'm continuing my explanation of the reams of statistics I get about multiple choice exams. Last time, I explained exam item difficulty scores. (Fascinating, no?) This time: the point-biserial correlation coefficient, or "rpb". That is, "r" for the correlation coefficient (why, oh why is it the letter r?) and "pb" to specify that it's the point-biserial and not some other kind of correlation. Like, um, some other kind.

If I've constructed a good exam item, it should be neither too hard nor too easy. It should also differentiate among students. But I can't tell how well it does that just by looking at the difficulty score. Instead, there's a more complex measure, the rpb. In general, I need an index of the correlation between a categorical variable and a continuous variable. More specifically, I want to correlate the categorical variable of a test item (i.e., whether a student answered the test item correctly or incorrectly) with the continuous variable of the student's percent score on the examination. Got that? I didn't think so.

Let me try again. Student A did well on the exam, getting 90% correct. Student B did not do so well, getting only 50%. If I look at any given exam question, in general, student A should be more likely to answer it correctly than student B. This is not the same as difficulty, because I'm not simply looking at what proportion of the class answered the question correctly. I'm correlating each student's score with their performance on each question. The key to all this is the word "should" in the sentence above.

If an exam item is poorly constructed for whatever reason, students who did well on the exam overall may do worse on that item than students who did poorly overall. That is, the better you are overall, the less likely you are to answer it correctly. That is not supposed to happen. The rpb gives me this information for each question on the exam. Experts in exam construction recommend that the rpb should range from 0.30 to 1.00. Any question getting an rpb lower than 0.30 means that I will take a look at it and try to figure out why that's happening.
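For the curious, here's a sketch of the arithmetic using the standard textbook formula. The function name and the toy guard for degenerate items are mine, not anything TS&QS actually runs:

```python
import math
from statistics import mean, pstdev

def point_biserial(item_correct, exam_scores):
    """rpb for one exam item.

    item_correct: 1/0 per student (right/wrong on this item)
    exam_scores:  each student's overall percent score

    Standard formula: rpb = (M1 - M0) / s * sqrt(p * q),
    where M1 and M0 are the mean exam scores of students who got
    the item right and wrong, s is the (population) standard
    deviation of all exam scores, p is the proportion who got the
    item right, and q = 1 - p.
    """
    scores_right = [s for c, s in zip(item_correct, exam_scores) if c]
    scores_wrong = [s for c, s in zip(item_correct, exam_scores) if not c]
    if not scores_right or not scores_wrong:
        raise ValueError("rpb is undefined when everyone got the "
                         "item right or everyone got it wrong")
    p = len(scores_right) / len(item_correct)
    return ((mean(scores_right) - mean(scores_wrong))
            / pstdev(exam_scores) * math.sqrt(p * (1 - p)))
```

An item that good students tend to get right pulls M1 above M0 and drives the rpb up toward 1.00; an item where the pattern reverses drives it negative.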

And if the rpb is negative, well...it's a negative correlation. That's the worst case I described: better students are doing worse on this question, and poorer students are doing better. I won't use any question getting a negative rpb again unless I can figure out why it's happening. Maybe I can tweak the question, maybe I have to rewrite it to ask about the same knowledge in a different way. Or maybe I'll just give up entirely, go and get a coffee, and check out some LOLcats.

Why aren't you studying?
