The Research: The Results

In this series, I've been describing my latest research, looking at how ebook use affects academic outcomes. In my previous post, I described the process of data collection. The next step: analyzing the data and seeing the results for the first time.

As a graduate student, I took a lot of advanced statistics courses (in a couple of educational psychology stats courses, I even had to learn APL. Eep!). That doesn't mean I'm in love with the field of statistics or the mathematical process of analyzing data; it's just a means to an end. Still, it's an important means to an end. Not all science relies on quantitative research, but a lot of it does. If you don't know how to analyze your data, er...then what? All you've got is a big file full of (meaningless) numbers. Bottom line: it's important to know how to analyze your data properly.

Here's a story. From 2001 to 2003, I was the statistics advisor for students in the Department of Psychology's Internship Program. After working out in the Real World collecting data, students would come to me with a file full of (meaningless) numbers. Many students were up to speed on their stats, had planned their data collection and analysis in advance, and just wanted to run their stats by me to make sure they were on the right track. Some others, however... Others...oh dear.

Others had not planned their data analysis in advance. They just went about their jobs, collecting data here and there--like they were meandering through a field, picking daisies whenever and wherever they wanted. They'd come to me, hand me their data, and expect me to work a miracle. This, not surprisingly, did not go well. You can't perform any kind of meaningful, valid analysis on 15 data points collected from a single participant at assorted time points, with no independent variable (other than time, sort of). Yes, you've got a file with numbers in it. That's...great. But numbers alone do not statistics make. The moral of the story: take a statistics course. Then take another one. Then take one more.

I'm happy to say that a large majority of students in my class opted to allow their data to be included in my analysis--almost 200 people. Unfortunately, I had to exclude data from a small number of students (they were randomly assigned to receive an ebook, but chose not to use it; that kind of self-selection may throw off the results). I collected a lot of different kinds of data, which will require a more sophisticated analysis, so what I'm going to present is a bit of a "cheat", but I couldn't help myself--I really wanted to see the bottom line right away. So here it is: r = -.035. Neat, eh?

Er, OK, so here's an interpretation of the result: students who used the ebook got lower grades than students who used the printed textbook (a negative correlation). But look at the size of the correlation--it's essentially zero. The analysis also shows no statistically significant difference between ebook and printed-textbook users (p = .620). In other words, all other things being (hopefully) equal, using an ebook should not cost you any marks; reading material on a screen does not impair outcomes, at least in this perception course.
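For the curious, here's what's behind a number like r = -.035. A minimal pure-Python sketch of the Pearson correlation coefficient (the lists below are made-up placeholders, not my actual data; a real analysis would use a stats package, which would also give you the p-value):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Covariance term (numerator) and the two standard-deviation terms.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: 0 = printed textbook, 1 = ebook, paired with grades.
book_format = [0, 0, 0, 0, 1, 1, 1, 1]
final_grade = [78, 85, 62, 90, 80, 83, 61, 88]
print(round(pearson_r(book_format, final_grade), 3))
```

An r near zero, as here, means the format a student used tells you essentially nothing about the grade they earned.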

I'll need to sift through the data some more to see whether that's because ebook users spent more time studying than textbook users, or whether other variables account for the results. In the meantime, I won't have any trouble recommending that students use an ebook instead of a printed textbook--and hey, seeing as ebooks are typically cheaper than paper textbooks, you'll even save some money. You're welcome.

Why aren't you studying?
