The Research: The Results

In this series, I've been describing my latest research, looking at how ebook use affects academic outcomes. In my previous post, I described the process of data collection. The next step: analyzing the data and seeing the results for the first time.

As a graduate student, I took a lot of advanced statistics courses (in a couple of educational psychology stats courses, I even had to learn APL. Eep!). That doesn't mean I'm in love with the field of statistics, or with the mathematical process of analyzing data. It's just a means to an end. Still, it's an important means to an end. Not all science relies on quantitative research, but a lot of it does. If you don't know how to analyze your data, er...then what? All you've got is a big file full of (meaningless) numbers. Bottom line: it's important to know how to analyze your data properly.

Here's a story. From 2001 to 2003, I was the statistics advisor for students in the Department of Psychology's Internship Program. After working out in the Real World collecting data, students would come to me with a file full of (meaningless) numbers. Many students were up to speed on their stats, had planned their data collection and analysis in advance, and just wanted to run their stats by me to make sure they were on the right track. Others, however...oh dear.

Others had not planned their data analysis in advance. They just went about their jobs, collecting data here and there--like they were meandering through a field, picking daisies whenever and wherever they wanted. They'd come to me, hand over their data, and expect me to work a miracle. This, not surprisingly, did not go well. You can't perform any kind of meaningful, valid analysis on 15 data points collected from one participant over various time periods, with no independent variable (other than time, sort of). Yes, you've got a file with numbers. That's...great. But numbers do not statistics make. The moral of the story: take a statistics course. Then take another one. Then take one more.

Results
I'm happy to say that a large majority of students in my class opted to allow their data to be included in my analysis--almost 200 people. Unfortunately, I had to exclude data from a small number of students (they were randomly assigned to receive an ebook, but chose not to use it; that kind of self-selection may throw off the results). I collected a lot of different kinds of data, which will require a more sophisticated analysis, so what I'm going to present is a bit of a "cheat", but I couldn't help myself--I really wanted to see the bottom line right away. So here it is: r = -.035. Neat, eh?

Discussion
Er, OK, so here's an interpretation of the results: students who used the ebook got lower grades than students who used the printed textbook (negative correlation). But look at the size of the correlation--it's basically zero. The analysis also shows no statistically significant difference between ebook and printed-textbook users (p = .620). This means that, all other things being (hopefully) equal, using an ebook should not cost you any marks; in other words, reading material on a screen did not impair outcomes, at least in this perception course.
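
If you're curious what that bottom-line calculation looks like, here's a rough sketch in Python. The file, column names, and data are hypothetical--this is not my actual analysis script--but the idea is the same: correlating a 0/1 group indicator with final grades is a point-biserial correlation, and it lines up with an independent-samples t-test.

```
# Rough sketch of the "bottom line" check. Hypothetical file and column
# names: one row per student, with their group and final grade.
import pandas as pd
from scipy import stats

df = pd.read_csv("ebook_study.csv")

# Exclude students who were assigned an ebook but chose not to use it
# (assumes a boolean column flagging compliance).
df = df[df["used_assigned_format"]]

ebook = (df["group"] == "ebook").astype(int)   # 1 = ebook, 0 = printed textbook
grades = df["final_grade"]

# Pearson r between a 0/1 indicator and grades is a point-biserial correlation
r, p = stats.pearsonr(ebook, grades)
print(f"r = {r:.3f}, p = {p:.3f}")

# The same comparison, as an independent-samples t-test
t, p_t = stats.ttest_ind(grades[ebook == 1], grades[ebook == 0])
print(f"t = {t:.2f}, p = {p_t:.3f}")
```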

I'll need to sift through the data some more, to see if that's because ebook users spent more time studying than textbook users, or if there are other variables that also account for the results. In the meantime, I won't have any trouble recommending that students use an ebook instead of a textbook--and hey, seeing as ebooks are typically cheaper than paper textbooks, you'll even save some money. You're welcome.

Why aren't you studying?

The Research: The Data Collection

After my project passed ethics, it was time to design the data collection. While waiting (and waiting...and waiting...and waiting...) for ethical approval, I was able to fine-tune the survey questionnaire that I would have students fill out. I got some great advice from people who know way more than I do about this kind of research; it helped tremendously. (One of the best things about the UofA is the amount of specialized knowledge on campus. It's truly staggering how many top-notch academics there are here. It's easy to take it all for granted.)

You don't always know what to ask on a questionnaire. What factors are relevant? (Did you use the online etextbook or the printed textbook?) Which ones might matter? (Have you used etextbooks before?) What kinds of things are probably irrelevant? (Are you male or female? Better ask that one anyway.) There has to be a balance between asking for enough information and keeping the survey as short as possible. Ever done an online survey that just seems to go on, page after page? Fill out a big long page, click "next," and the percent-completed bar ticks up by only 1%? To get as many participants responding as possible, you've got to keep it short.

To keep everything in line with ethics guidelines, I didn't work on the online form until everything was okayed. I could have coded the forms myself (my websites are all hand-coded, thank you very much), but I didn't have a lot of spare time. Fortunately, the Department of Psychology has a great resource available: the Instructional Technology and Resources Lab. This lab is staffed by an undergraduate student who is enrolled in our internship program. (Plug: if you want to get hands-on experience doing a real psychology job before you graduate, look into it. You actually get paid for it, too.) Lauren McCoy coded the entire questionnaire for me. (Thanks, Lauren!)

Next, via Bear Tracks, I sent out a mass email to all the students in my class asking them to participate. Nothing to do after that but wait. It was hard to be patient, waiting for the data to roll in. And, according to ethics, I couldn't even look at it until the course was over. Argh!

Why aren't you studying?

The Research: The Ethics

In previous posts, I described the beginnings of the current research project. But before any research can be conducted, it has to be vetted through the research ethics approval process.

The major research granting agencies in Canada (CIHR, NSERC, and SSHRC) have come up with a (recently updated) policy document outlining the ethical treatment of human research participants: the Tri-Council Policy Statement, or TCPS 2. If you want to do any research funded by one of the "tri-council" agencies, you must follow this policy. TCPS 2 has also trickled down to the university in general; research on campus is overseen by the Research Ethics Office. The REO has established a number of Research Ethics Boards or Panels that review all research applications (whether tri-council funded or not) and give their approval. Different boards oversee different kinds of research--a typical psychology experiment, say, versus medical or clinical research.

It's important to me to make sure the willing participants in my research are (at the very least) not harmed, are treated properly, have their rights and human dignity respected, and (where appropriate) have their individual research results remain private and confidential. The process of obtaining ethical approval, though, is not trivial.

It used to be pretty easy to get ethics approval for research. Five years ago, I'd have to fill out a form indicating what I'd be doing (having students fill out a survey), whether there were any known risks to participants (um, maybe getting a paper cut?), and what I'd do if there were (rush them to the hospital). I'd talk to my colleague down the hall who would look over the application, make suggestions, and give his verbal okay. Now, it's a different story.

My Department requires that any Contract Academic Staff have their research sponsored by a professor (tenured or tenure-track staff). Luckily, a colleague of mine was able and willing to sign off on my project. It's really just a formality, which makes me question why it's necessary. Don't they trust me? And, isn't my research going to be overseen by the University?

This brings me back to the REO, which has switched to an online application process, using a system called HERO (Human Ethics Research Online--cute, eh?). Although it's now online, the process is very involved (sorry, I mean thorough), with many, many pages of questions to fill out. Things like: how am I going to maintain security over the data to ensure privacy and confidentiality? (256-bit triple DES.) Will I be retaining any sensitive information, like student ID numbers? (Temporarily, yes.) Do I expect participants to come to any harm? (Er, no. Unless someone drops their computer on their foot.) The good thing was that all these questions forced me to think about ethical issues I hadn't considered. Like, what if someone withdraws their consent--even after completing my online form? All of this really helped in designing the study itself.
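
(For illustration only: here's roughly what encrypting a data file at rest looks like. This sketch uses Python's cryptography package and its AES-based Fernet recipe as a stand-in--not the actual scheme or code from my project--and the file names are made up.)

```
# Toy example: encrypt the survey data file so responses stay confidential.
# The Fernet recipe (AES-based) stands in for whatever scheme is actually used;
# file names are hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this key somewhere safe, separate from the data
fernet = Fernet(key)

with open("survey_responses.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("survey_responses.enc", "wb") as f:
    f.write(ciphertext)
```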

Unfortunately, my ethics application was...misplaced (lost? forgotten? ignored?) for six weeks. Because this was my first experience with HERO, I didn't know how long the process would take. But after a month and a half of waiting, I asked my colleague, who told me that approval should come after six days, not six weeks, and that I should "scream" about it. I didn't scream, but I was firm and persistent until my application was found, reviewed, and approved. Altogether, applying for and getting ethical approval for my project took two months. Piece of cake.

Next time: Data collection!

Why aren't you studying?

The Research: The Opportunity

As I wrote in my last post, I do research. But doing research is not easy if you aren't allowed to apply for major research grants. Sometimes, though, you get lucky.

I've got a good relationship with Nelson Education--the Canadian imprint for Cengage Learning, publisher of a number of textbooks I use in my courses, and also my employer (I do consulting for them on their Canadian psychology websites). So late last year, the local rep asked if I'd be interested in helping them evaluate the CengageNOW platform (which includes online etextbooks and interactive study guides). Oh, and they'd provide free access codes for students in my Perception class--but unfortunately, only for about half the class.

OK--I'd been designing exactly this kind of experiment for a couple of years: does using an etextbook cause students to do better, worse, or exactly the same in a course? I jumped at the chance. Half the class would get a free access code to the etextbook; the other half would use a regular printed textbook. At the end of the course, I could compare the two groups on the dependent variable: final grade. Perfect! But who would get the free etextbook and who would have to pay for a textbook? How would that be decided? And is it fair that some students get something for free, and others don't?

These are important questions to consider. Obviously, the fair thing to do (and also the most sensible, from a statistical point of view) would be to randomly assign students to the etextbook and printed-textbook groups. However, some students may not want to use an etextbook--even if it's free. In that case, I would have to exclude them from the study data, but I could then give their access codes to students who registered late. The issue of "free," though, I couldn't overcome: Nelson was not willing to provide free printed textbooks for the other half of the class (about 107 students). Rats! That left me with a confound: students who got the etextbook would also be getting it for free, whereas students who bought and used the printed textbook would be paying for it.

If there are any differences in grades between these two groups, they could be because of the resource used (maybe reading an etextbook is more fatiguing, so students spend less time reading than they would with a printed textbook--or maybe it's easier to read). Or they could be because of the "free" aspect (students feel less "invested" in a free etextbook, so they don't read it as much as they would a printed textbook they had to pay money for). Argh! Not so perfect. But it was the best I could do under the circumstances; I'd need almost $20,000 to buy printed textbooks for half the class!
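
Incidentally, the random assignment itself is the easy part. Here's roughly what it looks like in code--a sketch with made-up names and numbers, not my actual procedure:

```
# Sketch of random assignment: shuffle the class list, give the first half
# free etextbook access codes, and have the rest use the printed textbook.
# The student list and seed are hypothetical.
import random

students = [f"student_{i:03d}" for i in range(1, 215)]  # ~214 students in the class

random.seed(2011)         # fixed seed so the assignment is reproducible
random.shuffle(students)

half = len(students) // 2
ebook_group = students[:half]   # receive a free etextbook access code
print_group = students[half:]   # buy/use the printed textbook

print(len(ebook_group), len(print_group))  # 107 107
```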

There still were many more hurdles to overcome. Next: research ethics and the maze that is HERO.

Why aren't you studying?

The Research: Primary vs. Secondary

As a scientist, I do research. The first thing the word “research” brings to mind is probably experimental research. But that’s only one method under the broader umbrella of empirical research, which also includes methods like surveys.

Another way of dividing up research into different kinds is into primary and secondary. In primary research, you collect original data; you’re discovering something no one else has ever known (you hope!). In secondary research, you are going through data that has already been collected. Maybe you are looking for something specific, or maybe you want to do a (formal, statistical) meta-analysis. (This doesn’t mean that every time you do a Google search, you’re doing secondary research--but secondary research might employ an Internet search now and then. More likely, I’ll use PsycINFO or MEDLINE.)

I do a lot of secondary research in prepping my courses. For example, when I created my lecture on synesthesia, I did a lot of secondary research--searching for studies, reading and analyzing them, and synthesizing the information in a systematic, coherent way. (At least, I hope it’s coherent! ;-)

I also do some primary research. It’s not something that I’m required to do in my role as Faculty Lecturer (but it can be a lot of fun to do). In fact, the University makes it hard for contract academic staff to do primary research: we are not allowed to apply for research grants. As you can imagine, having no money makes it kinda hard to do research. Unless: a) you’re rich, b) you have a sugar daddy, or c) a publishing company comes to you with a bunch of free stuff and asks if you’re interested in using it to do a study.

Late last year, I had the opportunity for option c). In the next few posts, I’ll describe the steps in the research process, ending up with a summary of my results.

Why aren’t you studying?

The Awards: 5

I've been named to the Department of Psychology's Honour Roll with Distinction for all three courses I taught last term. Thank you to everyone, and special thanks to those who went to the trouble of giving written comments. I'm not going to post "best-of" student comments this time because (a) I've done that before, (b) there weren't many comments that, er...cry out for a response (most were constructive and helpful, which is great!), and (c) I don't want to reinforce anyone trolling for their comments to be posted in this blog (getcher own blog, eh?).

This time, I want to congratulate my colleagues who were named to the Honour Roll:

  • Brown, N. (PSYCO 405 X5)
  • Dixon, P. (PSYCO 258)
  • Friedman, A. (PSYCO 212)
  • Hurd, P. (PSYCO 400/409)
  • Masuda, T. (PSYCO 241 B1, PSYCO 305)
  • Passey, J. (PSYCO 105 B1, PSYCO 233)
  • Schimel, J. (PSYCO 105 B4)
  • Spalding, T. (PSYCO 105 B3, PSYCO 405 B2)
  • Snyder, M. (PSYCO 403 B2)
  • Westbury, C. (PSYCO 339)
  • Wylie, D. (PSYCO 267 B2)
And those who were named to the Honour Roll with Distinction:
  • Busink, R. (PSYCO 436)
  • Caplan, J. (PSYCO 403 B4)
  • Colbourne, F. (PSYCO 403 B1)
  • Gagne, C. (PSYCO 532)
  • Hurd, P. (PSYCO 414/505)
  • Kuiken, D. (PSYCO 415)
  • Lee, P. (PSYCO 105 S1)
  • Mou, W. (PSYCO 403 B3)
  • Mullins, B. (PSYCO 104 B2)
  • Nicoladis, E. (PSYCO 323)
  • Noels, K. (PSYCO 300)
  • Passey, J. (PSYCO 241 S1, PSYCO 405 B1)
  • Singhal, A. (PSYCO 377)
  • Spetch, M. (PSYCO 485)
  • Todd, K. (PSYCO 475)
  • Varnhagen, C. (SCI 100)
  • Watchorn, R. (PSYCO 323)
  • Wylie, D. (PSYCO 405 B3)
Quite a list, isn't it? I think the criteria are pretty stringent (see below for details); that means the Department has a lot of great teachers. I am humbled to be included among them.

Here are the criteria for the awards:
1. The course section median response was equal to or greater than 4.0; for Honour Roll with Distinction, the median was greater than 4.0 and at least 45% of students strongly agreed that the instructor was "Excellent." For classes with fewer than 10 students enrolled, the majority of students responded "Agree" or "Strongly Agree"; for Honour Roll with Distinction, the majority of that majority responded "Strongly Agree";
2. At least 60% of the class responded to the questionnaire;
3. There were no abnormalities in the grade distributions (e.g., distributions skewed too high or too low);
4. Instruction was conducted in accord with the ethical standards of teaching as outlined by the APA and CPA.
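
If you like seeing rules as code, criteria 1 and 2 boil down to a couple of thresholds. Here's a toy sketch (made-up ratings, data, and function name; it ignores the small-class rule and criteria 3 and 4):

```
# Toy sketch of criteria 1 and 2. Ratings are on a 5-point scale,
# where 5 = "Strongly Agree" that the instructor was excellent.
from statistics import median

def honour_roll_status(ratings, enrolled):
    """Return None, 'Honour Roll', or 'Honour Roll with Distinction'."""
    if len(ratings) / enrolled < 0.60:              # criterion 2: at least 60% responded
        return None
    med = median(ratings)
    strongly_agree = sum(1 for r in ratings if r == 5) / len(ratings)
    if med > 4.0 and strongly_agree >= 0.45:        # criterion 1, Distinction threshold
        return "Honour Roll with Distinction"
    if med >= 4.0:                                  # criterion 1, basic threshold
        return "Honour Roll"
    return None

# e.g., 30 of 45 enrolled students responded, mostly with 4s and 5s:
print(honour_roll_status([5]*16 + [4]*10 + [3]*4, enrolled=45))
```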

Why aren't you studying?

The Value of Your Degree

I’ve posted about The Best Job in the World, according to a number of metrics. Now, a new study has been released, looking at the earnings of people who hold a bachelor’s degree.

Researchers at the Georgetown University Center on Education and the Workforce analyzed US Census Bureau data on over 3 million bachelor’s degree-holders who graduated over the last 40 years. Specifically, they recorded the median salaries those degree-holders earned in 2009; in other words, the study took a cross-section of people currently in the workforce (it didn’t look only at people who graduated in 2009).

I’m sad to report that the field with the lowest median salary is Psychology and Social Work (P&SW) at $42K (range: $29K to $53K). Sob. Below is the breakdown within P&SW (from The Chronicle of Higher Education):

[Chart from The Chronicle of Higher Education: median 2009 earnings for each P&SW subfield]

The lowest is counseling psychology, ringing in at a paltry $29,000 per year. At the top of P&SW is industrial/organizational (I/O) psychology (those who work in large companies, measuring and improving performance and/or wellness). I guess those companies pay pretty well; the pay is almost double that of the poor counseling psychology graduates.

But wait--what are those lowly counseling-psychology degree holders actually doing? They’re not working as counseling psychologists. Why not? Generally, you can’t, not with just a bachelor’s degree. Maybe they just got their BA and are now working in retail. On the other hand, you can work in I/O psychology with “just” a bachelor’s degree.

Also, the US economy isn’t in great shape. The data came from 2009, when the job situation was pretty grim--not that it’s great today. It’s possible that some people have had their salaries cut, or at least haven’t seen a raise recently. Still, the numbers above are based on full-time, full-year workers with a bachelor’s degree, not part-time workers.

Finally, those who hold higher-level degrees like a Master’s or Ph.D. earn more. According to the report, median earnings of those with a graduate degree in P&SW were $60K, moving P&SW up to the third-lowest field.

So, did you make a bad choice of major? Should you have taken engineering (overall median: $75,000)? Or computing science (overall median: $71,000)? Maybe we’re all just in this for the love of it.

Why aren’t you studying?
