
The Research Project: Results

Last year, I worked on a research project in my PSYCO 104 classes. (I've referred to it as the "Secret Project", only because I didn't want to influence students too much.) One class was the control group; the other was the experimental group. The latter group had to do a lot of extra work.

I made students go to a website (or two). Some websites had students do experiments online, like taking a left-brain/right-brain "test," or making judgments of stimuli that made up a visual illusion. Next, students had to go online and discuss their findings with the other members of their 5-person group. Finally, one person was chosen by the group to submit a summary of the discussion, which was marked. There were 10 of these assignments. These assignments were intended to foster greater engagement with the material: students didn't just go to class and read the textbook. Rather, they had to try to apply what they knew to these online examples, and compare and contrast their findings with those of other students.

The experimental class also used a different textbook that came with a rich set of online tools. The platform, from publisher McGraw-Hill, is called Connect. It included an adaptive testing tool called LearnSmart (which is also available as an app for iOS and Android). LearnSmart asks you questions about things you've read in the textbook, but it also asks you how confident you are before you answer. It's assessing your metacognition: your knowledge of how much you know. One of the things new learners have difficulty with is knowing that they don't know everything; that is, they tend to be overconfident. LearnSmart was designed to give feedback on your actual learning--not just your perception of it. I chose these resources to make mobile learning easier. That is, you can pull out your phone and do a bunch of LearnSmart questions, which can help you identify the things you need to work on understanding better.
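To make the metacognition idea concrete, here's a toy sketch of the kind of confidence-versus-accuracy comparison such a tool could make. This is not LearnSmart's actual algorithm--the data and the calculation are hypothetical, just to illustrate the overconfidence gap.

```python
# Hypothetical sketch of a confidence-calibration check; this is NOT
# LearnSmart's actual algorithm. Each tuple is (felt confident?, answered correctly?).
responses = [
    (True, True), (True, False), (True, True), (True, False), (False, False),
]

confidence = sum(conf for conf, _ in responses) / len(responses)  # how sure you felt
accuracy = sum(corr for _, corr in responses) / len(responses)    # how you actually did

# A positive gap means you think you know more than you actually do.
gap = confidence - accuracy
print(f"confidence {confidence:.0%}, accuracy {accuracy:.0%}, gap {gap:+.0%}")
# confidence 80%, accuracy 40%, gap +40% -- classic overconfidence
```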

At the end of the course, both the control and experimental classes were given questions about their experiences. The results are in--and they're posted on the APRIL website. You'll see that, on some questions, there were no differences between the classes. (For example, "Reviewed your notes prior to class" showed no difference--no surprise.) However, other questions related to engagement showed a statistically significant difference (e.g., "Discussed ideas based on your readings or classes with others outside of class (students, family members, co-workers, etc.)" increased in the experimental class).

I also looked to see if students in the experimental class fared better on exam questions based on my lecture notes. Nope, no difference. (There was a difference in overall exam means, but there was a confound: the control class used a different textbook than the experimental class. The experimental class's averages were higher, but many exam questions were drawn from its textbook, which was not as "high-level" as the book I used in the control class.)

It's important for me to send out a thank-you to all of the students in my classes who were involved in this project. It wouldn't have been possible without you! I'm still pondering the implications of the results. I think they may have led to one change already: the new textbook adopted by the Department of Psychology this year is published by McGraw-Hill, and includes Connect and LearnSmart. I found it to be very useful (and students have informally told me that they liked it, too).

Why aren't you studying?

The Updates

Occasionally, I get new information about something I've posted about before, and I usually update the original post. Unless you go back and reread those posts, however, you may miss that info. And sometimes, the new information is hardly enough to merit a full post of its own. So here are some updates to things I've previously written about:

  • The New Colleague: After the brutal Alberta budget was released, our potentially new Faculty Lecturer decided not to come to the UofA. This is probably not a coincidence.
  • The Budget and the Clocks: I wish the clocks were the only thing affected by the devastating provincial budget. There's talk that the University may invoke "Article 32," a clause in the Faculty Agreement that allows the University to eliminate programs--in effect, cutting tenured positions. This would be a very bad precedent, and would substantially affect morale. (Oh, also? The brackets on the wall where the clocks used to be have been removed. And I may lose the phone in my office. Do you think the Premier has a clock and a phone in her office?)
  • The End of Perception: The end is near. I'm teaching PSYCO 267: Perception for the last time this term, and taught PSYCO 365: Advanced Perception for the last time last winter term. On the other hand, I will teach PSYCO 403 (LEC B2): Advanced Perception in Winter, 2014. And I will try to teach PSYCO 367: Perception in Spring, 2014.
  • The Udacity Partnership: Massive Open Online Course provider Udacity, with whom the UofA signed an agreement last year, has decided to concentrate on one discipline (Computer Science). That means all of our MOOCs (including the psychology one I'm working on, as well as DINO 101) will have to find a new home. Stay tuned.
  • The Secret Project: ...is still moving ahead. I've been consulting with some people who have given me some great ideas about how to improve student engagement. Plus, there's a cool new learning technology that I'm going to be using. All of this will be tested in my Fall, 2013 PSYCO 104 LEC A3 class.
Why aren't you studying?

The Udacity Partnership

Earlier today, the UofA signed an MOU (memorandum of understanding) with Udacity to develop a research partnership around MOOCs (massive open online courses). In a MOOC, the entire course is done online, for free. You may or may not get some kind of credit for participating and completing it; you may have to pay for a certificate. So far, you can't use these MOOCs for credit towards an actual degree.

This morning at 10:00, a group of instructors, researchers, and administrators met with Sebastian Thrun, who cofounded Udacity. (Yes, this is one of the "secret projects" I'm currently involved in. Now it's not a secret anymore.) Thrun, who gave a talk about MOOCs on campus last month, showed us his content creation system, which runs as an iPad app. Even in pre-alpha, it was pretty slick, allowing videos, sketches, and interactive quizzes to be put together to create a course, which can also be "consumed" via an i-device.

MOOCs raise many important questions about pedagogy (the "science of education"), instruction, interactivity, and the role of universities. We're thinking about those. But the reason I'm writing this post is to get students' views on MOOCs.

What do you think about free, online courses? Would you take one? Why? What would you want to get from it? Would it help your mom learn about psychology (or whatever your major is)? Or for your younger sister in high school, who hasn't decided what topic to study in post-secondary education (much less her future career)? Would you take it to supplement what you're learning in your in-person, for-credit class? Or would you want to get your whole degree online, instead of going to meat-space classes? (Hmm, isn't that already available?)

Why aren't you studying?

The Research: The Results

In this series, I've been describing my latest research, looking at how ebook use affects academic outcomes. In my previous post, I described the process of data collection. The next step: analyzing the data and seeing the results for the first time.

As a graduate student, I took a lot of advanced statistics courses (in a couple of educational psychology stats courses, I even had to learn APL. Eep!). That does not mean I am in love with the field of statistics or the mathematical process of analyzing data. It's just a means to an end. Still, it's an important means to an end. Not all science relies on quantitative research, but a lot of it does. If you don't know how to analyze your data, er...then what? All you've got is a big file full of (meaningless) numbers. Bottom line: It's important to know how to adequately analyze your data.

Here's a story. From 2001 to 2003, I was the statistics advisor for students in the Department of Psychology's Internship Program. After working out in the Real World collecting data, students would come to me with a file full of (meaningless) numbers. Many students were up to speed on their stats, had planned their data collection and analysis in advance, and just wanted to run their stats by me to make sure they were on the right track. Some others, however... Others...oh dear.

Others had not planned their data analysis in advance. They just went about their jobs, collecting data here and there--like they were meandering through a field, picking daisies whenever and wherever they wanted. They'd come to me, give me their data, and expect me to work a miracle. This, not surprisingly, did not go well. You can't perform any kind of meaningful, valid analysis on 15 data points collected from one participant over various time periods, with no independent variable (other than time, sort of). Yes, you've got a file with numbers. That's...great. But numbers do not statistics make. The moral of the story is: Take a statistics course. Then, take another one. Then, take one more.

Results
I'm happy to say that a large majority of students in my class opted to allow their data to be included in my analysis--almost 200 people. Unfortunately, I had to exclude data from a small number of students (they were randomly assigned to receive an ebook, but chose not to use it; that kind of self-selection may throw off the results). I collected a lot of different kinds of data, which will require a more sophisticated analysis, so what I'm going to present is a bit of a "cheat", but I couldn't help myself--I really wanted to see the bottom line right away. So here it is: r = -.035. Neat, eh?

Discussion
Er, OK, so here's an interpretation of the results: students who used the ebook got lower grades than students who used the printed textbook (a negative correlation). But look at the size of the correlation--it's essentially zero. The analysis also shows that there was no statistically significant difference between ebook and printed-textbook users (p = .620). This means that, all other things being (hopefully) equal, using an ebook should not cost you any marks; put another way, reading material on a screen does not impair outcomes, at least in this perception course.
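For the curious, here's roughly what that bottom-line analysis looks like in code. This is a minimal sketch with made-up grades--the group sizes, means, and seed are all hypothetical, not the actual study data. With a binary group variable, the Pearson correlation (a point-biserial correlation) and the independent-samples t-test are two views of the same comparison, so they yield the same p-value.

```python
# Minimal sketch of the ebook-vs-print comparison, using made-up data.
# All numbers (group sizes, means, SDs, seed) are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(267)
ebook = rng.normal(loc=72, scale=10, size=90)  # hypothetical grades, ebook group
paper = rng.normal(loc=72, scale=10, size=95)  # hypothetical grades, print group

# Point-biserial correlation: code group membership as 0/1 and
# correlate it with grade.
group = np.concatenate([np.ones(ebook.size), np.zeros(paper.size)])
grade = np.concatenate([ebook, paper])
r, p_r = stats.pearsonr(group, grade)

# Independent-samples t-test on the same two groups.
t, p_t = stats.ttest_ind(ebook, paper)

print(f"r = {r:+.3f} (p = {p_r:.3f})")
print(f"t = {t:+.2f} (p = {p_t:.3f})")
```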

I'll need to sift through the data some more, to see if that's because ebook users spent more time studying than textbook users, or if there are other variables that also account for the results. In the meantime, I won't have any trouble recommending that students use an ebook instead of a textbook--and hey, seeing as ebooks are typically cheaper than paper textbooks, you'll even save some money. You're welcome.

Why aren't you studying?

The Research: The Data Collection

After my project passed ethics, it was time to set up the data collection. While waiting (and waiting...and waiting...and waiting...) for ethical approval, I had been able to fine-tune the survey questionnaire that I would have students fill out. I got some great advice from people who know way more than I do about this kind of research; it helped tremendously. (One of the best things about the UofA is the amount of specialized knowledge that exists on campus. It's truly staggering how many academics there are here with top-notch knowledge. It's easy to take it all for granted.)

You don't always know what to ask on a questionnaire. What factors are relevant? (Did you use the online etextbook or printed textbook?) Which ones might matter? (Have you used etextbooks before?) What kinds of things are probably irrelevant? (Are you male or female? Better ask that one anyway.) There has to be a balance between asking for enough information, and making the survey as short as possible. Ever done an online survey that just seems to go on, page after page? Fill out this big long page, click "next" and the percent completed graph ticks up by only 1%? To get as many participants responding as possible, you've got to keep it as short as possible.

To keep everything in line with ethics guidelines, I didn't work on the online form until everything was okayed. I could have coded the form myself (my websites are all hand-coded, thank you very much), but I didn't have a lot of spare time. Fortunately, the Department of Psychology has a great resource available: the Instructional Technology and Resources Lab. This lab is staffed by an undergraduate student who is enrolled in our internship program. (Plug: If you want to get hands-on experience doing a real psychology job before you graduate, look into it. You actually get paid for it, too.) Lauren McCoy coded the entire questionnaire for me. (Thanks, Lauren!)

Next, via Bear Tracks, I sent out a mass email to all the students in my class asking them to participate. Nothing to do after that but wait. It was hard to be patient, waiting for the data to roll in. And, according to ethics, I couldn't even look at it until the course was over. Argh!

Why aren't you studying?

The Research: The Ethics

In previous posts, I described the beginnings of the current research project. But before any research can be conducted, it has to be vetted through the research ethics approval process.

The major research granting agencies in Canada (CIHR, NSERC, and SSHRC) have come up with a (recently updated) policy document outlining ethical treatment of human research participants, called TCPS 2. If you want to do any research funded by one of the "tri-council" agencies, you must follow this policy. TCPS 2 has also trickled down to the university in general; research on campus is overseen by the Research Ethics Office. The REO has established a number of different Research Ethics Boards or Panels that review all research applications (whether funded by tri-council or not), and give their approval. Different boards oversee different kinds of research, like a typical psychology experiment, versus medical and clinical kinds of research.

It's important to me to make sure the willing participants in my research are (at the very least) not harmed, are treated properly, have their rights and human dignity respected, and (where appropriate) have their individual research results remain private and confidential. The process of obtaining ethical approval, though, is not trivial.

It used to be pretty easy to get ethics approval for research. Five years ago, I'd have to fill out a form indicating what I'd be doing (having students fill out a survey), whether there were any known risks to participants (um, maybe getting a paper cut?), and what I'd do if there were (rush them to the hospital). I'd talk to my colleague down the hall who would look over the application, make suggestions, and give his verbal okay. Now, it's a different story.

My Department requires that any Contract Academic Staff have their research sponsored by a professor (tenured or tenure-track staff). Luckily, a colleague of mine was able and willing to sign off on my project. It's really just a formality, which makes me question why it's necessary. Don't they trust me? And, isn't my research going to be overseen by the University?

This brings me back to the REO, which has switched to an online application process, using a system called HERO (Human Ethics Research Online--cute, eh?). Although it's now online, the process is very involved (sorry, I mean thorough), with many, many pages of questions to fill out. Things like: How am I going to maintain security over the data to ensure privacy and confidentiality? (256-bit triple DES.) Will I be retaining any sensitive information, like student ID numbers? (Temporarily, yes.) Do I expect participants to come to any harm? (Er, no. Unless someone drops their computer on their foot.) The good thing was that all these questions forced me to think about ethical issues that I hadn't considered. Like, what if someone withdraws their consent--even after completing my online form? All of this really helped in designing the study itself.

Unfortunately, my ethics application was...misplaced (lost? forgotten? ignored?) for six weeks. Because this was my first experience with HERO, I didn't know how long the process would take. But after a month and a half of waiting, I asked my colleague who told me that approval should come after six days, not six weeks, and that I should "scream" about it. I didn't scream, but I was firm and persistent until my application was found, reviewed, and approved. Altogether, applying for and getting ethical approval for my project took two months. Piece of cake.

Next time: Data collection!

Why aren't you studying?

The Research: The Opportunity

As I wrote in my last post, I do research. But doing research is not easy if you aren't allowed to apply for major research grants. Sometimes, though, you get lucky.

I've got a good relationship with Nelson Education--the Canadian imprint for Cengage Learning, publisher of a number of textbooks I use in my courses, and also my employer (I do consulting for them on their Canadian psychology websites). So late last year, the local rep asked if I'd be interested in helping them evaluate the CengageNOW platform (which includes online etextbooks and interactive study guides). Oh, and they'd provide free access codes for students in my Perception class--but unfortunately, only for about half the class.

OK, so I'd been designing this kind of experiment for a couple of years: Does using an etextbook cause students to do better, worse, or exactly the same in a course? I jumped at the chance. Half the class would get a free access code for the etextbook; the other half would use a regular printed textbook. At the end of the course, I could compare the two groups on the dependent variable of final grade. Perfect! But who would get the free etextbook, and who would have to pay for a textbook? How would that be decided? And is it fair that some students get something for free, and others don't?

These are important questions to consider. Obviously, the fair thing to do (and also the most sensible, from a statistical point of view) would be to randomly assign students to the etextbook and printed-textbook groups. However, some students might not want to use an etextbook--even if it's free. In that case, I would have to exclude them from the study data, but I could then give their access codes to students who registered late. The issue of "free," though, I couldn't get around. Nelson was not willing to provide free printed textbooks for the other half of the class (about 107 students). Rats! This meant I had a confound I couldn't overcome: students who got the etextbook would also be getting it for free, whereas students who bought and used the printed textbook would be paying for it.

If there are any differences in grades between these two groups, they could be due to the resource used (maybe reading an etextbook is more fatiguing, so students spend less time reading than they would with a printed textbook--or maybe it's easier to read). Or they could be due to the "free" aspect (students may feel less "invested" in a free etextbook, and so not read it as much as they would a printed textbook that they had to pay for). Argh! Not so perfect. But it was the best I could do under the circumstances; I'd need almost $20,000 to buy printed textbooks for half the class!
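For concreteness, here's a minimal sketch of the random-assignment step described above. The student IDs, class size, and seed are hypothetical; the only constraint taken from the post is that there were only about 107 free access codes--roughly half the class.

```python
# Minimal sketch of random assignment to conditions. Student IDs,
# class size, and seed are hypothetical.
import random

students = [f"student_{i:03d}" for i in range(214)]  # ~214 enrolled (hypothetical)
n_codes = 107                                        # free etextbook access codes

random.seed(2011)         # fixed seed makes the assignment reproducible
random.shuffle(students)  # shuffle in place, then split

etext_group = students[:n_codes]   # receive a free etextbook access code
print_group = students[n_codes:]   # buy the printed textbook

assert not set(etext_group) & set(print_group)  # groups must not overlap
```

Students assigned to the etextbook group who declined to use it would then be excluded from the analysis rather than moved to the print group, since letting them switch would reintroduce the very self-selection that random assignment is meant to avoid.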

There were still many more hurdles to overcome. Next: research ethics and the maze that is HERO.

Why aren't you studying?

The Research: Primary vs. Secondary

As a scientist, I do research. The first thing the word “research” brings to mind is probably experimental research. But experiments are only one method under the broader umbrella of empirical research, which also includes other methods, like surveys.

Another way of dividing up research into different kinds is into primary and secondary. In primary research, you collect original data; you’re discovering something no one else has ever known (you hope!). In secondary research, you are going through data that has already been collected. Maybe you are looking for something specific, or maybe you want to do a (formal, statistical) meta-analysis. (This doesn’t mean that every time you do a Google search, you’re doing secondary research--but secondary research might employ an Internet search now and then. More likely, I’ll use PsycINFO or MEDLINE.)

I do a lot of secondary research in prepping my courses. For example, when I created my lecture on synesthesia, I did a lot of secondary research--searching for studies, reading and analyzing them, and synthesizing the information in a systematic, coherent way. (At least, I hope it’s coherent! ;-)

I also do some primary research. It’s not something that I’m required to do in my role as Faculty Lecturer (but it can be a lot of fun to do). In fact, the University makes it hard for contract academic staff to do primary research: we are not allowed to apply for research grants. As you can imagine, having no money makes it kinda hard to do research. Unless: a) you’re rich, b) you have a sugar daddy, or c) a publishing company comes to you with a bunch of free stuff and asks if you’re interested in using it to do a study.

Late last year, I had the opportunity for option c). In the next few posts, I’ll describe the steps in the research process, ending up with a summary of my results.

Why aren’t you studying?
