Sunday, June 11, 2017

The Problem with Education Research

"Education Research." Even in these times of political ignorance of research, science, and fact-based decision-making, there's still a place in every American's brain for education research.

It's probably due to the ever-present mantra of "Won't somebody PLEASE think of the children?" coupled with a ferocious need to believe that one's own children would be superstars if only the damned teachers weren't so terrible. Parents tweet, post, and search for information about "best practices", pro-CCSS or anti-CCSS, pro-disease or pro-vaccination, in a desperate search for confirmation that they have a brilliant child.

The problem, of course, is that the searchers don't connect with the research.

Linda Graham, in an article on TES, Teachers Need to Trust Research Again, complained that
Just over a year ago, I was disturbed to read the suggestion – tweeted by a teacher attending a ResearchED conference at the University of Cambridge – that education academics should be made to pay schools for access to research participants. I was shocked because education research was clearly not being perceived as a public good; something we should support in the way that we do other forms of research.
I'll say this: it takes a certain chutzpah to complain that education researchers should be exempt from paying subjects for their time the way other researchers do. And if you can't pay, then taxpayer-funded research based on studying taxpayers' kids in taxpayer-funded schools should at least be available to read after it's completed, without a $49.95 access fee. It's not that I think this research is a public bad; it's that few understand it, and I want to verify that it says and means what those above me claim it says and means.

This giant game of "telephone" is getting frustrating. I've named it the "Workshop Effect". Here's how it works:
  1. Educational researcher (e.g., Kamii) presents results from her research (e.g., examining 3rd and 4th graders and the appropriateness of the common algorithm for subtraction) at large conference with consultants and workshop presenters in attendance. These folks take notes. Some completely understand what's being said, others less so. Not everyone is an elementary school teacher with a nerd-on for math.
  2. Consultants and presenters then travel, collecting $3,000 for a day's workshop in Central Vermont. The presenter has collected several sets of research results and displays them all. Superintendents and principals from K-12 attend because that's $3,000 spent and "let's make the most of it." They pick up some details to bring back.
  3. High school principals hold faculty meetings or PD sessions and mandate that "Research has shown that students should not be taught the common algorithm for subtraction."
  4. Curriculum coordinators and teachers spend months adapting curriculum to the new paradigm. Anyone who objects, or wants verification, is called "Anti-Team Player", a "Naysayer", a "Curmudgeon", or is criticized or written up for "not obeying District policy."
And that's how the rot begins.  Why should my 10th grade Geometry students be bound by research on third-graders, research that expressly states that it is done on 3rd graders? Nothing in the paper said that extrapolating 7 or 8 years held any meaning.
Underlying much of the critique of research in education is the charge that it doesn’t tell stakeholders “what works”. My first objection to the “what works” mantra is that this is based on a very insular view of what is important in education. My second objection is that it completely discounts the importance of researching what doesn’t work, particularly from the viewpoint of the largest stakeholder group: students. Nonetheless, the value of research in education is increasingly being judged in relation to the “what works” agenda: if something works, then there must be evidence to prove that it works. If there isn’t evidence (perhaps because the research is not about what works but what doesn’t), then that research has no value.
Maybe the criticism that Graham reads is like this, but mine is over not being able to see the original documents. I am not going to spend the money to download and read this research. I only got the Kamii paper because someone sent it to me (Grant Wiggins, Dave Coffey, Bowen Kerins? I can't remember). I understand that research often is intended to find a connection, a correlation, and that a cause is more elusive. I understand that sometimes we need to run the same study again and again to confirm (or not) previous findings.

The problem is in the interpretation and filtering that happens between the researcher and the teacher. What did the research actually say, and what can I actually take from it?
Teachers are now being encouraged to “challenge” education researchers for “evidence” to support their views. That’s OK – if the request is accompanied by an understanding of the research process and how knowledge is accumulated.
Sure. It's called peer-review.

It would be nice to be able to tease out the findings ourselves instead of leaving it up to the ex-fifth-grade teacher turned curriculum coordinator.

Publish your work or face the criticism.

If you'll excuse me, I've got to get back to work.


  1. Maybe the criticism that Graham reads is like this, but mine is over not being able to see the original documents.

    This is a valid concern, but it sounded to me like you're critiquing researchers for not making their research widely available. But all the researchers I talk to are frustrated by the state of affairs. Journals are extortionary and researchers can only publish their work by either violating copyright or paying a ridiculous fee to the publishers.

    That said, it's no challenge to get your hands on a piece of research in 2017. It's easy to obtain paywalled research if you want it, and (hopefully) this is disrupting the publishers' ridiculous business model.

    Anyway, my point is just that researchers are usually as frustrated by this as we are, and there are ways around paywalls.

  2. I am criticizing researchers for not making their research public. I am more critical of the game of telephone that makes it damned difficult to find out who did that research that I can't always find online. I am very critical of the admin who can't remember or didn't pay attention to the name of researcher when they listened to the presentation at that conference in the state capital four months ago.

    Here's one for you, Michael:

    Research shows that Proficiency Based Grading is good. My admin then tells me that a "4" = 95%+, a "3" = 80%–95%, a "2" = 50%, and a "1" = 25%.

    Find the data that encourages that conversion, or that states you really shouldn't be averaging a 2 on F.IF.7a with a 4 on A.SSE.4 and getting a 3 average, proficient.

    Me neither. And we're doing that.
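
    The arithmetic objection is easy to make concrete. A minimal sketch, using the cutoffs my admin reported (the percentage mapping is theirs, not anything research-backed; the two standard scores are hypothetical examples):

    ```python
    # Sketch of the admin's proficiency-to-percentage conversion described above.
    # Cutoffs (4 = 95%+, 3 = 80%, 2 = 50%, 1 = 25%) are as reported by my admin;
    # the standards and scores below are hypothetical.

    def to_percent(score):
        """Map a proficiency score to the percentage the admin assigns it."""
        return {4: 95, 3: 80, 2: 50, 1: 25}[score]

    scores = {"F.IF.7a": 2, "A.SSE.4": 4}  # one far below proficient, one exemplary

    # Averaging the raw proficiency scores: (2 + 4) / 2 = 3.0 -> "proficient"
    avg_score = sum(scores.values()) / len(scores)

    # Averaging the percentages those scores supposedly mean: (50 + 95) / 2 = 72.5,
    # which sits below the 80% floor that a "3" is defined to represent.
    avg_percent = sum(to_percent(s) for s in scores.values()) / len(scores)

    print(avg_score)    # 3.0
    print(avg_percent)  # 72.5
    ```

    The same two grades come out "proficient" on one side of the conversion and below proficient on the other, which is exactly the incoherence I can't find any study endorsing.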

    Even if I find out who did this research ... and it's probably more than one study, or even a meta-analysis from someone like Hattie ... where do I get the study so I can at least read the executive summary?

    That's my main beef.