Let me use the occasion of Jay Greene’s response to my earlier post to explain the differences, as I see them, between blogging and research. Greene makes no distinction between the two activities, and is, as a result, skeptical about my anonymity. As he explained to me off-blog, “The same basic principles apply. They are both part of the spectrum of how people communicate ideas that may be related to policy decisions.”
Blogs provide opinions, commentary, and analysis. Blogs are a place to discuss ideas, consider other points of view, and hear what a community of readers has to say. Blogging is great for testing out ideas, reflecting on the news of the day, and discussing and disseminating existing research. But bloggers don’t do academic research. Academic research, in contrast, is subject to norms about method. The central norm in academic research is subjecting your work to the scrutiny of a critical community of scholars.
Undoubtedly, blogs, thinktank research, and academic research are “part of the spectrum of how people communicate ideas that may be related to policy decisions.” But different levels of confidence should be assigned to different parts of that spectrum in educational policymaking. Below, from least to most credible:
1) Blogs: Blogging is free-form exchange, and the blogger is judged by the quality of his or her arguments and content by readers who seek out the blogger. Blogs are grassroots online communities where everyone, irrespective of their identity, is entitled to an opinion.
2) Thinktank research: Thinktank research is generally released without external review. The questions that are asked and the policy recommendations that are put forth are usually – but not always – tied to the stated objectives of the organization, which are sometimes ideological in nature. Thinktanks are well-funded and endowed with PR departments that publicize studies to policymakers and the media. As a result, thinktank research on education receives more attention than blogs and academic research in the media.
It is important to note that thinktanks do vary significantly in the extent to which they internally and externally review work before releasing it. They also vary in the extent to which they make their methods transparent enough that their analyses can be evaluated and replicated. Some thinktanks are more judicious than others about describing the implications of their work for policy and in spinning their findings. And some thinktanks don’t appear to sanction researchers when their studies are consistently discovered to be wrong. I imagine that other thinktanks would treat such a violation differently, because at the end of the day, these mistakes reflect poorly on the institution.
3) Academic research: Academic research is intended to contribute to a body of scholarly knowledge, and is subject to thorough peer review and to norms of scholarly inquiry. Though it is often policy relevant, the primary audience for this research is a community of scholars, who judge the research not for its policy contributions but for its innovativeness, rigor, and contribution to a body of literature.
But peer reviewers are human, too, and they come with their own set of biases; the idea of a search for truth immune to ideology is a fantasy. Academic research that is imperfect does get published. And people do make mistakes in their papers, both innocent and intentional. That’s why one of the norms of scholarly inquiry is to replicate studies and to exercise caution before declaring that the case is closed on any issue. This can be thorny, because academic research communities are small and dense. Everyone knows everyone else, and scholars who take on prominent colleagues, even when those colleagues are clearly wrong, can pay a heavy price. People also have personal relationships with mentors and colleagues, and sometimes we don’t challenge each other as much as we should.
For all of these reasons, peer review is double-blind. In practice, papers are submitted to conferences before they are submitted to journals. On more than one occasion, I have reviewed papers of scholars who have sat on the same conference panels that I have. But the academics whose work is under review do not know the identity of their reviewers (except when reviewers cry foul that their work wasn’t cited, and suggest references that give away their identity!), and this provides a countervailing force against the social dynamics that sometimes cloud our judgment. And with academic research, no study is taken as a “killer study,” and Jeff Henig has advocated for the same in the policymaking arena. Rather, individual studies are put in context of a larger evidence base.
To be sure, I, and some other bloggers, will occasionally present and analyze data in our postings, with the goal of persuading readers of a point of view. When I do so, I provide links to the data, which are generally in the public domain. When these data are not publicly available, I have always extended an offer to my readers to request data from me, which they have often done. When these posts involve more than making figures using publicly available numbers, I also provide detail about what I’ve done, which is simple descriptive analysis that a competent Excel user can replicate.
But there’s no pretense that this is peer-reviewed academic work. And let’s be realistic: an anonymous blogger isn’t shaping public policy. In equating the two, Greene either overstates the influence of this blog on education policy, or diminishes the contributions of his own work. Of course, if my postings lead readers to think differently about research and policy matters, then those readers may have an influence. I see this as a very different dynamic than with thinktank research, where, because the objective is to influence public policy directly through research, the researchers have a greater obligation to their audience to vet what they’ve done before taking it public.
Finally, to Greene’s point that my anonymity makes it impossible to “consider the source”: is it likely that Education Week would host an anonymous blog by someone working for or funded by “special interests” in education? Or that they would allow me to critique policymakers with whom I have some conflict of interest? The editors at Education Week know who I am, and decided to host this blog with full knowledge of my professional biography. I’m quite proud of - and grateful for - the community that we’ve built here, which has challenged and refined my own thinking on a wide variety of topics. At the end of the day, potential readers can decide for themselves whether this blog is worth reading, can tell me when they think I’m wrong (and you often do), and can expect me to listen, and even modify my positions in response.
Update: Be sure to check out Dean Millot’s exceptional post related to this issue, The Letter From: “In short, I see no problem with research becoming public with little or no review” (I) , as well as Sherman Dorn’s from earlier in the week, Can reporters raise their game in writing about education research? eduwonk also weighs in here: Politics of Information.