

Rick Hess Straight Up

Education policy maven Rick Hess of the American Enterprise Institute think tank offers straight talk on matters of policy, politics, research, and reform.


New CPRE Study Claims Common-Core Advocates Are Rational, Opponents Are Not

By Rick Hess — February 25, 2015

Yesterday, researchers at the Consortium for Policy Research in Education (CPRE) released a study that analyzes how the Common Core has played out on Twitter. (Or, more precisely, how it’s played out for those people who’ve included the hashtag “#CommonCore” in their tweets.) The graphics for the web-based results are quite nifty. And some of the analysis is innocuous enough.

Where I started to have concerns, though, was with the claim—blasted out in a series of press releases and repeated in various quarters—that the analysis shows how rational Common Core advocates are and what simpleminded, ideological hacks the skeptics are. For instance, POLITICO’s “Morning Education” summarized the results: “Supporters of the Common Core have used Twitter to present an array of policy points, while opponents have crafted their 140-character tweets to build a political movement. That’s one of the key takeaways from a new report on how social media is shaping education politics. Researchers analyzed 190,000 tweets[...]”

What’s the problem? Well, for one thing, the basis for this conclusion is entirely unclear. For another, the relevant analysis is based not on 190,000 tweets but on 504. Now, I’m not usually a zealot about policing someone else’s research methods. But, when researchers flatly proclaim that one side of a national debate is populated by dunderheads and the other by cool, rational thinkers, I get curious.

If you haven’t seen the website, be forewarned that the lack of page numbers, combined with cutesy titles and colorful graphics, makes it hard to talk about the study in an especially clear fashion. That said, under “Act 3,” in the “Political Language by Faction” section, the researchers offer a visual explaining that “supporters speak policy speak” and “opponents speak political speak.”

Just how does one tell whether 140 characters are “political” or “policy”? Well, I couldn’t find anything on the website. The site includes no actual “methods” section. After enough clamoring, I eventually got a couple of tweets in response. CPRE explained, “We took a random sample of about 500 tweets from the elite outdegree (transmitters) and indegree (transceivers) networks & hand coded their tweets as to the language they used (policyspeak/politicalspeak). Then we analyzed them by faction...” In other words: “trust us.” Another tweet explained, “Coding rubrics out next week.” (My own preference is for methods to be explained at the same time the press releases are blasted out, but maybe that’s old-school.)
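For readers unfamiliar with the jargon: “outdegree” and “indegree” are standard network-analysis measures. An account’s out-degree counts the connections it sends (say, mentions or retweets of other accounts), while its in-degree counts the connections it receives. Here is a minimal sketch of the idea using a toy mention network of my own invention; CPRE has not published its network-construction code, so none of this reflects their actual pipeline:

    import networkx as nx

    # Toy mention network: an edge (a, b) means account a mentioned account b.
    edges = [
        ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
        ("bob", "alice"), ("carol", "alice"),
    ]
    G = nx.DiGraph(edges)

    # Out-degree: edges an account sends; high values mark "transmitters."
    print(sorted(G.out_degree(), key=lambda kv: kv[1], reverse=True))

    # In-degree: edges an account receives; high values mark "transceivers."
    print(sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True))

In this toy example, alice is both the top transmitter (three mentions sent) and the top transceiver (two mentions received). The network part, in other words, is straightforward; it’s the hand-coding of the tweets that remains a black box.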

Given the absence of coding rubrics, what is available on the website? Well, in “Act 3,” in a bit of presumably fair-minded analysis, the researchers explain:

Some of the tweeters used rational, analytical language that appealed to the intellect, while other actors employed more visceral, emotional language that stirred emotion and spoke to more elemental instincts. We dubbed these two approaches "policyspeak" and "politicalspeak." Policyspeak refers to the cooler, more rational language that appeals to a policy audience, where debate is based on the merits of the evidence and the logic of the argument. Politicalspeak is more emotional and appeals to people's passions.

Mind you, this wasn’t a finding. Turns out, this was just the statement of the hypothesis. It’s never made clear why the researchers hypothesized that advocates are smart and informed while opponents are not. Most of the time, such a statement would be regarded as evidence of bias or prejudice—and it does raise concerns about the fair-mindedness of the coding used to test the hypothesis.

So how did the researchers “test” this loaded hypothesis? They explain that they pulled and coded a sample of 4,500 tweets from their 190,000. (I’m still not sure why they pulled such a small sample. I mean, seriously, how long does it take to code a tweet?) Now, it turns out that they didn’t use all 4,500 for this hypothesis test. Rather, they coded the “sample of 4,500 tweets by their references to education topics and education policy/political issues” and “found that about 21% (930 tweets) had a reference to either education topics or politics/policy related issues.” (I still can’t figure out what the other 79% of tweets addressed.) They then explained:

We drew a sample of tweets from the 930 that referenced education topics or politics/policy issues. Because this sample was heavily weighted toward tweets from the faction of actors outside of education (yellow), we took the lowest represented group (the blue faction, which contributed 168 of 930 tweets) and drew equivalent random samples for the green and yellow groups. This produced a sample of 504 tweets. We then coded these on a three point rubric of 1=Policyspeak, 2=Politicalspeak, and 3=Undetermined.
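The arithmetic, at least, checks out: three factions times 168 tweets each is 504. And the balancing step itself is simple enough that it takes only a few lines of code. Here is a minimal sketch of what that downsampling might look like; the data frame and column names are my own invention, since no code accompanies the study:

    import pandas as pd

    # Hypothetical stand-in for the 930 coded tweets. The green/yellow
    # splits are illustrative; the study only tells us blue had 168.
    tweets = pd.DataFrame({
        "faction": ["blue"] * 168 + ["green"] * 380 + ["yellow"] * 382,
        "text": ["..."] * 930,
    })

    # Downsample every faction to the size of the smallest one (blue, 168).
    n = tweets["faction"].value_counts().min()
    balanced = tweets.groupby("faction").sample(n=n, random_state=42)

    print(len(balanced))  # 3 factions x 168 = 504 tweets to hand-code

So the sampling design is easy enough to reconstruct. What can’t be reconstructed from anything on the website is the coding itself.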

Bottom line: I still have no idea how the researchers decided whether a tweet was about “politics” or “policy.” Of course, while the researchers were too busy to explain how they actually analyzed the 504 tweets in question, CPRE had plenty of time to blast releases and tweet away about their Twitter analysis of (supposedly) 190,000 tweets—and to loudly proclaim the emotional, political nature of those who have doubts about the Common Core.

Which makes me wonder, perhaps uncharitably, if that wasn’t the point all along.


The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.