Tomorrow, I’ll be unveiling the 2014 RHSU Edu-Scholar Public Influence rankings, honoring and ranking the 200 university-based education scholars who had the biggest influence on the nation’s education discourse last year. Today, I want to run through the scoring rubric for those rankings. The Edu-Scholar rankings employ metrics that are publicly available, readily comparable, and replicable by third parties. This obviously limits the nuance and sophistication of the measures, but such is life.
Given that there are well over 20,000 university-based faculty tackling educational questions in the U.S., even making the Edu-Scholar list is an honor--and cracking the top 100 is quite an accomplishment in its own right. So, who made the list? Eligible are university-based scholars whose work focuses wholly or primarily on educational questions. The rankings include the top 150 finishers from last year, augmented by 50 “at-large” additions named by a selection committee of about two dozen accomplished and disciplinarily, intellectually, and geographically diverse scholars. The selection committee (composed of members who were already assured an automatic bid by dint of their 2013 ranking) first nominated individuals for inclusion, then voted on whom to include from that slate of nominees.
I’m indebted to the members of the committee for their assistance, especially given that they’re all extraordinarily busy folks. So, I’d like to acknowledge the members of the 2014 RHSU Selection Committee: Deborah Ball, U. Michigan; Linda Darling-Hammond, Stanford; Susan Dynarski, Michigan; Ronald Ferguson, Harvard; Susan Fuhrman, Columbia; Dan Goldhaber, Washington; Sara Goldrick-Rab, Wisconsin; Jay Greene, Arkansas; Rick Hanushek, Stanford; Doug Harris, Tulane; Jeff Henig, Columbia; Gloria Ladson-Billings, Wisconsin; Robin Lake, Washington; Bridget Terry Long, Harvard; Pat McGuinn, Drew University; Pedro Noguera, NYU; Robert Pianta, Virginia; Andy Porter, UPenn; Jim Ryan, Harvard; Marcelo Suarez-Orozco, UCLA; Sarah Turner, Virginia; Jacob Vigdor, Duke; Kevin Welner, CU-Boulder; Marty West, Harvard; Daniel Willingham, Virginia; Yong Zhao, Oregon; and Jonathan Zimmerman, NYU.
Okay, so that’s how the list of scholars was compiled. How were they ranked? Each scholar was scored in eight categories, yielding a maximum possible score of 200. No one scored a 200. Surveying the results shows that a score of 100 would qualify for top-10 status, a 90 suffices to crack the top 20, and a 60 will pretty much get someone into the top 50.
Scores are calculated as follows:
Google Scholar Score: This figure gauges the number of articles, books, or papers a scholar has authored that are widely cited. A neat, common way to measure the breadth and influence of a scholar’s work is to rank works in descending order of how often each is cited, and then identify the largest number of works that have each been cited at least that many times. (This is known to aficionados as the h-index). For instance, a scholar who had 20 works that were each cited at least 20 times, but whose 21st most-frequently cited work was cited just 10 times, would score a 20. The measure recognizes that bodies of scholarship matter, influencing how important questions are understood and discussed. It helps ensure that results recognize deep influence, and not just research that was buzzworthy last year. The search was conducted on December 11-12, using the advanced search “author” filter in Google Scholar. A hand-search culled out works by other, similarly named individuals. For those scholars who had been proactive enough to create a Google Scholar account, their h-index was available at a glance. While Google Scholar is less precise than more specialized citation databases, it has the virtue of being multidisciplinary and publicly accessible. Points were capped at 50--if a scholar’s score exceeded that, they received a 50. This score offers a quick way to gauge both the expanse and influence of a scholar’s body of work.
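For the curious, here’s a minimal sketch of that h-index calculation in Python (the citation counts are hypothetical; the 50-point cap is applied at the end):

```python
def google_scholar_points(citation_counts, cap=50):
    """Compute an h-index from per-work citation counts, capped at 50."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    # The h-index is the largest h such that the scholar has h works
    # that have each been cited at least h times.
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return min(h, cap)

# The example from the text: 20 works cited at least 20 times apiece,
# with the 21st most-cited work cited just 10 times.
print(google_scholar_points([20] * 20 + [10]))  # 20
```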
Book Points: An author search on Amazon tallied the number of books a scholar has authored, co-authored, or edited. Scholars received 2 points for a single-authored book, 1 point for a coauthored book in which they were the lead author, a half-point for coauthored books in which they were not the lead author, and a half-point for any edited volume. The search was conducted using an “Advanced Books Search” for the scholar’s first and last name. (On a few occasions, a middle initial or name was used to avoid duplication with authors who had the same name, e.g. “David Cohen” became “David K. Cohen,” and “Deborah Ball” became “Deborah Loewenberg Ball.”) The “format” field was set to “Printed Books” so as to avoid double-counting books that are also available as e-books. This obviously means that books released only as e-books are omitted. However, circa 2013, this still seems appropriate, given that few relevant books are, as yet, released solely as e-books (this will likely change before long, but we’ll cross that bridge when we come to it). “Out of print” volumes were excluded. This measure reflects the conviction that book-length contributions can shape and anchor discussion in an outsized fashion. The search was conducted December 9. Book points were capped at 25.
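For illustration, the book-point arithmetic works out like this (a sketch only; the actual tally was a hand count via Amazon, and the scholar’s numbers below are hypothetical):

```python
def book_points(solo, lead_coauthored, other_coauthored, edited, cap=25):
    """2 points per single-authored book, 1 per coauthored book as lead
    author, 0.5 per coauthored book as non-lead, 0.5 per edited volume."""
    points = 2 * solo + 1 * lead_coauthored + 0.5 * (other_coauthored + edited)
    return min(points, cap)

# Hypothetical scholar: 4 solo books, 3 as lead coauthor,
# 2 as non-lead coauthor, and 2 edited volumes.
print(book_points(4, 3, 2, 2))  # 8 + 3 + 1 + 1 = 13.0
```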
Highest Amazon Ranking: The author’s highest-ranked book on Amazon, as of December 9. The rank of the highest-ranked book was subtracted from 400,000, and that figure was divided by 20,000. This yielded a maximum score of 20. Given the nature of Amazon’s ranking algorithm, this measure can be volatile and is biased in favor of more recent works. For instance, a book may have been very influential a decade ago, and continue to influence citation counts and a scholar’s larger profile, but produce few or no ranking points this year. The result is a decidedly imperfect measure, but one that conveys real information about whether a scholar has penned a book that is shaping the conversation. To that point, a number of books that stoked public discussion in recent years score well--including those by authors like Diane Ravitch, Linda Darling-Hammond, Yong Zhao, Rick Hanushek, Tony Wagner, and Paul Peterson.
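Expressed as a formula, and assuming (the post doesn’t say) that ranks worse than 400,000 simply earn zero points:

```python
def amazon_points(best_rank):
    """(400,000 - best rank) / 20,000, for a maximum of about 20 points.
    Flooring at zero for ranks above 400,000 is an assumption."""
    return max(0.0, (400_000 - best_rank) / 20_000)

print(amazon_points(1))        # 19.99995 -- effectively the 20-point max
print(amazon_points(100_000))  # 15.0
```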
Education Press Mentions: This reflects the total number of times the scholar was quoted or mentioned in Education Week or the Chronicle of Higher Education between January 1 and December 17. The search was conducted using each scholar’s first and last name. The number of appearances was divided by 2 to calculate Ed Press points. Ed Press points were capped at 30. This, like the next couple of categories, seeks to use a “wisdom of crowds” metric to gauge a scholar’s ubiquity and relevance to public discourse last year.
Blog Mentions: This reflects the number of times a scholar was quoted, mentioned, or otherwise discussed in blogs between January 1 and December 15. The search was conducted using Google Blogs. The search terms were each scholar’s name and university affiliation (e.g. “Bill Smith” and “Rutgers”). Using affiliation serves a dual purpose: it avoids confusion due to common names and ensures that scores aren’t padded by a scholar’s own posts (which generally don’t include affiliation). At the same time, if scholars who blog are provoking discussion, the figures will reflect that. If a scholar is mentioned sans affiliation, that mention is omitted here. (If anything, that may tamp down the scores of well-known scholars for whom affiliation may seem unnecessary. However, since the Darling-Hammonds, Ravitches, and Hanusheks fare just fine, I’m not concerned.) Because blogging is often informal, the search also included common diminutives (e.g., “Rick Hanushek” as well as “Eric Hanushek”), and names were run with and without middle initial. In each instance, the highest result was recorded. Points were calculated by dividing total mentions by four. Scores were capped at 30.
Newspaper Mentions: A LexisNexis search was used to determine the number of times a scholar was quoted or mentioned in U.S. newspapers between January 1 and December 16. As with Blog Mentions, the search was conducted using each scholar’s name and affiliation. Searches were run with and without middle initial, and the highest result was recorded. Points were calculated by dividing the total number of mentions by two, and were capped at 30.
Congressional Record Mentions: A simple name search in the Congressional Record for 2013 was used to determine whether a scholar had testified or whether their work was referenced by a member of Congress. The tally was conducted on December 17. Qualifying scholars received five points.
Klout Score: A Twitter search determined whether a given scholar had a Twitter profile, with a hand search ruling out similarly named individuals. The score was then based on a scholar’s Klout score as of December 13. The Klout score is a number between 0 and 100 that reflects a scholar’s online presence, primarily how often their Twitter activity is retweeted, mentioned, followed, listed, and answered. The Klout score was divided by 10 to calculate points earned, yielding a maximum score of 10. If a scholar was on Twitter but did not possess a Klout score, they received a zero.
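Pulling the eight categories together, here’s a back-of-the-envelope scorer (the inputs are hypothetical, but the divisors and caps are straight from the rubric). Note that the caps sum to the stated 200-point maximum: 50 + 25 + 20 + 30 + 30 + 30 + 5 + 10 = 200.

```python
def mention_points(mentions, divisor, cap):
    """Shared rule for the press, blog, and newspaper categories:
    divide raw mentions by the category divisor, then cap."""
    return min(mentions / divisor, cap)

def total_score(scholar_pts, book_pts, amazon_pts,
                ed_press, blog, newspaper, in_record, klout):
    """Sum the eight Edu-Scholar categories. The first three inputs are
    already in points; the rest are raw tallies (or a raw Klout score)."""
    return (
        scholar_pts                            # Google Scholar, capped at 50
        + book_pts                             # book points, capped at 25
        + amazon_pts                           # Amazon ranking, max 20
        + mention_points(ed_press, 2, 30)      # Ed Week / Chronicle mentions
        + mention_points(blog, 4, 30)          # blog mentions
        + mention_points(newspaper, 2, 30)     # newspaper mentions
        + (5 if in_record else 0)              # Congressional Record
        + min(klout / 10, 10)                  # Klout score / 10, max 10
    )

# Hypothetical scholar: 38 Google Scholar points, 12 book points, 15 Amazon
# points, 22 Ed Press mentions, 60 blog mentions, 30 newspaper mentions,
# a Congressional Record appearance, and a Klout score of 55.
print(total_score(38, 12, 15, 22, 60, 30, True, 55))  # 116.5
```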
The scoring rubric is intended to acknowledge both scholars whose widely referenced bodies of work influence our thinking on edu-questions and scholars who are actively engaged in public discourse, writing and speaking to pressing concerns. That’s why the scoring is designed to discount, for instance, academic publications that have rarely been cited or books that are unread or out of print. Generally speaking, the scholars who rank highest are those who are both influential researchers and also active in the public square.
There are obviously lots of provisos when perusing the results. Different disciplines approach books and articles differently. Senior scholars obviously have had more opportunity to build a substantial body of work and influence (which is why the results unapologetically favor sustained accomplishment). And readers may care more for some categories than others. That’s all well and good. The whole point is to spur discussion about the nature of responsible public engagement: who’s doing a good job of it, how much these things matter, and how to gauge a scholar’s contribution. If the results help prompt such conversation, then we’re all good.
Two questions commonly arise: Can somebody game this rubric? And am I concerned that this exercise will encourage academics to chase publicity? As for gaming, color me unconcerned. If scholars (against all odds) are motivated to write more relevant articles, pen more books that might sell, or be more aggressive about communicating their thinking in an accessible fashion, I think that’s great. That’s not “gaming,” it’s just good public scholarship. If I help encourage that: sweet. As for academics working harder to communicate beyond the academy--well, there’s obviously a point where public engagement becomes sleazy PR... but most academics are so immensely far from that point that I’m not unduly concerned.
A final note. Tomorrow’s list is obviously only a sliver of the faculty across the nation who are tackling education or education policy. For those interested in scoring additional scholars, it should be a straightforward task using the scoring rubric. Indeed, the exercise was designed so that anyone can generate a comparative rating for a given scholar in no more than 15-20 minutes. Meanwhile, for the arduous task of coordinating the selection committee and then spending dozens of hours crunching and double-checking all of this data for 200 scholars, I owe an immense shout-out to my ubertalented, indefatigable, and eagle-eyed research assistant Max Eden.
The opinions expressed in Rick Hess Straight Up are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.