Diplomas Count 2014: Motivation Matters - Engaging Students, Creating Learners

Student Surveys Seen as Imperfect Engagement Measure

Includes correction(s).

Schools have been using student surveys for decades. But in recent years, growing attention to student engagement has renewed interest in this classic method of assessing belonging, enjoyment, attachment, investment, perseverance, and other assets.

In 2011, the Regional Educational Laboratory Southeast, which was run at the time by the SERVE Center at the University of North Carolina, Greensboro, produced a report listing 21 different survey instruments designed to measure the engagement of upper-elementary and secondary school students.

This is not to say that all of the measures are widely used or useful.

"Most districts have run surveys and haven't found them very useful," said Aaron Feuer, the CEO of Panorama Education, a Boston-based data-analytics startup that raised $4 million in seed money last year from Facebook founder Mark Zuckerberg's Startup:Education and other investors.

Surveys don't necessarily measure what they purport to assess. For example, student course evaluations, a standard and much-criticized fixture of higher education, might be used to assess teaching though they really measure whether students like the instructor, Mr. Feuer suggested.

"Great teaching is not a popularity contest," he said.

An additional obstacle is that districts often receive results in the form of one enormous data table that is difficult to interpret. As a result, the reports get thrown into a drawer and forgotten.

Panorama addresses that challenge by providing users with individualized, online reports that permit them to break down the results in a variety of ways. For example, The Colorado Education Initiative, a Denver-based nonprofit that collaborates with the state education department and districts and developed a free, publicly available student survey, contracted with Panorama to create reports for the Colorado Student Perception Survey that permit teachers to explore results by class period, survey question, student subgroup, and other categories.

The company has developed and distributed surveys in more than 5,000 schools. Clients have included the Los Angeles school district, the Connecticut state education department, and Aspire, which manages 37 publicly funded, independently operated charter schools.

Other Efforts

The Cambridge, Mass.-based Tripod Project for School Improvement, now more than a decade old, developed another survey that is widely used to measure student engagement and evaluate teachers. The survey was administered to more than 1 million students last year.

As part of a three-part series on student surveys presented by the Washington-based nonprofit American Youth Policy Forum in 2013 and 2014, Pittsburgh Science and Technology Academy teacher Paul Ronevich said that Tripod helped him improve his instruction by providing specific feedback and permitting him to compare himself with others.

Although surveys may be widely used measures of student engagement, one place where they have failed to take hold is in the accountability systems proposed by states under waivers from the federal No Child Left Behind law. Nearly every state has received a waiver, but only a handful, including New Mexico and South Dakota, have used the opportunity for additional flexibility to incorporate student surveys into accountability measures.

During the American Youth Policy Forum session, Elaine Allensworth, the director of the University of Chicago Consortium on Chicago School Research, shed light on some of the difficulties in publicizing and attaching stakes to surveys. She drew on her experiences with the Chicago 5 Essentials Survey, a survey of students and teachers developed by the consortium to gauge school effectiveness. Although the survey had been used since 1997, its results were not publicly released until 2009. Consortium researchers worried that publicity would lead principals to game the system by encouraging higher ratings, thus compromising the survey's validity. However, Ms. Allensworth concluded that the publicity instead led to improvements in the schools.

Vol. 33, Issue 34, Pages 22-23

Published in Print: June 5, 2014, as Student Surveys: Familiar Tools, Mixed Success

Clarification: An earlier version of this article failed to mention that a 2011 report on student-survey instruments was produced by the Regional Educational Laboratory Southeast at the SERVE Center at the University of North Carolina, Greensboro.

An earlier version of this article misstated the author of the Colorado Student Perception Survey. It was developed by the Colorado Education Initiative.
