Research Column


Why does educational research and development have such an awful reputation among federal policymakers?

To find out, Carl A. Kaestle, the Vilas professor of educational policy studies and history at the Wisconsin Center for Education Research, conducted an "oral history" of the past 25 years in federal education-research policy. The project was done in connection with a National Academy of Sciences study of the federal role in education R&D. A summary of its findings is published in the January-February 1993 issue of Educational Researcher.

Mr. Kaestle found that, while researchers can point to a number of areas where research has "mattered," educational research tends to suffer from three recurrent criticisms: it does not pay off (the "everybody's been to 4th grade" syndrome); the research community is in disarray; and the field is politicized.

To improve the reputation of educational research, Mr. Kaestle suggests that leading researchers join with the U.S. Education Department's office of educational research and improvement to set a coherent agenda and a dissemination strategy for the field, and that researchers improve their connections to practitioners.

"[I]f education researchers could reverse their reputation for irrelevance, politicization, and disarray," he writes, "they could rely on better support because most people, in the government and the public at large, believe that education is critically important."

A new study by University of Pennsylvania researchers adds fuel to the debate over the usefulness of the Scholastic Aptitude Test.

The College Board, which sponsors the test, has long contended that, together with high school grades, the S.A.T. provides a good predictor of first-year success in college.

But the study, which examined the grades of 4,000 Penn students who entered college in 1983 and 1984, found that high school grades and Achievement Test scores together predicted college performance well, and that adding S.A.T. scores did not improve the prediction.

Jonathan Baron, a Penn psychologist who conducted the study along with M. Frank Norman, said the Achievement Tests, which the College Board also sponsors, are "better tests" than the S.A.T. because they measure what students learn from coursework rather than more general aptitude.

"They send the right message," Mr. Baron said.

The study was published in the December issue of Educational and Psychological Measurement.--R.R.

Vol. 12, Issue 22
