Using Assessment to Strengthen Teaching and Learning

James H. McMillan and Rachel C. Syrja took readers' questions on the use of assessment data to inform instruction and drive student learning.

February 12, 2009

    Guests:
  • James H. McMillan is a professor of education and chair of the foundations of education department at Virginia Commonwealth University. He is the author of numerous books, including Assessment Essentials for Standards-Based Education (Corwin), Classroom Assessment: Research, Theory, and Practice (Teachers College), and Classroom Assessment: Principles and Practice for Effective Instruction (Allyn & Bacon).
  • Rachel C. Syrja, a teacher on special assignment for the office of instruction in the El Monte City, Calif., school district, has designed districtwide staff development in assessment for learning, standards-based education, and English-language development. An associate with the professional-development, publishing, and consulting firm the Leadership and Learning Center, she has also conducted workshops for teachers and administrators on data teams, professional-learning communities, and using assessment data to drive instruction.

    Liana Heitin, teachermagazine.org (Moderator):

    Welcome to our live chat on using assessment data to inform instruction and meet student needs. Our guests today are both experts in assessment-for-learning. James H. McMillan, chair of the foundations of education department at Virginia Commonwealth University, has written numerous books on this topic, including Classroom Assessment: Principles and Practice for Effective Instruction. Rachel C. Syrja, an associate with the Leadership and Learning Center, designs staff development on assessment, standards-based education, and data teams. She is currently on special assignment in the El Monte City, Calif. school district. I’m Liana Heitin, a member of the editorial team here at teachermagazine.org, and I’ll be your moderator today. We’ve already received lots of great questions, so let’s get started.


    Question from bouchakka mohamed, teacher of English, ALC Agadir, Morocco:

    What is the difference between summative and formative assessment?


    James H. McMillan:

    Summative assessment measures students' current proficiency after instruction (assessment of learning). Formative assessment occurs during instruction to give teachers and students feedback on their progress and to help them determine next instructional steps (assessment for learning).

    Question from Walt Gardner, education writer:

    Why is formative assessment being given greater attention today than in the past?


    James H. McMillan:

    Primarily because of research in cognitive psychology on learning and motivation that has shown the importance of providing immediate feedback to students so that instruction can be enhanced. Recent research has also demonstrated that effective formative assessment results in greater student achievement, on both classroom tests and large-scale tests.

    Question from Marsha Pincus, Retired Teacher, Teacher Educator, Philadelphia Writing Project, University of Pennsylvania:

    How is “assessment for learning” different from assessment of learning? How is it different from test preparation teaching strategies?


    Rachel C. Syrja:

    According to such experts in the field as Rick Stiggins and Larry Ainsworth, assessment for learning is assessment that takes place while the learning is taking place. It is usually ungraded and is used to determine next steps for teaching and to provide students with detailed feedback that can advance their learning. Assessment of learning takes place at the end of the teaching cycle to determine whether students mastered the standards they were taught. Assessment for learning is an invaluable instructional tool that drives instruction at the classroom level. It differs from test-preparation strategies in that it is intricately tied to the teaching and learning process rather than to test prep per se.

    Question from angele passe, education consultant , BlueWater Associates, Minnesota:

    How do we address this comment many teachers make: “With all these assessments, I don’t have time to teach anymore. I just teach to the test.” Thank you.


    Rachel C. Syrja:

    This is a comment that many of us hear quite often. I think it stems from an unclear notion of the difference between testing and assessing. In fact, Doug Reeves writes in The Learning Leader that our students are “over-tested” and “under-assessed.” It is important to continuously remind teachers that testing is summative in nature and is used to help us determine final grades. It includes such measures as chapter tests, benchmarks, and state tests. We can probably all agree that a lot of this type of testing is currently taking place in our classrooms.

    Assessment, on the other hand, is formative in nature and is critical in helping us determine the next steps in our teaching. It takes place as the learning is happening and helps us determine whether we are on the right track or whether we need to make adjustments to our teaching. It is characterized by short, quick opportunities to find out whether our students are grasping the concepts we are teaching. Exit cards are an excellent example of a quick, to-the-point assessment that yields valuable information. If, for example, we are teaching addition with regrouping, at the conclusion of our lesson we have our students solve two or three problems on an index card and hand it to us as they leave the classroom. We can quickly shuffle through these cards and divide them into two piles: students who mastered the learning and those who didn’t. The very next day we can address the needs identified by this very efficient and necessary form of assessment.

    Question from Pat Christiansen, Associate Director & Program Facilitator, Saint Mary’s University:

    I have found from experience that the idea of formative assessment is foreign to many teachers. However, some teachers “think” they are using formative assessment simply because they have provided a rubric for their students, for example. How can I, as an educator, know that the assessment piece I am using truly helps me identify my students’ learning, and avoid the trap of thinking that just because I’m using a rubric, I’m providing ongoing assessment for my students?


    James H. McMillan:

    Your insight is correct. There are two keys to formative assessment: ongoing feedback and instructional correctives. Use the rubric for feedback, but make sure it isn’t used only at the end, after a project is completed. Also make sure that, after feedback, further instruction is designed to improve performance. So when you use the rubric is important, as is what results from using it. A good way to ensure an assessment is formative is to have students self-assess after they fully understand the rubric.

    Question from Nancy Hameloth, Adjunct Professor, Regent University:

    How is the data from standardized tests being disseminated to the various classroom teachers to inform their practice?


    Rachel C. Syrja:

    Standardized testing data is usually disseminated to teachers at the beginning of the school year by their site administrators and gives teachers a cursory first glance at their students and possible areas of strength and weakness. However, classroom formative assessment data quickly replaces state data and is usually a much better measure on which to base everyday instructional decisions. While standardized test data is a great tool for setting a baseline, it should not replace the ongoing, real-time formative assessment data that our teachers are collecting on a daily basis. The two should be used in conjunction with each other within the context of a well-thought-out assessment system.

    Question from Margaret Savidge, Middle School Teacher:

    How do we address the needs of students who do well in classwork, homework, and projects but who do poorly on tests - especially those in the format of standardized achievement tests?


    James H. McMillan:

    Your question is excellent. Your objective in grading is to provide the most accurate indicator of student knowledge and skills. Every type of assessment has error, and some students do have trouble with standardized tests, so those results carry more than their share of error. It is fine, in my opinion, to give less emphasis to these test scores when all the other indicators suggest a different level of achievement. However, your judgment on this should be reviewed by others; not in a formal way, but some verification is good.

    Question from Howard Blume, L.A. Times:

    The teachers union in Los Angeles has begun a boycott of periodic assessments on the grounds that students are tested too much and that standardized assessments are of little value. How unusual is this action and, conversely, how widespread is the use of periodic assessments?


    James H. McMillan:

    I can’t comment on how widespread this kind of boycott is, except that I know it hasn’t occurred in central Virginia. Periodic testing, or what is called benchmark testing, has become pervasive as schools look for ways to identify student weaknesses that can be addressed before the high-stakes test at the end of the year. Often these are nine-week tests. These tests are helpful as long as teachers receive professional development in how to use the results and the results are provided in a helpful format. The tests are not needed in schools where teachers already know student weaknesses from other data, or, generally, in schools with high student achievement. They are less useful in high schools than in middle or elementary schools. Testing companies are putting on a full-court press to sell these kinds of tests to districts and have misled educators into thinking they are “formative” assessments.

    Question from Laura Kuhlenbeck Regional Vice President of Academic Support Leona Group Ohio:

    What is the best way to integrate summative and formative assessment into the curriculum mapping process?


    Rachel C. Syrja:

    Based on my experience, the best way to integrate formative and summative assessments into the curriculum mapping process is to plan for them explicitly in advance. After identifying power or priority standards for the year, teachers can design or identify well-aligned formative assessments that will yield the most informative data to guide their instruction. Obviously, the best type of assessments are common formative assessments. They help us identify common areas of strength or weakness so that we can plan appropriate instruction. I cannot overemphasize the importance of collaboration in this process. By having common formative assessments for the year, we can target our instruction and ensure that we are working together to achieve high levels of learning for all students. These high-quality, highly aligned common formative assessments should be followed up by summative assessments that confirm whether or not our students met the standards. Including the assessments in the mapping process ensures that we stay focused on the learning that is taking place in our classrooms.

    Question from Motsienyane Lethena, Teacher; Student at MSU:

    How do I use formative assessment results for differentiated instruction? Can differentiated instruction work for big classes?


    James H. McMillan:

    Hi! My master’s degree is from MSU! Formative assessment is great for differentiated instruction, even in large classes, where you need to work with small groups as much as with individuals. The key is knowing ahead of time what kind of instructional correctives will address the specific problems or lack of understanding. Too often in differentiated instruction the teaching options don’t match well enough with student needs. Use feedback as a way to bridge to different instructional options.

    Question from Cheryl McMillan, NYC DOE:

    How can teachers effectively use this data to differentiate instruction with class sizes of 35 at a middle school level? Please consider the students with disabilities and the English Language Learners.


    Rachel C. Syrja:

    The instructional decisions we make on a daily basis are only as good as the assessments we use to make them. So the first order of priority is making sure that we have high-quality, well-aligned common formative assessments on which to base such important decisions as how to differentiate our instruction. If the assessments are short and targeted, assessing no more than one or two priority standards, then we can easily analyze the results for strengths and weaknesses and plan for differentiation accordingly. We run into trouble when an assessment is unfocused and covers too much material, which makes it nearly impossible to use the data in a productive way.

    The next step is planning for differentiation based on the data. Rather than trying to teach a different lesson for each level of learner, it makes much more sense to teach the same lesson but differentiate the content, process, or product. For example, if we have read a story and are writing a response to literature, I would make sure the writing assignment was differentiated for the different levels of learners in my class. My level 2 English-language learners would be expected to complete a cloze activity or use pictures to express their ideas, and I would adjust the activity or product accordingly for the other levels. Again, it is best to differentiate only one of content, process, or product, not all three at the same time. This is especially true at the middle school level. I also highly recommend working collaboratively to come up with different grouping options that might facilitate this process and make it more effective overall.

    Question from Kim Weaster, Teacher, The Oaks:

    At the high school level, what are the challenges with formative assessment across disciplines? Have reliable and valid probes been developed in science and social studies? Thank you.


    James H. McMillan:

    Hi Kim. The main challenge is that formative assessment looks different for different subjects. The literature now recognizes that subject area, as well as other variables like student ability level and cognitive style, needs to be considered. You can now find books on formative assessment that are subject-specific (Corwin Press has several). When these differences are considered, along with the styles of the teachers, the focus needs to be on results, not on how things are operationalized, because it will be done somewhat differently in each class. What is purported to work in different subjects is based more on ideas and experience than on research, so there is little that I’m aware of with respect to validity and reliability. Hope this is helpful. JM

    Question from Linda Kay Davis, Associate Professor of English Education, Austin Peay State University, Clarksville, TN:

    What do you perceive to be the role of self-assessment or peer assessment in informing student learning?


    James H. McMillan:

    Hi Linda. In my view, student self-assessment is the next significant assessment process that will become very popular, much as formative assessment did from the 1990s to today. Self-assessment is supported very well by research on learning and motivation and really helps students understand the criteria by which they will be graded. They learn to self-monitor their progress. It helps students feel a sense of efficacy and control, as well as responsibility. Student self-assessment can be very powerful. I’m less enthusiastic about peer assessment, but in the right settings it can be helpful. It requires a classroom climate of great trust and respect. Hope this is helpful. JM

    Question from Cory Denena MS/HS principal Colegio Americano de Durango:

    Can you talk about how best to design a staff development plan in assessment for learning? Can you recommend a set of suggested priorities when thinking about and designing the plan?


    Rachel C. Syrja:

    First of all, research has shown that the best staff development is based on a three- to five-year plan. Therefore, it is important to be realistic and think about both initial training and follow-up. Your first priority should be making sure teachers understand the difference between assessment of learning and assessment for learning, as well as when it is appropriate to use each. Your professional development plan should include detailed, structured follow-up opportunities designed to help teachers implement their newly acquired knowledge and skills. The best type of follow-up, in my experience, has been working with teachers on the creation of the assessments and then walking them through the process of analyzing the data using a structured protocol.

    Question from Shawn Whitt, System Administrator, Peoples Education:

    What technology is helpful in interpreting assessment data for curriculum and instruction use of scale?


    James H. McMillan:

    Shawn, I’m not sure what you mean by “use of scale.” Any technology is helpful for doing accurate arithmetic and recording grades. My experience is that data need to be provided electronically in a form that allows the teacher flexibility. A big problem now is that benchmark testing results are given electronically for individual students and questions, which leads to data overload and invites reading too much into performance on a single item. For grading there are plenty of good software programs; the problem is that teachers depend too heavily on the numbers crunched, as if they were “objective.” JM

    Question from Terry Weaver, Associate Professor, Union University:

    At what pass/fail rate on a classroom assessment should a teacher consider reteaching the content of the assessment to the whole class?


    James H. McMillan:

    Hi Terry. An important principle in assessment is that final judgments about student competence need to be based on multiple assessment methods. When this is accomplished you are able to evaluate whether a single assessment is off target. For a specific assessment, look carefully at how your best students perform. If they do not seem to get it, that’s an indication that instruction or the assessment itself could be improved. Also, think about how students have performed in past classes. All teachers do weak assessments at some point. As far as pass/fail rates, it all depends on how difficult the standard is and how well aligned the assessment is to the learning targets and instruction. Bottom line is don’t be concerned about a possible weak assessment. Be willing to gather additional data to be confident about your judgments. Hope this is helpful. JM

    Question from James Falvo, faculty, University of Phoenix:

    It’s debatable whether the traditional A-F grading scheme provides a valid assessment of academic performance. What assessment tool(s) would provide a better means?


    James H. McMillan:

    Hi James. Validity is key to good assessment. The question with grades is how they are interpreted and what they are used for. It really doesn’t matter that much whether you use rubrics, letter grades, or averages, the key is being clear about what they mean and what they should be used for. If the grade designates academic performance, but the teacher includes participation and effort in the grade, the inference about the performance is compromised, resulting in weaker validity. The reality of most grades is that they do include multiple things, most of which are important in the development of the students, so the inferences need to include everything that went into the grades. The older system used in many elementary classes, where you got separate grades for responsibility, etc., was quite good in my view. JM

    Question from Janet Jones, School Improvement Coach, Kiswaukee Intermediate Delivery Service:

    Could you speak to the use of Data Teams to drive the use of assessment data and student work to improve instruction, particularly at the high school level? Could you describe how to get these started at the high school level?


    Rachel C. Syrja:

    Data Teams is a highly effective process for analyzing the results of common formative assessments. It involves such steps as analyzing the data for strengths and obstacles, setting SMART goals, and determining instructional strategies. From my work with the Leadership and Learning Center, I know that there are many high schools across the country currently using Data Teams as a process for analyzing their data. The first challenge is determining how to form those teams; consider what works best for your particular high school. For example, does it make more sense for you to meet by content area or by grade level? Once you establish the teams, it is up to each team to assign roles and schedule meeting times. High school definitely presents more challenges when it comes to establishing Data Teams, but with the proper guidance and support, they can accomplish great things.

    Question from Jason Schwartz, Director of Publishing Systems, Pacific Metrics:

    What needs are today’s commercially available assessments not meeting that you believe could be addressed by the commercially available assessments of tomorrow?


    James H. McMillan:

    Hi Jason. Commercially available assessments are not, as advertised, doing very well at providing formative data. Some companies claim to do formative assessment, but all they do is provide results by item or subscale from tests covering the content. In the future, companies will have better products for formative assessment, especially when they systematize the process of tailoring instruction to the particular wrong response a student gives. Testing companies also need to be more responsible about providing subscale scores, along with the psychometric properties of the tests. Finally, companies need to find better ways of scoring constructed-response items. Hope this helps, Jim.

    Question from Terri Tomassi, MA Student Oxford Brookes University:

    What, in your view, are the main challenges U.S. teachers face when trying to implement formative assessment in current classrooms?


    James H. McMillan:

    Hi Terri, great question. For me it’s not a challenge to diagnose - the problem is in giving appropriate, individualized feedback that includes instructional correctives. It’s one thing to assess progress, it’s another to go the next step to tell students what further learning is needed. In a large, diverse classroom, this can be difficult. By using “clickers,” flags, or other indicators of understanding, teachers can get a better sense of the whole class. Hope this is helpful, Jim.

    Question from Julie Cooley, Assistant Superintendent of Student Services, Lake Forest Schools, IL:

    Can you give advice to teachers who would like to move away from excessive use of grades to more effective types of assessment for learning? What are the first steps? At the high school level, how does one deal with student and parent questions about less emphasis on grades?


    James H. McMillan:

    Hi Julie. You are right that it is a challenge to put less emphasis on grades. I’d recommend teachers provide exemplars, anchor performances, and rubrics associated with the grades to communicate more than just “good” or “excellent.” We need to live with grades; we just need to have them provide more specific information. First steps include getting teachers to talk about what goes into their grades to see if there can be some agreement on common practices, developing the exemplars and rubrics, and then working on ways to inform students and parents about what different grades represent. Hope this is helpful, Jim.

    Question from Kim Hoag, educator, 4th grade, Columbus West Elementary:

    After two sessions of testing (fall and winter) on a computer assessment called MAPS, some very good students went down in level. The students are tested in a classroom, three or four at a time while the rest of the class focuses on something else. Why would so many good students go down? If I do not claim responsibility for that, for whatever reason, then I cannot claim responsibility for those that went up. Is such testing worth the money, the stress, and the lost learning time?


    Rachel C. Syrja:

    In his book Classroom Assessment for Student Learning, Rick Stiggins states that there are different purposes and uses for assessments of learning and assessments for learning. Certainly these types of assessments of learning serve a purpose in education. However, the most valuable type of assessment is the teacher-created common formative assessment for learning. The results of those assessments can be used to guide our instruction on a daily basis. The results of the assessment you are referring to are more summative in nature. While they can definitely point to areas of deficiency, we should not depend solely on any one measure. My advice is to take a look at your own common formative assessments to see whether your data confirms the data from this summative assessment.

    Question from Cathie M. Currie, Ph.D., adjunct assistant professor, Adelphi University:

    Bloom et al.’s work on higher-level assessment has not been as much of an influence on testing as I would expect. Testing at the higher levels gives me a metric on how well my students are able to use the material I teach. Why is Bloom not used more? And what do other instructors use to test how their students use, rather than simply ‘know’, course content?


    James H. McMillan:

    Cathie, Bloom isn’t used more because it is not very teacher-friendly, nor is the more recent revision of Bloom. It’s too easy to get caught up in differentiating the levels, causing confusion. Get teachers to agree on three or four categories of cognition (e.g., recall of knowledge, application, and higher-order skills), and be sure it’s clear how they differ. Most teachers also don’t use detailed tables of specifications because it just isn’t efficient. Hope this is helpful. Jim

    Question from Jeff Jackson Education Director, OrgSync, INC:

    How do we know we are asking the right questions? I want to know what students actually learned, not what I think they learned.


    James H. McMillan:

    Jeff, good question (and the right one)! It seems to me that you may need to ask more difficult questions, ones that require deep understanding and an ability to apply learning to new situations. It is also helpful to ask colleagues to observe and provide feedback. Finally, test questions that involve novel applications, with answers that can be reviewed by colleagues, are a good indicator. Hope this helps. Jim

    Question from Michelle Bullard, student teacher, Flagler College:

    Having observed some classrooms, I’m really worried about the stress some children undergo when administered tests. A professor of mine did a ‘group test’ in class that I thought worked well. We sat in a circle and were called upon to answer or pass on a question. We all learned what we didn’t know, and he had the opportunity to see who knew the answers. How do you think this would work as a regular assessment in the elementary classroom?


    James H. McMillan:

    Hi Michelle. I’m afraid I wouldn’t do it in an elementary class unless it was a game meant for formative assessment. For summative assessments, all students need to have the same opportunity to show what they know and understand. Hope this helps. Jim

    Question from Delia Chumpitazi-Foye, ESL Teacher, New London:

    What are your thoughts on understanding and using data to inform instruction as it relates to ELL students? You need to ask: Are the results a true indicator of content knowledge and skills, or an indicator of English-language development?


    Rachel C. Syrja:

    First of all, I believe that English-language learners need to be assessed at their appropriate English-language-development level. If we need to assess a student’s knowledge of a particular concept or skill, then we need to use an assessment measure appropriate to his or her EL level so that the student can truly demonstrate that knowledge. Therefore, the key to effectively assessing ELLs is to have a high-quality measure for determining a student’s EL level in each of the four domains: listening, speaking, reading, and writing. We then need to assess our ELs on an ongoing basis throughout the year in order to adjust our instruction and assessment accordingly.

    Question from Sarah Mayeda, The Chicago Public Education Fund:

    Please comment on the “best practice” infrastructure requirements at the school level for supporting teachers in their effort to use student learning data to make instructional decisions (e.g. well-trained facilitators, common planning time, etc.) What are the primary elements that are required to ensure that teachers’ use of data is authentic and that it is focused on moving all students forward? Are there exemplary models out there in urban districts?


    James H. McMillan:

    Sarah, very good questions. There clearly needs to be commitment, and there needs to be departmental ownership. Of course, ongoing staff development focusing on formative assessment is essential. You also need sufficient planning time. I wish I could help more. JM

    Question from Joy, grad student, Kaplan University:

    What is the most appalling mistake that classroom teachers make with their assessments?


    James H. McMillan:

    Let me answer as well. The worst is averaging a zero with other scores. Another is assessments that are not sufficiently transparent, so that students aren’t sure what they will be assessed on or how they will be assessed. Another is using only one assessment for a grade, especially if it is not well aligned with the content and instruction. JM

    Question from Martha Gustafson, Program Specialist, CASAS:

    I often get this question from coordinators in the field: How do you get teachers really engaged and interested in looking at assessment data to inform instruction, even if all the resources are in place to facilitate this? Thank you.


    Rachel C. Syrja:

    The best way to get teachers really engaged is to have them experience firsthand the power of data analysis. I am a strong believer in finding a group of teachers who are already motivated, ready, and willing to learn how to use data to inform instruction. I work diligently with that group and have them share their success stories using data walls. Their data walls include pretest and posttest data showing the growth that students made, and they highlight the strategies that yielded these results. Teachers respond when they see a strategy working. This always helps me build critical mass.

    Question from Melissa Campbell, Graduate Student, Miami University:

    Can parents play a role in formative assessment?


    James H. McMillan:

    Melissa, good question! I’m not aware of much work that includes parents, but there are now many practical guides to formative assessment that may offer guidelines (check Corwin Press). It seems to me that parents could help elucidate what a student’s misunderstandings are. Hope this helps. JM

    Liana Heitin, teachermagazine.org (Moderator):

    That’s all the time we have for today. Unfortunately, we weren’t able to answer all the great questions we received, but thank you to our readers for participating. A special thanks to our guests, James McMillan and Rachel Syrja, for their insightful answers. The transcript of this chat will be posted shortly on teachermagazine.org.

    The Fine Print

    All questions are screened by an edweek.org editor and the guest speaker prior to posting. A question is not displayed until it is answered by the guest speaker. Due to the volume of questions received, we cannot guarantee that all questions will be answered, or answered in the order of submission. Guests and hosts may decline to answer any questions. Concise questions are strongly encouraged.

    Please be sure to include your name and affiliation when posting your question.

    Edweek.org’s Online Chat is an open forum where readers can participate in a give-and-take discussion with a variety of guests. Edweek.org reserves the right to condense or edit questions for clarity, but editing is kept to a minimum. Transcripts may also be reproduced in some form in our print edition. We do not correct errors in spelling, punctuation, etc. In addition, we remove statements that have the potential to be libelous or to slander someone. Please read our privacy policy and user agreement if you have questions.

    ---Chat Editors
