Opinion Blog

Classroom Q&A

With Larry Ferlazzo

In this EdWeek blog, an experiment in knowledge-gathering, Ferlazzo will address readers’ questions on classroom management, ELL instruction, lesson planning, and other issues facing teachers. Send your questions to lferlazzo@epe.org. Read more from this blog.

Curriculum Opinion

Author Interview: ‘Making Evaluation Meaningful’

By Larry Ferlazzo — July 09, 2017 8 min read

PJ Caposey agreed to answer a few questions about his new book, Making Evaluation Meaningful: Transforming the Conversation to Transform Schools.

Dr. PJ Caposey is an award-winning principal and superintendent who is an expert in teacher evaluation, school culture, personalized learning, and student voice. Find him on Twitter @MCUSDSupe.

LF: You talk about the importance of giving teachers feedback using a “why, how, what” formula. This seems to me to be a good strategy for teachers giving students feedback, too. Can you talk about this method, including an example or two?

PJ Caposey:

I stole this as my preferred method of communication from Simon Sinek’s TED Talk, and I attempt to route all of my communication this way. I focus on the relevancy of the message first and then work backward to the concrete initial thought. For instance, compare these two pieces of very similar feedback and determine which you think is most likely to cause growth and/or change.


  • STATEMENT 1 - It was unclear if the students knew the learning target for the day. Nothing was written on the board and nothing of the sort was heard during the observation. Please address this before the next observation.

  • STATEMENT 2 - When students are clearly aware of what they are supposed to learn, their probability of success increases. There are multiple ways you can accomplish this, but I suggest that you explicitly state the learning target verbally and present it visually as well. Doing so will ensure students are aware of their purpose for attending your class each day.

The statements essentially say the same thing, right? But the manner in which we say what we say matters. I prefer that evaluators start from Why and travel to What, but as long as they follow the NERD format somehow, I will be happy. All feedback should contain the nugget (N), evidence (E), and relevance (R), and then be done (D).

N - Nugget

E - Evidence

R - Relevance

D - Done—STOP WRITING!!

LF: Fortunately, I work in a district that encourages a collaborative teacher evaluation process between administrator and teacher, and which encourages teacher-initiated goal-setting. Unfortunately, many educators work in districts where evaluation is a top-down checklist used to determine their fate. In your book, you share lots of great ideas on how administrators can handle those kinds of tools more effectively, and I’ll ask you about those ideas in a minute. But what are your suggestions for teachers who are on the receiving end of these evaluation instruments and whose administrators haven’t read your book or who might not necessarily agree with your recommendations?

PJ Caposey:

First, buy them a copy of the book . . . :) In all seriousness, the evaluation process is about the teacher. The question is whether or not the administrator realizes that. In my experience, that answer is mixed and is largely why I wrote this book. Regardless of the evaluator’s behavior, however, the teacher has a mindset decision to make.

Are they going to look at the evaluation as an assessment of their value to the organization, or are they going to look at it as a systematic opportunity for them to grow?

If an individual’s mindset is about personal growth and their administrator is not equally invested, the leadership of the situation simply changes hands. If leadership is equally invested, then synergy can take place and great results occur. If the teacher is forced to lead their own personal development, leveraging the evaluation process to improve their performance, there are a few key points for them to remember.



    • The purpose is to grow, not score better on the evaluation framework. These two things should align, but the focus should be altruistic—not rating chasing. It may seem like semantics—but it profoundly changes the work.

    • Goal-setting is a must. Use the data—whatever level of feedback is provided—to formulate concrete goals for your personal growth.

    • Seek an accountability partner. This, hopefully, is your evaluator. If they are uninterested, too busy, or non-committed, it is fine to look elsewhere. This person should be monitoring your progress toward your stated goals consistently between evaluations.

    • Peer-to-peer or video observation may be the best way to stimulate your own growth. In the case that the evaluation process is completely devoid of feedback, find a way to keep grinding forward. Posting a video of yourself on YouTube and then asking for feedback on Twitter is a sure-fire way to get feedback from people whose only interest is to help you get better. It may be as pure a feedback system as exists.

LF: For administrators who are open to your ideas, what would you suggest might be three key “takeaways” from the book that they keep in mind?

PJ Caposey:

To be honest, I think that there is so much wrong with most evaluation systems that it is truly hard to come up with three. That said, if pushed to identify only three takeaways I hope everyone leaves with they would be:


  • If you are not finding meaning in the current process or you believe your teachers are not finding value in the process you must change your own behavior in order to achieve different results. You are responsible for you. The book can help facilitate your change—but the decision to change must come from the individual reader.

  • Assume teachers want to be great. If you operate from that paradigm, two things happen. First, you communicate differently. You set high ceilings and aim to provide support instead of trying to fix something that is broken. Communicate for your teachers, not to fix them. Second, when you believe teachers want to be great, feedback changes. Evaluators then provide concrete examples for growth instead of issuing a statement such as “Work to engage your students more” and calling it effective feedback. TRUST ME: if the teacher knew how to “engage their students more,” they would already be doing that.

  • Fix the pre-conference process. The book outlines this in great detail, but if the pre-conference is just a protocol you follow that details the upcoming lesson, it is a waste of everyone’s time. Think about it through this lens: how can I best use the time with the teacher before the lesson to facilitate future growth? Through that lens, it is hard to imagine many pre-conferences would look the way the vast majority of them currently do.

LF: Are there some districts you know of that you think are doing teacher evaluation well? What made the difference in those places?

PJ Caposey:

I am glad you asked this question. The answer is a somewhat reluctant no. Everything is on a continuum—so certain districts are doing a really nice job, but I have not seen any district that is universally ‘killing it’ when it comes to evaluating teachers. This is a subtle theme in the book—doing evaluation well is dependent upon good processes. However, even with good processes, the evaluation protocol is extraordinarily dependent upon excellent personnel performance.

To explain, it would be very difficult (impossible) for an evaluator to do a wonderful job if the system and processes are not put into place at the district level or are not functioning well. However, even with ‘perfect’ processes designed, evaluations are as good as the person administering them. That is why Marzano and Danielson’s frameworks have not had the enormous positive impact many would expect. Think of it this way—I have a better chance of playing golf well if I have all the best equipment (metaphorically quality frameworks and processes in this case). But, since I am horrible at golf—even great equipment does not make me great at the sport.

So, the short answer to your question is as follows. Many districts have great frameworks in place and solid processes. However, I have yet to work with a district where every single evaluator gets ‘it’ or wants to get ‘it’, provides great feedback, and is truly concerned with developing every teacher they serve.

LF: If you had to identify the one biggest common mistake you have seen in teacher evaluation programs, what would it be and how should it be fixed?

PJ Caposey:

I will break this down by employment group:



    • Superintendents or District Office - Not reading the evaluations (or hiring a third party to read them) to identify themes and areas of growth for each evaluator; in short, failing to invest in what is a time-intensive, stressful, and costly process.

    • Evaluators - Focusing on either ‘just getting done’ or on assigning a rating instead of teacher growth. Teachers matter and the quality of teaching matters. We must invest time, effort, and energy into the relentless pursuit of growing our teachers.

    • Teachers - Being inauthentic with answers or even with the lesson observed. If you do your best one day out of 175 when the administrator is there then you forfeit your opportunity to receive feedback that can help positively change your career. The same goes for answering questions with the ‘right answer’ instead of what you actually do or believe. The evaluation process is about growth—only when the teacher allows it to be.


LF: Is there anything I haven’t asked you that you’d like to share?

PJ Caposey:

Two quick things:

First, as educators and educational leaders, we must think. Deeply think through everything that has become tradition, standard practice, or common protocol, and ensure that it is truly benefiting the organization. If your evaluation process is one large hoop to jump through, STOP. Recalibrate and move forward. The process is too important and too stressful to just keep doing the same thing over and over again.

Second, doing the process exceptionally well versus pretty poorly is not considerably more resource-intensive. To say it another way: evaluating significantly better does not take a significant amount of additional time or money.

LF: Thanks, PJ!


The opinions expressed in Classroom Q&A With Larry Ferlazzo are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.