“How’m I Doing?”: How to Tell if Peer Coaching is Working and if You’re Doing it Right

A Seemingly Simple Question

I’m the kind of person who likes feedback. I like to know if I’m headed in the right direction and if I’m doing the right things to get there. The late Ed Koch, former mayor of New York, used to ask people, “How’m I doing?” and it quickly became a sort of populist campaign slogan. As a teenager growing up in the Midwest when Koch was mayor of the Big Apple, I couldn’t relate to much of what his administration was about, but something about his slogan resonated with me. It was a sign of his commitment to serving constituents, and I found that admirable.

This quarter we’ve been focusing on peer coaching, and it too is a form of service. I think I’ve learned a lot about peer coaching and service this quarter. Our cohort has discussed how to ask questions, how to give feedback, how to establish norms, how to build trust, and generally how to be a successful peer coach. I’ve even had the opportunity to practice this at my school as part of our “community engagement project.” But as I’ve been working through this, I can’t help but wonder, “How’m I doing?”

Yes, there is feedback from my instructors, but it is based on my own reflections on events – hardly an impartial (or even accurate) source. I can rely on my collaborating teacher for feedback, but given that he has less experience with peer coaching than I do, I’m not sure how well his feedback matches what I am expected to do as a peer coach. So I’m not sure my question about how I’m doing is really being answered adequately.

I’m pursuing this question not merely to be pedantic (though it would not be the first time I was accused of that), but because it matters. As digital ed leaders who are tasked with employing this method of leadership, we need to 1) make sure that we are doing it right ourselves, and 2) make the case to our administration that the process is worthwhile; that the resources and energy spent on peer coaching programs generate results for teachers AND students.

The ISTE coaching standards clearly mandate the first part of this in standard 6c, which says ed tech leaders should “Regularly evaluate and reflect on their professional practice and dispositions to improve and strengthen their ability to effectively model and facilitate technology-enhanced learning experiences” (ISTE). So my question revolves around the evaluation part of this process (I think we’ve done a fair amount with the reflection piece). Who does this evaluation, and how is it carried out? What are the standards against which we are evaluated? We’ve discussed a number of topics related to peer coaching, but measuring our own success remains elusive.

Les Foltos touches on the second aspect in chapter 10 of his book on peer coaching when he discusses the importance of communicating the successes of peer coaching, warning that “Their [peer coaches’] successes may wither on the vine, and coaching may never become more than a small educational experiment if coaches fail to communicate about their coaching work with the school’s leaders and staff” (Foltos, p. 174). But can we substantiate or quantify these successes? Or are they all anecdotal stories about how teachers started using technology in their classrooms? If so, is this enough – should this be enough – to persuade our administrators to continue such programs?

One Approach

Elena Aguilar, founder and president of Bright Morning Consulting and author of The Art of Coaching: Effective Strategies for School Transformation, provides an answer to at least part of my first concern. With regard to assessing coaches, she provides a “Transformational Coaching Rubric” that attempts to gauge the progress of academic coaches in schools. She breaks the progress of coaches down into five levels:

Beginning: The coach is talking about the strategies, demonstrating awareness of them, and may occasionally try them out.
Emerging: The coach has begun to use these strategies, but is inconsistent in usage and effectiveness.
Developing: The coach consistently uses these strategies and approaches; employing these practices leads to meeting some coaching goals.
Refining: The coach’s usage of the strategies and approaches is deeply embedded in the coaching practice and directly results in meeting goals.
Modeling: The coach’s practice is recognized as exemplary and is shared with other coaches; the coach shares and creates new knowledge and practice.

Aguilar then identifies six basic areas in which a coach’s progress can be assessed:

1. Knowledge Base – Coach understands and applies a set of core coaching knowledge components.

2. Relationships – Coach develops and maintains relationships based on trust and respect and demonstrates cultural competency in order to advance the work.

3. Strategic Design – Coach develops strategic work plans based on data and a variety of assessments. Coach is continuously guided by the work plan, makes adjustments as necessary, and monitors progress along the way.

4. The Coaching Conversation – Coach demonstrates a wide range of listening and questioning skills. Coach is able to effectively move conversations toward meeting the client’s goals.

5. Strategic Actions – Coach implements high-leverage strategic actions that support client in reaching goals and uses a gradual release of responsibility model to develop a client’s autonomy.

6. Coach as Learner – Coach consistently reflects on his or her own learning and development as a coach and actively seeks out ways to develop his or her skill, knowledge, and capacity.

These categories are subsequently broken down into smaller indicators (“elements”), each of which is evaluated according to the levels above, with additional space to provide specific evidence of meeting the standard. While this rubric is helpful, it’s unclear exactly who is supposed to fill it out. Perhaps it is the coaches themselves, or perhaps it is a supervisor or administrator. If it’s the former, I wonder how accurate the results will be, as it’s essentially self-reporting. If it’s the latter, I wonder how effective the norms of communication will be and how trust will be established if there’s another person – possibly an administrator – present to evaluate the coach.

Another issue I have is that, upon closer inspection, it’s not entirely clear that this rubric is geared for “peer” coaches. The tone is one of customer/client rather than peer-to-peer. In fact, the word “client” is used exclusively to refer to the person being coached. Since a “client” is not really a peer, this rubric may have some shortcomings as it applies to our implementation of the coaching process. Still, it contains many of the elements we have discussed in class this quarter (building trust, confidentiality, asking questions, reflection, etc.) and can be a beneficial place to start – if only for thinking about how good coaching manifests itself in practice.
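To make the rubric’s mechanics concrete, here is a minimal sketch of how its levels and categories might be represented for a self-assessment. The level and category names are Aguilar’s; the sample elements, scores, and evidence notes are hypothetical placeholders I invented, not her actual indicators.

```python
# A sketch of the Transformational Coaching Rubric as a simple data structure.
# The level and category names come from Aguilar's rubric; the sample
# "elements," scores, and evidence notes are hypothetical placeholders.

LEVELS = ["Beginning", "Emerging", "Developing", "Refining", "Modeling"]

CATEGORIES = [
    "Knowledge Base",
    "Relationships",
    "Strategic Design",
    "The Coaching Conversation",
    "Strategic Actions",
    "Coach as Learner",
]

# A self-assessment maps each element (grouped by category) to a level and a
# note of supporting evidence, mirroring the rubric's evidence column.
self_assessment = {
    "Relationships": {
        "Builds trust with the teacher": ("Developing", "weekly check-ins established"),
    },
    "The Coaching Conversation": {
        "Asks probing questions": ("Emerging", "still defaulting to giving suggestions"),
    },
}

def summarize(assessment):
    """List each scored element, lowest levels first, to show where to focus."""
    scored = sorted(
        (LEVELS.index(level), category, element, level, evidence)
        for category, elements in assessment.items()
        for element, (level, evidence) in elements.items()
    )
    for _, category, element, level, evidence in scored:
        print(f"[{level}] {category} / {element}: {evidence}")

summarize(self_assessment)
```

Sorting from lowest to highest level gives a quick picture of where to focus next, which is about all a self-assessment like this can honestly claim to do.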

Another Approach

A second approach I found, which focuses more on the second part of my inquiry – providing evidence to administration of the benefits of coaching – comes from Hanover Research’s report for Iowa’s Area Education Agencies (AEA). The AEA was created by the Iowa state legislature in 1974 to study special education in response to Section 504 of the 1973 federal Rehabilitation Act, which mandated “free, appropriate, public education for children with disabilities.” The 2015 Hanover report, “Best Practices in Instructional Coaching,” deals with academic coaching in great detail and contains a section on “Evaluating the Impact of Coaching.” The report echoes what Foltos says about the importance of advocating for coaching programs, but focuses on more formal evaluations: “Evaluating coaching models allows programs to demonstrate progress toward identified goals and prove the value of the program as a worthwhile use of limited resources” (Hanover).

The report suggests three major types of data surrounding the effectiveness of coaching:

Product – did you get the outcomes you hoped to find?
Process – how well did coaching serve each of the parties involved?
Inputs – what was invested in the program (e.g., frequency of meetings, content and quality of coaching)?

These three criteria read almost like a corporate review, but schools would do well to examine the outcomes they hope to achieve, consider whether stakeholders’ interests are being met, and calculate the total cost of the program when they look at peer coaching. I think peer coaching programs can stand the scrutiny and they will be endorsed once all the data is in. This framework provides a solid basis for that examination.
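To see what gathering these three types of data might look like in practice, here is a minimal sketch. Every field name and figure is an invented placeholder for illustration; none of it comes from the Hanover report.

```python
# A sketch of logging the Hanover report's three data types (inputs, process,
# product) for a coaching program. Every field name and number below is an
# invented placeholder, not a figure from the report.

program_data = {
    "inputs": {                            # what was invested in the program
        "coaching_sessions": 24,           # frequency of meetings
        "coach_hours_per_week": 5,         # time invested by the coach
        "training_cost_usd": 1200,         # direct program expense
    },
    "process": {                           # how well coaching served each party
        "teacher_satisfaction_avg": 4.2,   # e.g., a 1-5 survey of coached teachers
        "sessions_with_action_steps": 18,  # conversations that produced next steps
    },
    "product": {                           # did you get the outcomes you hoped for?
        "teachers_changing_practice": 4,   # self-reported shifts in instruction
        "lessons_redesigned": 9,           # concrete classroom outcomes
    },
}

def report(data):
    """Print a one-page summary an administrator could skim."""
    for category, metrics in data.items():
        print(category.upper())
        for name, value in metrics.items():
            print(f"  {name.replace('_', ' ')}: {value}")

report(program_data)
```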

The report also cites several theorists and their takes on evaluating the coaching process, including the aforementioned Elena Aguilar, who “recommends that evaluation efforts focus on data related to student learning, but also on teacher outcomes.” The report also cites Leanna Harris, another consultant, who cautions: “You have to have a very soft view of data and if student achievement is improving and teachers are reporting making changes in their practice voluntarily, I’d say you’re well on your way.” So while there appears to be some merit to measuring the success of coaching by checking on students, the greater emphasis seems to be on how coaching affects teachers.

The impact of peer coaching on students, the report concedes, is difficult to measure: “According to one district administrator at Dysart USD, ‘It’s hard to isolate coaching as a variable on student performance [even though] we’ve gathered data on effective teachers based on student achievement.’ However, despite these difficulties, the exemplary districts reported tying a variety of positive impacts to their instructional coaching program.” The positive impacts on the overall education program, according to the study, include “improved teacher retention and cost savings, improved district and campus academic performance, improved graduation rates, and improved campus collaboration.” Unlike some of the other measures of coaching success, all of these factors can be objectively quantified and thus can be used to support a case for peer coaching if the numbers improve. It may not provide individual insight, but this kind of programmatic overview can provide valuable data.
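Since the impacts listed above are all countable, a first pass at making that case could be as simple as comparing them before and after the coaching program, as in the sketch below. The figures are invented for illustration only.

```python
# Year-over-year comparison of the quantifiable impacts the Hanover report
# lists (retention, academic performance, graduation rates, collaboration).
# All figures below are invented for illustration.

before = {"teacher_retention_pct": 84.0, "graduation_rate_pct": 88.5}
after = {"teacher_retention_pct": 89.0, "graduation_rate_pct": 90.1}

for metric, baseline in before.items():
    delta = after[metric] - baseline
    label = metric.replace("_pct", "").replace("_", " ")
    print(f"{label}: {baseline}% -> {after[metric]}% ({delta:+.1f} points)")
```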

So How AM I Doing?

If my immediate question was “How’m I doing?” I still don’t have an immediate response. The rubric from Aguilar at Bright Morning can be helpful, though it may not be specific to our particular brand of peer coaching, and if it’s a self-evaluation, that puts me in somewhat the same position I’m in now. If it’s an external review, then it will certainly impact the coaching process – and probably not in a positive way. The Hanover report for the Iowa AEAs can provide crucial data on the success of an entire program over time, but as the only coach in a small school trying this for the first time, I don’t have enough data to be statistically significant at this point.

So I’m back to self-reflection. Perhaps that’s for the best. It’ll make me think about what I’m doing and try to discover where I fall short. I guess this will all come with time. Les Foltos says in chapter 9 of his book that “much of their success as coaches comes from what they learn on the job. Many peer coaches feel somewhat overwhelmed as they start coaching peers, but they feel much more effective after a year or more of classroom coaching experience” (Foltos, p. 164). I’ve got the “overwhelmed” part of that equation down pat – now I just need to get through the first year, and hopefully I’ll be better next time. It looks like the only person who can really tell me “how’m I doing?” is me.

 

Aguilar, Elena (2013). “Transformational Coaching Rubric.” From The Art of Coaching: Effective Strategies for School Transformation. Retrieved from: http://brightmorningteam.com/wp-content/uploads/2017/09/Transformational-Coaching-Rubric.pdf

Foltos, Les (2013). Peer Coaching: Unlocking the Power of Collaboration. Thousand Oaks, CA: Corwin.

Hanover Research (December 2015). Best Practices in Instructional Coaching: Prepared for Iowa Area Education Agencies. Retrieved from: http://www.esc5.k12.in.us/index.php/inside-wvec/documents-and-forms/resources-for-instructional-coaches/856-best-practices-in-instructional-coaching-iowa-area-education-agencies-1/file

Iowa’s Area Education Agencies. Website. Retrieved from: http://www.iowaaea.org/

ISTE (2011). “ISTE Standards for Coaches.” International Society for Technology in Education. Retrieved from: http://www.iste.org/standards/standards/standards-for-coaches

Peer-Ed (2015). “Defining 21st Century Learning.” Retrieved from: https://docs.google.com/document/d/1x6hkuLvVKoXb3FCOn_rOtMKDi08Ryr3gMKko_LyZSz0/edit

 

3 thoughts on ““How’m I Doing?”: How to Tell if Peer Coaching is Working and if You’re Doing it Right”

1. Some thoughtful reflection in this post. Aguilar offers a formal mechanism for evaluating a coach, and as you note, it can be useful for self-reflection as well. The AEA report is great for looking at data as part of the evaluation process. But many coaches are like you: they are just beginning, or they work with one or two peers, often on just a few learning activities in a year. So student data for the school is unlikely to track the impact of coaching, and even if the data is broken down by teacher, it may not reveal much. Changing teacher practice is recognized as an essential precondition for improved student learning. What can you and your learning partner do to track changes in teacher practice? Is there a role for the school administration in observing these changes over time? If so, how can they play this role?

2. It seems like there would have to be some middle ground in evaluation here – maybe a combination of self-reflection and reported results, whether from teachers, student achievement, or both. I will have to look into both of your resources to possibly use in my practice with the other coaches I work with. It is interesting that Les asks above how a coach and teacher can track changes in teacher practice. I wonder if that, combined with some other metrics, would work well?

3. When Lezlie was “assessing” me, her big-picture items were: did I make her feel supported, and did I make her feel judged? (And of course you hope the answers are yes and no.) I think those are far more general than what you are looking for, but I think of them as nice guiding assessment questions as I think about other ways to assess.

    You said, “I think peer coaching programs can stand the scrutiny and they will be endorsed once all the data is in.” I agree!
