Sunday, July 14, 2013

The Data Monster

I've been thinking a lot about data since I came home from the PLC Institute in Minneapolis. As an English teacher, I've noticed the tendency in my discipline is to point out the difficulty of making meaningful information out of data.

After the Institute experience, though, I left feeling more convinced than ever that if we are going to move forward in helping all students learn at higher levels, we MUST start looking hard at the evidence of what students are learning. I must be willing to look at the quality of the writing my students are producing in comparison to the writing my colleagues' students are producing. The trick is making sure the data we are looking at is actually measuring something meaningful.

(Still, in the back of my mind is this Einstein quote: "Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted." This thought does not go away as I consider what it means to "focus on results," the third big idea in the PLC process. I have to let go of this a little bit and try to count some things I know matter, because I could cycle here forever if I allowed myself to.)

In the panel discussion on Monday afternoon, someone (Mike Mattos?) said that "I like" and "I feel" should really be banished from team meetings. We should use the evidence in front of us to make a decision and use the educational research to decide the best approach to take moving forward. If we don't know the answer, we should go find it. But sitting around and discussing our opinions on the best way to teach is not moving us forward.

There are components of this idea that make me want to stand up and shout, "Yes!" (Mostly, I feel connected to this idea when the evidence backs up my gut feeling anyway. This is human nature, right?) There are components of this idea that terrify me. Teaching is a people profession, and kids' feelings are involved. If we're looking at hard data all the time, what are we sacrificing? What important information might we be missing? At our building leadership team meetings, my principal is always saying, "What gets monitored is what gets done." I believe this wholeheartedly. So the caveat is that if we are monitoring data of student learning, and that is our primary focus of team meetings, we better make damn sure that data is measuring something worth measuring.

I think this is where we have to take a leap of faith as English teachers and tackle difficult questions about assessment. Together. We have to be willing to dive into the muddy pit of subjective grading. Together. If we don't, we'll just have a mound of data that doesn't really count for much.

The other piece of banishing "I feel" or "I like" from team meetings that I struggle with is the individual students, over the years, with whom I've trusted my gut instinct and helped grow. Yesterday morning I ran into a student at the park who just graduated from high school, whom I had in eighth grade English. He mentioned that he had run across a green index card in my handwriting with all the details of a personal narrative he had dictated to me. He later wrote his best piece of writing all year, but he doesn't remember this, nor does he still have it. What he remembers about that moment is how it made him feel: like someone was listening, like he mattered. If I put this in terms of "best practices," I'm sure there's something about how talking it out helps students become more fluent in their writing. But in that moment, I have no idea where I pulled the idea from. I just wanted words on the page. My point is that we don't always know WHY we are making the decisions we do with individual students. We are teaching professionals rather than robots because we are able to trust our intuition.

Intuition and data are not mutually exclusive. The conclusion I've come to (for now) is that the primary data that matters is the rigorous student work in front of us. If I used a strategy I can't place in the research, but it worked with a number of my students, that result is new data for our team: a strategy that might be worth trying. If I trusted my intuition, but the assessment shows that my students did not meet the target, then that's a problem. My students deserve the same access to improvement that other students have, whether or not I like the approach.

The power of data comes down to teachers remembering (again and again) that it's not about US. It's about kids and how well they are learning. If the data is helping us see our students' needs more clearly, we have an obligation to design assessments that get at the important stuff (even if it's harder for us), and be open to making decisions that are based on more than our likes and feelings.