To Blog or Not to Blog - That is the Question

As a part of our session on growing and enhancing learning communities using Web 2.0 tools at NSDC, we are introducing participants to a variety of tools within the context of the dispositions of practice.  One tool that we are sharing is blogs.

For me, blogging has been a way to put my thoughts out to the greater world.  Sometimes I get feedback in the form of comments, sometimes I don't. (That makes me very sad by the way! Post a comment! Just don't post spam!)  Either way - the very act of putting my question out there, of writing about my thought process helps me to clarify my work and the vision I have for that work.

When I began blogging, there was the initial fear of "what if I say the wrong thing" - particularly as I blogged about my work.  Then I ran into the problem that my chosen blogging platform was often blocked in my district (very frustrating!).  However, I persevered and my blogging continues.

I find that I blog less often than I used to - perhaps because I tweet more? - but I am always reflecting on my work, which usually results in a blog post.  As a result, blogging is one of the most important tools that I use.

If you are new to blogging, what are your concerns or questions about blogging? What most intrigues you about blogging?

Vision in Education

A colleague posted the following on Facebook and Twitter the other day:

"Is there any reason for a lack of vision in education?"

At a recent meeting, the keynote speaker asked us several questions about our organization and one of them was:
 "Who would you follow?"

Who would you leave the safety and security of your tenure and seniority for? Who is that person in your organization that you want to work with - to plan with - to dream with?



I have been thinking a great deal about vision, or the lack of it, in our educational systems for quite a while.  As a Fellow in Communities for Learning, I work with a Framework that has been developed to look at school improvement, and vision is one small part of it.  But I am beginning to think that if I had to weigh the parts, vision would be a pretty important one.

I am not just talking about a "vision" statement that is created by a committee and done for the sake of compliance. I am talking about an understanding of where we want to be - what it will look like, sound like, be like for students, teachers, administrators and parents.  Something that we hold in our sights, revisit and refine regularly, reflect on at the end of a good day and a bad day.  Something to move toward.

I am not sure that I have seen one of those.  I have met lots of teachers and leaders who are trying, but clarifying vision and making it something manageable and achievable can be daunting.  We might speak it once in a meeting and then never again after we have been given "the look" by our colleagues or leaders.  Or we might push for it - every day and in every way - only to be disappointed in the obvious lack of vision that we are handed.

I know that I don't have the answers - but I love the fact that more and more people are posing the questions. What I am really interested in, though, is

What are we going to do about it?

Teaching as a profession

The following is an exchange from an #edchat on Twitter last night:



I am going to be the first to admit that the heat of the conversation and being limited to 140 characters did not highlight my ability to word things well - but what about those bigger questions?

  • Is there a difference between a learner and a teacher? Between a learner and a student?
  • What is a profession? Who are professionals?

Why I Love Rubrics

Theresa and I started Grand Rounds as a place to discuss educational research and professional development. Slowly, I moved away from the blog and into a PLN based primarily on Twitter to accomplish that goal. Several times, I’ve jumped into Twitter conversations around topics of interest but last night, I watched a conversation about rubrics fly by on Tweetdeck and self-censored. Knowing I wouldn't be able to say what I wanted to say in 140 characters, I returned here to Grand Rounds to lay out my argument for why, frankly, I love rubrics.

Seven years ago, I was a new staff developer, fresh from the classroom and attending a series of workshops that my new office was sponsoring. The program was school based, so I sat among teachers who had a long history together and was privy to their student work, curriculum tasks, and conversations. The theme of the program was “Communicating Expectations,” and when rubrics were first mentioned, it was as a tool, not as an end unto itself. After a series of activities around expectations and feedback, including a discussion around measuring work that seemingly can’t be measured, we started to work with a task the teachers had recently assigned. It was an authentic task that involved creating, exploring, communicating - a whole slew of skills and tasks. They brainstormed what they expected from their students, organized what students actually did by how closely it approximated those expectations, articulated the attributes of the work that met their expectations, and slowly but surely, built a rubric. Teachers then took the rubric they wrote, modified it for a future task, and came up with a plan for using it with students. When they returned to the next session, almost every teacher spoke of the improved quality of student work and the clarity of language between teacher and students. Students used the rubric as a gauge for assessing the distance between their work and what the task required. The teachers weren't using rubrics for all tasks and they weren't treating them as some sort of holy grail.

I was hooked. Since then, I've seen numerous examples of high quality rubrics being used by students and teachers. I use them regularly in my work and will continue to advocate for taking the time to design high quality rubrics for worthy tasks. When I read blogs, tweets, and books that are anti-rubric, I almost always agree with their dislike of the things they are describing. But frequently, what I see people describing aren't rubrics, they're checklists. So to me:
  1. The rubric itself is the least important part of the process. The sheet of paper is the product of a process of articulating expectations for student learning and work.

  2. Any rubric that hasn't been checked against student work, developed with students, or revised based on student feedback is still in draft form.

  3. The language describes the quality of a piece of work - not the quantity. Some, few, and many are quantitative terms and slippery to define. To me, a rubric's purpose is to articulate expectations of success - so a student working on a task will know what they need to do to improve their work. The language needs to reflect that goal.

  4. The language of the rubric focuses on what is present, not just what is absent. ("Includes irrelevant material" versus "doesn't stay focused on topic")

  5. The highest level describes what exceeds the standard or expectation, and often includes language about "breaking rules" or a "new and unexpected" approach to the task.

  6. The task is worthy of a rubric. That's a value-loaded statement, so to clarify - not all tasks need a rubric, and a well-written rubric takes time to create. Generally speaking, I use rubrics for authentic, process-oriented tasks that are similar to real-world tasks.

  7. We need to be critical consumers of rubrics that are available in the cloud.

I won't go into the rubrics-in-writing debate, as far better writers than I have tackled it (I recommend reading Ruth Culham and then Maja Wilson for two takes on that particular issue), but I will state explicitly that I think rubrics are among the best tools available for articulating expectations in a way students can refer to when their teacher isn't around.

For a more recent view from both sides, check out TeachPaperless' Why I Hate Rubrics/Rubrics Were Great (especially the comments) and Two Arguments for Using (Some) Rubrics, and please share your thinking around the sticky wicket that is rubrics.

Tornado Testing

While in Connecticut for a Communities for Learning session last week, we received the updated, revised testing schedule for the 3-8 assessments in New York. The air was immediately charged as building level leaders let the new schedule sink in and began to think about the implications. After airing our frustrations, we did what we always do and set about getting the work done.

In my region, we have a strong history of regional scoring, so these changes mean that my team will need to develop a new calendar of scoring dates to assist the districts. We have sent out a survey to our districts regarding participation and are trying to determine the best way to handle the tight testing/scoring window. I am confident that with input from our districts we will determine a way to get this task done, although I am concerned that we will lose the professional development aspect that has been the cornerstone of what we do.

Working in a position that requires me to pass along information from SED to our districts and then work to make their directives reality, I tend to walk a fairly careful line with my thoughts and actions. Often, I try to put the realities of a state office into perspective for our districts and get to what the true intent of their decisions is, not how they are actually implemented, funded or twisted by media coverage. I remind teachers about the importance of standards - while we wrestle with making meaning of their broad guidelines and inconsistency across grade levels. I remind administrators that the 3-8 testing system will, over time, give us important information about a cohort of students that we can use to address issues in curriculum as well as remediate using data, while bracing myself for the "Business First" month of coverage. I share, in a user-friendly format, the regulations that deal with mentoring, AIS, RTI and every other mandate there is, pushing my districts to think outside the box and find where they are already doing these things, while fighting off the complaints that these are all unfunded and the question of how districts and teachers are supposed to do all of this.

But lately - I am feeling a little like the aftermath of the tornadoes that recently hit my area. Changes have come upon us with little warning, the path is unpredictable and the aftermath is going to require a great deal of clean-up.



I am still processing all of the information and trying to help our districts find a way out of the storm. But it is becoming harder and harder to defend the wizard.

New York State moves 3-8 testing to May

It started with a question on a listserv, sort of like a rumble in the background. I spent a lovely day doing program design and then visiting The Cloisters in New York City. My cell phone battery died mid-morning, so I missed the rumble rising to a dull roar. By the time I got home and back on-line, the roar had crested and the conversations regarding implications were already in progress.

All New York State students in Grades 3-8 will be taking the mathematics and English Language Arts assessments in May beginning next year.

The rumor was confirmed on the DATAG listserv with the following message:
Johanna Duncan Poitier just sent out a special edition issue of News and Notes which provides important updates from the June meeting of the Board of Regents to District Superintendents, Superintendents of Schools, Administrators of Charter and Nonpublic Schools, and Other Partners which included a confirmation of the Regents action earlier today moving the 3-8 ELA and Math assessments to May starting next year.
I’m sure more will be released in the coming days, including guidance on how schools should handle administration, scoring, data reporting, and other aspects of the assessments. To hear what others were thinking, I connected with my PLN on Twitter and Facebook and the responses were similar. Lots of surprise that we went from survey to action so quickly, panic at the thought of 3rd graders sitting through 5 straight days of testing, and bafflement about what it will look like in practice. Talking through the consequences has been fascinating. Some of the comments from those conversations are below. I linked to the author's blog when possible:

Positive Consequence: Teachers can now teach Math and ELA all year long. April Spring Break can provide a natural break between teaching content and teaching students test sophistication or test-wiseness.

Negative Consequence: Eighth graders may conceivably be testing (Math, ELA, Science, and SS) more than learning during the month of May. (Angela)

Positive: The assessments can be viewed as a one-shot deal that happens at the end of the year. A chance to show off what you know, like the big kids in high school.

Negative: A nine year old probably won’t see it that way. (Theresa)

Positive: Weather is less likely to impact testing administration.

Negative: Scoring all assessments at the same time might lead to more than one testing and assessment coordinator cowering in a corner, whimpering.

Positive: The media will report on test data once a year, rather than twice. (Erin)

Negative: People may perceive this as a response to the increase in scores. A way to shake things up so students don't get too used to the test.

I expect the conversation will crest again as the press reports the change. I, like many others, have lots of questions. I wonder how students feel about the change. Were students given the chance to respond to the survey? Angela wonders if it's time to combine ELA and Social Studies into one assessment. What about the impact on final exams, especially given the recent article in The Buffalo News about schools using SED scores in student averages? ... and many more.

Your thoughts?

Change or Sleight of Hand?

According to news reports this past week, 46 states, including New York State, have agreed to “the process and development of voluntary, common standards.” Said standards would be in draft form for review by July 2009, with grade-by-grade standards available in December 2009. States would have three years to adopt and implement those standards.

Now, I am not against national standards per se, I just want standards that are manageable, measurable and relevant. Having waited anxiously for the revision to the NYS ELA standards since last June, I find it hard to believe that a national group with appropriate representation is going to be able to reach consensus and produce something that will meet those criteria in a matter of a few weeks. Sadly, I became even more skeptical when I read the actual agreement drafted by the Council of Chief State School Officers and the National Governors Association Center for Best Practices which contains criteria for said standards, which will be:
- Fewer, clearer, and higher, to best drive effective policy and practice;
- Aligned with college and work expectations, so that all students are prepared for success upon graduating from high school;
- Inclusive of rigorous content and application of knowledge through high-order skills, so that all students are prepared for the 21st century;
- Internationally benchmarked, so that all students are prepared for succeeding in our global economy and society; and
- Research and evidence-based.


It has taken a year for NYS to review and develop a plan for their revision of state standards in ELA. The last time I was in Albany I was told the committee was still discussing them and “tweaking” what they had before roll-out for public comment, now scheduled for Fall 2009. We have had some sneak peeks at what to anticipate, such as the fact that we will now have a Literacy and Literature strand and that in addition to Reading, Writing, Listening and Speaking, we will now add Viewing and Presenting.

The latter two are very similar to the National Council of Teachers of English (NCTE) Standards for the English Language Arts, which frankly don’t look too terribly different from what NYS currently has in place. So the cynic in me doubts whether there is going to be any real change when it comes to the standards. If NYS has already put significant time and energy into developing these revised standards, yet has agreed to develop the national pieces (of which 85% should be adopted by the states voluntarily) – is there going to be real change or are we just going through the motions?

Technically Writing

I spent last week helping some of my teammates lead the regional scoring of the NYS Assessments in Mathematics. It was quite a relief to just have to "be there" this year as opposed to having to play an active role in the training! And that allowed me to really listen to and think about the conversations the teachers were having about student answers.

Being a social studies teacher with a passion for all things writing, I was struck by how much writing is required of students on the math assessments. Interestingly, it was often the writing that prevented students from receiving full credit on some of the answers. Many of the teachers complained about this, particularly as we moved on to the middle grade levels. Often, I heard comments like these:

"These kids clearly knew the answer - I don't know why we can't just give them full credit."

"These scoring guides penalize the students who can 'do math' in their heads and don't need to show their work."


I understand their arguments, but I also know that NYS is trying to emphasize (through the standards as well as the assessments) the power of communicating in math. And in order to communicate well in math, students must do so using the very technical language of math. While these teachers saw the issue as one of math (and sometimes of reading), I saw it as students not being able to express themselves in mathematical language. A few examples:

When asked to express their answer in exponential form, several students would provide the correct answer when writing "three to the sixth power" but could not receive full credit because they did not write it in the correct form.

Some students would write to explain how they found a particular answer, but in a somewhat vague manner such as "because you have to find the straight 180 so you would subtract." Teachers would argue that it is evident that the students understood the notion of supplementary angles, but in reality there is not enough detail in this statement. The straight 180 what? Subtract what? From what?



Writing in the disciplines is very content specific. I have long held the belief that each content area has its own literacy that goes beyond merely teaching students to use the English language. In social studies, students need to be able to read and communicate about maps using correct terms. They need to understand the symbolism in political cartoons and the trends in charts/graphs. In science, they need to understand scientific notation, the symbols in chemistry and how to write a chemical equation. I could go on and on but you get the picture. Each discipline requires a highly technical language and one that we must explicitly teach our students. Each of us truly is a teacher of literacy.

It didn't seem the appropriate time to share with my math friends my thoughts about the difference between having students muscle through the math to come up with a correct answer and having them share their understanding of the process and the relationships between numbers in written form. But you can bet that I will be learning more about the technical writing in other subjects so that I can help them teach writing there as well.

Cross-posted on Writing Frameworks.

EQ: Teacher Leadership

What does it take to truly develop teacher leadership?

When this question leapt off the page for me at a recent Communities for Learning session, it seemed that three days' worth of thinking had found a home. This post has been in draft form for a while, but I have decided that examining the research and practice around this essential question will be one focus for me in the upcoming year. Since this post captures my initial thinking around this topic, it is not heavy on the research but merely my attempt to capture the "problem."

Consider these scenarios that recently presented themselves in my work:

District A has adopted a new series and carved out a 90 minute literacy block. Teachers are struggling with the use of the block and, despite having an onsite "coach," do not seem to be making good use of the time. In a planning meeting to discuss the development of the teachers, someone suggested starting by coaching those teachers who were closest to the ideal in order to have them lead their colleagues. The building principal was not sure there were many teachers who were "close" and was concerned about how the coached teachers might be perceived by their peers.

District B has slowly been acquiring new technology resources for teachers to use, and the building principal has been committed to providing teachers with interactive whiteboards. As teachers see this equipment being used in the school, some are ready to embrace the technology (and the learning curve) and try out some lessons. The building principal asks the early users to showcase how they use the technology for their peers at a faculty meeting - none feel they have the expertise to do so, and the computer teacher demos something instead.

In the same district, one new teacher (untenured) has slowly been integrating the technology even though she has not been given one of the interactive whiteboards. She researches sites on the Internet, is taking a graduate course on media literacy, brings her class to the school lab weekly, and has integrated quite a bit of technology. She even selected a technology-based lesson for her observation with the building principal. She is not asked to present at the faculty meeting and has recently been passed over for the installation in her classroom of a new whiteboard purchased by the PTA.

In District C, a consultant has asked teachers who have been engaged in a long-term professional learning opportunity around discourse to share an instance where they took a risk and were successful. In the reflection around this question, teachers struggled to think of instances where they had been successful.

In each of these examples, I am certain that the district wants to foster teacher leadership and that there are teacher leaders available - yet they have not been tapped. What conditions must be in place in a school system for teacher leadership to be developed, and more importantly, to thrive in a sustained way? What dispositions must teacher leaders exhibit to be effective? In short, the essential question is what does it take to truly develop teacher leadership?

As I work to frame this question and my research better - I would appreciate any warm/cool feedback on the identification of the problem and question!!

Giving “Practice” Math Tests? Read me first. (Part 2)

Part 1 is here.

The most common reason for giving a practice math test is to identify students’ weaknesses. Hopefully the first post showed why it’s so critical to determine which weakness you’re talking about: familiarity with format, time, etc. If you’re worried about the math, there are particular ways to approach the practice test.

First, ignore how the student did on the assessment overall. It sounds counter-intuitive, but there is a rational reason for it, I promise. How your students did on particular items matters more than how they did overall. There are a couple of reasons for this:

NYS Assessments are based on a criterion-referenced model. Typically, when you give a student 25 questions, you mark the correct responses, determine a fraction of correct over total, and come up with a score. Generally, we talk about these scores in percentages. Due to the complexity of the NYS assessments and the fact that they focus on performance in relation to a standard or criterion, scores are NOT reported this way. In fact, the number of raw points needed to demonstrate mastery shifts from year to year depending on the standard setting process.

It’s not the real deal. Regardless of the conditions we create, students know it’s not the real deal. Their performance may be inflated or deflated for that very reason and may not reflect their true performance.

NYS Test Design procedures. NYS follows a particular test design model that requires the test to include items of varying difficulty. I’m sure you’ve noticed looking through the test that some questions “feel” easier than others. This isn’t a coincidence. Items are strategically chosen for the assessment to reflect a range of difficulty based on how students performed on them during field testing. It doesn’t make sense to include 25 questions on Book 1 that were missed by most students during field testing. So, the test designers include items with a variety of difficulty – a few hard, a few easy, and most middle of the road. This concept of item difficulty is called “p-value” – most simply put, the percent of students who responded correctly to a question. In shorthand, we say items with high p-values are easy, while items with low p-values are hard for the particular group of students under discussion. So two districts side by side may have different p-values on the same item. We need a neutral standard or benchmark to act as judge and jury around item difficulty. That's where the state data come in.

A great deal of data about NYS tests is made public every year – including p-values. These data can tell us which questions are easy and which are hard. It’s not a secret, and it requires only a smidge of background to use correctly. P-values are provided at a couple of levels. The one that is most important for our purposes here is Low Level 3. In this example, let’s talk about fifth grade. My mental model around scale scores and p-values is to picture a giant swimming pool filled with every fifth grader in the state of New York who took the state assessment last year. Floating above their heads are their scale scores. Students from the Bronx to Buffalo, from Long Island to Lake Placid. Students with and without disabilities. Levels 1, 2, 3 and 4.

I can look at how ALL the students did on items but included within the mix are students who really struggled and students who did really well (We assume most questions were hard for students at Level 1 while most were easy for students at Level 4.) So, I as the data lifeguard blow my whistle and call out every child who scores Levels 1 and 2. Same for the Level 4’s. Left in the pool are my Level 3’s – every child who met the standard. Because I want the data to be as clean and precise as possible, I’m going to boot out every child who scores above the minimum standard – which in Fifth grade in 2008 was 650. Left in the pool I have a few thousand students – all who met the minimum standard, AKA scale score 650. For each question these students took, I can look at how many got each question right and compare (or benchmark) my students to their performance. The graph below shows you what that looks like:
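For anyone who wants to try this benchmarking on their own data, here is a minimal sketch of the filtering and comparison described above. The file names and column layout (a scale_score column plus one 0/1 column per question) are assumptions for illustration; the 650 cutoff comes from the fifth grade 2008 example above. Treat it as a sketch of the logic, not a ready-made tool.

```python
import pandas as pd

# Hypothetical item-level file: one row per student, a scale_score column,
# and one column per question (1 = correct, 0 = incorrect).
state = pd.read_csv("grade5_math_state_items.csv")

CUTOFF = 650  # minimum Level 3 scale score for Grade 5 in 2008 (from the post)

# "Blow the whistle": keep only the students sitting exactly at the minimum standard.
low_level_3 = state[state["scale_score"] == CUTOFF]

item_cols = [c for c in state.columns if c.startswith("q")]

# p-value for each item = percent of these students who answered it correctly.
benchmark_p = low_level_3[item_cols].mean() * 100

# Same calculation for your own class, then line the two up for comparison.
my_class = pd.read_csv("my_class_items.csv")
class_p = my_class[item_cols].mean() * 100

comparison = pd.DataFrame({"low_level_3": benchmark_p.round(1),
                           "my_class": class_p.round(1)})
print(comparison.sort_values("low_level_3"))  # hardest items (lowest benchmark p-values) first
```

Sorting by the benchmark p-value puts the questions that were hardest for the Low Level 3 group at the top, which is exactly where a low score from your own class tells you the least about a real weakness.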

Out of ALL of the students who scored 650, only 18% got question 7 correct. In other words, that was a hard question. My gut isn't telling me that. My students aren't telling me that. Students from across NYS are telling me that. Take a look and see how your students did on it. Odds are, they didn't do very well. It's not because you didn't teach it or because they just weren't listening. It could be because the wording tripped them up - just like it did for 82% of all students who scored a 650. The question is below:

Students are likely to pick A because it practically screams "PICK ME!" at them. Your students may know fractions inside out and sideways. Picking A and not C is an issue of testing sophistication, not mathematics. When reviewing similar problems with students, as much as possible, give them "PICK ME!" choices so they can learn what they look like and how to avoid their siren song.


At the other end of the difficulty continuum are easy questions. The Low Level 3's did pretty well on question 3. If you discover that your students didn't do well on questions like 3 (any item with a p-value higher than 80%), then your warning bells should start warming up.

However, before assuming it's a strength or weakness, look for other evidence that the students understand the concept. Formative assessment can really come in handy here. You can pose a similar question and ask students to respond on their way out the door. This time though, ask:

Anne has completed 87% of the race. What fraction represents that portion of the race she has NOT finished?

If students get the math, they should pick A. If they pick C, it's probably a testing issue - they slid past the NOT. Anyone who picks B or D may have a problem with fractions in general. How did they do on question 15, which taps a similar understanding? (I use Tinkerplots to answer these questions. It's one of my favorite data toys.) The students will sort themselves into groups with similar needs, depending on what the other instructional evidence shows.

So - if you're going to give the test to identify weaknesses:
  1. Consider how your students do on easy questions (high Low Level 3 p-values) versus hard (low Low Level 3 p-values) questions.
  2. Be aware of what wrong answers students give as that's often more interesting than what they got right.
  3. Consult other evidence (formative and summative) before confirming the students have a mathematical weakness.
I'd love to hear if any of these ideas contradict what you've heard in schools. Feel free to drop me a line or leave a comment if you have any questions!

Giving “Practice” Math Tests? Read me first. (Part 1)

As in the weeks prior to the ELA assessment, many schools across the state are giving their students old copies of the state assessments to prepare them for the big day. Based on conversations with schools and fellow professional developers, these practice tests serve several different purposes. Regardless of the reason for giving the assessment, there are several strategic moves that can be made to get the best return on your time investment and, hopefully, minimize the impact on instructional time and students' sense of what school is all about.



First things first. Be honest about the reason you're asking students to take the old assessment. "To prepare them for the test" is a big, broad goal. A common problem in test prep is trying to tackle two problems in one fell swoop. It's a given that when you're teaching students a new strategy, you introduce it with familiar content or low-level text. You wouldn't ask a middle school student to text-tag for the first time with a college-level text. The same holds true for practice tests. It's not fair to ask students to "do their best" on the math content and expect them to notice the format and structure at the same time. Their brain is going to be busy with the math. I'm going to tackle a couple of common reasons for giving practice tests over the next couple of days and highlight the benefits of approaching different purposes in different ways.

If your goal is to expose students to the test format:

There are few students on the planet as test savvy as 8th grade students. They have been tested since they were in fourth grade. They know what the test looks like. Some could even write it. If you work with middle level students, your time may be better spent telling them what's different in the grade 8 assessment (no editing, but extended writing). If the concern is that they really don't know the format, then give them time to do that - and only that. What do they notice about the font? About the spacing and the structure? The set up of the questions? What might trip them up on the actual test? Make sure they know how to use the ruler, the protractor, and the rules for getting as many points as possible on Book 2 and Book 3.

If your goal is to familiarize your students with timed testing:

Consider chunking the test. First, give your students the appropriate time to take Book 1 - and tell them the purpose of taking the practice test is to give them a sense of how much time they'll have. Next, when they're done, take ten minutes to process what happened. A Behavior Over Time graph (below) is a great tool for helping students process their stress level. Did they feel more stressed at the beginning of the test? At the end of the test? Finally, give them the support to develop a plan. If they freak out in the beginning, what can they do to avoid the freak out? What helps them calm down? You'd be surprised at the ideas that students generate during these types of conversations. It's also a nice way to reveal "rumors" that kids have heard.
Coming up tomorrow - how to tackle practice tests if your goal is to identify student weaknesses. I'd love to hear your thoughts on this!


Making a Difference: Three Cups of Tea



Reflecting upon the dreams of those who came before us and the dreams that we have for our children on the eve of the Presidential Inauguration, I wonder when in our history we began to take education for granted. As a social studies teacher, I found my students were always amazed to learn that education wasn't always mandatory. That children would often work in very dangerous conditions to help to support their families. That education was something that people strove to achieve - not something that was expected.

Today, we argue about whether we are preparing our students adequately for the world they will enter when they leave school. Since their inception with compulsory education, schools in the United States have operated under a factory model. We have bells, we have a set curriculum, we churn out graduates like Model T cars. Only we don't, do we?

I am not sure that I have the answer for education - in fact, I am pretty sure that I do not. But after reading Three Cups of Tea: One Man's Mission to Promote Peace...One School at a Time, I have been thinking very differently about education. Throughout the book, which chronicles Greg Mortenson's struggles to build schools in Pakistan and Afghanistan against the backdrop of the internal and external conflicts of those countries, Mortenson talks about the power of education to bring peace. The communities where Mortenson built schools worked together to see them constructed - often carrying materials on their backs to the remote areas where the schools would be built. They challenged traditional and religious norms to allow their daughters to attend school. Those schools brought the world to remote villages, for all the good and bad that entails. They brought new perspectives and new opportunities.

We have plenty of schools in this country - but we don't have the passion for them to exist that is described in Mortenson's book. I am struggling with why.


Photo credit: Lewis W. Hine. Library of Congress Prints and Photographs Division.

Professional Discourse

So I'm minding my own business doing e-mail today, pulling together a database, having a grand old time listening to the Dr. Horrible soundtrack. I open an email from an international listserv on statistics in education. A member posted a question about a study, unclear on why a particular test had been used to analyze the data. Her question was neutral, well-reasoned and appropriate to the listserv. In effect, a professional said "I don't understand this. Can you people who are in the same field as me help me understand it better?"

The response to her inquiry a few minutes later?
Did one of the authors of this paper steal your boyfriend in grammar school by any chance? As to your question "Am I missing something here?", I would have to say professionalism, scruples, and a LIFE!

So maybe he is the woman's best friend and has an odd way of teasing her. Perhaps he knows the authors and feels the need to defend their academic honor. Or perhaps he's provided yet another example of why we as a profession so often choose to struggle alone rather than reveal our challenges in front of others. In any event, I sincerely hope the listserv moderators call him out and give the original author some positive lovin'. Odds are they won't.

The 11th Hour

New York State opens the testing window for the Grades 3-8 ELA assessments on Monday. My hunch is that most schools are starting on Tuesday and spending Monday doing a wide variety of "test prep" activities, both helpful and detrimental to students and teachers. I imagine there's going to be a run on Tums on Sunday night, and some child will have a hard time falling asleep, convinced that if they fail the test their life, as they know it, is over. In time, I think the pendulum will swing toward the middle. In the meantime, my two cents on how to spend Monday.

Odds are good that by this point, the students are familiar with the format of the test and would not benefit from taking any additional practice tests. This is especially true for 8th graders, who could probably write the test by now. An approach that may be more beneficial is to spend some time reminding the students of the purpose of the tests and the intended audience. The purpose is not to find out if Jane or Jose is smart or a good student, to determine their self-worth, or to find out if their teacher is good. The audience is every student, Grades 3 through 8, in the state of New York. In other words, a student's self-check for a correct response might be: is this the best answer, or is it just the best answer for me?

Monday may be best served by reminding them how to translate all of that learning to one particular format. A useful strategy may be to provide the students with a Venn Diagram on the board or chart paper and have a discussion about the difference between real world behaviors – all of those great things we do when we interact with texts in our “real lives” – versus what we do on the day of the test that we do at no other time. To anchor yourself in this mental model, consider how you drove the day of your driver's test. How do you behave when driving while running errands or driving to work (assuming there aren't three inches of ice on the roads - yeah Buffalo!)? There are some things in common (text tagging, using the text to find an answer), but there are lots of differences that are worth highlighting:


The analogy may not work for all students. If that's the case - and seriously, even in the most "test neutral" schools Monday is a high stress day, so what else are you going to be teaching or talking about? - spend some time Venn-ing out behaviors for playing basketball versus football, watching TV versus watching movies, or eating dinner at home versus eating dinner at a friend's house, then shift to testing and real world behaviors.

In any event, regardless of how the day is spent, it's helpful to keep in mind the sage advice of Dr. Seuss in Hooray for Diffendoofer Day!

Miss Bonkers rose. “Don’t fret!” she said.
“You’ve learned the things you need
To pass the test and many more –
I’m certain you’ll succeed.
We’ve taught you that the earth is round,
That red and white make pink,
And something else that matters more –
We’ve taught you how to think.”