For me, blogging has been a way to put my thoughts out to the greater world. Sometimes I get feedback in the form of comments, sometimes I don't. (That makes me very sad by the way! Post a comment! Just don't post spam!) Either way - the very act of putting my question out there, of writing about my thought process helps me to clarify my work and the vision I have for that work.
When I began blogging, there was the initial fear of "what if I say the wrong thing" - particularly as I blogged about my work. Then I ran into the problem that my chosen blogging platform was often blocked in district (very frustrating!). However, I persevered and my blogging continues.
I find that I blog less often than I used to - perhaps because I tweet more? - but that I am always reflecting on my work which will usually result in a blog post. As a result, blogging is one of the most important tools that I use.
If you are new to blogging, what are your concerns or questions about blogging? What most intrigues you about blogging?
I am going to be the first to admit that the heat of the conversation and being limited to 140 characters did not highlight my ability to word things well - but what about those bigger questions?
- Is there a difference between a learner and a teacher? Between a learner and a student?
- What is a profession? Who are professionals?
Seven years ago, I was a new staff developer, fresh from the classroom and attending a series of workshops that my new office was sponsoring. The program was school based, so I sat among teachers who had a long history together and was privy to their student work, curriculum tasks, and conversations. The theme of the program was “Communicating Expectations,” and when rubrics were first mentioned, it was as a tool, not as an end unto itself. After a series of activities around expectations and feedback, including a discussion around measuring work that seemingly can’t be measured, we started to work with a task the teachers had recently assigned. It was an authentic task that involved creating, exploring, communicating - a whole slew of skills and tasks. They brainstormed what they expected from their students, organized actual student work by how closely it approximated those expectations, articulated the attributes of the work that met their expectations, and slowly but surely, built a rubric. Teachers then took the rubric they wrote, modified it for a future task, and came up with a plan for using it with students. When they returned to the next session, almost every teacher spoke of the improved quality of student work and the clarity of language between teacher and students. Students used the rubric as a gauge for assessing the distance between their work and what the task required. The teachers weren't using rubrics for all tasks and they weren't treating them as some sort of holy grail.
I was hooked. Since then, I've seen numerous examples of high quality rubrics being used by students and teachers. I use them regularly in my work and will continue to advocate for taking the time to design high quality rubrics for worthy tasks. When I read blogs, tweets, and books that are anti-rubric, I almost always agree with their dislike of the things they are describing. But frequently, what I see people describing aren't rubrics, they're checklists. So to me:
- The rubric itself is the least important part of the process. The sheet of paper is the product of a process articulating expectations of student learning and work.
- Any rubric that hasn't been checked against student work, developed with students, or informed by student feedback is still in draft form.
- The language describes the quality of a piece of work - not the quantity. "Some," "few," and "many" are quantitative terms and slippery ones to define. To me, a rubric's purpose is to articulate expectations of success - so a student working on a task will know what they need to do to improve their work. The language needs to reflect that goal.
- The language of the rubric focuses on what is present, not just what is absent. ("Includes irrelevant material" versus "doesn't stay focused on topic")
- The highest level describes what exceeds the standard or expectation, and often includes language about "breaking the rules" or taking a "new and unexpected" approach to the task.
- The task is worthy of a rubric. That's a value loaded statement, so to clarify - not all tasks need a rubric and a well-written rubric does take time to write. Generally speaking, I use rubrics for authentic, process tasks that are similar to real world tasks.
- We need to be critical consumers of rubrics that are available in the cloud.
For a more recent view from both sides, check out TeachPaperless' Why I Hate Rubrics/Rubrics Were Great (especially the comments) and Two Arguments for Using (Some) Rubrics, and please share your thinking around the sticky wicket that is rubrics.
In my region, we have a strong history of regional scoring so these changes mean that my team will need to develop a new calendar of scoring dates to assist the districts. We have sent out a survey to our districts regarding participation and trying to determine the best way to handle the tight testing/scoring window. I am confident that with the input from our districts we will determine a way to get this task done, although I am concerned that we will lose the professional development aspect that has been the cornerstone of what we do.
Working in a position that requires me to pass along information from SED to our districts and then work to make their directives reality, I tend to walk a fairly careful line with my thoughts and actions. Often, I try to put the realities of a state office into perspective for our districts and get at the true intent of their decisions, not how they are actually implemented, funded, or twisted by media coverage. I remind teachers about the importance of standards - while we wrestle with making meaning of their broad guidelines and inconsistency across grade levels. I remind administrators that the 3-8 testing system over time will give us important information about a cohort of students that we can use to address issues in curriculum as well as remediate using data, while bracing myself for the "Business First" month of coverage. I share, in a user-friendly format, the regulations that deal with mentoring, AIS, RTI, and every other mandate there is, pushing my districts to think outside the box and find where they are already doing these things, while fighting off complaints that these are all unfunded and that districts/teachers can't possibly do all of this.
But lately - I am feeling a little like the aftermath of the tornadoes that recently hit my area. Changes have come upon us with little warning, the path is unpredictable and the aftermath is going to require a great deal of clean-up.
I am still processing all of the information and trying to help our districts find a way out of the storm. But it is becoming harder and harder to defend the wizard.
Beginning next year, all New York State students in Grades 3-8 will take the mathematics and English Language Arts assessments in May.
The rumor was confirmed on the DATAG listserv with the following message:
Johanna Duncan Poitier just sent out a special edition issue of News and Notes which provides important updates from the June meeting of the Board of Regents to District Superintendents, Superintendents of Schools, Administrators of Charter and Nonpublic Schools, and Other Partners, including a confirmation of the Regents action earlier today moving the 3-8 ELA and Math assessments to May starting next year.

I’m sure more will be released in the coming days, including guidance on how schools should handle administration, scoring, data reporting, and other aspects of the assessments. To hear what others were thinking, I connected with my PLN on Twitter and Facebook and the responses were similar: lots of surprise that we went from survey to action so quickly, panic at the thought of 3rd graders sitting through 5 straight days of testing, and bafflement about what it will look like in practice. Talking through the consequences has been fascinating. Some of the comments from those conversations are below. I linked to the author's blog when possible:
Positive Consequence: Teachers can now teach Math and ELA all year long. April Spring Break can provide a natural break between teaching content and teaching students test sophistication or test wise-ness.
Negative Consequence: Eighth graders may conceivably be testing (Math, ELA, Science, and SS) more than learning during the month of May. (Angela)
Positive: The assessments can be viewed as a one-shot deal that happens at the end of the year. A chance to show off what you know, like the big kids in high school.
Negative: A nine year old probably won’t see it that way. (Theresa)
Positive: Weather is less likely to impact testing administration.
Negative: Scoring all assessments at the same time might lead to more than one testing and assessment coordinator cowering in a corner, whimpering.
Positive: The media will report on test data once a year, rather than twice. (Erin)
Negative: People may perceive this as a response to the increase in scores. A way to shake things up so students don't get too used to the test.
I expect the conversation will crest again as the press reports the change. I, like many others, have lots of questions. I wonder how students feel about the change. Were students given the chance to respond to the survey? Angela wonders if it's time to combine ELA and Social Studies into one assessment. What about the impact on final exams, especially given the recent article in The Buffalo News about schools using SED scores in student averages? ... and more.
Now, I am not against national standards per se, I just want standards that are manageable, measurable and relevant. Having waited anxiously for the revision to the NYS ELA standards since last June, I find it hard to believe that a national group with appropriate representation is going to be able to reach consensus and produce something that will meet those criteria in a matter of a few weeks. Sadly, I became even more skeptical when I read the actual agreement drafted by the Council of Chief State School Officers and the National Governors Association Center for Best Practices which contains criteria for said standards, which will be:
- Fewer, clearer, and higher, to best drive effective policy and practice;
- Aligned with college and work expectations, so that all students are prepared for success upon graduating from high school;
- Inclusive of rigorous content and application of knowledge through high-order skills, so that all students are prepared for the 21st century;
- Internationally benchmarked, so that all students are prepared for succeeding in our global economy and society; and
- Research and evidence-based.
It has taken a year for NYS to review and develop a plan for their revision of state standards in ELA. The last time I was in Albany I was told the committee was still discussing them and “tweaking” what they had before roll-out for public comment, now scheduled for Fall 2009. We have had some sneak peeks at what to anticipate, such as the fact that we will now have a Literacy and Literature strand and that in addition to Reading, Writing, Listening and Speaking, we will now add Viewing and Presenting.
The latter two are very similar to the National Council of Teachers of English (NCTE) Standards for the English Language Arts, which frankly don’t look too terribly different from what NYS currently has in place. So the cynic in me is doubting whether there is going to be any real change when it comes to the standards. If NYS has already invested significant time and energy in developing these revised standards, yet has agreed to develop the national pieces (of which 85% should be adopted by the states voluntarily) – is there going to be real change or are we just going through the motions?
Being a social studies teacher with a passion for all things writing, I was struck by how much writing is required of students on the math assessments. Interestingly, it was often the writing that prevented students from receiving full credit on some of the answers. Many of the teachers complained about this, particularly as we moved onto the middle grade levels. Often, I heard comments like these:
"These kids clearly knew the answer - I don't know why we can't just give them full credit."
"These scoring guides penalize the students who can 'do math' in their heads and don't need to show their work."
I understand their arguments but I also know that NYS is trying to emphasize (through the standards as well as the assessments) the power of communicating in math. And in order to communicate well in math - they must do so using the very technical language of math. While these teachers found the issue to be one of math (and sometimes of reading), I really saw these as the students not being able to express themselves in the mathematical language. A few examples:
When asked to express their answer in exponential form, several students would provide the correct answer when writing "three to the sixth power" but could not receive full credit because they did not write it in correct form.
Some students would write to explain how they found a particular answer, but in a somewhat vague manner such as "because you have to find the straight 180 so you would subtract." Teachers would argue that it is evident that the students understood the notion of supplementary angles, but in reality there is not enough detail in this statement. The straight 180 what? Subtract what? From what?
Writing in the disciplines is very content specific. I have long held the belief that each content area has its own literacy that goes beyond merely teaching students to use the English language. In social studies, students need to be able to read and communicate about maps using correct terms. They need to understand the symbolism in political cartoons and the trends in charts/graphs. In science, they need to understand scientific notation, the symbols in chemistry and how to write a chemical equation. I could go on and on but you get the picture. Each discipline requires a highly technical language and one that we must explicitly teach our students. Each of us truly is a teacher of literacy.
For my math friends, it didn't seem the appropriate time to share my thoughts about the difference between having students muscle through the math to come up with a correct answer and having them share their understandings of the process and relationship between numbers in written form. But you can bet that I will be learning more about the technical writing in other subjects so that I can help them teach writing there as well.
Cross-posted on Writing Frameworks.
When this question leapt off the page for me at a recent Communities for Learning session, it seemed that three days' worth of thinking had found a home. This post has been in draft form for a while, but I have decided that examining the research and practice around this essential question will be one focus for me in the upcoming year. Since this post captures my initial thinking around this topic, it is not heavy on the research but merely my attempt to capture the "problem."
Consider these scenarios that recently presented themselves in my work:
District A has adopted a new series and carved out a 90 minute literacy block. Teachers are struggling with the use of the block and, despite having an onsite "coach," do not seem to be making good use of the time. In a planning meeting to discuss the development of the teachers, the idea of starting by coaching those teachers who were closest to the ideal in order to have them lead their colleagues was suggested. The building principal was not sure there were many teachers who were "close" and was concerned about how the coached teachers might be perceived by their peers.
District B has slowly been acquiring new technology resources for teachers to use, and the building principal has been committed to providing teachers with the use of interactive whiteboards. As teachers see this equipment being used in the school, some are ready to embrace the technology (and the learning curve) and try out some lessons. The building principal asks the early users to showcase how they use the technology for their peers at a faculty meeting - none feel they have the expertise to do so, and the computer teacher demos something instead.
In the same district, one new teacher (untenured) has slowly been integrating the technology even though she has not been given one of the interactive whiteboards. She researches sites on the Internet, is taking a graduate course on media literacy, brings her class to the school lab weekly, and has integrated quite a bit of technology. She even selected a technology based lesson for her observation with the building principal. She is not asked to present at the faculty meeting and has recently been passed over for the installation in her classroom of a new whiteboard purchased by the PTA.
In District C, a consultant has asked teachers who have been engaged in a long-term professional learning opportunity around discourse to share an instance where they took a risk and were successful. In the reflection around this question, teachers struggled to think of answers where they had been successful.
In each of these examples, I am certain that the district wants to foster teacher leadership and that there are teacher leaders available - yet they have not been tapped. What conditions must be in place in a school system for teacher leadership to be developed, and more importantly, to thrive in a sustained way? What dispositions must teacher leaders exhibit to be effective? In short, the essential question is what does it take to truly develop teacher leadership?
As I work to frame this question and my research better - I would appreciate any warm/cool feedback on the identification of the problem and question!!
Usually, the most common reason for giving a practice math test is to identify students’ weaknesses. Hopefully the first post showed why it’s so critical to determine which weakness you’re talking about: familiarity with format, time, etc. If you’re worried about the math, there are particular ways to approach the practice test.
First, ignore how the student did on the assessment overall. It sounds counter-intuitive, but there is a rational reason for it, I promise. How your students did on particular items matters more than how they did overall. There are a couple of reasons for this:
NYS Assessments are based on a criterion-referenced model. Typically, when you give a student 25 questions, you mark the correct responses, determine a fraction of correct over total and come up with a score. Generally, we talk about these scores in percentages. Due to the complexity of the NYS assessments and the fact that they focus on performance in relation to a standard or criteria, scores are NOT reported this way. In fact, the number of raw points needed to demonstrate mastery shifts from year to year depending on the standard setting process.
It’s not the real deal. Regardless of the conditions we create, students know it’s not the real deal. Their performance may be inflated or deflated for that very reason and may not reflect their true performance.
NYS Test Design procedures. NYS follows a particular test design model that requires that the test include items of varying difficulty. I’m sure you’ve noticed looking through the test that some questions “feel” easier than others. This isn’t a coincidence. Items are strategically chosen for the assessment to reflect a range of difficulty based on how students performed on them during field testing. It doesn’t make sense to include 25 questions on Book 1 that were missed by most students during field testing. So, the test designers include items with a variety of difficulty – a few hard, a few easy, and most middle of the road. This concept of item difficulty is called “p-value” – most simply put, what percent of students responded correctly to a question. In shorthand, we say items with high p-values are easy, while items with low p-values are hard for the particular group of students under discussion. So - two districts side by side may have different p-values on the same item. We need a neutral standard or benchmark to act as judge and jury around item difficulty. That's where the state data come in.
A great deal of data about NYS tests are made public every year – including p-values. These data can tell us which questions are easy and which are hard. It’s not a secret and requires only a smidge of background to use correctly. P-values are provided at a couple of levels. The one that is most important for our purposes here is Low Level 3. In this example, let’s talk about fifth grade. My mental model around scale scores and p-values is to picture a giant swimming pool filled with every fifth grader in the state of New York who took the state assessment last year. Floating above their heads are their scale scores. Students from the Bronx to Buffalo, from Long Island to Lake Placid. Students with and without disabilities. Levels 1, 2, 3 and 4.
I can look at how ALL the students did on items but included within the mix are students who really struggled and students who did really well (We assume most questions were hard for students at Level 1 while most were easy for students at Level 4.) So, I as the data lifeguard blow my whistle and call out every child who scores Levels 1 and 2. Same for the Level 4’s. Left in the pool are my Level 3’s – every child who met the standard. Because I want the data to be as clean and precise as possible, I’m going to boot out every child who scores above the minimum standard – which in Fifth grade in 2008 was 650. Left in the pool I have a few thousand students – all who met the minimum standard, AKA scale score 650. For each question these students took, I can look at how many got each question right and compare (or benchmark) my students to their performance. The graph below shows you what that looks like:
Out of ALL of the students who scored 650, only 18% of the students got question 7 correct. In other words, that was a hard question. My gut isn't telling me that. My students aren't telling me that. Students from across NYS are telling me that. Take a look and see how your students did on it. Odds are, they didn't do very well. It's not because you didn't teach it or they just weren't listening. It could be because the wording tripped them up - just like 82% of all students who scored a 650. The question is below:
Students are likely to pick A because it practically screams "PICK ME!" at them. Your students may know fractions inside out and sideways. Picking A and not C is an issue of testing sophistication, not mathematics. When reviewing similar problems with students, as much as possible, give them "PICK ME!" choices so they can learn what they look like and how to avoid their siren song.
However, before assuming it's a strength or weakness, look for other evidence that the students understand the concept. Formative assessment can really come in handy here. You can pose a similar question and ask students to respond on their way out the door. This time though, ask:
Anne has completed 87% of the race. What fraction represents that portion of the race she has NOT finished?
If students get the math, they should pick A. If they pick C, it's probably a testing issue. They slid past the NOT. Anyone who picks B or D may have a problem with fractions in general. How did they do on question 15 which taps a similar understanding? (I use Tinkerplots to answer these questions. It's one of my favorite data toys.) The students will form themselves into like-needed groups, depending on what the other instructional evidence shows.
So - if you're going to give the test to identify weaknesses:
- Consider how your students do on easy questions (high Low Level 3 p-values) versus hard (low Low Level 3 p-values) questions.
- Be aware of what wrong answers students give as that's often more interesting than what they got right.
- Consult other evidence (formative and summative) before confirming the students have a mathematical weakness.
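The benchmarking idea above can be sketched in a few lines of code. This is only an illustration: the response records and the `state_p` numbers are made up for the example (the 0.18 echoes the question 7 discussion), standing in for your class results and the published Low Level 3 p-values; none of it is an official NYS data format.

```python
# Compare a class's per-item performance against state benchmark p-values.
# All data below is hypothetical and for illustration only.

def item_p_values(responses):
    """Return the proportion of students answering each item correctly.

    `responses` maps a student id to a dict of {item: 1 if correct else 0}.
    """
    marks_by_item = {}
    for answers in responses.values():
        for item, correct in answers.items():
            marks_by_item.setdefault(item, []).append(correct)
    return {item: sum(marks) / len(marks) for item, marks in marks_by_item.items()}

# Hypothetical class results on three items
class_responses = {
    "student_1": {"q7": 0, "q15": 1, "q20": 1},
    "student_2": {"q7": 0, "q15": 1, "q20": 0},
    "student_3": {"q7": 1, "q15": 0, "q20": 1},
    "student_4": {"q7": 0, "q15": 1, "q20": 1},
}

# Hypothetical "Low Level 3" benchmarks: p-values for students at the cutoff
state_p = {"q7": 0.18, "q15": 0.75, "q20": 0.80}

class_p = item_p_values(class_responses)
for item in sorted(class_p):
    difficulty = "hard statewide" if state_p[item] < 0.5 else "easy statewide"
    print(f"{item}: class {class_p[item]:.2f} vs state {state_p[item]:.2f} ({difficulty})")
```

The useful comparison is relative: a class missing a question that was hard statewide (like q7 above) tells you little, while a class missing a question that was easy for students at the cutoff is worth a closer look alongside other evidence.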
Reflecting upon the dreams of those who came before us and the dreams that we have for our children on the eve of the Presidential Inauguration, I wonder when in our history we took education for granted. As a social studies teacher, my students were always amazed to learn that education wasn't always mandatory. That children would often work in very dangerous conditions to help to support their families. That education was something that people strove to achieve - not something that was expected.
Today, we argue about whether we are preparing our students adequately for the world they will enter when they leave school. Since their inception with compulsory education, schools in the United States have operated under a factory model. We have bells, we have a set curriculum, we churn out graduates like Model T cars. Only we don't, do we?
I am not sure that I have the answer for education - in fact, I am pretty sure that I do not. But after reading Three Cups of Tea: One Man's Mission to Promote Peace...One School at a Time I have been thinking very differently about education. Throughout the book, which chronicled Greg Mortenson's struggles to build schools in Pakistan and Afghanistan against the backdrop of the internal and external conflicts of those countries, Mortenson talks about the power of education to bring peace. The communities where Mortenson built schools worked together to see them constructed - often carrying materials on their backs to the remote areas where the schools would be built. They challenged traditional and religious norms to allow their daughters to attend school. Those schools brought the world to remote villages, for all the good and bad that entails. It brought new perspectives and new opportunities.
We have plenty of schools in this country - but we don't have the passion for them to exist that is described in Mortenson's book. I am struggling with why.
Photo credit: Lewis W. Hine. Library of Congress Prints and Photographs Division.
The response to her inquiry a few minutes later?
Did one of the authors of this paper steal your boyfriend in grammar school by any chance? As to your question "Am I missing something here?", I would have to say professionalism, scruples, and a LIFE!
So maybe he is the woman's best friend and he has an odd way of teasing her. Perhaps he knows the authors and feels the need to defend their academic honor. Or perhaps he's provided yet another example of why we as a profession so often choose to struggle alone rather than revealing our challenges in front of others. In any event, I sincerely hope the listserv moderators call him out and give the original author some positive lovin'. Odds are they won't.
Odds are good that by this point, the students are familiar with the format of the test and would not benefit from taking any additional practice tests. This is especially true for 8th graders who could probably write the test by now. An approach that may be more beneficial is to spend some time reminding the students of the purpose of the tests and the intended audience. It is not to find out if Jane or Jose is smart or is a good student or to determine their self-worth or to find out if their teacher is good. The audience is every student, Grades 3 through 8, in the state of New York. In other words, a student's self-check for a correct response might be: is this the best answer or is it best for me?
Monday may be best served by reminding them how to translate all of that learning to one particular format. A useful strategy may be to provide the students with a Venn diagram on the board or chart paper and have a discussion about the difference between real world behaviors – all of those great things we do when we interact with texts in our “real lives” – versus what we do on the day of the test that we do at no other time. To anchor yourself in this mental model, consider how you drove the day of your driver's test. How did you behave when driving while running errands or driving to work (assuming there weren't three inches of ice on the roads - yeah Buffalo!)? There are some things in common (text tagging, using the text to find an answer) but there are lots of differences that are worth highlighting:
The analogy may not work for all students. If that's the case - and seriously, even in the most "test neutral" schools Monday is a high stress day; what else are you going to be teaching/talking about? - spend some time Venn-ing out behaviors playing basketball versus football, watching TV versus watching movies, or eating dinner at home versus eating dinner at a friend's house, then shift to testing and real world behaviors.
In any event, regardless of how the day is spent, it's helpful to keep in mind the sage advice of Dr. Seuss in Hooray for Diffendoofer Day!
Miss Bonkers rose. “Don’t fret!” she said.
“You’ve learned the things you need
To pass the test and many more –
I’m certain you’ll succeed.
We’ve taught you that the earth is round,
That red and white make pink,
And something else that matters more –
We’ve taught you how to think.”