Authentic Assessment – Ben – 33

This rubrics/assessment thread is like trying to stop a freight train, but I honestly feel that we need a break from it, as discussed here yesterday. Let me share what I am comprehending so far:

Jen:  “More fine-tuned descriptors I guess would show a more accurate picture of where the child is, but is that feasible in the process, like right in the moment? I don’t know.”

Ben: I see two problems with the above, Jen: 1) the phrase “I guess” and 2) the phrase “I don’t know.” I find it a supreme irony that the very person who INVENTED the jGR descriptors five years ago is now not sure about the idea of labeling observable non-verbal behaviors. Jen, just as Krashen can’t prove his hypotheses yet we know they are true, so also, if we are not to trust our eyes and inner vision in class, then we abdicate our responsibility to the kids. Our WORK involves moment-to-moment loving assessment. It is just so new that it is freaking everyone out that “such a complicated thing” could be so simple! (Even when the kid can’t focus, it is loving to say that she can’t focus, because it is honest.) This is heart work. This is the end of what Claire calls the data turdification of children. YOU are as much about that as anyone I’ve ever met. And so now it is time to trust your heart and stop thinking about whether some administrator (usually some handsome-but-dumb-white-man-who-was-given-the-principalship-by-a-superintendent-who-looks-like-him-and-talks-football-but-has-no-earthly-idea-about-what-assessment-even-means, right?) will approve of it.

So let’s take out the doubt you expressed and rewrite what you said with more certainty, so we can dispel any doubt that may have crept in here over the past three or four days. If you sense some urgency in my words, it is because we just spent almost a month opening up, in my own private opinion, a NEW FRONTIER in WL assessment, and ain’t nobody gonna mess that up, because it took us so long to get here. So, Jen:

Jen (this is a suggested restatement on the idea of rubrics): “More fine-tuned descriptors WILL show a more accurate picture of where the child is, AND IT IS feasible in the process, like right in the moment.  I know.”

And why do you know? Because it is your job to know. You see, the TPRS highway is strewn with the wreckage of hippy teachers who have no idea how to assess a kid’s progress in their classes. I am one. They have been using outmoded and grossly unfair quantification systems, unfair because we CAN NEVER KNOW how much a kid gets, or which words they get and which words they don’t get (Natural Order Hypothesis, Comprehensible Input Hypothesis, etc.). They have thus failed themselves while failing their students in reporting out to them how they are doing, so this is a big deal.

And now we are getting closer to a way of doing formative, on-the-spot OBSERVING of behaviors, but WE CAN’T JUST SAY TO OUR ADMINISTRATORS, “Oh yeah, I watch them in class. I can tell.”

I’d fire me if I said that, and I did say it to my curriculum director, and I now see that I needed to be re-directed and that I need to be the one to apologize to that guy. The log-in-the-eye thing. Said in a New Jersey accent: “For fifteen years I’m giving my kids tests and I never really know what they know, and now finally I see that all I have to do is watch my kids, note in my mind or on an EASY EASY EASY form during or after class what I SEE SEE SEE and SENSE SENSE SENSE, and then record that in a simple way and report out using that form/rubric to my principal, and doing that would be accepted by my principal because she would get it IF I TOLD HER HOW I WAS USING THE RUBRIC.”

When our boss very reasonably asks us to show documentation and all we have is rubrics with no teeth, validity, or reliability, “I don’t know” won’t cut it. I am speaking to myself here. I really need to get off the “IDK Train,” and it is happening now, this month, in this discussion. Amazing! And if we don’t say what we see in specific terms, we will have blown our chance to show off what we do know: infinitely more about assessment than the test-obsessed data losers or, as Claire calls them, oh you know.

We show what we know, what we listen for or see in our kids’ eyes or body language or gestures or drawings or whatever by writing it out in a rubric (even if we don’t fill out the rubric as Claire says). So why not show administrators what we are already doing?  THIS IS NOT TOO COMPLICATED AND THERE IS EVERY REASON TO KNOW if a kid is actually sitting up and participating BY LOOKING AT THEM, so YOUR FANTASTIC RUBRIC jGR should NOT be thrown out.

So I want to redirect this discussion: first, to take a big summertime break here and pick it up again at our session in TN (kindly offered to us by Carol), and second, to take this whole way of looking at grading in the direction of the heart. That means compassionate assessment in the moment, reported to our bosses in succinct and identifiable terms based on what we observe.

I don’t like it that our high IQ Fresno Bad Boy is expressing confusion here. He usually gets things, here in his first year of teaching, in like -5 seconds. So we need to get clear on this.

Now for Claire:

I have gathered that Claire is advocating simplicity. 

Ben to Lance: The idea of simplicity in formative rubric-based assessment is music to my ears, which YEARN FOR SIMPLICITY IN ASSESSMENT. Are you worried that we would have to use 5 rubrics every day and manage 27 different rubrics? We won’t! Claire has offered 3 easy-to-use rubrics and said we don’t even have to fill them out; we could just use them as a talking point with administrators. It’s good news and very manageable, even for weirdos like me.

Yesterday, Claire offered that jGR is really good because it doesn’t just use two words (Emerging/Practicing); it has actual descriptions. I think Alisa’s is great for younger kids, but my principal would squirm in her chair, since in middle school it all has to be super detailed. And I think we can all agree that using jGR is not too confusing for anybody.

Lance, you said, “…the fewer levels we describe, the more accurate and clear…”

I can’t agree. The more you describe, the more accurate and clear. This is what I never knew until this thread broke open. Fifteen years of ignorant assessing! Now we’re getting a little hold on this deal. Let’s keep it going.


6 thoughts on “Authentic Assessment – Ben – 33”

  1. Alisa Shapiro-Rosenberg

    Let us not forget that as much as this new generation of rubrics will serve as a feedback tool for students and their parents, it is also a tool for educating the other ‘stakeholders’ – admins & other teachers. It’s another conversation for Ben to have with his principal about what constitutes student progress in a class that ‘teaches’ an unconscious skill…

    Does it address “how to lesson/curriculum plan?” Not directly or in the traditional sense. But it lets admin know that we are constantly assessing for mastery, and that the ongoing comp checks are a feedback loop telling us when we can add more chocolate chips to the dough.

  2. One more thing. We cannot let others dictate how we assess. We are the professionals. But we don’t act like it. Why is this?

    WE ARE THE ONES TEACHING, SO WE GET TO DECIDE HOW WE ASSESS. IS SOMEONE WHO DOESN’T EVEN KNOW OUR CRAFT OR THE RESEARCH IT RESTS ON GOING TO BE ABLE TO COME IN AND TELL US HOW TO ASSESS? REALLY? SOMEBODY THROW ME A BONE HERE! DEEP BREATH, EVERYONE. WE MAKE THOSE DECISIONS. STAND UP NOW, STAND UP FOR YOUR RIGHTS. OH BOY, WE GOTTA QUIT DOING THE “I SUCK AND I AM LITTLE” ACT. IT’S NOT ATTRACTIVE, AND IT ALLOWS FOOLS TO DICTATE TO US. NO SIR. WE WILL CHANGE THIS SO THERE IS NO DOUBT IN OUR SCHOOL BUILDINGS ABOUT WHAT WE ARE DOING AND HOW WE ARE GOING TO REPORT ON WHAT WE ARE DOING. HELLO! WE’LL HAVE IT READY FOR THE FALL.

    Related:

    https://www.youtube.com/watch?v=X2W3aG8uizA

  3. I’m getting mixed messages: take a break, or keep it going? :/ Part of my quote that’s been floating around was cut off in the original. It should’ve read:

    The fewer levels we describe, the more accurate and clear it is to assess (i.e. for us doing the work).

    As an example, some people think that grades should mean something and that they accurately reflect what a kid can do. I have no idea what the difference is between what a kid with an 87 and a kid with an 86 can do. This grading example uses 100 levels (or 50 if that’s the lowest grade possible in your district). I would argue that no one can describe 50 levels of doing something, let alone 100. Maybe 6 levels is fine. When it comes down to where a kid is on a rubric, I can make that call a lot easier with fewer levels. Having only two is the most extreme case, but it’s the easiest way to go about things. (There’s a rough sketch of this at the end of this comment.)

    Using rubrics vs. just having them on hand seems to change the discussion, which will be great to pick up. I’m sure we could get quite creative with rubrics that appear detailed and to check boxes. The question I’ll be pondering over the summer is whether we gain more from giving adminz what they think they want, than if we informed them of how unnecessary most of it is given how different language acquisition is from everything else.

    I’m not going to login here until after NTPRS. If anyone would like to contact me directly for anything, I’m at lpiantag@kent.edu
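
    P.S. Here is a rough sketch in code of the fewer-levels point above. The six labels and cutoffs are invented just for illustration, not an actual scale:

        # Collapse a 0-100 score into one of six described levels.
        # The cutoffs and labels are made up for illustration only.
        def to_level(score):
            cutoffs = [(90, "Exceeding"), (80, "Meeting"), (70, "Approaching"),
                       (60, "Developing"), (50, "Emerging"), (0, "Beginning")]
            for floor, label in cutoffs:
                if score >= floor:
                    return label

        print(to_level(86), to_level(87))  # both print "Meeting": the 86 vs. 87
                                           # distinction carries no information

    The point is just that two scores a point apart collapse into the same described level, which is all I can honestly distinguish anyway.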

  4. Truth is, I do not have internet at my house. The result is more time with the family and my attempts at song-writing. I leave work where it is.

    “This rubrics/assessment thread is like trying to stop a freight train but I do feel honestly that we need a break from it as discussed here yesterday.”

    Yes. In order to stop a freight train we need time. I will be taking some personal time to reflect on assessments.

    “I don’t like it that our high IQ Fresno Bad Boy is expressing confusion here. He usually gets things, here in his first year of teaching, in like -5 seconds. So we need to get clear on this.”

    Well, I’m not sure when I expressed confusion. I’m learning like everyone else, and there is much information to take in. Thanks for the encouragement, though.

  5. I originally posted this in the 2nd Grade Assessment thread, but here it is again:

    “Certs is a breath mint!”

    “Certs is a candy mint!”

    “Stop! You’re both right!”

    I feel a little bit like we’re in that famous Certs commercial.

    Lance is correct that having fewer “gradations of correctness” actually produces greater accuracy in placement – especially when we are comparing to “standard grading scales”. Can a teacher truly distinguish 59 degrees of failure? Or 41 degrees of “success”? Furthermore, how many times do teachers give tests and quizzes with exactly 100 items? They give quizzes with 5, 10, or 20 questions and “convert” that to “percentages” or hundred-point scales. The margin for error in placing a student is as much as two letter grades. How is that equitable? (A short sketch of this conversion arithmetic appears at the end of this comment.)

    BTW, the 100-point scale did not become the norm until the widespread use of computers in grading. 100 is convenient for programmers; there are no sound pedagogical reasons for it. Schools adopted percentage grades because they fit the thinking and needs of computer programmers who were providing software to school districts. This grading scale has no basis in learning or education.

    The most common scales prior to the widespread use of computers were 3-6 points, and this seems to be the range of the most useful and accurate gradations. Obviously, the simplest categorization is “everyone,” but that tells you nothing. The next simplest is “pass/fail,” and that is used in universities. Three to six categories fit well with human thinking: we often group things in threes, and more recent cognitive science indicates that the human core memory repository is set up to handle four plus or minus one (i.e., 3-5) meaningful items in our conscious or working memory.

    So, I would agree with Lance that fewer categories are better, at least to a point. (BTW again, I base grades on a scale of 1 – 5.)

    However, we need criteria for placement in the categories, and that is where rubrics come in. We worked for a long time on jGR, and it allows us to give students a good idea of how well they are meeting these standards of interpersonal communication. And the rubrics need to be clear. This takes us back to the ideas of test validity and reliability. It has to be something the teacher can use consistently and get consistent results with, so it has to be sufficiently clear, but not so detailed that it becomes cumbersome and unusable. There are some pretty horrible rubrics out there, but there are some pretty good ones as well. I am just tossing out an idea, but it seems to me that 3-5 items is a good number for a rubric, if those 3-5 items are clearly formulated.
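
    To make the quiz-conversion arithmetic above concrete, here is a rough sketch. The letter-grade cutoffs are one common scheme I am assuming only for illustration; your district’s scale may differ:

        # Convert a short quiz score to a percentage and then to a letter grade.
        # The cutoffs below are assumed for illustration; they are not prescriptive.
        def quiz_to_percent(correct, items):
            return 100 * correct / items

        def letter(percent):
            scale = [(93, "A"), (90, "A-"), (87, "B+"), (83, "B"), (80, "B-"),
                     (77, "C+"), (73, "C"), (70, "C-"), (60, "D"), (0, "F")]
            for floor, grade in scale:
                if percent >= floor:
                    return grade

        for correct in range(5, 2, -1):
            pct = quiz_to_percent(correct, 5)
            print(f"{correct}/5 -> {pct:.0f}% -> {letter(pct)}")
        # 5/5 -> 100% -> A
        # 4/5 -> 80% -> B-  (one missed item skips past A-, B+, and B)
        # 3/5 -> 60% -> D

    On a five-item quiz the scale can only move in 20-point jumps, so a single missed item can swing the reported grade across roughly two letter grades. That is the margin-for-error problem above.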
