Testing in the United States is a sick, diseased system. It is a malignant tumor that must be excised if we are ever to use testing results to improve students, teachers, or schools.

Testing in the USA is NOT intended to help teachers or their students. It exists only to produce a number that can be used, or ignored, at the whim of the reader. And since most testing is done for evaluative purposes, it mostly provides numbers to punish people with.

As a teacher, I get absolutely no useful information from standardized testing.

None.

On our "Local Common Assessment," I get to know a RIT range, a corresponding percentile, and breakdowns in "Algebraic Thinking," "Real & Complex Number Systems," "Geometry," and "Statistics and Probability."

Then consider that one of the categories on our kids' test is Real & Complex Number Systems. Really? These are 9th graders in Algebra 1. Is the score range of 233-245 based on their less-than-complete knowledge of real numbers combined with no questions on complex numbers, or is that 75th percentile based on questions they could have no reasonable knowledge of?

Okay ... I'm ready to adjust my teaching for Algebra 1 ... What changes should I make?

I see none of the questions and none of the individual responses. I have no idea what kinds of things the test-makers considered to be "Algebraic Thinking," nor do I have any sense of what my students answered, understood, or didn't, except for the kids who told me they clicked at random just to be finished more quickly.

Okay ... I'm ready to adjust my teaching for Algebra 1 ... What changes are appropriate? Does the kid who scored "LO" really not understand, or is she just lazy?

Yeah, that's the breakdown measurement: LO, AV, HI. Useful? No.

And this is a Pre-Algebra class with some 9th and some 10th graders. I would hardly expect them to get anything other than LO. If they could, they wouldn't be in the class.

Okay ... I'm ready to adjust my teaching ... What changes to my pre-algebra curriculum are appropriate here?

But at least I got those few bits of data within a week, because it was a local assessment.

When it comes to SBAC and PARCC, the problems seem to be the same as for NECAP, and before that, the NSRE: too few questions, coupled with long wait times for the scores (test in October, scores in April), and very dodgy scoring of the constructed-response questions ...

and we're still not allowed to see the questions, see the scoring, see the individual results ...

And there was no way you could trust those scores, because of the manipulation of the raw score conversion tables for "continuity reasons."

Can't have a big improvement year to year, because reasons. The first year of every new test has to show results similar to the final year of the test we threw away, so year one of the NSRE initially had 58% passing but was re-scored so that only 30% passed.

If we're getting rid of a test because it isn't working appropriately, why do we insist that the new test's scores match up with the old test's scores?

And about those scores ... I have never understood how the entire public school populations of five New England states can show results in the way they did:

Highly Proficient: 3%

Proficient: 30%

Below Proficient: 40%

No Evidence of Proficiency: 27%

Really? 33% "passed" a test and you're looking at the teachers, not the test? Of all the kids in all the classrooms with all of the teachers (in VT, NH, ME, RI, and somewhere else that's escaping me right now), how is it possible that only 33% of the students passed a test?

At least the SAT is open ... maybe we should use it instead of paying Pearson far more for less information.

If you can so blithely manipulate scores so as to get a result that your statisticians declare appropriate, maybe the problem isn't in your teachers or your students ... your system needs to change.

If you can so blithely assume that the teachers are the only ones who are responsible for scores but shouldn't be allowed to see any of the test papers or any of the questions ... your system needs to change.

If you can so blithely assume that the students are always "participating fully" and that the results on this worthless and pointless (to them) test are meaningful, then your system definitely needs to change.