She described the computer adaptive questions and how the program would be able to deliver a different next question based on whether the student got the previous one right. I'm good with that concept, actually. With a suitable scoring system (like the one for Olympic divers), it should work out quite well.
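For what it's worth, the two ideas combine naturally. Here's a minimal sketch of how that might look; every name and rule below is my own assumption, not anything the presenter specified (actual Olympic diving uses seven judges and drops the two highest and two lowest scores; I've simplified to dropping one of each):

```python
def diving_style_score(judge_scores, difficulty):
    """Drop the highest and lowest scores, sum the rest,
    then weight by the question's degree of difficulty.
    (Simplified from the real diving rules, which drop two of each.)"""
    trimmed = sorted(judge_scores)[1:-1]
    return sum(trimmed) * difficulty

def next_difficulty(current, was_correct, step=1, lo=1, hi=10):
    """Adaptive rule: step the difficulty up after a correct
    answer, down after a miss, clamped to [lo, hi]."""
    if was_correct:
        return min(hi, current + step)
    return max(lo, current - step)
```

So a student who answers correctly gets a harder (and more valuable) next question, and a partial-credit response can still be scored consistently across raters.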
Then we got to the "performance tasks," which were much more extensive, taking two hours each for high school students to complete. Students will respond with typed text, handwriting (on a tablet - the goal is for 25% of tests to be taken on tablets), voice recorded over documents, video, and anything else the test creators could dream up.
Here's the kicker.
The math framework and examples will be ready ... in a few months. The digital clearinghouse ... maybe by June 2013. The funding for technology ... "we're working on that". The assessment engine ... that'll be ready soon. The suitable scoring system I mentioned earlier ... "I'm not sure exactly how that'll work".
I pointed out that the questions they were displaying were problematic, but her response was to remind me that the exemplars had been posted for some time and that teachers had been able to make comments. Oh, my bad.
Not inspiring confidence here.
She put up a reading example about a science topic that was similarly flawed - if you knew the science already, you'd have a huge advantage and your "reading score" would be much higher. Fair enough, you might say, but the elementary teachers in the auditorium didn't know most of the words either. "Turbidity", for example.
"8th grade - they'll guess and check." I'm not sure how that's supposed to measure algebra, though.
I love education.