It's not as if no one ever noticed this before, but I really wish the History Channel would stop with the reality-show junk. I don't want them to go back to WWII all the time, but come on - there's no actual history on History during prime time.
Can't we have a "Vicarious Jobs Channel"?
Sunday 2/26
8/7c Ax Men: Fists of Fury
9/8c Ax Men: Wake-up Call
10/9c Full Metal Jousting: Death Sticks & a Coffin
11/10c Top Gear: Muscle Cars
Monday 2/27
8/7c Pawn Stars: Guns Blazing
8:30/7:30c Pawn Stars: Pawnocchio
9/8c American Pickers: Knuckleheads
10/9c Pawn Stars: James Gang Rides Again
10:30/9:30c Pawn Stars: Ring around a Rockne
11/10c American Pickers: Odd Fellas
Tuesday 2/28
8/7c Pawn Stars: Weird Science
8:30/7:30c Pawn Stars: Learning the Ropes
9/8c Top Gear: Dangerous Cars
10/9c Top Shot: Shotgun Showdown
11/10c Top Shot: Shotgun Showdown
Wednesday 2/29
8/7c American Restoration: Keep on Trucking
8:30/7:30c American Restoration: Tyler's Promotion
9/8c Only in America with Larry the Cable Guy: America After Dark
10/9c American Restoration: Cold War Crusin'
10:30/9:30c American Restoration: Close Shave
11/10c Only in America with Larry the Cable Guy: Larry's Favorite Stuff
Thursday 3/1
8/7c Swamp People: Divide to Conquer
9/8c Swamp People: Monster Marsh
10/9c Mudcats: Boiling Point
11/10c 10 Things You Didn't Know About: Abraham Lincoln
11:30/10:30c 10 Things You Didn't Know About: Benjamin Franklin
Friday 3/2
8/7c American Pickers: Knuckleheads
9/8c 101 Gadgets That Changed The World
11/10c Mudcats: Boiling Point
Saturday 3/3
8/7c Full Metal Jousting: The Ultimate Extreme Sport
9/8c Full Metal Jousting: Unhorsed
10/9c Full Metal Jousting: Death Sticks & a Coffin
11/10c Modern Marvels: World's Sharpest
Sunday, February 26, 2012
Fallacy of Relevance
Here is some material for an ANYQs exploration. You have to be careful with medical issues because kids misinterpret and will sometimes take this as a criticism of their lifestyle, parents, capitalism, or world-view. They will also assume that I'm out to bash Big Pharma (why they care is beyond me).
I start with the first article.
From the tone, it is obvious to someone that OMG OMG OMG OMG OMG
Fake drugs are increasingly being sold on the Internet in a global counterfeit medicines market that has doubled in the last five years to more than $75 million. The medicines, many of which are life-threatening, have even turned up in the legitimate supply chain and found their way into pharmacies, according to a review by Dr Graham Jackson and colleagues published in the March issue of the IJCP, the International Journal of Clinical Practice.
Is that a problem? For most kids, that scares the germs right out of them until you line that statement up against this one:
Accounting for 6.5 percent of the total market share, statin drugs are the most widely sold pharmaceutical drugs in history. To date, Forbes Magazine tells us that statins are earning drug companies $26 billion in annual sales. ... Pfizer spends over $3 billion each year to convince us that we need more and more drugs to be healthy.
And that's just for Lipitor, the cholesterol-lowering drug. So, if $26 billion is only 6.5% of the market, that means the whole market is worth about $400 billion.
Aside: Pfizer pushes their drugs so hard that it incurred a $2.7 billion fine for health care fraud (partly the largest criminal fine ever levied in the US, and partly for violations of civil law under the False Claims Act). Pfizer sales in that same year: $48 billion. Pfizer profits in that same year, after paying that fine: $5.7 billion. Much of that despite the fact that many other countries force Pfizer to charge far less for their products, but we digress ...
And then you find that the other, "much more acceptable", fake drugs have a fairly strong market as well:
The market for homeopathic and herbal remedies increased 17% from 2005-09 to reach $5.9 billion. As these once considered "alternative" remedies continue to transition into the mainstream, Mintel expects growth to continue at a steady rate, averaging 3.5% growth annually through 2015. This report explores the market for homeopathic and herbal remedies, looking at sales across all channels, as well as consumer usage habits and attitudes towards such remedies. Topics covered in this report include ...
The report seems interesting, but I did not want to pay to find out more details; the price was £2,464.53! "Add to Cart?" ... um, no.
That's the United States. Now multiply by 5 (maybe?) to get the global market: $400 billion for the US × 5 = $2 trillion.
$75 million in (possibly) fake drugs vs. $2 trillion, which means that fake-fakes are 0.00375% of the market. I'd interpret that as something to be improved, but pretty damned good. Acceptable fakes (homeopathics), on the other hand, are 0.295% of the market, nearly 80 times as big a problem.
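If you want to check my arithmetic, here's a quick back-of-the-envelope script; it uses only the figures quoted above, and the 5× global multiplier is, as noted, my own guess:

```python
# Back-of-the-envelope check of the percentages above, using the figures
# quoted in the post. The 5x US-to-global multiplier is a guess.
us_market = 26e9 / 0.065            # statins are $26B at 6.5% of the market -> ~$400B
global_market = us_market * 5       # ~ $2 trillion (rough guess)

counterfeit_pct = 75e6 / global_market * 100     # ~ 0.00375%
homeopathic_pct = 5.9e9 / global_market * 100    # ~ 0.295%

print(f"Global market:      ${global_market:,.0f}")
print(f"Counterfeit share:  {counterfeit_pct:.5f}%")
print(f"Homeopathic share:  {homeopathic_pct:.3f}%")
print(f"Ratio:              {homeopathic_pct / counterfeit_pct:.0f}x")
```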
Kinda makes that $75 million fade into insignificance, doesn't it?
Friday, February 24, 2012
NY's VAM is a Club that Gates Disapproves of.
Developing a systematic way to help teachers get better is the most powerful idea in education today. The surest way to weaken it is to twist it into a capricious exercise in public shaming. - Bill Gates in the NYT
It's even worse when the system in use is riddled with errors, has random fluctuations in scores that can make or break a teacher's reputation, doesn't have the support of those being evaluated, and makes tenuous associations between scores and those responsible for them.
How the scores a student receives on a meaningless test can be much of an indication of the worth of a teacher who didn't take the test, didn't have that kid in class except for part of the current year, and rarely has much control over that student and his personal and academic life, is a mystery to me and a source of much bemusement.
My state doesn't have VAM yet but it does publish the NECAP scores and tries to shame schools into improvement. Our difficulty up here lies in the fact that these tests are taken in the 7th and 8th grade and then in the beginning of the 11th grade. That's it. This year's juniors took the test a month into the year and I had never had them in class before - what kind of measurement system is that and who is being measured, really?
It's interesting that the public sees numbers such as "64% of the students are not proficient" and still votes for our budgets every year at town meeting.
A quick note for all of you big city folk used to 3.5 million-student city-wide districts and mayoral control: Vermont districts are generally a few small towns banding together, with schools ranging in size from 200 to 2000 students, heavily skewed towards the small end. Each of these districts has a school board for the district and one for each elementary school. At Town Meeting Day (yes, we still do that), each town votes on its school budget and town budget, elects school board members, and decides other weighty issues, including whether to pay $460 to the ambulance service.
Thursday, February 23, 2012
Meditation: Just Another Revolution/Fad
Labels:
Fallacy,
School Reform
I'll leave it to you to figure out which:
How Meditation Transformed a School
Just like that, POOOF! It's either SBG, or Peer Mediation, or Cooperative Learning, or moveable chairs, or a new building, or Literacy Across the Curriculum, or wellness day, or Parent-School Partnership, or unschooling, or pre-K for all, or free lunch,
Just five years ago, Visitacion Valley Middle School in San Francisco was notorious for high truancy rates and disciplinary and safety issues. Since then, the number of suspensions has been cut in half and the truancy rate has dropped 61%.
and now ... meditation.
If only it were so simple.
Schools should not be policing YouTube
Labels:
Bully,
Law,
School Policy
Huffington Post: "After two minors from Gainesville High School in Gainesville, Fla., posted a nearly 14-minute-long racist rant on YouTube, the girls are "no longer students at the school," WCJB-TV reports. Last week, eight police officers were brought to the campus in light of death threats the girls were receiving in response to their videos. The videos included comments like, "You can understand what we are saying, our accents, we use actual words. Black people do not."The school has no business trying to regulate what is said/posted on YouTube. To say that the girls are no longer at the school and then to say that the comments were not welcome certainly makes it sound as though they were expelled for their speech. Schools are doing too much of this over-reach into the private lives of their students.
"Gainesville High School principal David Shelnutt did not go into detail on the extent of the disciplinary action taken against the girls, but did tell WCJB that their comments were not welcome at the school. "There's no place for comments like that, that video here at GHS," Shelnutt told the station. "There's no place for that in the Alachua County Public School System, and my opinion, no place for that in society in general."Does he also expel a student who repeats what a candidate like Santorum says but in more extreme fashion, or one who repeats a Malcolm X rant? Perhaps a teacher who uses the word "nigger" in context during a class as part of a thoughtful discussion on race and Huckleberry Finn? (oh, wait ...)
Someone needs to be the bad example.
This all seems to be an extension of the anti-bullying laws that do much the same thing. Having said that, however, it must be noted that many students are under the mistaken impression that they are immune to response and that anything they say in the privacy of their bedrooms is "nothing" - forgetting they are saying it to a video camera and posting it for all to see.
The parents are not blameless. The girls are minors and should not have had total and unquestioned access to all social media. Goofiness is typical and could have been overlooked, but they just found out that being racist and insulting has consequences, and that being racist and insulting a second (or third) time becomes more than momentary brainlessness. I would imagine that the parents never once said anything approximating "I will occasionally check what you post; be nice."
It would have been easier than:
"While we can never take back the words and actions that these two children have said, we have to start to heal and forgive IMMEDIATELY. Stop the violent threats to our homes and our children, stop the anger, because this will solve absolutely nothing, and most importantly, look at yourself for change and love."Interesting that she feels that she can demand anything, that people need to forgive and forget IMMEDIATELY. Maybe the girls can at least be an object lesson for others.
My daughter has gone into a severe depression from what has happened and her remorse and sorrow is beyond description.
Yes, she has become depressed because she has lost friends and someone spoke back in a nasty fashion, but she should be more depressed by the fact that she cast herself into a pit that will take years to climb out of: peers, colleges, and future employers checking her Facebook and YouTube accounts and reacting accordingly. She will live this down, but not easily. A simple Google check will suffice because, while HuffPost and the TV station will not print her name, all of the kids at school will - especially those with an axe to grind. Her name will soon be linked to that video.
She wishes she could take it back, the girl said. “I’m not a racist person. I still don’t see someone and judge them because of skin color,” the girl said, but after the video, “no one is going to believe me anymore.”
And why should they? In the video, she is replying to comments about an earlier racist rant. This apology is invalid. This girl knew about and pushed as many buttons as she could and demonstrated, as clearly as anyone can, that she is racist. You know the rule: "Once is a fluke, twice is coincidence, but three times is a law of nature."
All teenagers should have a chance to go through a stupid phase and grow up to realize that they really don't agree with their stupid phase. The problem is that YouTube makes it permanent.
And then you have this from Britain's Daily Mail, of all places, about two OTHER white girls:
In this latest disturbing video, the girls start by saying white students 'turn black' as soon as they enter the school, claiming you 'catch the disease'. At one point they add: 'Guys, if you're watching this video now, and you have a weave, and you're black, please be offended - because we're making fun of you.' It ends with one of the girls saying: 'Don't post this on Facebook because all our friends are black.' They chuckle, flash peace signs and say 'peace and love'.
Too late.
New Larson Algebra Textbook.
My first reaction was "Too many pages" and "Too much nannyism". PseudoContext is festering all over the place. Bloat is a problem, too. The textbook industry has jumped the shark.
HMH sent out invitations to view their new Larson books for Algebra 1, Algebra 2, and Geometry, and immediately it strikes me: Larson, really? Did Larson write any of these textbooks? I doubt he even opened a single chapter document, never mind actually wrote one, but I digress ...
Let's take a look at this new and shiny, Common Core Standards-compliant Algebra II textbook that is so extensively resourced as to be exhaustive ... a curriculum ad nauseam. The images show the incredible list of stuff that comes with this thing.
Bloat: 1220 pages - really? Even if you exclude the 300 pages of appendices, answers, introductory material and glossary, you still have 900 pages of information and material. That's about 5 pages a day, six if you don't count all those exam days and assemblies and such. Seems excessive.
I like having the Pre-AP and Remediation guide because my Guidance Department can't seem to schedule students intelligently and everyone is under the impression that dumping every level of ability into the same room is appropriate.
The Spanish Assessment Book seems wrong for many reasons. First, Spanish is a common second language but it's by no means the only one. Second, the rest of the materials are in English so having a test in Spanish doesn't seem necessary:
Seems that any teacher who is dealing with these kids on a day to day basis would be able to handle that in English. If you're able to teach the material, then you're able to create a test that the kids can understand. Do we really need to translate "Factor" when none of the big measurements will (Regents, SAT, ACT, GRE, GED, AP, IB)? I remember taking languages - the instructions were always in French (or Latin, or German, or Japanese - I took lots of language courses) - you just deal with it and then suddenly, it's no longer a problem.
The multi-language Visual Glossary looks really cool. I'd get one of those. The same glossary in ten common languages: English, Spanish, Chinese, Vietnamese, Cambodian, Laotian, Arabic, Haitian Creole, Russian, and Portuguese. Quick aside: I once had two students who couldn't speak English - a Chinese girl and a Japanese girl - they could write to each other but couldn't understand the other's spoken language. The kanji for many things were identical, even though the words for the pictograms were different. Try it - it really brings together two groups who don't ordinarily get along and gives them a feeling of security in a sea of doubt.
384 pages for a note-taking guide? Couldn't they have simplified this a little? Anything greater than 2 pages is too much for a guide to taking notes. "Isn't this the antithesis of note-taking?" I asked myself, so I went and previewed it.
I ran into what I would term "educational nannyism", by which I mean materials that try to do too much, that attempt to write every letter, word, symbol and expression. The teacher would photocopy these pages for the students who fill in a few blanks and then save them to their binders, completely eliminating the necessary mental steps of assimilation and consolidation that comes from the note-taking process. It's very much like writing a lab report by filling in the blanks in a prewritten template.
The whole thing is done for the student except for a few blanks and an extraneous table at the end - it has no purpose in this problem and there isn't any room to put in the extra three problems which would make the rows necessary. Weird.
I figured that I'd find PseudoContext, but I was surprised at how much of it there was and how easy it was to find. I randomly entered page numbers in the preview tool but never had to go looking very far - mostly I went to the word problems (the "problem-solving" section) of whatever topic I landed on and was able to find at least one whopper in every set.
Seems a bit more accurate than most mountain climbers could accomplish and I really don't like the casual use of the mountainside in the picture. It's too much of a visual to be ignored and it only causes confusion in the viewer: is this the line required? Most students are going to want to put the altitude on the y-axis as the picture shows; why did they label it this way? Then, the instructions are more detailed and specific than they need to be. A simple list of the data and a question about the temp at the top would have sufficed. Also, way too many data points - they're bashing you over the head with 4° per 500 ft and way too much regularity in the points - how about base of the mountain, edge of the meadow and treeline? BTW, how many mountains go from sea level to 14000' like that? That picture is maybe 6000 feet. Get a better image.
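If I were rewriting it, I'd hand students three readings (base, meadow, treeline) and ask for the summit temperature. A minimal sketch of that version, with numbers I made up for illustration:

```python
# Sketch of the version I'd prefer: three readings, fit a line, extrapolate.
# All numbers below are invented for illustration, not from the textbook.
import numpy as np

altitude_ft = np.array([1200, 4500, 9000])   # base, edge of the meadow, treeline
temp_f      = np.array([68, 42, 6])          # hypothetical thermometer readings

slope, intercept = np.polyfit(altitude_ft, temp_f, 1)  # least-squares line
summit_ft = 12500

print(f"Lapse rate: {slope * 1000:.1f} F per 1000 ft")
print(f"Predicted summit temperature: {slope * summit_ft + intercept:.0f} F")
```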
Wow, I hate this question. (1) A modern manufacturing plant can make a few more than 60 printers a day. (2) The profit is in the ink, not the printer. Some printers are sold below cost for just that reason. (3) The labor estimates are way off. (4) You don't just turn off a laser printer line and turn on the inkjet line at random.
Get some better numbers, dammit. This math was developed to streamline manufacturing processes, so don't shy away from reality.
Stop it. Give the regression equation if you must (but 9 data points would be cool, too) but then just ask the question. This is not an equation that is particularly good for synthetic division and factoring (See it on Wolfram Alpha). There are so many questions that could have been asked with this but weren't. Shame, really.
I just laughed at that. "The cooling rate of beef stew is r = 0.054" - How in the hell is anyone supposed to know that and why would anyone in RealLife™ approach it that way? Wouldn't three time-temp data points have been better, more realistic?
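For the record, here's how you'd actually get a cooling rate: take a few time-temperature readings and back r out of Newton's law of cooling. The readings below are invented, but they land near the book's 0.054:

```python
# Newton's law of cooling: T(t) = T_room + (T0 - T_room) * exp(-r * t).
# Linearize with a log and fit the slope to estimate r.
# The stew temperatures below are invented for illustration.
import numpy as np

T_room = 70.0
t_min  = np.array([0, 10, 20])        # minutes
T_f    = np.array([200, 145, 113])    # hypothetical readings

slope, _ = np.polyfit(t_min, np.log(T_f - T_room), 1)
print(f"Estimated cooling rate r = {-slope:.3f} per minute")
```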
This is so back-asswards. The height of the dock above the ground is related to its width? It's usually 48", but differences in truck bed height will change that - the RealWorld™ makes this problem stupid and knowledge of the RealWorld™ gets in the way. The ramps are differently sloped - has the writer never actually seen a ramp before? This is a simple volume problem made unrecognizable to any RealWorld™ dweller. The only reason it's here is to somehow bring polynomials into a problem that doesn't actually need them.
Finally. A RealWorld™ problem. If only they had done three resistors in parallel ...
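A parallel-resistor follow-up would have been a one-liner, too; a quick sketch with made-up values:

```python
# Equivalent resistance of resistors in parallel: 1/R = 1/R1 + 1/R2 + 1/R3.
# The resistor values are just examples.
def parallel(*resistances):
    return 1 / sum(1 / r for r in resistances)

print(f"{parallel(100, 220, 470):.1f} ohms")   # ~ 60.0 ohms
```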
That's all I have on this book. It's still better than what I have to deal with, so I'd get it if the choice were mine. Replace the stupid questions, install better things using my website and various online resources.
Could work.
Monday, February 20, 2012
Difference is Night and Day
Ever notice how all the ads on TV at night are for medications for every obscure problem (including the risk of death, intestinal bloating, heart attack, lung failure, skin cancer, internal bleeding - all spoken in a cheery "don't mind me, just doing my job" voice) and all the ads during the day are for lawyers begging for clients to join their class action lawsuits for drugs that caused accidental death?
The things you notice on vacation. Truly bizarre.
On a side note, can any true science fiction fan watch that commercial for the sleep drug and not think of an alien parasite landing softly on the victim's back while she sleeps?
CT is Raising the Bar, etc.
Labels:
21st Century Schooling,
School Reform
The CT Association of Public School Superintendents (CAPSS) is raising the bar, overturning the paradigms, reinventing education, thinking outside the box, and generally saying any words and phrases that sound vaguely like a 21st Century Edubabble Buzzword. Here's their report, which will tell you all this.
This is the first big blast:
That does look impressive. Instead of simply providing access to education, they're going to educate them with high standards, with customized learning plans and many pathways. They're not using a patchwork of educational standards, no sirree. They're using coherent ones (probably Common Core because those are the fad of the month) and they plan on using technology to transform teaching.
Laudable goals, if you haven't been doing any of that for the past two decades. The problems with this little bit of legerdemain lie in the details.
I've always enjoyed the "authentic learning" thing, except when the people using it have no idea of what, exactly, it represents. It's not the opposite of what Dan Meyer called "pseudocontext" -- it doesn't really mean anything in particular. Certainly, it's not RealWorld learning. I know that seat time isn't a good measure of anything, but it's a little amusing to see that "direct measures" is now the standard. Didn't these used to be called tests and haven't we been doing this for a while now? Maybe not in CT public schools.
That bit about the "begins at different ages" being transformed to "begins at age three for all students" is interesting, too. I used to think that most kids started first grade at 6, or maybe 5 years, with kindergarten coming a year earlier, but this pre-K for all seems a bit of overkill. Did the people of CT agree to this?
This graphic cracked me up a bit: the grammatical error and the missing discipline. Maybe math is considered a science now. (Nope. Further on they have: "Base accountability on the four core disciplines - language arts, science, mathematics and social studies.") At least they got the phrase "globally competitive" into it. "Internationally benchmarked" would have looked so silly if it were the only 21st century edubabble buzzphrase.
Oh, yeah. "Education is available whenever and wherever the students are ready to learn." Gotta put in that high-tech online learning part, too.
It's a classic Superintendent's Report. "Full of Sound and Fury... signifying nothing."
Value-added measures don't measure up for Evaluation
Labels:
Testing
Just the basic premise that you can differentiate teachers based on VAM is flawed. If I have a group of students that improves a lot this year but a different group that doesn’t do as well next year, are we to assume that I’ve been slacking off and just need a goad, a little taste of the whip to perform better or should we assume that my teaching is so variable that I can be bad, then great, then merely good?
If my students improve from a Level Equivalency of grade 4 to grade 8 in one year (even though no test can honestly make that claim in any accurate way) and my colleague raises his students from grade 10.2 to 11.3, which of us has done a better job? I may have convinced them to work harder at the end of the year but not actually done much teaching.
If I have a class with “issues” and they only improve from 9.5 to 9.8, that might be a tremendous leap for them but it wouldn’t show that way to the outside observer.
I find it troubling that we have this blind trust in a standardized testing program.
What are they good for?
VA measures are useful to me in a classroom, provided I get them in a reasonable amount of time, disaggregated so I know detail instead of a vague "You Suck" or "You're Great", and the high-stakes are left off it.
Selling newspapers is not a good use.
Income Gap and Academic Gap Are Linked. Well, duh.
Labels:
Testing
Scott McLeod's Mind Dump: " The test score gap between the richest 10 percent and poorest 10 percent of students has grown by about 40 percent since the 1960s, according to a study by Stanford University sociologist Sean F. Reardon. That's twice the testing gap between blacks and whites, which shrunk significantly in all income levels, he said."
Which makes sense because the income gap between the richest 10 percent and poorest 10 percent has grown since then, too.
If only someone could figure out why. I have my thoughts, and thoughts, and thoughts, but there's no hard evidence for the mechanism. We just know that income correlates with scores really well - a direct correlation of 0.95.
Update: Dan Pink coincidentally chimes in here, too: How to Predict a Child's SAT Scores. Look at the parents' tax return.
Firing Teachers based on Test Scores.
Labels:
School Reform
Joanne Jacobs noted that Weak teachers fail in New Haven, but not many.
New Haven’s unionized teachers gave up job security for better pay and benefits, writes New York Times columnist Nicholas Kristof. "With a stronger evaluation system, tenure no longer mattered and weak teachers could be pushed out. Roughly half of a teacher’s evaluation would depend on the performance of his or her students — including on standardized tests and other measures of learning."
Really? Half of my evaluation would be based on how someone else's kids do on a single test? Are those scores any reasonable measurement of teacher quality? Not in my state. 32% of the students in RI, VT, NH, and ME passed the state test with a “proficient” rating. Only 3% passed as “Highly proficient”. Roughly a third passed … Either all of the math teachers in four states suck or this test is inappropriate ... yet Vermont has one of the best education systems in the country and consistently ranks in the top five.
Are we so sure that every state test is perfect? I can't remember a NY teacher who feels the Regents is perfect ... and it's better than the NECAP or the NSRE.
What of the normal variations in the students? My students' scores have been all over the map over the past 30 years or so. Did I improve as a teacher and then get worse, then better, then better, then really worse? Nope. I teach 9th-grade prealgebra and 11th-grade consumer math, AP Calculus, and everything in between. In years I taught AP Physics and AP Calculus, I was apparently a really good teacher. Other years, the bad years, my juniors hadn't started algebra II when the test occurred, and I would be considered "a terrible teacher".
What other "measures of learning" are being used? None. The administrators who couldn't evaluate their teachers before are, I guess, magically better at evaluating them now. I've seen many attempts at evaluation - most of the "new" methods are silly. The only one that works is a comprehensive, across-the-board look at classes, relationships with students, students' future successes. That rarely happens.
Teachers were protected by a transparent process, and by accountability for principals. But if outside evaluators agreed with administrators that a teacher was failing, the teacher would be out at the end of the school year. Last year, the school district pushed out 34 teachers, about 2 percent of the total in the district. The union not only didn’t object, but acknowledged that many of them didn’t really belong in the classroom. Fifty more teachers out of 1,800 in the district have been warned their teaching must improve or they’ll be fired.
What this really shows is that administrators weren't doing their jobs in the first place. How can I say that so definitively? Simple. Those 84 teachers were hired by administrators, had been there for years being evaluated by administrators, and were still teaching. Tenure does not protect an incompetent teacher - there is a three-year probationary period during which a teacher can be let go - but none of these were. Where was that evaluation system then?
Mayor John DeStefano Jr. of New Haven says that the breakthrough isn’t so much that poor teachers are being eased out, but that feedback is making everyone perform better — principals included. “Most everybody picked up their game in the district,” he said.
What utter blather. "Feedback is making everyone better." Feedback doesn't make people better - feedback pressures people to conform to the will of the commenter. If the commenter is a thirty-year math teacher, or even just a thirty-year teacher, then comments matter to me and feedback is useful. If the commenter is a thirty-year-old journalist who's never been in a classroom before, then I couldn't care less.
Here's a few predictions:
- No more "Literacy across the curriculum."
- Teachers will retreat into safe zones and teach only the prescribed methods in only the prescribed ways. There will be no innovations, no change to the "One True Path". Once a method has been shown to work, no one will deviate. Of course, no one will ever seriously stick their neck out to find a better method, so any educational fad promoted by the principal will suffice.
- Test prep will run rampant because that's the current principal's answer to everything. Topics like Conics will disappear.
- Schedule rigging will become the new Sport of Kings and those who can't ass-kiss their way to a better roster will be fired. Students will be "failed out" of math classes and shuffled into the classes of those lame-duck teachers who are on their way out anyway. No one will ever see an admin's toady taking the fall. The bottom 50% of the students will become poison pills in a fairly high stakes poker game.
- Teachers will not propose or take on courses that cater to low-level or poorly motivated students. I will only teach AP or Honors, and nothing below algebra II (because a kid who has made it that far is, by definition, a better student).
- Mentors and other experienced teachers will be less likely to help a newbie look good. Some will help students they don't actually have in class, but many will not.
Your role-model: Little Boots.
- Cut-throat competition will win out over cooperation when everyone realizes that it is possible to sabotage other teachers ... $5000 is a pretty powerful motivator. "What benefit do I get by doing _____?" will replace "What benefit will the students get if I do ____?"
- When you monetize the evaluation, you get worse results for education. If I know that I can get a $2,000 or $5,000 bonus, then I will do whatever it takes, fair or foul, to get that extra cash. Since the outside evaluators have to agree with the admin, the admin will become the focal point of all efforts. Anything the admin wants, good or bad or indecent, will become the goal. Teachers will never speak up in faculty meetings or try to improve the system because that would be criticism of the admin, and that would negatively impact your chances at a bonus.
- For now, all of the school energy will be focused on math and English because those are the only courses that matter, but soon this mentality will permeate through all disciplines, including art, history and languages. I'm not worried, though, because I'm a white male math teacher with a degree in engineering. It's the minorities who should worry.
Not now. Wait for it.
The teachers will get bloodthirsty enough.
Monday, February 13, 2012
Raising the Stakes Causes Inflation
Labels:
Testing
Joanne reports:
It's a simple thing: when you threaten someone's job, they will react defensively. Sometimes you like the result and sometimes you won't.
New York is looking into charges that credit recovery programs make it too easy for students to blow off schoolwork, earn credits for doing very little and pick up a diploma. Principals are evaluated based on graduation rates, providing an incentive to lower standards. (Students can earn P.E. credits online.) Read teachers’ comments on Gotham Schools. It’s not just a New York City thing. Teachers all over the country have been complaining about credit recovery.
I can't say I'm surprised to read that "They described how principals used credit recovery to boost their schools’ statistics and how students opted for it as an easier way to collect credits." What's a Highly Ineffective Principal to do?
"Principals are evaluated based on graduation rates, providing an incentive to lower standards."
I shouldn't even toss this into the HIPster line of posts because this is too depressing and I can't really laugh about it.
As you raise the stakes ...
Every time we raise the testing stakes, more cheating will result. While I expect teachers to be more honorable than bankers and hedge-fund operators, I assume they are subject to similar inducements and pressures. Just because teachers are not too big to fail doesn't make them immune from self-interest. Especially as no student suffers from getting a better score than he otherwise might have gotten. - Deborah Meier
Sunday, February 12, 2012
Raising standards will do what, again?
Labels:
Math Reform,
Testing
Here's the bad news:
This is a four-state test given to all juniors in RI, ME, NH, and Vermont. 6,000 in Vermont. The 4 point scale is
- Not proficient
- Nearly Proficient
- Proficient
- Proficient with distinction
That's right. 3% got the high score (2/3 correct or better, appr. 70%). (edited 2/13: 800) 180 kids out of 6,000, spread out across the state. Three-year totals: 600 out of 20,000. The passing score is roughly 50% and only a third of the kids got that.
Either every single math teacher in Vermont is screwing up and not doing their jobs or we have a test that isn't appropriate. (and after missing that number in the previous paragraph, maybe it's me.)
If it were just one or two schools, or a single county, you might have a point to make about teachers or demographics but not if the problem is statewide ... and the numbers for Rhode Island, New Hampshire, and Maine are exactly in line with Vermont's.
If I were to write that test with that kind of passing rate, I'd be excoriated for making it too difficult. Instead, our Commissioner says that we should make the test harder:
"Commissioner Vilaseca stated: "We are gathering more information about what Math courses all students are required to take, and will carefully consider whether it is time for Vermont to increase our graduation requirements in mathematics."
Uh, dude, they don't pass the test now. What exactly do you figure raising the cutoff will do?
Saturday, February 11, 2012
The best things in life are real
Labels:
Culture
Caught an ad for Nature Valley Granola Bars: "The best things in life are real things ... like Nature Valley Granola Bars." The scenes show two actors obviously walking in front of a green screen of nature trails.
Do they get the irony?
Wednesday, February 8, 2012
Habits of Highly Ineffective Principals ... Fair Weather Discipline
Labels:
Ineffectiveness
The true HIPster always has a weather-eye open for trouble ... and stays away.
Whenever there's a problem, he's nowhere to be found. Faculty who hear the ruckus come running to break it up, split the combatants, and escort the drama queens to the office. Since the teachers were there, they must be responsible for the problem. This is, of course, obvious.
When things are calm and peaceful ... there he is, smiling as if he did something to be proud of. "My presence here has made all the difference."
He's like the Anti-Fire Department ... Always Missing.
Thinking about exams.
Labels:
Testing
Real Teaching Means Real Learning wants to re-evaluate exam week, something that he feels he hasn't thought about deeply enough. After he bemoans the fact that students aren't collaborating on these exams, he throws up a couple of strawman arguments and takes some huge liberties with student motivation.
"If a kid can fail the exam and still pass, should he take the exam?" Apparently not, which seems like a crazy idea to me. It's been my experience that kids often don't put their knowledge all together in one coherent package until they sit for the exam. "I don't know what I am doing" becomes "Oh, that was easier than I thought. Now I get it." Additionally, how many kids want to only get a 60% passing grade if they could get a 80% ?
Further, he seems astonished that a kid with an average of 28% can't pass the class and is required to sit the exam anyway. I look at it this way: most kids who have a mark in the twenties are sabotaging themselves; they're rarely stupid. If they take the exam and score well, they have that to build from the next time they take the course - not everyone can do it in the same allotted time.
You can even make the case that scoring well on the exam is the ONLY criterion for passing a course -- if the exam is comprehensive. It takes fairly complete knowledge to bring everything together in one place, for one exam, during two hours.
Up pops strawman #2. "As most exams are multiple choice" the student can guess and get a 25%. It's been a while since I gave a MC exam. Tests and quizzes? Yes, there's almost always a MC section. Final exam? No.
RTMRL wants to take some of the kids out of exams and give them intensive one-on-one tutoring instead. He's ignoring the reality that they weren't working for the previous six months; expecting them to suddenly catch up on all of that during the time the rest are taking exams is a little foolish.
"Do we take these students, who are obviously struggling with the course, and test them again or do we teach them? Imagine the learning which could occur with 1-1 help in a 3 hour block with a weak student?" Um, probably not much.
Up pops strawman #3, "If the answer is the latter, then I suggest you be upfront with your stakeholders and put a sign outside your school saying “For an entire month, two weeks during each semester, your child will not learn at this school”.
He is pretending that the kids are losing two weeks at a time for exams. I have been working in education, public and private, for nearly 30 years, and not one school has ever taken that much time for exams. Not only that, the claim assumes that no learning can happen at exam time.
Enough of that. Are exams worth it? I feel they are, for all students.
It is a summary evaluation. For the entire course, we've broken things down and scaffolded them up. We've looked at pieces of the course in isolation. Projects have been deliberately limited in scope and the students are analyzing and committing to long-term memory the nuts and bolts as well as the grand themes.
The final exam is the time when you can make problems that bring the whole course together, when the student needs to demonstrate ability and knowledge of a whole vast subject, when the notebook is useless and memory is critical, when a single question requires a deep understanding of parts of 10 or more months of work.
Some kids crash and burn and still learn that lesson, and come out of the fire tested and ready to do better the next time. Some students realize how much they really do know (and surprise the hell out of themselves). The rest consolidate their knowledge and prove to themselves how much they understand -- or don't.
You can't fake that.
Friday, February 3, 2012
What about Value-Added?
A comment on a Joanne Jacobs article:
And here I thought I was supposed to be teaching math.
I don’t believe VA is anything on which to base bonus or termination. “Seem to correlate” does not mean “cause” … and that’s for the best measurements.
What of all the poor ones? “Some of the early VAM methods were highly unstable” ("Unstable" is a charitable term for "Any resemblance to a consistent reality is neither implied nor intended.")
It means that the results cannot be trusted for grading the student who took the test (that's stated plainly and explicitly in the administrator's notes), and it means that the tests are even worse at evaluating the teacher who didn't take them.
There are many issues with any kind of testing. What exactly are we supposed to be teaching and what results do we want out of it? What will we consider to be a success? Do the tests measure what we think they're measuring and does that result resemble the state of the student?
I am given a curriculum that I am to follow. The test is written for a different curriculum. Don't judge me based on something you tell me not to use.
This graphic to the right cleverly pretends that measuring a child's height is exactly analogous to measuring his grade level. Unfortunately, the accuracy possible in the one is not possible in the other. I would note with some amusement that the books he's standing on make even that height measurement into an exercise in systematic error.
Then, the tests claim to be able to discern between fractions of a grade level, but the random error in such a measurement is a full grade level or more. The test-to-test changes on one of the best-known measurement systems, the SAT, can be as high as 100 points. They don't report scores; they report a range (520-540). The scale is only 600 points wide and the variation is 100 points. Now imagine the variations on your typical state test.
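Those score bands come from the standard error of measurement. A rough illustration - the SD and reliability below are assumptions for the sake of the example, not published figures for any particular test:

```python
# Standard error of measurement: SEM = SD * sqrt(1 - reliability).
# SD and reliability here are assumed values for illustration only.
from math import sqrt

sd = 110            # assumed spread of scores on a 200-800 section
reliability = 0.90  # assumed test-retest reliability

sem = sd * sqrt(1 - reliability)
print(f"SEM ~ {sem:.0f} points; a 95% band is roughly +/- {2 * sem:.0f} points")
```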
States routinely tell the testing company to instruct the scorers that averages HAD to be in a certain range - any test scoring that ran counter to that pre-determined result was wrong. As Todd Farley describes it, accuracy is a fantasy.
It doesn't make sense to evaluate me based on a test given to a fifteen-year-old kid who has only had me for a short while, who has failed again and again, who has attendance "issues", who's strung out on something ("self-medicated"), using a test that pretends to accuracy but fails miserably at it and is rarely aligned to the same curriculum that I've been required to follow.