Saturday, October 27, 2012

Evaluating Teachers Part Five

The MINNESOTA VIVA TEACHERS REPORT was released this week. It's generally pretty good except for its very glaring sidestep (or misstep) of the issues related to measuring student achievement. The report seems to assume that all measurement of student achievement is created equal. It talks about VAMs (Value-Added Measures) as if everybody knows and understands them, when in reality VAMs are relatively new to education and not widely understood. The report rightly excludes VAMs as inappropriate for decisions about teacher employment but allows them as a helpful tool to determine whether curriculum or teaching strategies have improved student achievement.

The problem with this is that value-added measures, as they currently exist, are really only useful for measuring a very limited scope of curriculum and teaching strategy. A value-added measure is a number that reflects how a student does on a test compared to a previous time taking the same or a similar test. For the most part, we only have tests in reading, math, and science. There's a lot more to K-12 education besides reading, science, and math. And there's more than one person responsible for an individual student's learning even in reading, science, and math.
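To make the VAM idea concrete, here's a toy sketch. The scores and teacher names are invented, and this is a far simpler model than any real VAM (which uses elaborate regression with covariates and statistical shrinkage): fit the trend between prior and current test scores, then call a teacher's "value added" the average amount their students beat or miss that trend.

```python
# Toy illustration of the idea behind value-added measures (VAMs).
# Real VAM models are far more elaborate; the data here are invented.

# (student, teacher, prior_score, current_score)
records = [
    ("s1", "A", 60, 72), ("s2", "A", 70, 78),
    ("s3", "B", 65, 66), ("s4", "B", 80, 79),
]

# Fit current = a + b * prior by ordinary least squares.
n = len(records)
xs = [r[2] for r in records]
ys = [r[3] for r in records]
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# A teacher's "value added" is the average residual of their students:
# how much better (or worse) they scored than the prior-score trend predicts.
vam = {}
for _, teacher, prior, current in records:
    vam.setdefault(teacher, []).append(current - (a + b * prior))
vam = {t: sum(res) / len(res) for t, res in vam.items()}
print(vam)
```

Notice what the sketch makes obvious: the whole calculation lives in two test scores per student. Nothing in it sees what anybody actually did between the two tests.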

The big missing piece is that VAMs don't look at what the teacher, or teachers, or school, or anybody else does between the times the student takes the test. That's the teaching part. The report does a fair job of explaining the many variables and limitations of measuring student performance, which is an obvious reason why such measures shouldn't be used to make decisions about teacher performance.

I know that I'm making things more difficult for those who want an easy way to measure teaching and learning. We tend to think it should be easy because all of us have made subjective judgments of teachers and teaching since we entered our first classroom. And we've frequently found plenty of other subjective reports to support our judgments. But individual subjective assessment, even a collection of individual subjective assessments, is not the same as professional, objective assessment of the art of teaching that is consistent across an entire school district or beyond.

A good place to start making this very complex situation more manageable would be to focus on formative assessment instead of summative assessment. If we do enough formative assessment and are careful about recording and communicating that assessment, summative assessment becomes unnecessary. We won't need standardized tests. Teachers have always done formative assessment, but only recently have we had the ability to record and communicate those observations, quiz results, and homework grades effectively. Getting a good score on the test at the end of the year, or even the end of a unit, is not the same as learning. With the tools we have available, we don't need to have students take standardized tests; we have the ability to record and communicate student learning as it occurs.
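The recording and communicating piece doesn't require anything exotic. Here's a minimal sketch (the field names, skills, and proficiency labels are all invented for illustration) of logging formative evidence as it happens and then reporting the most recent evidence for each skill instead of one summative score:

```python
# A minimal sketch of recording formative assessment over time so learning
# can be reported as it occurs, not by one end-of-year test. All names,
# skills, and labels here are invented for illustration.
from collections import defaultdict
from datetime import date

# Each entry: (date, skill, evidence_type, result)
log = defaultdict(list)

def record(student, when, skill, evidence, result):
    log[student].append((when, skill, evidence, result))

record("pat", date(2012, 10, 1), "fractions", "quiz", "developing")
record("pat", date(2012, 10, 15), "fractions", "homework", "proficient")
record("pat", date(2012, 10, 20), "measurement", "observation", "proficient")

def latest_by_skill(student):
    """Report the most recent evidence for each skill, giving a running
    picture of learning rather than a single end-of-year score."""
    latest = {}
    for when, skill, evidence, result in sorted(log[student]):
        latest[skill] = (result, evidence, when.isoformat())
    return latest

print(latest_by_skill("pat"))
```

The point of the sketch is the shape of the record, not the code: frequent, dated, skill-level evidence that any collaborator can read at any time.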

Saturday, April 7, 2012

Evaluating Teachers: Part Four - Assessment For Learning

(At the end of this post, I've added Parts One, Two, and Three in reverse order.)

It's been about 8 months since I wrote about new technologies that can change teacher evaluation. Since then, I've been in conversations with lots of people about the different technologies and ideas for implementing those technologies. (1) It's very clear that the tools exist to dramatically reform how teachers are evaluated; it's not so clear who is going to take responsibility for implementing the new tools. The tendency to do things the way they've always been done is powerful; change can be difficult.

One of the big problems with evaluating teachers is that not very many people know how to do it on the scale that's necessary in order to make it fair and replicable across large schools or districts. The current seniority system makes sense because of the value we've traditionally and justifiably placed on experience. It also makes sense because it avoids the necessity for a more complex method of evaluating teaching which wasn't really practical until the advent of current technologies for recording and sharing information.

The lack of skill and experience necessary for thorough teacher evaluation is complicated by the wide differences in motivations for doing the evaluations. Everybody claims the best interest of students and their achievement; it's the methods for getting the increased student achievement that distinguish the competing camps.

Increasing collaboration between teachers is key to the approach described by JAY ANDERSON in Friday's Star Tribune commentary piece. Anderson worries that ending seniority protections will cause teaching to become a competition to keep a job and reduce the collaboration that produces good learning environments. My own experience confirms Anderson's view. Teaching is a collaboration not only between a teacher, students and their parents, but also between the many varied professional roles in today's schools.

Recording teacher planning and teacher performance with video, audio, and word processing will enable comparisons and evaluations that heretofore have not been possible. Commentary by peers on the planning and performance, along with observation by administrators, will empower the improved teaching that is the goal of the collaboration Anderson describes. Commentary by parents and students could also be beneficially included.

Linking student achievement to the decision-making and action process that most teachers have always used can now also be part of any new teacher evaluation system. Breakthroughs will come when we start measuring what happens after student learning is measured. The real skill in teaching is determining what to do with the information a teacher gets when they assess student learning. How do the results of the unit quiz inform the next unit's instruction?

Once-a-year standardized tests don't work for assessment that is used to improve learning. They don't work for lots of things, because learning isn't standardized and neither are students. The most useful kind of measurement of student learning is done frequently and is created by teachers with input from students and the teacher's collaborators. Some content is standard, but finding the best way for individual students to learn that content is what teaching is about. Measurement of student achievement needs to be incremental and flexible enough to be used by all the different teaching professionals who contribute to the collaborative learning environment.

And measurement needs to be timely. Announcing the results of standardized tests given in March at the state fair in August, or after students have moved to another classroom in September, doesn't do much for improving teaching or learning. The good news is that we don't need to keep doing it like that.
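As a concrete, if oversimplified, example of quiz results informing the next unit (the student names and the cut score are invented assumptions), the quickest use of a unit quiz is to sort students into a reteach group and an extension group before the class moves on:

```python
# Hedged sketch: using unit-quiz results to plan the next unit.
# Names and the mastery threshold are invented for illustration.

quiz_scores = {"ana": 9, "ben": 4, "cy": 7, "dee": 5}
MASTERY_CUT = 6  # out of 10; an assumed threshold for this sketch

# Students below the cut get retaught; the rest get extension work.
reteach = sorted(s for s, score in quiz_scores.items() if score < MASTERY_CUT)
extend = sorted(s for s, score in quiz_scores.items() if score >= MASTERY_CUT)

print("reteach group:", reteach)
print("extension group:", extend)
```

That decision has to happen within days of the quiz to be useful, which is exactly why results announced months later can't play this role.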

(1) One of those conversations led to my current work with Naiku. The views expressed here are mine and do not represent the views of Naiku which are available on the Naiku blog.

Evaluating Teachers Part Three
 (from July, 2011)

Well, it's pretty clear that the legislators over in St. Paul have not been reading my blog. As I said in my previous two posts, using students' scores on standardized tests is an ineffective way to evaluate teachers. There are links in both of those posts to well-articulated reasons why test scores are not a good idea. To be fair, according to the Star Tribune report today, the legislators have not specified that test scores must be used to determine student academic growth: "School districts will also have to start new periodic evaluations of teachers based on a loose set of guidelines. Thirty-five percent of that evaluation must be based on student academic growth. If districts and unions cannot agree to an evaluation plan, they must use one outlined by the education commissioner."

I'll hold out a little bit of hope that at least one or two districts in the state will attempt to determine student academic growth by using something other than standardized test scores. But even if a couple of districts manage to use authentic assessment methods, most districts will rely on some standardized test score. While ineffective and expensive, it's easy and accepted. What's likely to happen is that we'll end up with a hodgepodge of ways to determine student academic growth and a hodgepodge of ways of applying that information to teacher pay schemes. I think we're well on our way to making the current mess even messier.

You know, of course, that I'm a Moodle advocate. It's a tool that can be used for most of the many aspects of teaching and learning. Most significantly, for this discussion, Moodle provides the means to assess student academic growth in a whole bunch of ways that can be about as detailed as you want to make them; it also provides a real way to assess or evaluate the work of teachers while actually making authentic correlations of the teacher work to student academic growth. The final outcome could even be enumerated, if necessary, which is what is so attractive to the folks pushing the idea of using test scores.

One of the big reasons, I think, that the currently accepted ways of measuring academic growth are going to be pitiful (in addition to the reasons listed in the articles I've linked to in my two most recent posts) is that teaching and learning are changing, and changing very rapidly, and the standardized-testing method doesn't have a prayer of keeping pace.

Take, for example, Grovo, one of the new companies that will be a force in changing teaching and learning. Grovo isn't in schools yet, and it's 'only' teaching new high-tech types of things so far. But it's a method of student-centered, 24/7 learning that students of all kinds will be looking for soon. Well, students want Grovo lessons now. Grovo, so far, doesn't have a way to report student work. I heard a rumor that they might be working on some kind of way to do that; I would recommend using Moodle. Even without built-in reporting tools, Grovo and Grovo-type learning environments could easily be ported to electronic portfolios, which is the way academic growth will eventually be measured. Yes, portfolios are more complicated than a raw score/percentile score/growth score report, but then teaching and learning is way more complicated than a raw score/percentile score/growth score, isn't it?

And then we have Sophia, my local favorite to make a real dent in how teaching and learning happens in the next months and years. Sophia has actually taken a step toward assessing and evaluating the work of teachers. The posted lessons (learning packets in Sophia speak) are rated by people who already have credentials or experience in the topic area. Sophia has taken the all important step in teaching and learning of including a discussion board and grouping tool. (Full disclosure: I know they're thinking about ways to report out student work to portfolio tools because they picked my brain for a few hours re: Moodle etc. The Sophia folks are very talented, creative, and brave, and I don't think they'll mind me saying that they're open to any ideas you might have, too.)

Socrative is small and new and way out there in New York City by Grovo, but this tool has legs, IMHO. They've leapfrogged a lot of the people trying to do something new in education by going right to mobile devices of any kind. Because the Socrative tool is limited in the way that Twitter is limited (it's simple and small), it also has the flexibility to be adapted to all kinds of teaching and learning. Student and teacher work is immediately portable and quantifiable. The issue with Socrative is getting admin people, and teachers too, up to speed on how to actually use all of that raw, real data. Tangentially, I think there will also be a learning curve in how to use the tool for optimum pedagogical effect, but it should be a quick curve.
 is already big and not so different from the way we've always done things in the past. It's really just a big online folder full of reproducible lessons: a blackline-master lesson book on the web. The good news is that there's hardly anything a teacher needs to learn about using it that's different from the way things were done in the 1970s, and that's the bad news, too. It's an example of how to use technology to keep doing things the same old way. But it still has enough variety to really mess up a plan to use standardized tests.

I'm trying to think of the right way to tie up this post and say that standardized tests will be obsolete before the laser-printed bills over in St. Paul cool down to room temp. They were obsolete even before that.

July 25, 2011
A really cool thing about blogs is that you can edit them. This morning I received a Tweet (from Knewton) with a link which said it was about their online learning platform. Since I'd just done a brief review of some eLearning platform type tools, I was curious to learn more about what Knewton was up to. It turns out that the article under the link is actually an interview with George Siemens about how new technologies, like Knewton and the ones I mentioned above, can change teacher evaluation and several other aspects of education, as well. That's what I was getting at in these last three posts.

I've admired and commented about George's work before. His thorough and articulate analysis of the possibilities of eLearning is most definitely worth a read.

Evaluating Teachers Part Two

As I said in my previous post, using students' scores on standardized tests is an ineffective way to evaluate teachers. If you want more on why it's ineffective, try Joanne Barkan's op-ed in TruthOut. She also does a great job of explaining a few of the other big challenges facing public education these days.

I argued in that last post that evaluating the work of both teachers and students will be easier when that work is done electronically, or at least recorded electronically, most likely on the web using some kind of cloud implementation. If you want to see an example of what that might look like, check out either of the Bring Your Own Technology Webinars done by Classlink about its LaunchPad service. Cloud computing will enable school districts to get out of the business of supplying computers for learning, a business at which most school districts are very bad, and it will enable students and teachers to use the tools the rest of the planet is already using to communicate and create.

Moodleshare, where I posted the teaching and learning unit on Bird Observation, is one of the many new repositories of learning content that can be managed via cloud computing tools. The fact that learning content will soon be mostly in the cloud, or at least off the shelf, is fostering a whole host of various ways to create, deliver, and manage teaching and learning content.

I like Moodleshare for lots of reasons beyond the fact that they paid me to be part of the ARRA grant to produce Moodle units. Other reasons would be: they're right here in Minnesota, they're some really smart people doing the work there, and they focus on Moodle, the teaching and learning tool I've been using for the last four years. Jon Fila, one of the masterminds of Moodleshare, recently reported that "Between Late-August 2010 and Mid-June 2011, there were 180,606 page views from 55,662 unique visitors. These visitors were from 179 different countries/territories. 39,000 of which were from the United States. Of the U.S. visits, 6,255 of them were from Minnesota. Two of the top three courses accessed on MoodleShare were from the grant uploads." It would seem that I'm not alone in liking Moodleshare.

Now, you might be saying, "Yeah, but the units on Moodleshare will require lots of time and work by teachers and students and people to do the evaluating of the teaching and learning before we know if they're any good." Aha, you've just hit on one of the best-kept secrets about education: it takes a lot of time and work, and it's complicated, but it's all very doable, especially with the corps of teachers we've already got in classrooms. Some of it is rocket science; some of it is just plain science. A lot of it is about reading and writing. And, in case you haven't seen an iPad, or T-Mobile, or Kindle commercial lately, reading and writing is now being done electronically on a whole bunch of different kinds of devices that access the web or the cloud.

The value-added measures (VAMs) being touted everywhere these days are certainly a small incremental improvement over the old ways of 'measuring' teaching and learning. But they're still just re-sorting the same numbers that were generated in the same way they've always been. Defining and evaluating teaching and learning in this 21st century can't be done by simply downloading sets of numbers generated by students filling in bubbles on either screens or pieces of paper. That will take us down a path to about the same place NCLB has gotten us; we'll still just be looking at the final score as it's printed in the next day's paper. In order to actually see the 'game' of teaching and learning, or better yet participate, we'll need to use the literacy and measuring tools that are in step with the world in which we live.
Some of the new organizations with great ideas that I think can make a difference in teaching and learning and evaluating teaching and learning are:


The next post will be a more in depth look at how those tools will make a difference.

And, I'm always open to new suggestions...

Evaluating Teachers Part One

I recently had a lesson plan that I'd created evaluated, so I was very interested in the announcement this last week that Maryland will use a qualitative assessment of lesson plans in its overall evaluation of teachers. Principals will be doing the assessing of lesson plans along with their assessment of the classroom environment and "other factors that the local school system can determine." Those 'other factors' are intriguing, and I'd like to know more about what that means. I'm betting that Baltimore teachers will want to know what those 'other factors' are, too.

The headlines, though, are not about the lesson plan assessment. The headline writers are excited that student test scores will also be included in the overall evaluation. The article in the Sun mentions that 75% of teachers teach things that aren't measured by the standardized tests, which is going to be a huge problem. That very obvious problem, along with the fact that standardized tests are lousy tools for measuring student learning, makes their effectiveness at measuring teaching, a very different thing than learning, doubtful at best.

I don't think it needs to be that hard. The process that was used to assess my lesson plan could also be used to assess student learning. My lesson plan, for a whole unit on science and writing for upper elementary students, will even be available for you to assess, too. It will be posted later this week on Moodleshare. I created this unit as part of the District 287 Ed Tech ARRA grant (don't try to view this overview if you're on a Minneapolis Public School connection that still blocks YouTube). When this unit is taught, the student work can also be assessed. Other teachers can use this unit, too. They can comment and make additions and modifications after they've tried it with their students and had their principals assess it. It will be possible for anyone to also assess the student work (with appropriate permissions, of course). I'm sure that the unit will get better the more it's used and shared.

Jon Fila, the brilliant architect and administrator of this project, did what good administrators are supposed to do. He pushed me to stretch my practice, to go beyond what I'd done before. I've used Moodle for four years in a blended elementary classroom setting, but I've not had much experience with an online-only elementary course. Jon pointed out that since this course was designed to be used online only, I needed to add and change some things to accommodate the fact that I wouldn't be present in the same room with my students every day to check for understanding and clarify expectations like I'm used to doing. I'm also looking forward to comments from other teachers who might use all or parts of this unit with their students.

Online and/or blended learning (using online tools in an F2F classroom) enables the kind of assessment that's being wished for in the Maryland plan, but that assessment is not likely to happen if principals do what they've sort of always done: flip through spiral lesson plan books on desks, look at posters and charts on the wall, and listen and watch for 30-50 minutes, maybe, from some uncomfortable spot in the room with clipboards and pens in hand. And then, maybe, catch the teacher with a note about the 'observation.' That's what passes as assessment by a principal in most of today's classrooms. It's not really surprising that the ideal of this spiral-bound plan book, poster, and clipboard method never really gets done thoroughly, certainly not consistently across buildings and districts. It's not effective in the best of circumstances, and a real waste of lots of people's time in most circumstances.

Real qualitative assessment and collaboration between administrators, teachers, and students can happen a lot more easily if current tools are used, but that's going to require a sea change that will take longer than the hype being hoped for in Baltimore. Maybe the Maryland people are thinking about Moodle as one of the 'other factors.' We can hope.