It's been a busy break, and I haven't been able to blog.
There has been time for reflection, which I'm hoping will result in many cogent thoughts and account for my month-long break from blogging.
For the record, I have been an advocate for both testing and high standards. I have published views in favor of testing in the past, so I hope these thoughts aren't dismissed as a mere confirmation of opinions I had before Tennessee's first round of TNReady testing.
The Goals
TNReady launched this year after the Tennessee state legislature elected not to go with PARCC testing, as the state had planned to do when it adopted the Common Core State Standards in 2009. Like the standards themselves, the test was meant to ensure that students were college and career ready and to provide an adequate measure of the new standards. (In 2013 and 2014, Tennessee had simply continued with the previous contract, which tested the earlier standards.)

I was a TNCore coach this summer, so I felt confident in my knowledge of how the ELA test would run. We would test writing at the 60% point of the term and literacy at the 90% mark. (In previous years, writing had been tested in February. On the block schedule, that meant my first-semester students had a full term of 11th-grade writing experience, while my second-semester students had little more than a month of prep.)
I was also eager to see testing move to the computer from the #2-penciled-in bubble sheets. Headaches with grading and late scores would disappear, I hoped.
Of course, I knew that the standards would be tested. That was a good thing.
Test Prep
I prepped students this year using the website that Measurement Incorporated had set up for student practice. I have always stressed teaching over test prep, so I went through only one practice test with students, as a way to get them used to the technology. I didn't use it as formative assessment, which the new technology makes possible.

Why? Teaching time is really important to me. My units (three per semester now) are carefully planned out. If I have extra time in a semester, I prefer a field trip, not a test-prep session. I guess that might be considered '#oldschool.'
The Actual Tests
President Obama has already weighed in on the issue of excessive testing. Unfortunately, his remarks at the end of October 2015 came long after these tests were in the pipeline. With a completely new test, I wanted to keep an open mind, but I found the testing time weighing down both students and teachers.

Here's how testing looked in ELA. After an initial scare about disruptive scheduling, covered in an earlier set of blogs, tests were scheduled for consecutive 90-minute blocks. (This fit well with the block schedule, but standard-period teachers will find the school schedule disrupted come testing time this winter.) The goal was to write an essay each day, based on a separate text. From what I could see, students were engaged the first day. All but one appeared to really examine the source text, highlight it, and compose their essays.
Eighty-five minutes were given for writing the first essay. Students couldn't log off if they finished early. All but one were finished in 45-55 minutes. One girl used the entire period to finish her composition--something that didn't surprise me. She had faced an earlier challenge: her laptop came unplugged before she was finished, and the computer shut down. We plugged it back in, logged in, and were grateful to find that her work had been saved on the server. She continued without a hitch.
On the second writing day, students returned to their stations, logged in, and were assigned a second reading and essay. (Over the two days, students wrote some combination of informative, argumentative, and narrative essays.)
The literacy test came later, on the Friday before we got out of school. We were allotted two full blocks (180 minutes) for this one, although 100 minutes would have been plenty. Again, students seemed engaged and hard at work. I felt that the technology prevented a lot of the filled-in-bubble "Christmas trees" we often saw on the pencil exams. Students had to interact with the test--and the text--more.
Still, this last test was the most frustrating for me. Partly that was my own fault. Because I had taken the initiative with the schedule for the first test, it was left to me to schedule around the teacher who shared my room during my planning period, as well as another teacher who used my room for testing. I ended up testing on the Friday before finals week.
Also, three hours was just too long for my students to take a literacy test, technology or not.
Reflections on TNReady Testing, Round 1
Technology
My enthusiasm for the technology--"Your tests will be graded between the time you click 'Submit' and the minute you get up from the chair," I had bragged to students--was apparently misplaced. I still haven't seen results from either of the first-semester tests.

The technology has certainly made scheduling easier, but it hasn't yet made testing more efficient. The times didn't really change from the bubbled-in tests to the online ones. They should. The goal should be to cut testing time by a third for next year, if not by half. This could be accomplished by:
- Cutting the writing tests down to one--these are college-readiness tests, yet I don't know of any college tests or exams that involve three-hour writing prompts.
- Reducing the number of literary prompts in the literacy test and adding more questions to each.
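For concreteness, here is the arithmetic behind that goal, sketched from the session lengths described in this post (two 90-minute writing blocks and a 180-minute literacy allotment); these are my school's scheduled times, not official state figures, and the proposed numbers are illustrative:

```python
# Session lengths in minutes, as described in this post (not official figures).
writing_sessions = [90, 90]   # two essay days, one 90-minute block each
literacy_session = 180        # two full blocks allotted for the literacy test

current_total = sum(writing_sessions) + literacy_session  # 360 minutes

# Hypothetical cuts: one writing session instead of two, and a literacy
# test closer to the 100 minutes that "would have been plenty."
proposed_total = 90 + 100  # 190 minutes

reduction = 1 - proposed_total / current_total
print(f"Current: {current_total} min; proposed: {proposed_total} min")
print(f"Reduction: {reduction:.0%}")  # roughly 47% -- close to the "if not half" goal
```

In other words, the two changes listed above would, by themselves, nearly halve total ELA testing time.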
Disconnect from other significant College/Career Tests
Is this really supporting college and career readiness? I never took a three-hour test in my undergraduate or graduate work. The LSAT and the GRE were more than three hours long, but my recollection is that they were divided into sections, as is the ACT, which would be a good benchmark for testing time in Tennessee. The ELA portion of the ACT, combining the reading and English sections, is 80 total minutes, divided into 45- and 35-minute subtests. Why would a state-given test take more than twice this amount of time? The total ACT takes 175 minutes, or roughly the amount of time given to just one portion of TNReady's ELA testing, not counting math, social studies, and science testing.

I asked my daughter about the written exams she had taken in college last semester. Most were one hour, although there was one four-part written test in an honors class for which students were given two hours. Yet all Tennessee students wrote multiple essays over a three-hour period, while only a small fraction of them will face such conditions in college or career.
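To put the comparison in one place, here is a rough tally of the times mentioned above. The ACT figures are its published section lengths; the TNReady figures are simply what was scheduled at my school this semester, so treat this as a back-of-the-envelope check rather than an authoritative accounting:

```python
# Times in minutes. ACT section lengths are the published figures;
# TNReady times are as scheduled at my school this semester.
act_english, act_reading = 45, 35
act_ela = act_english + act_reading               # 80 minutes
act_total = 175                                   # the entire ACT, all subjects

tnready_writing = 2 * 90                          # two 90-minute essay blocks
tnready_literacy = 180                            # two blocks allotted for literacy
tnready_ela = tnready_writing + tnready_literacy  # 360 minutes

print(f"ACT ELA: {act_ela} min; TNReady ELA: {tnready_ela} min")
print(f"TNReady ELA takes {tnready_ela / act_ela:.1f}x the ACT's ELA time")
# Even the full ACT (175 min) runs shorter than TNReady's literacy block alone.
```

By this count, a single TNReady ELA block runs about as long as the entire ACT.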
Political Environment
Support for the test was undercut from the start. Our district board passed a resolution that TNReady test results would not be applied to student grades, and an administrator at my school even repeated this several times in front of students, in the context of "The test doesn't matter."

My response, when students asked me, "Is this going to count?" was to get on my knees and beg: "It matters to me! Think of my TVAAS score!" Not the best motivation a teacher might call upon, but it was all we had this year.
As long as testing measures teachers and schools rather than students and families, it won't matter to the test-takers, and we might as well do away with the improvement charade altogether. Fixing that is a job community and state leaders haven't done, and one they show no signs of taking up.
Put Teachers back in Teaching AND Testing
There is a need to shift to local testing--particularly for the writing test. I don't know when the writing test results will be back, and they won't be used in my school (last year's results are printed out and stacked in the teachers' lounge; barely half have even been looked at). If statewide testing doesn't improve teaching at the local level, then it is indeed a waste of time.

An alternative would be to move to local testing and grading. Teachers would be required to submit batches of completed essays for review by their peers, to be graded in groups. Results would be posted to the districts. Administrators would sanction teachers who had not assigned the required written tests (for example, an ELA teacher who hadn't assigned argumentative essays). And in the grading process, teachers could identify common needs and launch school-wide initiatives to address particular subject deficiencies.
This approach would benefit local teachers and schools far better than carving out a block of time, submitting writing to faceless out-of-state graders, and hoping to use results to actually improve learning.
One caveat: this blog reflects only my experience and not the school as a whole. I'm sure our school's librarians could describe their frustration with having the media center occupied for testing from the final week of October all the way through to the end of the school term, December 20, but I will leave that for other bloggers.
These are my ideas after the first round of TNReady ELA exams. There is a lot of room for improvement. There is a great need to make testing more efficient and to hold students accountable for their performance. There are opportunities to return testing to the local level to benefit learners--but these would be possible only with a reversal of policies that have focused in recent years on sanctioning teachers through testing, not improving them.