Wednesday, May 14, 2014

Cutting down feedback cycles

I want to build independence in my students.  I want students to be self-directed, self-motivated learners.  I want to give students freedom.  I took most of this away from my Geometry students over the past two units to see what would happen if I micromanaged their progress and refused to let them move on until I personally verified that their homework was completed correctly.  The idea was to shorten the feedback cycle for students doing practice problems as much as possible.

Background: as a department, we historically gave our students a complete solution key to the homework problems so they could check their work as they went.  We took time to talk through strategies for using the keys as checks, not crutches, and why copying the key was a waste of time for their final grades.  In the past year, we all started cutting back to only the odd solutions because students were becoming too dependent on the keys.  This quarter, I went even further and pulled the worked solutions entirely, leaving only the odd answers, so students could check themselves as right or wrong and seek help for wrong answers in class.  The problem was that many never bothered to check answers -- and who could blame them when we grade on completion?

So my next step was to start checking random problems from each assignment for accuracy.  My student assistant took on the brunt of the work and skimmed through PDFs for a couple of answers per assignment.  The results were pretty sad -- many students got the odd answers wrong along with the evens.  In fact, the homework scores were pretty good predictors of quiz and test results, so we were finding the kids who would struggle based on their homework quality.  The problem was that by the time we knew this, it was too late to do much for that student.  On the test we took before I moved to the new system, 48% of my 1st block students were proficient (above 80% on the test) and only 18% of my 4th block students were.  That is not far from some historical proficiencies, but the 4th block class is still the worst ever recorded in the department.  Extra conveniently, we took this test just in time for me to get it in the gradebook for parent conferences.  The point is that I desperately had to do something different because the wheels were falling off.

2013-14 Proficiencies on Algebra Expressions test (traditional flipped):
Other teachers: 46%, 22%, 22% | My classes: 48% (1st block), 18% (4th block)

To change things up, I created worksheets with 6-8 problems each, with each page devoted to a single type of problem.  I made videos that explained how to do 2 of the problems per page, but the videos were optional -- if students could already solve the problems on their own, they didn't have to watch.  After a student completed a page of problems, I came over to their desk and checked it off in my gradebook.  Students couldn't move more than a page ahead without getting checked off, and if the work was wrong, they had to redo it (if a student was repeatedly struggling, I would take extra time to walk them through it).  Before each quiz, students took a practice quiz of similar style and length to the actual quiz.  This let them do a final check and still get group and teacher help (the primary benefit of a group quiz) before jumping into the first graded assessment on the material (an individual quiz).  After the quiz, they repeated this cycle through four major sections of four worksheets each before the unit test, which all students took on the same day.

The logistics were chaos.  I was running around class like a chicken with its head cut off trying to get to everyone, and I don't think I would have survived one of my two classes if I hadn't reassigned the student assistant who used to grade homework to be an in-class checker and tutor.  But all I cared about were results -- if the craziness made kids learn, I didn't really care about much else.  The good news: I hit 78% with my stronger class, and more surprisingly, I made it to 69% with my struggling class.

2013-14 Proficiencies on Algebra Equations test (checked problems, quiz when ready):
Other teachers: 83%, 44%, 28% | My classes: 78%, 76%
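
Out of curiosity, here's a rough back-of-the-envelope check on the jump in my stronger class (48% proficient on the old system, 78% on the new).  The class sizes in the sketch below are made-up placeholders, and the two tests cover different material, so treat this as the shape of a comparison I'd like to run properly someday, not proof that the new system caused the gain.

```python
from math import sqrt

# Hypothetical class sizes -- placeholders only, not actual enrollment numbers.
n_old, n_new = 25, 25
p_old, p_new = 0.48, 0.78   # proficiency rate on the old system vs. the new one

# Pooled two-proportion z-test for the difference in proficiency rates
p_pool = (p_old * n_old + p_new * n_new) / (n_old + n_new)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_old + 1 / n_new))
z = (p_new - p_old) / se

print(f"difference = {p_new - p_old:.0%}, z = {z:.2f}")
# With these assumed sizes, z comes out around 2.2 -- just past the usual 1.96
# cutoff, which is encouraging but nowhere near a controlled experiment.
```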

Before seeing the test data, I knew some things were going right based on the change in the conversations I overheard from students.  Students would ask very specific questions about how to do parts of the problem instead of the time-tested "I don't get it".  They would yell at each other for doing problems wrong and actively point out errors in each other's logic.  They would get mad at me for trying to run away from them and not stay long enough to help (and still do).  The class was still crazy and energetic, but there was a sense of purpose in the air.

Going forward, I knew I had to adjust the logistics to keep things a little more sane.  Initially, there were papers everywhere.  Halfway through the first new unit, I started making stapled packets for each of the parts that kids could write on and keep.  Now I've adjusted the system to use a stamp.  Every correctly completed page of student work gets a "Mr. Pethan approved" stamp, which I find incredibly efficient and students find amusing (I just have to make sure I don't leave it lying around, or I find people stamped with "my" approval).  The "click" of the stamp is also a satisfying sound after solving a triangle the wrong way for 20 minutes and finally getting it right.  When students are ready to quiz, I take their stamped packet and give them a quiz to take on the side of the room.  I can record and grade everything pretty quickly.

A few additional things to note: I would not completely call this approach flipped mastery, since I do not allow retakes on any assessments.  I just don't let students take assessments until they demonstrate understanding on their practice problems, so I no longer see many failures.  In fact, only one student in either class failed this test, mainly because he didn't finish some of the final sections before the test and I didn't realize how far off he was before throwing him into it.  To help students stay on track, I let them come in at lunch or before / after school; lunch is by far the most popular option.  When students get more than a day behind, they get to send an email to me and their parents to let everyone know where they are and their plan for getting caught up.  This works both as prevention against falling behind and as accountability once a student is behind.  When needed, I can also require that students come to my room at lunch for longer-term catch-up and help.

With only one unit done and tested, a lot of this is still up in the air.  That said, I have some early quantitative and qualitative validation that things are working and I've smoothed out some of the kinks in the logistics.  I will keep posting on how things progress through the rest of May and what I plan to do with this heading into next year.




The Lean Startup

I'm obsessed with audiobooks.  Before them, I had probably read 20 books in my life outside of school.  With them, I crank through two pretty decent-sized books each month on a variety of topics, mostly psychology, entrepreneurship, faith, and education.  Thanks to Audible, I even read the entire Bible in a few months last year.  Most of my reading happens on my drive to and from work (20 minutes each way at 1.5x speed is an hour of reading per day!).  Part of each summary is the key themes I took from the book.  The rest is how the book has reshaped or reinforced my thinking.

The Lean Startup (by Eric Ries) redefines the metric of success for startup businesses.  Traditionally, we might look at revenue or users to judge early success, but Ries calls these "vanity metrics": they are more useful as bragging tools than as proof that you understand your customer and their wants and needs.  Instead, he recommends using "validated learning" as the key metric.  Validated learning comes from taking an assumption that is key to the success of the business, running an experiment with minimal cost and time, and demonstrating whether or not the assumption is actually true.  One concrete example comes from the founder of the online shoe store Zappos.com: the key assumption was that people would be willing to buy shoes online.  To validate this assumption, the founder created a small website that looked like a legitimate shoe retailer and promoted it online.  When people actually ordered from the site, he drove down to a nearby department store, bought the shoes at retail price, and shipped them to his customer.  Every transaction lost money and was time-consuming, but through this process he was able to figure out exactly what people wanted from an online shoe retailer without the financial risk of investing in a warehouse of shoes.  If the website had completely failed, he would have moved on to other ideas with no major loss other than a few weeks of time and a few bucks.  Instead, he bought himself lots of time to tweak the website and run additional mini experiments that tested different designs and policies to see how they affected sales and customer satisfaction.  When he finally did commit to having a warehouse, suppliers, and employees, he already had a lot of the validated learning done and could put his investment money into the right areas to quickly grow the business.

Taking a step back, I want to clarify that a startup is not a new business using a proven model, such as a pizza restaurant in a new town.  It is an organization creating new products or services for a new customer.  Many startups begin with hunches around a pain point or problem, but they don't know exactly what they are going to make or who is going to buy it.  This is why advance planning and business plan development are a massive waste of time for an early startup -- they have no idea what they need to do yet.  Until they have validated that a certain customer segment wants to buy their product -- even taking pre-orders for these half-fake products -- they should not waste time executing on details that do not test the core assumptions of the business.  This is hard for designers and developers with a product vision, which is why one of the huge themes of the book is discipline in business practices.

As an educator, I want to spend more time challenging the core assumptions that our school and curriculum are based on, using experiments across different classrooms and with different teachers.  True split-testing experiments are difficult in a single classroom with one teacher, but there might be creative ways to move kids between teachers in the same block or give different tasks to half of a class in a more flexible environment.  Below are a few assumptions that our team and I are already starting to question but that need better testing methodologies to fully understand:

  • We should give students the solution key to self-monitor their progress: my experience tells me that most students are poor self-monitors, due to a mix of not understanding the differences between the key's answers and their own, not recognizing when a "small mistake" actually comes from a major misconception, and not caring whether they are doing the problem right.
  • The most efficient way to group material is by similar topic: As the math curriculum developed over the years, we found efficient ways to compact the material.  We put all of the factoring into a factoring unit and teach 4 methods for factoring different types of problems.  When I rebuilt my Stats class, I organized content around the types of problems it could help you solve instead of the traditional syllabus order, so real-world projects could motivate and reinforce the content work.
  • Students need two quarters to complete a math course: This seems like an obviously flawed assumption.  Despite this, I know of almost no schools that offer a continuous course called "math" that lets kids move at whatever pace they need to.  Everyone has courses that advance at a single pace dictated by the content that needs to be covered and the time allotted to cover it.
  • The current sequence of courses and topics is the ideal way to teach them: Teach number operations, then algebra, then geometry, then more algebra, and then calculus.  Teach polynomials and all of their operations, then teach factoring in its many forms, and then jump into rational expressions and equations.  Why couldn't you teach calculus concepts (it's just area and slope) to 3rd graders and give them tools to handle the nasty algebra?
  • Following the standards as prescribed is necessary: I don't know anyone crazy enough to ignore the state's written standards as a whole district and teach what they think is best.  If I were in charge, any factoring without a computer might get kicked out to free up time to build mathematical models of real-world systems.  Just ignoring the standards related to factoring would eliminate an entire two-week (block schedule) unit in Algebra 1 and Geometry and would save another week or more in the rational expressions and quadratics units.  Kids would (probably) get more MCA problems wrong as a result, and we would be intentionally ignoring a state mandate, but I think this extra time could strengthen everything else we do and leave more time for meaningful application.
  • 1:1 iPads are a tool for learning: I think this depends on the task.  In the "solve these same problems" environment that we created for our kids, the iPad doesn't have nearly as much to offer.  It does allow videos to be viewed quickly on demand and saves paper, but a multi-purpose device with so many built-in distractions makes it hard for kids to stay on task during long problems.  The jury is still out here, and we have little data beyond teacher observations to go on.
  • A group-paced course is a better environment for the typical student: Before a student is allowed to attempt Algebra 1/2 or Geometry credit recovery in an independent-paced math class, they need to fail or nearly fail the course.  We do this because a computer-based math class costs more than using our own curriculum, and we're afraid that it might be easier than our curriculum (kids trying to take the path of least resistance could avoid our material).  What if everyone had a flexible-paced environment using our curriculum and cost weren't a factor -- would that be worse than having large groups of kids moving through together?
  • Subject-separated courses taught by content experts are the best way to organize middle and high school: This assumption drives me nuts.  I have little proof that it is wrong, and it is such deeply ingrained conventional wisdom, reinforced by the state's teacher licensing system, that it would be hard to change even if experiments proved it wrong.

There are plenty more assumptions that I think stand on shaky ground.  The Lean Startup has energized me to think harder about these assumptions and to find creative ways to validate them as objectively true or false.  As a statistics teacher, I especially feel like it is my calling to help design experiments that can get at causation better than we're used to in education.
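
To make that concrete, here is a minimal sketch (in Python, with completely made-up scores) of how a simple classroom split test might be analyzed: randomly assign half a class to each condition, then use a permutation test to ask whether the observed difference in test scores is bigger than what random shuffling alone would produce.  The group labels and scores below are hypothetical; the point is the shape of the analysis, not the numbers.

```python
import random

def permutation_test(scores_a, scores_b, trials=10_000, seed=0):
    """Two-sided permutation test for the difference in mean scores
    between two randomly assigned classroom conditions."""
    rng = random.Random(seed)
    observed = sum(scores_b) / len(scores_b) - sum(scores_a) / len(scores_a)
    pooled = list(scores_a) + list(scores_b)
    n_a = len(scores_a)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = sum(pooled[n_a:]) / len(scores_b) - sum(pooled[:n_a]) / n_a
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / trials

# Made-up test scores for two hypothetical half-classes:
# condition A keeps the old homework keys, condition B gets the checked-problem system.
a = [62, 71, 55, 80, 68, 74, 59, 66, 70, 61]
b = [78, 83, 69, 91, 74, 88, 72, 80, 85, 77]

diff, p_value = permutation_test(a, b)
print(f"mean difference = {diff:.1f} points, permutation p-value = {p_value:.3f}")
```

A permutation test appeals to me for this kind of thing because it doesn't lean on normality assumptions, which are shaky when there are only 10-15 kids per group.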