You have probably heard about John Hattie. Specifically, you may have heard about his research on the factors that affect student achievement. Hattie uses effect sizes to show the relative impact of each factor. An effect size of 0.4 is regarded as average or typical. His work is ongoing. To my knowledge, his results were first published in 1999. They became well-known after he published a book in 2008 called Visible Learning. His results were last updated in late 2016. This article summarizes the 2016 update to Hattie's effect sizes in the context of what went before.
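If you are unfamiliar with effect sizes, it may help to see how one is calculated. The sketch below uses Cohen's d, a common measure: the difference between two group means divided by their pooled standard deviation. Note that this is purely an illustration with made-up scores — Hattie's figures come from aggregating many meta-analyses, not from single comparisons like this.

```python
# Illustrative only: Cohen's d for two hypothetical groups of students.
# Hattie's effect sizes are aggregated across meta-analyses; this simply
# shows what a single effect size represents.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)  # sample standard deviations
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Made-up test scores for students taught with and without a strategy
with_feedback = [78, 82, 88, 91, 75, 85]
without_feedback = [70, 74, 80, 77, 68, 72]
print(round(cohens_d(with_feedback, without_feedback), 2))  # prints 1.82
```

An effect size of 0.4 (Hattie's "hinge point") means the treatment group's average sits about 0.4 standard deviations above the control group's average.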
Hattie Effect Size 1999 Results
I first heard about John Hattie and his work on effect sizes in 1999 when he published his article Influences On Student Learning.
At the time, Hattie was at pains to point out that nearly everything we do in the classroom helps students to learn. Put another way, every teaching strategy worked, at least to some degree. Therefore, research needed to focus on what works best rather than what works.
- Some of the factors that had a high impact included students' cognitive ability (IQ), Direct Instruction and feedback.
- Some of the factors that had a lower than average impact included retaining students (repeating a grade), parental involvement and ability grouping.
Hattie Effect Size 2008 Results
In 2008, after this database grew to include over 800 meta-analyses, he published the book Visible Learning. Soon afterwards, the phrase Hattie effect size became an incredibly popular search term.
- New factors that had a high impact included teacher clarity, formative evaluation and acceleration.
- New factors that had a lower than average impact included inductive teaching, inquiry learning and teaching test taking.
Hattie Effect Size 2016 Results
Since then, he has continued to add to his database, which now includes over 1,200 meta-analyses. The latest effect sizes were published in 2016.
- New factors that had a higher than average impact included collective teacher efficacy, conceptual change programs, teacher credibility, response to intervention, cognitive task analysis and particular types of classroom discussion.
- New factors that had a negative or lower than average impact included depression, corporal punishment in the home, web-based learning, and programs for juvenile delinquents.
Of interest, a new item, service learning, had a moderate effect.
Here are the updated Hattie effect sizes for 2016. Just hover over each bar to see its effect size.
I will help you unpack what some of these factors mean in practical terms in future articles throughout the year.
The 6 Super Factors
There were some new and some old favourites at the top of the list. Six of these had such a strong effect that including them in a graph would distort the important differences between the other 188 factors. I call these super factors, and I have listed them separately here.
The 6 super factors were:
- Teacher estimates of achievement (d = 1.62). Sadly, this reflects the accuracy of teachers’ knowledge of students in their classes, not “teacher expectations”, so this is not a factor teachers can use to boost student achievement.
- Collective teacher efficacy (d = 1.57). This is a factor that can be manipulated at a whole school level. It involves helping all teachers on the staff to understand that the way they go about their work has a significant impact on student results – for better or worse. Simultaneously, it involves stopping them from using other factors (e.g. home life, socio-economic status, motivation) as an excuse for poor progress. Yes, these factors hinder learning, but a great teacher will always try to make a difference despite this, and they often succeed.
- Self-reported grades (d = 1.33). Again, this is a factor that teachers can’t use to boost student achievement. It simply reflects the fact that students are pretty good at knowing what grade they will get on their report card before they read it.
- Piagetian levels (d = 1.28). This is the third super factor that teachers can do nothing about. It simply means that students who were assessed as being at a higher Piagetian level than other students do better at school. The research does not suggest that trying to boost students’ Piagetian levels has any effect.
- Conceptual change programs (d = 1.16). This is a promising one. The research refers to the type of textbook used by secondary science students. Some textbooks simply introduce new concepts. Yet, students have already formed their own understanding of the world around them, often including many misconceptions. These misconceptions can hinder deeper levels of learning. Conceptual change textbooks introduce concepts and at the same time discuss relevant and common misconceptions. While the current research is limited to science textbooks in secondary school, it is reasonable to predict that when teachers apply this same idea to introduce any new concept in their classroom, it could have a similar impact.
- Response to Intervention (d = 1.07). This is a structured program designed to help at-risk students make enough progress and ideally achieve comparable results to their peers. There is plenty of commercial literature and material to help schools use RTI, but basically it involves screening students to see who is at risk; deciding whether the supporting intervention will be given in class or out of class; using research-based teaching strategies within the chosen intervention setting; closely monitoring progress; and adjusting the strategies being used when enough progress is not being made. While the program is designed for at-risk students, the principles behind it are the same as those advocated by John Hattie as being applicable to all students. Note – Response to Intervention (RTI) is increasingly being referred to as a Multi-Tiered System of Supports (MTSS). The two terms mean the same thing.
Here are the other 188 factors. Simply hover over each bar to view the effect size.