Most things that teachers do have a positive impact on how well students perform at school. The same holds true for the other people involved in a child’s education.

Whether you use *inquiry learning* or *explicit teaching*, you will help children progress further than they would have without you. While there are a few educational practices that have a negative impact on students’ learning (e.g. long summer holidays), research shows that nearly everything works – at least to some degree.

This puts you in a perilous predicament when deciding which approach to use. Should you:

- Be a *guide on the side* or a *sage on the stage*?
- *Group students by ability* or *mix them up*?
- *Set clear learning goals* for students or *let the students take more control* of their learning?

The reality is that there is research showing that all of these options have a positive impact on student learning.

Therefore, you need measures that allow you to tell how good a particular strategy is, and how that compares to other strategies.

### Comparing Options By The Amount of Impact They Have

In cases like this, where you need to choose one option or the other, you need to know how much impact each option has on students’ subsequent achievement. You then choose the option that has the most impact.

For example, according to John Hattie, being a *sage on the stage* (he calls it being an activator) has far more impact than being a *guide on the side* (he calls it being a facilitator). He gauges how much impact a given option has using **a statistical measure of improvement called effect size (d)**.
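
To make this concrete, here is a minimal sketch of how an effect size (d) can be computed as a standardized mean difference (Cohen's d, one common formulation). The test scores below are invented purely for illustration:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Effect size d: the difference between two group means,
    expressed in pooled standard deviation units."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    # Pooled standard deviation across both groups
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical test scores for a taught group and a comparison group
taught = [78, 82, 85, 74, 90, 81]
comparison = [70, 75, 80, 68, 77, 73]
print(round(cohens_d(taught, comparison), 2))  # ≈ 1.56
```

A d of 1.0 means the average treated student scored one standard deviation above the average control student.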

There will be other times when you may be considering implementing two options, rather than just one or the other, for example:

- teaching *phonics* __and__ *comprehension strategies*
- using *punishments* __and__ *rewards* to manage behaviour

While each strategy has its own effect size (d), the combined effect is not the sum of these individual effect sizes. To assign effect sizes in such circumstances, you need the original research to have measured the impact of both strategies used together.

You must also be aware of what *effect* the research is measuring. In the first example, John Hattie was measuring the effect that different approaches to teaching have on student results. In the second example, Robert Marzano was measuring the effect that each approach had on reducing misbehaviour.

While effect size (d) is one of the most common ways to measure the amount of impact a particular strategy has, it is not the only way to measure this. Other common methods include:

- Percentile gain achieved
- Months gained (used by the Education Endowment Foundation)

Here is a classic example of how researchers use months gained.

The Education Endowment Foundation shows impact by using a crude but intuitive measure of months gained. For example, they found that setting homework for young children leads to a 1 month gain in achievement, while setting homework for secondary students leads to a 5 month gain.

This method has been criticized for its lack of precision, but from a practical standpoint it makes it easy to see the relative impact of different options.
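
Percentile gain can likewise be derived from effect size: assuming normally distributed scores, the average treated student sits at the Φ(d) percentile of the control distribution, a gain of Φ(d) × 100 − 50 percentile points. A sketch of that conversion:

```python
from statistics import NormalDist

def percentile_gain(d):
    """Percentile points gained by the average treated student
    relative to the average control student (assumes normality)."""
    return NormalDist().cdf(d) * 100 - 50

# An intervention with d = 0.56 moves the average student
# from the 50th percentile up by about 21 percentile points.
print(round(percentile_gain(0.56), 1))  # ≈ 21.2
```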

### Comparing By Chance of Success

The common language effect (CL) is another measure to help you decide which educational options are worth pursuing. Rather than looking at the amount of impact each option typically has, it shows you how likely each option is to succeed.

Despite our desire for it to be otherwise, there are no strategies that work for all students all of the time. However, some are far more likely to succeed than others.

For example, a class of students taught using a *whole language* approach to reading has a 52% chance of being more effective than a class taught using a random mix of strategies. By contrast, a class of students taught using *reciprocal teaching* has a 70% chance of being more effective than a class taught using a random mix of strategies.

Put another way, whole language is likely to work with about half the students in your class, while reciprocal teaching is likely to work with just under three-quarters of your students.
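
Chance-of-success figures like these can be derived from effect size using McGraw and Wong's common language formula, CL = Φ(d / √2), which assumes normally distributed scores. A sketch, using the d ≈ 0.74 that Hattie reports for reciprocal teaching:

```python
from math import sqrt
from statistics import NormalDist

def common_language_effect(d):
    """Probability that a randomly chosen treated student outscores
    a randomly chosen control student (McGraw & Wong's CL)."""
    return NormalDist().cdf(d / sqrt(2))

# Reciprocal teaching: d ≈ 0.74 converts to roughly a 70% chance
print(round(common_language_effect(0.74), 2))  # ≈ 0.70
```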

### Comparing Across Measures

Sometimes you will need to compare apples and oranges. For example (hypothetical):

- One study may tell you that *highlighting* is an effective study technique, as it is likely to improve student progress by 3 months.
- Another study may tell you that *practice testing* is an effective study technique, as it has an effect size (d) of 0.56.

To make a fair comparison of these two strategies, you need to convert one of them so that both are using the same measure.

In this hypothetical case, *highlighting* has an effect size (d) of 0.22, while *practice testing* has an effect size of 0.56. Alternatively, you could convert both options to the number of months students are likely to progress. In this example, *highlighting* is likely to lead to 3 months’ progress, while *practice testing* is likely to lead to 7 months’ progress.

There are complex formulas for working out these conversions, but you don’t need to learn them. You can easily look them up in our Effect Size Conversion Cheat Sheet.
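
For a sense of what such conversions look like, here is a rough sketch. The months rule is a hypothetical linear approximation (roughly 0.08 d per month, broadly in line with the bands the Education Endowment Foundation publishes), not an official formula:

```python
from statistics import NormalDist

def d_to_percentile_gain(d):
    """Convert effect size d to percentile points gained (assumes normality)."""
    return NormalDist().cdf(d) * 100 - 50

def d_to_months(d, months_per_d=12.5):
    """Rough rule of thumb: about 0.08 d per month of progress,
    i.e. 12.5 months per unit of d. A sketch, not the EEF's table."""
    return round(d * months_per_d)

for label, d in [("highlighting", 0.22), ("practice testing", 0.56)]:
    print(label, d_to_months(d), "months")
# highlighting → 3 months, practice testing → 7 months,
# matching the hypothetical figures above
```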