This post is quite long, so I’ll summarise:
- The PIAAC 2011-2012 statistics do not give enough information to say whether Tasmanians are more or less literate than the Australian average.
- The definition often quoted of functional illiteracy is vague and possibly misleading.
- The figures were widely misrepresented (directly before the Tasmanian state election) and used to incorrectly imply a failing of the Tasmanian education system.
The story hit the headlines
The ABC broke the news in September 2013:
One in two Tasmanians aged 15 to 74 are functionally illiterate, and more than half are functionally innumerate.
No source for the figure here, but the next report gave a bit more information; Background Briefing 22 September 2013:
A report by the Australian Bureau of Statistics for 2011-2012 shows half of all Tasmanians aged 15 to 74 are functionally illiterate, and more than half are functionally innumerate—meaning they don’t have the skills needed to get by in the modern world, like filling out forms, or reading the instructions on their prescription.
The Guardian were soon onto it:
To those mainlanders whose notions of Tasmania conjure boutique wineries, posh B&Bs and MONA weekends, recent news that half of adult Tasmanians are functionally illiterate and innumerate was an enormous shock.
SBS picked up the story too, from AAP:
Australian Bureau of Statistics figures show half of adult Tasmanians don’t have the basic literacy and numeracy skills needed to get by in the modern world.
It even made it overseas:
New research has revealed shocking statistics on literacy and numeracy levels in Tasmania.
Half of all Tasmanians are functionally illiterate and innumerate.
Then it became folklore:
The moronic, overpaid, subhuman animals running Australian education seem to be consistent. Recent findings indicate that about half of the state of Tasmania is functionally illiterate and innumerate, and the rest of the country’s not much better.
And since then I’ve heard it repeated over and over. Usually with a look of quiet desperation – as a reason for simplifying wording on a sign, or an explanation as to why we can’t expect the masses to vote with any sense.
The bad news
So, I went searching for the data. What I found on the Australian Bureau of Statistics (ABS) website was the Programme for the International Assessment of Adult Competencies (PIAAC). The ABS give an excellent explanation – basically they survey a sample of people testing their literacy, numeracy and problem solving in technology-rich environments (PSTRE). This is then combined with demographic information about the participant and their score is used to place them into one of 5 levels.
The ABS site has overall figures and breakdown by sex, age, and state or territory.
Ah, now I was getting somewhere, and there was the grim graph with Tasmania’s shame in black and white (or purple and orange).
Proportion at literacy Level 3 or above, By state or territory of usual residence and sex—2011–12
It certainly looks like 50% of us are illiterate.
Now time to download the data. The ABS provide a series of Excel spreadsheets with the cold hard numbers, broken down in more ways than I needed, but it was all there.
[Table: proportion of Tasmanians aged 15 to 74 at each literacy level, from Level 1 and below upward]
So, combining everything below level 3, we get 48.8%. That’s near enough to half for a headline.
How do we stack up against our smarter cousins to the north? Looking at the Australia-wide figures:
[Table: proportion of all Australians aged 15 to 74 at each literacy level, from Level 1 and below upward]
So for the whole country we have 44.7% below level 3. Clearly we’re under-performing.
There are some other numbers amongst all those ABS spreadsheets, though: columns called RSE of Proportion (%) and 95% MOE of Proportion (± percentage points). What do they tell us? Luckily, the ABS explain all of this very clearly too. I refer the reader to the ABS site for the full explanation, but here are some juicy bits:
Two types of error are possible in an estimate based on a sample survey: sampling error and non-sampling error. Since the estimates in this publication are based on information obtained from a sample, they are subject to sampling variability. That is, due to randomness in the composition of the sample, the estimates may differ from those population values that would have been produced if all dwellings had been included in the survey. One measure of the likely difference is given by the standard error (SE). There are about two chances in three (67%) that a sample estimate will differ by less than one SE from the number that would have been obtained if all dwellings had been included, and about 19 chances in 20 (95%) that the difference will be less than two SEs. […]
In contrast to most other Australian Bureau of Statistics (ABS) surveys, PIAAC estimates also include significant imputation variability, due to the use of multiple possible assessment tasks and the complex scaling procedures. The effect of this on the estimation can be reliably estimated and is included in the calculated SEs.
To cut a long story short, the MOE gives us the range of values that we can be 95% sure the real figure falls between.
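As a rough sketch of how those spreadsheet columns fit together: the RSE is the standard error expressed as a percentage of the estimate, and the 95% MOE is roughly two standard errors (1.96, under the usual normal approximation). The numbers below are illustrative, chosen to be consistent with the Tasmanian below-level-3 figure, not copied from any one spreadsheet cell:

```python
# Sketch of how the ABS "RSE of Proportion" and "95% MOE of Proportion"
# columns relate to the standard error (SE). Assumes MOE = 1.96 * SE,
# the usual normal-approximation convention.

def se_from_rse(estimate_pct, rse_pct):
    """Standard error in percentage points, given a relative SE (%)."""
    return estimate_pct * rse_pct / 100.0

def moe95(se):
    """95% margin of error: about two SEs either side of the estimate."""
    return 1.96 * se

# Illustrative: a proportion of 48.8% with an assumed RSE of 7.2%
se = se_from_rse(48.8, 7.2)    # about 3.5 percentage points
print(round(moe95(se), 1))     # about 6.9 percentage points
```

So an estimate of 48.8% with that RSE would be published as 48.8% ± roughly 6.9 percentage points.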
When we plot the literacy percentages with the MOE shown as error bars, we get a slightly different picture:
Now we can see that in every case the Tasmanian interval overlaps the Australian one. That is, we don’t know exactly where either true figure lies: the Tasmanian and Australian figures may well be the same, or either may be higher than the other.
If we once again look at the scores for less than level 3, we can say with 95% certainty:
- Tasmania 41.9% to 55.7%
- Australia 40.1% to 49.3%
And that’s all I can derive from these figures. No assuming one is higher than the other, and no quietly taking the middle of each range as the truth. The point of these margins of error is that they give us 95% certainty. If we’re willing to be less certain, we can narrow the margins, but we can’t have it both ways. One chance in 20 of being wrong is reasonably certain, although not definite.
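The comparison above boils down to a simple interval-overlap check. A minimal sketch, using the two ranges just quoted:

```python
# 95% confidence intervals for the proportion below literacy Level 3,
# as derived above from the ABS PIAAC spreadsheets (percentage points).
tas = (41.9, 55.7)   # Tasmania
aus = (40.1, 49.3)   # Australia

def intervals_overlap(a, b):
    """True when two (low, high) intervals share any values."""
    return a[0] <= b[1] and b[0] <= a[1]

print(intervals_overlap(tas, aus))  # True: the data can't separate them
```

Because the intervals overlap so heavily, these figures alone can’t tell us which population is more literate. (A proper significance test would need the full survey design, but overlap this large makes the headline claim untenable either way.)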
I’m not going to delve into the figures more deeply here, but I did note that in some age groups the Tasmania percentage is higher than the Australian one. Where Tasmania performs well is in the 15 to 19-year-old and 55 to 64-year-old groups. These might be statistical anomalies or they might be to do with the demographic mix of the Tasmanian population, with some young people going interstate to work.
I hope that the 15 to 19-year-old figure shows that our education system is working well and that all this was a lot of media hype, coincidentally just before the state election.
Before I close, I couldn’t find a reference to ‘functionally illiterate’ on the ABS site. I did find the descriptions of each level. As they’re lengthy, I’ll only quote level 2 and level 3, as this is the cut-off I’ve focussed on. You decide whether this is the point of functional illiteracy.
Level 2 (226 to 275)
At this level, the medium of texts may be digital or printed, and texts may comprise continuous, non-continuous, or mixed types. Tasks at this level require respondents to make matches between the text and information, and may require paraphrasing or low-level inferences. Some competing pieces of information may be present. Some tasks require the respondent to:
- cycle through or integrate two or more pieces of information based on criteria;
- compare and contrast or reason about information requested in the question; or
- navigate within digital texts to access and identify information from various parts of a document.
Level 3 (276 to 325)
Texts at this level are often dense or lengthy, and include continuous, non-continuous, mixed, or multiple pages of text. Understanding text and rhetorical structures become more central to successfully completing tasks, especially navigating complex digital texts. Tasks require the respondent to identify, interpret, or evaluate one or more pieces of information, and often require varying levels of inference. Many tasks require the respondent to construct meaning across larger chunks of text or perform multi-step operations in order to identify and formulate responses. Often tasks also demand that the respondent disregard irrelevant or inappropriate content to answer accurately. Competing information is often present, but it is not more prominent than the correct information.
I have intentionally not looked at some other sources of data mentioned in some articles. I have also focussed on the literacy figure rather than numeracy, as this is the one I have heard quoted. If someone with stronger statistical skills than me would like to correct my analysis I’m most happy to hear from you.