5.1 Overall approach to research
The research was split into three key stages:
- scoping meeting between IGT stakeholders and CBSR researchers;
- quantitative research with members of the general population in Australia aged 16 years or older (Stage 1); and
- a dedicated quantitative study of required non-lodgers (Stage 2).
A scoping meeting was held on Friday 2 May. This scoping meeting involved a discussion of:
- the aims and objectives of the project;
- background to the topic;
- methodology for the study; and
- draft survey questions.
During this scoping phase, key issues regarding the study were addressed and agreement about many aspects of the survey design and implementation was obtained. The scoping phase was invaluable to ensure that the survey collected information in a way that maximised its usefulness to IGT.
5.2 Quantitative research approach
Computer Assisted Telephone Interviewing (CATI) was used to administer the survey for Stage 1. The sample for the survey was selected at random from the electronic White Pages, supplemented by Random Digit Dialling. The overall sample size was 800 members of the community aged 16 years or older. All members of the general community aged 16 years or older were eligible to participate, regardless of their tax return lodgement behaviour.
The following sections discuss the quantitative survey methodology.
Fieldwork - Stage 1
Fieldwork for the survey was conducted by an experienced fieldwork team, who are fully accredited with Interviewer Quality Control Accreditation and have undergone training set out by these standards. A briefing, including a practice interview, was held with all interviewers and the field supervisor prior to the commencement of interviewing.
Fieldwork for Stage 1 of the survey was conducted between 12 May and 26 May 2008. Respondents to the survey were obtained in the following way:
- when someone at a selected household answered the telephone, the interviewer asked to speak to someone in the household aged 16 years or older (the "respondent"); and
- when the respondent came to the telephone, they were asked the survey questions.
Table 26 below shows the call data for the survey.
| Category | Number |
| --- | --- |
| Total telephone numbers called | 6661 |
| Subtotal - eligible numbers | 4301 |
| Declined to participate (eligible) | 3501 |
The final response rate is the number of interviews completed as a proportion of eligible numbers. Thus the final response rate for the survey was 800 / 4301 = 18.6%. The average length of the survey was 7 minutes 30 seconds. Interviews ranged from 4 minutes 24 seconds to 11 minutes 22 seconds.
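The response rate arithmetic above is straightforward to verify; a minimal sketch using the call figures reported in the table:

```python
# Re-calculation of the reported response rate from the call data above.
completed_interviews = 800
eligible_numbers = 4301

response_rate = completed_interviews / eligible_numbers
print(f"Response rate: {response_rate:.1%}")  # 18.6%
```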
Fieldwork - Stage 2
Stage 2 comprised a survey specifically of required non-lodgers. Its purpose was to ensure that, across Stages 1 and 2, an adequate sample of required non-lodgers was obtained to support robust and reliable analysis. Stage 2 therefore acted as a 'booster' sample for the required non-lodger responses collected in Stage 1, the general community survey.
Prior to the commencement of Stage 1, both CBSR and IGT expected that the incidence of required non-lodgers within the community would be low, most likely less than 10%. Stage 1 confirmed this: the estimated incidence of required non-lodgers in Australia was 7.9%. Because of this, CBSR and IGT agreed to undertake the dedicated survey of required non-lodgers (Stage 2) online, as the low incidence of the target group would have significantly increased the cost of a CATI methodology (interviews would have been significantly harder to achieve than in the general community survey).
The online survey was administered by CBSR using Colmar Brunton's online panel. This panel consists of 100,000 'active' panellists (i.e. they undertake surveys on a regular basis), with thousands more classified as 'inactive' (i.e. they undertake surveys irregularly). The panel is demographically and geographically representative according to ABS population figures.
Percentages and averages
Respondents who completed a survey but did not answer a particular question are excluded from the tabulation of results and calculation of statistics for that question.
Percentages are generally rounded to whole numbers. Some percentages may not add to 100 percent due to rounding.
Some survey questions asked respondents to give a rating from 1 to 10. The classification used for likelihood ratings is as follows:
- a rating of 1 or 2 is classified as very unlikely;
- a rating of 3 or 4 is classified as slightly unlikely;
- a rating of 5 or 6 is classified as neither likely nor unlikely;
- a rating of 7 or 8 is classified as slightly likely; and
- a rating of 9 or 10 is classified as very likely.
The classification used with agreement ratings is as follows:
- a rating of 1 or 2 is classified as strongly disagree;
- a rating of 3 or 4 is classified as slightly disagree;
- a rating of 5 or 6 is classified as neither agree nor disagree;
- a rating of 7 or 8 is classified as slightly agree; and
- a rating of 9 or 10 is classified as strongly agree.
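Both classifications above map the 1 to 10 scale onto five two-point bands. As an illustrative sketch (the report itself does not publish code), the agreement classification can be expressed as:

```python
def classify_agreement(rating: int) -> str:
    """Map a 1-10 agreement rating to the report's five bands."""
    if not 1 <= rating <= 10:
        raise ValueError("rating must be between 1 and 10")
    bands = [
        "strongly disagree",            # 1 or 2
        "slightly disagree",            # 3 or 4
        "neither agree nor disagree",   # 5 or 6
        "slightly agree",               # 7 or 8
        "strongly agree",               # 9 or 10
    ]
    # Ratings 1-2 fall in band 0, 3-4 in band 1, and so on.
    return bands[(rating - 1) // 2]
```

The likelihood classification is identical in structure, with "unlikely"/"likely" labels substituted for "disagree"/"agree".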
Average ratings are rounded to one decimal place.
Note that average ratings cannot be translated into percentages. For example, an average rating of 7.3 out of 10 cannot be interpreted as meaning 73% of people.
Sorting of results
In all tables, rows are sorted from most frequent response to least.
As noted above, most percentages in this report are rounded to whole numbers for readability, and some may not add to 100 percent due to rounding. The exception is where IGT have indicated they may wish to extrapolate the percentages into estimates of real population figures.
Weighting

To ensure the survey results are representative of the Australian population, they were adjusted, or weighted, using population information from the Australian Bureau of Statistics. This is done because the sample data on its own is biased. For example, in telephone surveys a greater proportion of females typically participates than males, relative to the proportion of females in the population. Similarly, adjustment can be needed when approximately the same number of people is interviewed in each state, whereas the population of Australia is distributed unevenly by state.
Weighting adjusts the proportions of these demographic groups in the sample so they match the proportions in the wider population. For example, in the general population survey about 53.1% of respondents were female and 46.9% male. In the Australian population the actual figures are approximately 51.2% female and 48.8% male. Weighting the sample ensures that the responses of females have 51.2% influence over the total rather than 53.1%.
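The gender example above can be turned into weight factors by dividing each group's population proportion by its sample proportion, a minimal sketch using the figures quoted:

```python
# Sample and population gender proportions (%) quoted in the text.
sample_pct = {"female": 53.1, "male": 46.9}
population_pct = {"female": 51.2, "male": 48.8}

# Weight factor per group: population share divided by sample share.
weights = {g: population_pct[g] / sample_pct[g] for g in sample_pct}
# Female responses are scaled down slightly (weight < 1) and male
# responses up (weight > 1), so each group's influence matches the
# population.
```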
The results from the general population survey (Stage 1) were weighted by gender and age, as the sample collected differed slightly from the total Australian population aged 16 years or older. The sample was representative of the population distribution by state, so the results did not need to be weighted by this variable.
The following table shows how weights for this survey were calculated and applied. Column A shows how many interviews were achieved among men and women in each age bracket. Column B shows the total male and female Australian population aged 16 years and older in each age bracket according to the Australian Bureau of Statistics 2006 Census data. Column C shows the proportion of the Australian population represented by each cell. The adjusted sample size, correcting for any disproportions in the sample, is shown in Column D. Column E shows the weight factor needed to achieve the proportionate sample shown in Column D.
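As a sketch of how Columns C, D and E follow from Columns A and B, using invented cell counts for two hypothetical age-by-gender cells (the actual figures are in the table itself):

```python
# Hypothetical Column A: interviews achieved per age-by-gender cell.
achieved = {"f_16_34": 150, "m_16_34": 120}
# Hypothetical Column B: ABS population counts for the same cells.
population = {"f_16_34": 2_500_000, "m_16_34": 2_400_000}

total_sample = sum(achieved.values())
pop_total = sum(population.values())

weights = {}
for cell, n in achieved.items():
    proportion = population[cell] / pop_total  # Column C
    adjusted_n = proportion * total_sample     # Column D
    weights[cell] = adjusted_n / n             # Column E: weight factor
```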
N.B. The Australian Census of Population and Housing is the official count of population and dwellings and collects details of age, sex, and other characteristics of the population. The Census aims to measure the number and key characteristics of people in Australia on Census Night. All people in Australia on Census Night are in scope except for foreign diplomats and their families. Visitors to Australia are counted regardless of how long they have been in the country or how long they plan to stay. Australian residents not in the country on Census Night are out of scope of the Census.
Following Stage 2, an analysis was undertaken to determine whether the characteristics of the required non-lodger samples collected in Stages 1 and 2 were similar. The two samples were very similar, so where the combined data was analysed to derive results specifically for this group, the results are unweighted. The Stage 1 required non-lodger sample was treated as the 'true population', and because the Stage 2 respondents were similar in characteristics, no weighting was required for this purpose.
However, for some analysis, particularly when determining whether particular demographics influenced particular behaviours or attitudes across the total sample from both stages, Stage 2 data was weighted when combined with Stage 1. The combined data was weighted so that required non-lodgers had only 7.9% influence on the sample (the incidence of required non-lodgers determined in the general community survey), in addition to the same age and gender weights applied to Stage 1. Without weighting the required non-lodger sample collected in Stage 2, the overall sample results would have been heavily biased towards the perceptions and behaviours of required non-lodgers.
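The incidence adjustment described above can be sketched as follows, using hypothetical combined sample counts (the actual counts are not stated in this section): each required non-lodger record is scaled so the group carries 7.9% of the total weighted sample, and lodgers carry the remainder.

```python
# Target incidence of required non-lodgers in the general community.
target_incidence = 0.079

n_nonlodgers = 300   # hypothetical combined Stage 1 + Stage 2 non-lodgers
n_total = 1100       # hypothetical combined sample size

# Weight per record so each group's weighted share matches its target.
w_nonlodger = target_incidence * n_total / n_nonlodgers
w_lodger = (1 - target_incidence) * n_total / (n_total - n_nonlodgers)
# In the actual study these incidence weights were applied in addition
# to the age and gender weights used for Stage 1.
```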
Why do researchers weight data?
The raw data from the survey is biased, so it would be misleading to use it as the basis for understanding the topic at hand. For example, the sample has a greater proportion of female respondents than male respondents. As female respondents may have different activities or views than male respondents, reporting on raw data would bias the results towards what females do or think. Weighting the data overcomes this problem because it ensures the results are representative of the target population.
The weighting approach adopted by Colmar Brunton Social Research is also used by the ABS for its many population surveys; the ABS always publishes weighted results rather than raw data.
Sampling and non-sampling error

All surveys are subject to errors. There are two main types of errors: sampling errors and non-sampling errors.
Sampling error arises because not every member of the population was included in the survey. It is simply not feasible to survey the whole population to avoid this type of error; one can, however, estimate its size using statistical theory. This theory indicates that with a sample of 800 people from a population of 100,000 or more, the maximum margin of sampling error on an estimate of a proportion is 3.5% at the 95% confidence level.
The way this can be interpreted is as follows. The survey results estimate that 9.2% of respondents who were required to submit a tax return for the 2006/07 financial year failed to do so. The maximum margin of error on this estimate is 3.5%. Hence, one can be 95% confident that the actual proportion of people in the population who were required to submit a tax return but failed to do so last financial year is between 5.7% and 12.7%. Another way to phrase this is: if CBSR had taken 100 samples of 800 people, 95 of those samples would yield an estimate between 5.7% and 12.7%. Hence, one can be very confident in the estimate of the proportion of people who were required to submit a tax return but failed to do so last financial year.
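The margin and interval quoted above follow from the standard formula for the sampling error of a proportion, z * sqrt(p(1 - p) / n), taking p = 0.5 to give the maximum margin. A quick check using the survey's figures:

```python
import math

n = 800      # achieved sample size
z = 1.96     # z-score for the 95% confidence level
p = 0.5      # worst-case proportion, giving the maximum margin

margin = z * math.sqrt(p * (1 - p) / n)
print(f"Maximum margin of error: {margin:.1%}")  # 3.5%

# Applying the margin to the reported 9.2% estimate.
estimate = 0.092
low, high = estimate - margin, estimate + margin
print(f"95% CI: {low:.1%} to {high:.1%}")  # 5.7% to 12.7%
```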
In all tables in this report, groups are compared against each other and, where possible, differences are tested for statistical significance at the 95% confidence level.
In tables, where a result is bolded in red, this indicates that the score is significantly higher than others. The subscript beside the score indicates which group(s) it is significantly higher than. For example, a score of 28ab indicates that the score of 28% is significantly higher than the scores in columns a and b.
All surveys, regardless of whether they are samples or censuses, are subject to other types of error called non-sampling error. Non-sampling error includes things like interviewer keying errors and respondents misunderstanding a question.
Every attempt has been made to minimise the non-sampling error in this study. For example, use of Computer Assisted Telephone Interviewing (CATI) reduces the number of keying errors and ensures interviewers ask the right questions. However, some types of error are out of the control of the researcher. In particular, the study is reliant on accurate reporting of behaviours and views by respondents. For example, a respondent may forget that they played tennis nine months ago and fail to report this activity.
39 Includes numbers that were for businesses, mobile phones, persons who could not speak English and households without a person 16 years or older, once the quota was met.
40 These are numbers where no contact could be made with the selected respondent within the survey period. At least 3 unsuccessful attempts - at different times and days - were made to contact these numbers.