Why do the polls differ?
We note the comments made by several parties about the Market Facts & Opinions (MFO) poll conducted last week and the comparison made with the MORI poll. This is our official response:
1. The two polls were done at different times: the MORI poll was conducted before the full public debate, and the MFO poll last week. Respondents were therefore answering in different time frames and with different perspectives, as was evident in their knowledge of the bill.
2. The two polls sought to answer different questions: the MORI poll sought to establish the public support for the principle of the three proposals, while the MFO poll sought to establish the public opinion on the ongoing debate.
3. There was a comment about the sizes of the samples and the degree to which they could be relied upon. Both sample sizes are statistically appropriate and can be used to project national opinion. Gallup, one of the most reputable polling businesses, uses 1,000 people in its national US polls! The key issues in deciding whether one can rely on a sample are:
a. whether the right people were asked, ie, whether each member of the public had an equal opportunity to be selected (how the sample was arrived at). The MFO poll declares its sample properties; the MORI poll's demographics are not declared;
b. whether the method of reaching the target people was appropriate and did not leave out any group. The MORI poll simply noted it was telephone-based, while MFO declared the full details of its methodology, ie, both landlines and mobile phones were called, ensuring all people could be reached;
c. whether the questions were neutral, ie, did not lead the respondent to a predetermined response. MFO's questions were all clearly reported;
d. whether there was a large variance in the results, ie, how closely public responses were divided. This determines the degree of reliance one can place on the results. In the MFO polls the balance of public opinion reported was very clear, which increases the reliance that can be placed on them;
e. the reliability of a poll result is determined statistically and should be stated by the pollster in the form of a margin of error and a confidence level. Unfortunately, there are polls which tout low margins of error but do not state the confidence level, so informed people cannot independently verify their claims. MFO has, for the past 29 years, always fully disclosed all the details of its studies to allow independent observers to make an informed judgment on their validity.
4. The significant similarity between the two polls concerns the information held by the public: two days after the parliamentary debate, 16 per cent of the public felt informed about the reform proposals, while two weeks later MFO reported this proportion had grown to 26 per cent. In the MORI poll, 42 per cent indicated they had heard about the proposals, while two weeks later MFO reported this had risen to 77 per cent. In short, even though more people became aware of the proposals, the majority of the population still did not feel sufficiently informed.
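The point made in 3(e) can be illustrated with a short calculation: for a simple random sample, the margin of error on a reported percentage depends on both the sample size and the chosen confidence level, so quoting one without the other tells the reader little. A minimal sketch in Python, assuming the textbook worst-case proportion p = 0.5 and a 95 per cent confidence level (z = 1.96); the figures are illustrative and are not drawn from either poll:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # Standard margin of error for a proportion estimated from a simple
    # random sample of size n; z = 1.96 corresponds to 95% confidence.
    return z * math.sqrt(p * (1 - p) / n)

# A national sample of 1,000 respondents (the size Gallup uses) gives
# roughly a 3.1 percentage-point margin either way at 95% confidence.
print(round(margin_of_error(1000) * 100, 1))  # 3.1
```

A margin of "plus or minus 3.1 points" is meaningless unless the 95 per cent confidence level is stated alongside it, which is precisely why a pollster should disclose both.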
It is sad that MORI's client undertook the survey but never waited for the results before taking action. This is akin to throwing money away. It is even more telling that the client kept the results close to its chest and did not ponder the finding that the public was more concerned about other issues, and that constitutional reform ranked no higher than eleventh in order of importance.
Public opinion polls are important for the population, but apparently there is still much work to be done to explain their use to the public.
Chairman, Market Facts & Opinions