Q automatically conducts significance tests on tables and does so by taking into account many different aspects of the table and the data, such as Question Type and whether categories are NETs (see Statistical Testing in Q for more information). Although there are options within Q to control how the testing is performed (see Applying Significance Tests), from time to time it is useful to override the results of Q's automatic tests. This can be done using rules.
Method
Situations where it may be appropriate to override the in-built significance tests
- There is a need to modify some aspect of the setting that cannot be modified using Statistical Assumptions or by modifying how the data is set up (e.g., by changing Question Type or modifying the table).
- To replicate results from another program (see Why Results in Q Are Different to Those From Another Program).
- Because there are properties of the data that Q is not able to discern.
- To overcome limitations of Q's tests.
The risks/dangers of modifying the tests
In general, we do not recommend modifying tests. This page and the associated pages describing how to use rules to modify tests have been created to assist users who wish to do so, but they should not be interpreted as indicating that we regard the modification of tests as desirable. In our support team's experience, modifications to testing made by users more often than not reduce the validity of the resulting analyses.
Care needs to be taken when modifying Q's tests using rules. Particular aspects that need to be addressed are:
- That the replacement test is actually better. Q's automatic tests are designed to take the following factors into account, and when overriding them it is important to verify that these issues are still addressed appropriately:
- Multiple Comparisons (Post Hoc Testing).
- Weights.
- Dependence.
- Statistical power.
- Robustness.
- Edge cases (e.g., variables with zero variance).
- Numerical precision.
- That users do not inadvertently apply the modifications to the wrong data (e.g., if a rule is written for a specific table, care needs to be taken to avoid copying that rule and applying it to other tables). A minimal defensive check is sketched after this list.
- Avoiding inconsistencies. For example:
- If you modify the way that tests are conducted on tables, this will not have any impact on other areas where Q conducts testing automatically, such as with Planned Tests Of Statistical Significance and Smart Tables.
- The tests shown in the main section of the table can have different implications from those shown in Statistics - Right and Statistics - Below if only one of them is modified.
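To reduce the risk of a rule being applied to the wrong data, one simple precaution is to check the shape of the table before modifying anything. The sketch below is illustrative only and assumes, as in the example later on this page, that the rule is intended for tables with exactly two columns; rules written for other structures would need a different check.
// Only modify the table if it has the structure this rule was written for
// (here, exactly two columns); otherwise leave the table untouched.
if (table.numberColumns == 2) {
    // ... the modifications described below go here ...
}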
How to modify significance tests using rules
Rules in the online library
A number of rules are available in the Online Library, beginning with the words Significance Testing. In all cases, these rules have been added at the request of users, and their presence should not be interpreted as indicating that they are in any sense better than the tests Q conducts automatically; in the vast majority of cases, using these rules will result in inferior statistical tests.
Creating your own rules
For example, the rule Significance Testing in Tables - Column Comparisons on Grids with Lots of Missing Data can be used to perform column comparisons on the example above, while the following script changes the font colors and puts arrows into significant cells. The basic logic of the script is:
- Access all the data from the table that is needed to perform the tests (e.g., averages and standard errors).
- Compute the desired results.
- Overwrite the values computed by Q with the required results. Note that on a typical table, writing the results requires modifying:
- Font color (if you are showing it).
- Arrow length (if you are showing them).
- p.
- z-Statistic.
- Column comparisons (if you are showing them).
- Whether or not the cell is flagged as being significant.
- t-Statistic and d.f. and the various standard errors (if applicable).
It is also worth keeping in mind that even once you have modified all of these things, you will likely be performing a test that is considerably less sophisticated than Q's default tests. In the example below, for instance, the test:
- Is conducted at the 0.05 level, whereas Q by default presents multiple levels of significance.
- Ignores the degrees of freedom and thus is only valid with large samples.
- Does not deal with weights properly.
- Performs no Multiple Comparisons (Post Hoc Testing).
For these reasons, the various methods for modifying tests described above are generally preferable to using JavaScript.
// getting all the arrays that need to be modified
var significant = table.cellSignificance;
var z = table.get('z-Statistic');
var averages = table.get('Average');
var se = table.get('Standard Error');
var p = table.get('p');
var font_colors = table.cellFontColors;
var arrows = table.cellArrows;
// modifying the arrays to contain the preferred significance testing results
for (var r = 0; r < table.numberRows; r++) {
    // Compare the second column with the first using a two-sample z-test
    var diff = averages[r][1] - averages[r][0];
    var left_se = se[r][0];
    var right_se = se[r][1];
    var pooled_se = Math.sqrt(left_se * left_se + right_se * right_se);
    var t = diff / pooled_se;
    z[r][0] = -t;
    z[r][1] = t;
    if (Math.abs(t) > 1.96) {
        // Significant at the 0.05 level: flag the cells, color the fonts and show arrows
        significant[r][0] = significant[r][1] = true;
        p[r][0] = p[r][1] = 0.05;
        font_colors[r][0] = t < 0 ? "red" : "blue";
        font_colors[r][1] = t > 0 ? "red" : "blue";
        arrows[r][0] = t < 0 ? -0.5 : 0.5;
        arrows[r][1] = t > 0 ? -0.5 : 0.5;
    } else {
        // Not significant: restore a plain appearance
        significant[r][0] = significant[r][1] = false;
        p[r][0] = p[r][1] = 1;
        font_colors[r][0] = font_colors[r][1] = "black";
        arrows[r][0] = arrows[r][1] = 0;
    }
}
//assigning the modified arrays back to the table
table.set('p', p);
table.set('z-Statistic', z);
table.cellSignificance = significant;
table.cellFontColors = font_colors;
table.cellArrows = arrows;
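If you do want to address some of the limitations listed above within a rule, the sketch below (continuing with the z, p and significant arrays from the script above) illustrates one possibility: computing an explicit two-sided p-value from the test statistic and applying a Bonferroni correction across the rows of the table. The normalCdf helper is a standard approximation to the normal distribution function and is not part of Q's API; the sketch still ignores degrees of freedom and weights, and the font colors and arrows would need to be updated in the same way to keep the table consistent.
// Illustrative refinement: explicit two-sided p-values with a Bonferroni
// correction across rows. normalCdf uses the Abramowitz & Stegun 7.1.26
// approximation to the standard normal distribution function.
function normalCdf(x) {
    var sign = x < 0 ? -1 : 1;
    var y = Math.abs(x) / Math.sqrt(2);
    var u = 1 / (1 + 0.3275911 * y);
    var poly = ((((1.061405429 * u - 1.453152027) * u + 1.421413741) * u - 0.284496736) * u + 0.254829592) * u;
    var erf = 1 - poly * Math.exp(-y * y);
    return 0.5 * (1 + sign * erf);
}
var alpha = 0.05 / table.numberRows; // Bonferroni: one comparison per row
for (var r = 0; r < table.numberRows; r++) {
    var p_value = 2 * (1 - normalCdf(Math.abs(z[r][1]))); // z[r][1] holds the test statistic for row r
    p[r][0] = p[r][1] = p_value;
    significant[r][0] = significant[r][1] = p_value < alpha;
}
table.set('p', p);
table.cellSignificance = significant;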