When doing statistical significance testing, you may run into the multiple comparisons problem (also known as the post hoc testing problem): the more tests you conduct, the more false discoveries (false positives) you make.
Multiple Comparison Correction (MCC) addresses the false discovery problem by requiring results to have smaller p-values before they are classified as significant. See the Technical Notes section below for more examples of how MCC is affected by the structure of the table.
The problem is most likely to arise when many comparisons (tests) are run in a single table, such as in tracking surveys where the number of wave columns grows each round, or in crosstabs built from banners with many columns (as opposed to individual variable sets with fewer columns). If the significance testing results in a table change when you add new columns or waves, or are inconsistent across tables that share some of the same variable sets, examine the multiple comparison correction approach you're using and determine whether it's appropriate for your analysis.
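To make the correction concrete, here is a minimal sketch of the False Discovery Rate correction (the Benjamini-Hochberg procedure) in pure Python. The function name and the p-values are illustrative only; they are not part of the software described in this article:

```python
def bh_significant(pvalues, alpha=0.05):
    """Benjamini-Hochberg FDR: return a parallel list of booleans,
    True where the p-value is significant after correction."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k (1-based) where p_(k) <= (k / m) * alpha.
    cutoff_rank = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            cutoff_rank = rank
    # Everything at or below that rank is declared significant.
    significant = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= cutoff_rank:
            significant[i] = True
    return significant

# Eight hypothetical tests: a naive p < 0.05 rule flags five of them,
# but the FDR correction flags only the first two.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
```

Note how the correction raises the bar as more tests are run: each p-value is compared against a threshold scaled by its rank out of the total number of tests.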
This article describes how to go from a table showing significant differences at a 95% confidence level:
To a table showing significant differences at a 95% confidence level AND with false discovery rate correction applied:
Requirements
- A project containing a table or visualization showing significant differences.
- An understanding of the concept of multiple comparison problem, see our Multiple Comparison Problem (Post Hoc Testing) article for a detailed explanation.
Method
- Select the table or visualization in your document.
- Right-click the table and select Table Options > Statistical Assumptions.
- Set the Multiple comparison correction based on what type of test you are showing:
- Arrows, Font colors, or Arrows and Font colors use Exception Tests. On the Exception Tests tab, select Multiple comparison correction > False Discovery Rate (FDR) and select a Significance symbol:
- Compare columns uses Column Comparisons, sometimes also called pairwise comparisons. On the Column Comparisons tab, select one of the algorithms in the Multiple comparison correction drop-down.
- [OPTIONAL] If you only want the correction to run based on the number of cells within a span, check Within row and span; otherwise, the correction is based on the number of cells in the entire table. This is most relevant when doing significance testing on banners, where there are multiple groups of columns.
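The difference between the two scopes can be sketched with a simple Bonferroni-style per-cell threshold (one of several possible correction algorithms; the cell counts and p-value below are hypothetical, chosen only to show that a result can pass within its span but fail at whole-table scope):

```python
# Hypothetical table: 12 tested cells, split into two banner spans
# of 6 cells each.
alpha = 0.05
cells_in_table = 12
cells_in_span = 6

# Bonferroni-style per-cell thresholds under each correction scope:
whole_table_threshold = alpha / cells_in_table   # ~0.0042
within_span_threshold = alpha / cells_in_span    # ~0.0083

# A p-value of 0.006 is significant within its span,
# but not when corrected across the whole table.
p = 0.006
print(p <= within_span_threshold)    # within-span scope
print(p <= whole_table_threshold)    # whole-table scope
```

The smaller the pool of cells the correction counts, the less severe it is, which is why checking Within row and span can make more results significant on a wide banner table.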
- Select Apply to Selection to apply it to just this table or Apply as Default to make it the default for all tables in the document.
The results are as follows:
Restore all of the fields to their default values
- Click the Restore button
Technical Notes
How much harder MCC makes it for a result to reach significance depends mostly on the number of tests (cells) shown on the table, or within the span if Within row and span is checked in the Advanced Statistical Testing Assumptions for the specific test type shown on the output. That is, you can have different MCC settings for Exception Tests versus Column Comparisons.
The correction that MCC applies may take other data into account when determining how much to tighten the threshold for significance, such as the range of values across the table. In most cases, increasing the number of columns or rows on a table makes it harder for a result to reach significance compared with the same table with fewer cells.
Another example of when you may notice differences is when working with a tracking study using column comparisons. As additional waves of data arrive, more columns are added to the table, and you may see historical significance results change.
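A rough sketch of why this happens with column comparisons: a table with k columns has k*(k-1)/2 pairwise tests, so each new wave adds many comparisons at once. The Bonferroni-style threshold below is only for illustration; the actual adjustment depends on which correction algorithm you select:

```python
from math import comb

# Number of pairwise column comparisons for a table with k columns.
def n_pairwise(k):
    return comb(k, 2)

# Going from 4 wave columns to 8 more than quadruples the test count,
# so a Bonferroni-style per-test threshold shrinks accordingly.
alpha = 0.05
print(n_pairwise(4))                 # 6 pairwise tests
print(n_pairwise(8))                 # 28 pairwise tests
print(alpha / n_pairwise(4))         # per-test threshold, 4 waves
print(alpha / n_pairwise(8))         # per-test threshold, 8 waves
```

This is why a difference that was flagged as significant in an early wave can stop being flagged later, even though the underlying numbers for those waves have not changed.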
Next
How to Show Column Comparisons to the Right of Values in a Table
How to Compare Significant Differences Between Columns
How to Do Planned ANOVA-Type Tests