Comparison Reports allow you to compare and combine like questions across multiple surveys and compare their means using a one-sided t-test. Comparing means can be valuable for tracking changes in responses over time (longitudinal analysis), comparing results between interventions (pre/post tests), or aggregating data to get a general sense of how students are responding across contexts (e.g., using question bank questions in a variety of surveys across programs and departments).
Once you have requested that Support create your Comparison Report, AND you have created a View of your Comparison Report, you can begin to review the results.
Overview of the results
When reviewing your results, each View will look a little different depending on the kinds of questions you pulled into the report. However, some columns and features are consistent across results, and it is important to understand what each one means.
Likert Scale Questions
Likert scale questions in the Comparison Report results display by default with a visual graphic, along with each data set's mean, the difference between the main segment and each project segment, the standard deviation, the number of respondents (N), the percentage that responded with the most popular choice, the percentage that responded with the least popular choice, and the rank (highest mean = 1). A table also displays so you can easily compare response percentages across each project and average you added to your View. The main segment is highlighted in tan (it may not always be at the top of the graphic).
If there is a statistically significant difference between the main segment and another segment (that is, the probability that the difference in means is due to random chance is less than 5%, typically written as "p < .05"), the difference will be bold, colored, and starred (*). Red indicates that the main segment has a lower mean than that segment; green indicates that the main segment has a higher mean.
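The significance check can be sketched as a two-sample t-test. This is an illustration only, using Welch's t-statistic and a large-sample normal approximation for the p-value; the exact statistic and direction the report computes may differ, and the function name and sample figures here are hypothetical.

```python
import math

def welch_t_test(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's two-sample t-statistic and an approximate two-sided p-value.

    Illustration only: uses the standard normal CDF in place of the
    t distribution, which is a reasonable approximation for large N.
    """
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)   # standard error of the difference
    t = (mean1 - mean2) / se
    # p-value from the standard normal CDF (large-sample approximation)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

# Hypothetical segments: main segment vs. one comparison segment
t, p = welch_t_test(3.8, 0.9, 120, 3.5, 1.0, 110)
significant = p < 0.05  # would be starred (*) in the report if True
```

Note how heavily the result depends on N: the same 0.3 difference in means with only a dozen respondents per segment would not come close to significance.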
Notice in the example above that there is little obvious difference between the means of the different segments. Seeing your results side-by-side puts your data in context, allowing you to reflect more accurately on just how meaningful the differences between data sets are.
The bar graph can be helpful in allowing you to quickly notice differences between the means, but only the starred (*) differences tell you whether something is statistically significant. Also bear in mind that the bar graph does not always start at 0 for each question.
Other question types display similarly to Likert scale questions, except that only the table with response percentages will display. By default, the graph is hidden, but clicking the Graph button will generate a bar graph that allows you to visually compare the response percentages.
Notice in the example above that Copied Residence Life Survey 11-12 and Copied Residence Life Survey 2012-2013 display 0.00% for responses. This is because this particular question ("Which best describes your current living arrangement?") was not asked on those two surveys. Any time you see 0.00%, it is an indicator that the question was not asked in that specific segment.
Making sense of it all
How you decide to use your data is completely up to you! Even if you do not find a statistically significant difference between means when you expected one, that does not mean the data is useless. Consider the number of respondents (N) in each segment--could that account for the lack of significance? Are there any serious outliers, or surprising results, that are worth investigating further with additional assessment methods? Your Comparison Report results tell only a small portion of the story--but hopefully a compelling portion!
If you are interested in viewing an in-depth tutorial on creating Comparison Reports, and how to interpret the results of your view, we recommend viewing this short video.
If you have any questions about this process, please feel free to contact Campus Labs Support by calling 716-270-0000, or emailing firstname.lastname@example.org.