While longitudinal data and analysis are essential, taking stock of the questions asked and reviewing them for necessary changes can be just as important. With that in mind, our Consultants and Survey Design Specialists compiled three question types to remove or improve to enhance your surveys.

  1. NPS-Style Questions: Questions like “On a scale of 1-10, how likely are you to recommend X to a peer?” have become standard in the business world. NPS questions are helpful only at an enormous scale; they need many responses to be accurate and to overcome the scale bias present in purely numerical scales. Even then, they are generally used to benchmark against other companies and industries rather than to drive action. We will occasionally send out NPS surveys as a business-level metric – not to collect actionable data.
  1. Generic Follow-Up Questions: These are often provided after a question or series of questions to allow for thoughts, context, or a more nuanced response. Examples include “Comments?”, “Thoughts on the above statements?”, or “Why did you answer that way?” They are more of a placeholder for an open-text response than an actual prompt. While you may uncover an occasional gem with multiple generic questions, you will gain quality data more consistently with more specific and guided questions.
  1. Self-Reported ‘Knowledge Checks’: Direct assessment is powerful and cuts to the heart of what a learner has gained. However, it can be time-consuming to do well in a survey. Self-reported ‘knowledge check’ questions like “Do you know the name of your Complex Coordinator?” may seem like direct assessment. Still, such a question does not ask the respondent to demonstrate their knowledge, which makes it a flawed measure of authentic learning.

These questions may seem good on the surface, yet each has drawbacks that hinder continuous improvement efforts by limiting the data we use to make decisions. We want quality, actionable assessment data, so the key is to focus on quality instead of quantity. Use the tips and suggestions below to take these questions to the next level.

  • Improving NPS-Style Questions: The issue with purely numerical scales is that respondents must interpret the scale using their own context; as a result, respondents who view the highest score as unrealistic under any circumstances are aggregated with respondents who would simply feel bad giving the lowest score. Our suggestion is to get more specific with your questions and remove purely numerical Likert scales altogether. Rather than relying on respondents to supply their own context, provide labeled options like:
    • Very likely
    • Moderately likely
    • Neither likely nor unlikely
    • Moderately unlikely
    • Very unlikely

While not perfect, labeled options provide more context and a more consistent understanding. The exact wording should change based on what you are asking. If you focus more closely on the purpose of your program or event, you can connect questions to it more effectively. For example, “How likely are you to recommend this event to a peer?” can become, “How likely are you to recommend this event to peers looking to write better learning outcomes?” or, going a step further, “How effective was this event at helping you write better learning outcomes?”

  • Improving Generic Follow-Up Questions: Survey fatigue plagues learners across the globe, and generic questions beget generic responses. To improve, treat questions as a layered approach to assessment and think beyond any single survey. After a series of single-response questions (great at gathering data across a broader area and effective for longitudinal comparisons), decide on a subset to dig deeper into or a theme to focus on. For example, ask, “In what ways did the Front Desk Staff provide a welcoming environment on move-in day?” after a series of move-in day satisfaction questions. A specific prompt guides respondents to provide feedback on a particular theme, and the data from that question will be more thorough and actionable than a generic “Provide feedback on Move-In Day here:” The goal is to dig deeper into the things that matter rather than cast a wide net and hope to catch something.

Surveys in Baseline are housed at the organizational level rather than in personal accounts, so you maintain access even when people change positions. This allows you to focus questions within a specific survey for a deeper dive, then pull the data together from multiple administrations of the survey across terms for a more complete picture, without concern about losing data over time.

  • Improving Self-Reported ‘Knowledge Checks’: As with generic follow-up questions, take a less-is-more approach. If every indirect question were converted to a direct one, response rates would plummet as cognitive load increased. Instead, pick a theme for each assessment to dig deeper into, then focus on the questions within that theme that can be easily converted to direct assessment. While this approach provides a complete picture only after multiple administrations, and therefore requires a longer-term view of assessment, it yields more depth and a more accurate representation of learning. You can still include indirect knowledge-check questions for comparison and longitudinal reporting while digging deeper into a specific theme with each administration – the best of both worlds.

The overall approach for better assessment is to be specific and focused. Making the changes above will improve your assessment data. We have staff eager to continue this conversation and identify ways to improve your efforts. If your institution has a Baseline License, you have access to both Survey Design Specialists and Consultants.

  • Survey Design Specialists provide specific feedback on any survey distributed through Baseline, including scales, question order, and more. Take advantage of this service through the Request Project screen – more information about this here.
  • Consultants focus less on individual surveys and more on product usage and approaches to assessment. They provide suggestions on topics ranging from using the different Baseline features to developing a more nuanced and tailored approach to assessment to how Baseline connects to other Anthology products. You can submit a consultation request here.