This post provides guidance for academic staff on writing strong test questions. Good practice here is essential for high-quality assessments that are both pedagogically sound and efficient to deliver and mark.

The efficacy of auto-marked test questions is an area of some debate within Higher Education. For some members of the academic community, the idea that multiple choice questions (MCQs) or other short-form question types could effectively test students’ reasoning and synthesis skills seems difficult to credit. The evidence tells us, however, that well-constructed auto-marked tests can be effective for higher-level learning.

This guidance highlights how such questions can contribute both to student learning and to the efficiency of assessment. Although it was originally created with Faculty of Science staff in mind, the principles apply across all faculties and disciplines within the university.

Question Creation Principles

Bear in mind the following principles when writing test questions in general: 

  • Alignment with Learning Outcomes 
    In line with the university’s Principles for Learning Teaching and Assessment, assessment questions should align with the specific learning outcomes of the module, a concept known as Constructive Alignment. Questions should assess the knowledge or skills the students are expected to acquire (Biggs and Tang, 2011).
  • Cognitive Level 
    Consider where the students are in their learning journey. Questions can test anything from basic recall through to higher-order thinking skills such as analysis, evaluation and synthesis of ideas (Anderson & Krathwohl, 2001). Referring to learning taxonomies, like Bloom’s Taxonomy, can help you create questions at the appropriate cognitive level. See the section below on Example Question Prompts for some suggestions.
  • Accessibility and Universal Design 
    In line with the university’s Inclusive Learning Design Principles, assessments need to be accessible in both content and delivery. As with any other online content, test questions should be fully accessible for all users, so you should consider good practices such as alt-text and descriptions for any images and diagrams used. Online test tools also provide options for delivery, such as printing out tests or allowing extra time for students with additional requirements. Plan in advance to ensure all your students can undertake any assessments in your module.
  • Regular Review and Updating 
    Regularly review and update questions to ensure they remain relevant and accurate. Asking colleagues to review your questions can help you spot potential issues and identify other common misconceptions you might want to address. Incorporating feedback from previous students’ module evaluation questionnaire (MEQ) responses will help you refine your question set (Nicol & Macfarlane‐Dick, 2006).

Writing Questions

Things to consider when writing your test questions: 

  • Clarity and Precision 
    Questions should be clear, unambiguous, and concise. Avoid complex sentence structures and jargon unless they are essential to the subject matter (Haladyna & Downing, 1989).
  • Structuring Multiple Choice Questions (MCQs) 
    If you use MCQs, ensure each question has a clear stem and only one correct answer (unless set as a multiple response question type). Distractor answers should be plausible and reflect common misconceptions (Tarrant & Ware, 2010). The Writing Good Multiple Choice Test Questions guide from Cynthia Brame at Vanderbilt University provides a helpful steer in this area.
  • Avoiding Common Answer Pitfalls 
    Avoid using “All of the above” or “None of the above” as answer options, as these can be ambiguous or misleading. Ensure options are homogeneous in length and complexity (Haladyna et al., 2002); the sketch after this list shows how such structural rules might be checked automatically. Well-structured MCQs can also help students identify information relating to plausible but incorrect alternatives, forcing them to engage in productive retrieval of correct answers relating to their module material (Little et al., 2012).
  • Testing Higher-Order Thinking 
    Select appropriate question types, such as assertion-reason questions (Williams, 2006) or application questions, that push students’ understanding beyond rote memory by asking them to apply concepts in new contexts or solve problems (Fellenz, 2004).
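
Some of the structural rules above can be checked automatically before questions are loaded into a test platform. The short Python sketch below illustrates the idea for three of them (exactly one keyed answer, no catch-all options, roughly homogeneous option lengths). The question structure, the check_mcq helper and the length threshold are illustrative assumptions, not the format or API of any particular test tool.

    BANNED_OPTIONS = {"all of the above", "none of the above"}

    def check_mcq(stem: str, options: dict[str, bool]) -> list[str]:
        """Return warnings for a draft single-response MCQ.

        `options` maps each answer text to True (the keyed answer)
        or False (a distractor).
        """
        warnings = []

        # A single-response MCQ should have exactly one keyed answer.
        if sum(options.values()) != 1:
            warnings.append("There should be exactly one correct answer.")

        # Catch-all options can be ambiguous or misleading.
        for text in options:
            if text.strip().lower() in BANNED_OPTIONS:
                warnings.append(f"Avoid the option {text!r}.")

        # Options should be roughly homogeneous in length, so the
        # longest option does not cue the answer.
        lengths = [len(text) for text in options]
        if max(lengths) > 2 * min(lengths):
            warnings.append("Option lengths vary widely.")

        return warnings

    draft = {
        "Chloroplast": True,
        "Mitochondrion": False,
        "Nucleus": False,
        "All of the above": False,
    }
    for warning in check_mcq("Where does photosynthesis take place?", draft):
        print(warning)

Run on the draft above, this flags both the “All of the above” option and the uneven option lengths; the thresholds are deliberately crude and would need tuning for a real question bank.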

Example Question Prompts

The following example question prompts show how questions can be constructed for auto-marking in ways that test students’ understanding at various levels. Use the question stem, then add a relevant example from your subject area to set up a question that interrogates knowledge at the appropriate level; a short sketch showing how these stems can be reused as templates follows the lists below.

Comprehension
  • Which statements support [insert hypothesis]?
  • Which is the best answer to [insert situation or problem]?
  • A summary of [insert topic] would be: [insert answer options]
Application
  • What would result if [insert scenario]?
  • Which approach would you use to [insert task]?
  • What alternative way might you plan to [insert task]?
Analysis
  • How would you classify [insert topic]?
  • Identify the different parts of [item/theory].
  • What is the relationship or distinction between [insert items x and y]?
Synthesis
  • In [insert scenario], what would happen if [scenario with alternatives]?
  • Predict the outcome of [scenario or problem].
  • What needs to change in [scenario or problem] to achieve [desired outcome]?
Evaluation
  • Why was it better that [insert outcome] happened?
  • How would you prioritise the following [insert items to be prioritised]?
  • Would [insert outcome] be better if [insert event]?
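
If you maintain a bank of such stems, they can be stored as simple templates and filled with discipline-specific material when drafting questions. The Python sketch below is illustrative only; the PROMPT_STEMS names and the worked example are assumptions, not part of any existing question bank or tool.

    PROMPT_STEMS = {
        "comprehension": "Which statements support {hypothesis}?",
        "application": "Which approach would you use to {task}?",
        "analysis": "How would you classify {topic}?",
        "synthesis": "What needs to change in {scenario} to achieve {outcome}?",
        "evaluation": "How would you prioritise the following: {items}?",
    }

    # Fill a stem with a subject-specific example to draft a question.
    question = PROMPT_STEMS["application"].format(
        task="separate a mixture of sand and salt",
    )
    print(question)
    # -> Which approach would you use to separate a mixture of sand and salt?

Answer options and the keyed answer would still be written by hand, following the guidance in the Writing Questions section above.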

References

Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Longman.

Biggs, J., & Tang, C. (2011). Teaching for quality learning at university (4th ed.). Open University Press.

Brame, C. (2013). Writing good multiple choice test questions. Vanderbilt University Center for Teaching. Retrieved [05/09/2024].

Fellenz, M. R. (2004). Using assessment to support higher level learning: The multiple choice item development assignment. Assessment & Evaluation in Higher Education, 29(6), 703-719.

Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-334.

Haladyna, T. M., & Downing, S. M. (1989). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.

Little, J. L., Bjork, E. L., Angello, G., & Bjork, R. A. (2012). Multiple-choice tests exonerated, at least of some charges: Fostering test-induced learning and avoiding test-induced forgetting. Psychological Science, 23(11).

Nicol, D. J., & Macfarlane‐Dick, D. (2006). Formative assessment and self‐regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.

Tarrant, M., & Ware, J. (2010). A framework for improving the quality of multiple-choice assessments. Nurse Education Today, 30(6), 515-520.

Williams, J. (2006). Assertion-reason multiple-choice testing as a tool for deep learning: A qualitative analysis. Assessment & Evaluation in Higher Education, 31(3), 287-301.