Course: Content creation best practices > Unit 1
Lesson 6: Cognitive Complexity
Writing to levels of cognitive complexity
Defining Cognitive Complexity
Cognitive complexity in this context refers to the depth of thought required to answer a question. The most common way to operationalize cognitive complexity is through Bloom’s taxonomy (Bloom, 1956), which describes levels of increasing complexity within the cognitive domain. A revision of the taxonomy emphasized the actions a person takes at each level (Anderson et al., 2001). The original nouns and their paired verbs are as follows:
- Knowledge—Remember
- Comprehension—Understand
- Application—Apply
- Analysis—Analyze
- Synthesis—Synthesize
- Evaluation—Evaluate
Students who experience assessments demanding higher-order thinking are subsequently more likely to adopt holistic approaches to their study rather than surface-level, rote learning approaches (Jensen et al., 2014). Many standards require students to engage in cognitively complex tasks. Therefore, we want to be able to write items across levels of cognitive complexity.
Have a Plan
Have a plan for the level(s) of cognitive complexity you want to target for the items in the exercise you are writing:
- Your plan for writing an exercise should include the level of complexity you are targeting for each item. This should largely be dictated by the standard you are addressing.
- However, if a standard calls for a relatively high level of complexity, you may also want to include some items that target lower-complexity foundations to ensure students have the knowledge needed to engage in the higher-level thinking tasks.
Item Types
Use the existing principles in choosing item types rather than basing item type on cognitive complexity requirements.
- Despite common misconceptions, multiple choice items can be written to require higher levels of cognitive complexity!
- Items that require students to synthesize and evaluate can be challenging to create, but items that test application and analysis are definitely possible.
Strategies for constructing items at different levels
Use targeted verbs to guide the complexity level of a question
For each level of complexity, certain verbs are associated with the kinds of thinking in that level. This is not a fool-proof method, as some verbs span levels and much depends on the context of the item, but it is a good starting point. Verbs related to levels are as follows:
Knowledge—Remember
- Define
- Name
- List
- Identify
Comprehension—Understand
- Explain
- Summarize
- Describe
- Restate
Application—Apply
- Apply
- Compute/Solve
- Predict
- Use
- Demonstrate/Show
Analysis—Analyze
- Analyze
- Compare
- Contrast
- Distinguish
Synthesis—Synthesize
- Compose
- Construct
- Create
- Design
- Plan
Evaluation—Evaluate
- Appraise
- Assess
- Evaluate
- Judge
To use these verbs in multiple choice items, you can sometimes turn them into nouns paired with “Select” or “Identify.” For example, if you want students to summarize or interpret, you might say, “Select the best summary” or “Identify the most accurate interpretation.”
Use novel examples and context to increase cognitive complexity
Why?
- Context-dependent items are often more cognitively complex because they require students to draw out the important elements of the scenario and apply knowledge and skills in new ways. If the examples and context are the same ones that were used in instruction, the item will not require that application in a new way!
Can you show me an example?
Sure! In the following example (from Scully, 2017), flipping the target concept “formative assessment” into the options and adding an example moves the item from knowledge to comprehension.
Original:
Which of the following best describes what is meant by formative assessment?
(A) Is based on the student’s attitudes, interests, and values
(B) Is designed primarily to evaluate learning
(C) Is usually high stakes
(D) Provides information to modify teaching and learning*
Flipped, with example added:
A teacher uses a strategy of thumbs up, thumbs down with her students. This is an example of:
(A) Affective assessment
(B) Formative assessment*
(C) Diagnostic assessment
(D) Summative assessment
Below is an example of a question at the application level from high school biology:
In Yellowstone National Park, a population of coyotes relies on pocket gophers, quails, and ground squirrels as its primary food source. The coyotes themselves are hunted by mountain lions.
The following table describes the size of the coyote population over time.
| Year | Approximate population |
|---|---|
| 2001 | 1302 |
| 2002 | 1426 |
| 2003 | 1450 |
| 2004 | 1551 |
| 2005 | 1607 |
| 2006 | 1700 |
| 2007 | 1702 |
| 2008 | 1700 |
Based on this data, which of the following is most likely?
(A) The coyote population reached the carrying capacity of its environment around 2006.*
(B) The local rodent population has steadily decreased since 2001, reducing the area’s carrying capacity.
(C) Many more mountain lions were introduced to the area in 2005, increasing the carrying capacity of the environment.
(D) The carrying capacity of the environment is about 1300 coyotes.
A note of caution: There are disadvantages to adding more context, including increased reading load and differential familiarity with the context provided. Follow other item writing guidelines to minimize these issues.
Add variety to question wording
Particularly at lower levels, it can be difficult to find new ways to ask questions that do simple things like define vocabulary. Rather than always asking, “What is the definition of X?”, consider other ways to get at definitions. For example:
- Give example definitions and ask students to find the error (what's missing? what's added that shouldn't be there? what's a counterexample?)
- Ask students to compare elements of different definitions
References
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruickshank, K. A., Mayer, R. E., Pintrich, P. R., … Wittrock, M. C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives (Complete ed.). New York: Longman.
Bloom, B. S. (1956). Taxonomy of educational objectives: Vol. 1. Cognitive domain. New York: McKay.
Scully, D. (2017). Constructing multiple-choice items to measure higher-order thinking. Practical Assessment, Research, and Evaluation, 22(1), 4.