We have been working with academics on several projects related to metacognition and critical thinking. You can find out more about them below.
This paper examines how higher education researchers approach writing the rationale and justification for work published in journal articles. A common way of establishing this justification is to claim a research gap, but such gaps are often hard to locate in the text, and where they do appear, there is frequently no explanation of why filling the gap would be a worthwhile contribution to knowledge. What we do not know is how this task is approached across the field, what different approaches are taken, and what the implications might be for the quality of research and the advancement of knowledge. We therefore examined the gap statements in 124 articles from five top-ranked higher education journals. We found that most articles do have a gap statement, but these are mostly implicit rather than explicit and located somewhere in the introductory text. However, 20% of articles had no gap statement, and 27% of all articles offered no justification for the importance of the research. Based on the data and drawing on theory, we present a tool to assist with writing gap statements and comment on current practice in relation to knowledge contribution.
Learn more about Nave Wald
Learn more about Tony Harland
Learn more about Chandima Daskon
This qualitative study looks at multiple-choice questions (MCQs) in examinations and their effectiveness in testing higher-order cognition. While there are claims that MCQs can do this, we consider many such assertions problematic because of the difficulty of interpreting what higher-order cognition consists of and whether assessment tasks genuinely lead to specific outcomes. We interviewed university teachers from different disciplines to explore the extent to which MCQs can assess the higher-order cognition specified in Bloom’s taxonomy. Participants believed that MCQs can test higher-order cognition, but most likely only at the levels of ‘apply’ and ‘analyse’. The use of MCQs was often driven by the practicality of assessing large classes and by a need to compare students’ performances. MCQs also had a powerful effect on the curriculum because of the careful alignment between teaching and assessment, which makes changes to teaching difficult. These findings have implications both for teaching and for how higher education is managed.
Learn more about Qian Liu
Learn more about Nave Wald
Learn more about Chandima Daskon
Learn more about Tony Harland
This article examines how much ‘complex knowledge’ is assessed during a university degree and the extent to which students have the opportunity to develop it. We conceptualise complex knowledge as any type of assessment that requires students to create and evaluate knowledge, and for which they may receive formative feedback. Such activities are associated with developing higher-order cognition, a set of skills that is poorly understood in the context of modular degree structures.
The study analysed the foundational documents of 1135 modules offered between 1999 and 2018, measuring the proportion of complex knowledge being assessed, the weight assigned to the final examination, and the number of internal assessments per module. Findings show a clear increase over time in the frequency of assessments involving complex knowledge in both Science and HSSC (humanities, social sciences, commerce) subject groups.
Complex knowledge was also more prevalent in second- and third-year modules. We argue that more attention needs to be devoted to the quality of assessment in terms of its potential for enabling students to develop higher-order cognition. The study opens up important conversations about the appropriate amount of higher-order learning that a university graduate should experience.