Good learning, criterion-referenced assessment, feedback and learning objectives

I am on a steep learning curve when it comes to criterion-referenced assessment. Like many teachers, I have used the standard norm-referenced assessment model for years. It is recognizable to everyone; for example: Question 1 asks for three main points, and if the student identifies all three, three marks are awarded. In contrast, criterion-referenced assessment works with pre-defined criteria that describe increasing levels of success in understanding the curriculum. The more I read about it and work with it in the MYP and the IB DP, the more I see how criterion-referenced assessment allows for a more sophisticated understanding and assessment of a student’s learning, skills and knowledge.

The example below shows what an MYP criterion (in “Individuals and Society”) looks like:



The article “A comparison of norm-referencing and criterion-referencing methods for determining student grades in higher education” lays out some interesting information regarding the difference between criterion-referenced assessment and norm-referenced assessment. I have summarised and shortened some of the points for easy reading.

  • In criterion-referenced assessment, a grade is awarded by comparing a student’s achievement with clearly stated criteria for particular levels of performance.
    Unlike norm-referencing, there is no pre-determined grade distribution (bell curve), and a student’s grade is not influenced by other students’ grades. Theoretically, all students in a class could receive very high (or very low) grades, depending on how they performed against the established criteria and standards.
  • In criterion-referenced assessment, the categories are often called failing, basic, proficient, and advanced. a

Best practice in criterion-referenced assessment can be summarized as follows:

  1. Strike a balance between criterion-referencing and norm-referencing, with the emphasis on criterion-referencing.
  2. In class, begin with clear statements of expected learning outcomes and levels of achievement.
  3. The learning outcomes (or learning intentions, objectives) should make sense to students and should be written in a language that makes sense to them.
  4. Measure student achievement as objectively as possible against these statements, and compute results and grades transparently; explain to students how they achieved the mark and how they can do better.
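The contrast in point 4 – computing grades transparently against preset criteria rather than against the cohort – can be made concrete with a small sketch. It reuses the failing/basic/proficient/advanced labels from the comparison above; the thresholds, scores and function names are invented purely for illustration.

```python
# A minimal sketch contrasting the two grading models.
# Band boundaries and scores are made up for illustration only.

def criterion_referenced_grade(score, bands):
    """Award the label of the highest band whose threshold the score meets."""
    grade = None
    for threshold, label in sorted(bands.items()):
        if score >= threshold:
            grade = label
    return grade

def norm_referenced_percentile(score, cohort_scores):
    """Express a score as the percentage of the cohort it equals or beats."""
    below_or_equal = sum(1 for s in cohort_scores if s <= score)
    return 100 * below_or_equal / len(cohort_scores)

bands = {0: "failing", 50: "basic", 70: "proficient", 85: "advanced"}
cohort = [42, 55, 61, 68, 74, 74, 80, 88, 91, 95]

# Criterion-referenced: the grade depends only on the preset bands,
# so in principle every student could reach "advanced".
print(criterion_referenced_grade(88, bands))   # -> advanced

# Norm-referenced: the same score of 88 is only meaningful
# relative to the rest of the class.
print(norm_referenced_percentile(88, cohort))  # -> 80.0
```

The point the sketch makes is the one Biggs makes below: in the first function, nothing any other student does can change a grade; in the second, the grade is entirely a function of the cohort.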

“Clear statements of expected learning outcomes” echoes what John Hattie says about stated learning intentions and success criteria:

John Hattie, Visible Learning for Teachers: Maximizing Impact on Learning, p. 47


A description of the expectations of criterion-referenced assessment in the MYP also mentions the importance of learning intentions (objectives):

  • … In the MYP, students should have access to the learning objectives for each subject group … students are informed about the criteria by which judgments are made about the quality of their work. As Wiggins argues …, in order to effectively learn, students need “a complete demystification of the standards and performances of test tasks against which they will be evaluated”
  • Knowing the specific objectives for learning and the criteria for assessment enables students to prepare, self-evaluate, self-adjust, and reflect – fundamental requisites of learning in a system of educative assessment. In educative assessment, learning objectives are articulated to students before, during, and after assessment. b

Grant Wiggins, in “Educative Assessment: Designing Assessments to Inform and Improve Student Performance”, c expands on this as follows: at the very least, students need four things that challenge conventional assessment at its roots:

  1. they need a complete demystification of the standards and performance test tasks against which they will be evaluated (as already occurs when people work at a job, play music, or participate in sports);
  2. they need multiple opportunities with accompanying feedback to learn to master complex tasks;
  3. they need progress reports in which current performance is judged against exemplary adult performance; and
  4. most of all, they need to know how they are doing as they do it.

To take my meandering thoughts back to assessment, I asked myself the question:

To what extent is criterion-referenced assessment better than norm-referenced assessment?

Biggs states that criterion-referenced assessment is superior to norm-referenced assessment, as it allows grades to reflect learning. He adds: “In criterion-referenced assessment, where students are assessed on how well they meet preset criteria, they see that to get a high grade, they have to know the intended outcomes and learn how to get there, with a premium on attributions involving effort, study skill and know-how. In norm-referenced assessment, success depends on the abilities of other students, over which there is no control, while in criterion-referenced assessment, the ball is in the student’s court.

“Feedback as to progress also encourages beliefs in future success, which again is easier with criterion-referenced assessment: ‘This is what you did, this is what you might have done, this is how to get a better result.’ But how can norm-referenced feedback, such as ‘You are below average on this’, help students to learn? This is not to say that some students don’t want to be told where they stand in relation to their peers, but that information has little to do with teaching and learning. It is nice to be told that you’re cleverer than most other students, but not very helpful for learning how to improve your performance.” d

I turned to Twitter to see what people were saying about criterion-referenced assessment.


That led me to reflect on the purpose of exams (I told you I was meandering!), and I found this thought-provoking overview by the Office of Learning and Teaching in Economics and Business, Sydney University:

Pros of exams e

The main advantages of timed examinations are as follows (Biggs, 2003):

  • They are convenient in that they are held at a set place and time.
  • Invigilation – it is easier to prevent cheating.
  • Conditions are standardised – no student has an unfair advantage.
  • Can be viewed as modelling real life – working swiftly and well under pressure.
  • Can have the positive backwash of targeting a holistic view of the course; however, the negative backwash of memorising specific points is more likely to occur.
  • The time constraint in exams exists more for administrative reasons than educational ones – convenience and the prevention of cheating. (Biggs, 2003)

Cons of exams 

  • Exams result in memorisation-related activities, which lead only to an acquaintance with many topics. Assignments, on the other hand, result in application-related activities, which lead to deep learning of one topic. (Biggs, 2003)
  • In comparing responses to an essay exam with responses to an assignment, the exam responses are very similar (cloned), whilst assignments elicit greater creativity. (Biggs, 2003)
  • The use of multiple-choice questions in exams can result in anger, as students are not able to express higher-level skills. (Biggs, 2003)
  • Marks in coursework assignments are shown to be higher than marks in exams. (Gibbs & Simpson, 2005) f
  • Students consider coursework assignments to be fairer than exams in that they measure a greater range of abilities and allow them to organise their own work patterns. (Gibbs & Simpson, 2005)
  • Coursework assignments have been shown to be much better predictors of the long term learning and retention of course content than exams. (Gibbs & Simpson, 2005)
  • A disadvantage of exams is that they have the effect of students concentrating their study into an intense period at the end of the course. – Frequent assignments or tests can distribute effort across the course. (Gibbs & Simpson, 2005)

Alternatives to exams 

  • Open-book timed exam – removes the memorisation of detail and allows higher-level thinking. (Biggs, 2003)
  • Assignment or take-home exam – leads to deeper learning as students consult more sources; however, the problem of plagiarism arises. (Biggs, 2003)
  • Ordered outcome items – allow students to illustrate a broad range of skills, from basic data retrieval to higher-level analytical skills. (Biggs, 2003)
  • Group projects – decrease the marking workload and teach cooperative skills, but do not take individual contributions into consideration. This problem can be minimised by allowing peer evaluation and by getting students to explain how their contribution fits into the assignment as a whole. (Biggs, 2003)
  • Reflective journals – can be used more to establish evidence of quality thinking rather than as an assessment reporting tool. (Biggs, 2003)
  • Case studies – allow students to apply knowledge and exhibit professional skills. (Biggs, 2003)
  • Gobbets (a gobbet is a short commentary on an assigned primary source) – e.g. looking at a newspaper article and responding with how it fits into the context of the course. This allows students to access the bigger picture. (Biggs, 2003)
  • Peer and self-assessment of problems and assignments can be useful in allowing students to internalise expected standards so that they can supervise themselves and improve the quality of their own assignments. In addition to enhancing student learning, this can decrease marking time. (Biggs, 2003)

So, if you are still with me on this journey, here is how I encapsulate all this for myself. Good learning happens when:

  • …students are given a clear learning objective that states what they will learn and how they can show that they’ve learnt it successfully. (Learning intentions and success criteria)
  • …students are given feedback that is geared towards teaching them how to improve.
  • …students are given multiple opportunities to practice something.
  • …students are assessed using criterion-referenced rubrics which are written in clear, accessible language. Students understand the rubric and know what success looks like.
  • …assessment is designed to teach.
  • …the emphasis is not on exams but on more representative tasks that show student learning rather than the ability to memorise by rote.

Overview of the differences between criterion-referenced and norm-referenced assessment: g

Purpose
  • Criterion-referenced: To determine whether each student has achieved specific skills or concepts; to find out how much students know before instruction begins and after it has finished.
  • Norm-referenced: To rank each student with respect to the achievement of others in broad areas of knowledge; to discriminate between high and low achievers.

Content
  • Criterion-referenced: Measures specific skills which make up a designated curriculum. These skills are identified by teachers and curriculum experts. Each skill is expressed as an instructional objective.
  • Norm-referenced: Measures broad skill areas sampled from a variety of textbooks, syllabi, and the judgments of curriculum experts.

Item characteristics
  • Criterion-referenced: Each skill is tested by at least four items in order to obtain an adequate sample of student performance and to minimize the effect of guessing. The items which test any given skill are parallel in difficulty.
  • Norm-referenced: Each skill is usually tested by fewer than four items. Items vary in difficulty, and items are selected that discriminate between high and low achievers.

Score interpretation
  • Criterion-referenced: Each individual is compared with a preset standard for acceptable achievement; the performance of other examinees is irrelevant. A student’s score is usually expressed as a percentage, and achievement is reported for individual skills.
  • Norm-referenced: Each individual is compared with other examinees and assigned a score – usually expressed as a percentile, a grade-equivalent score, or a stanine. Achievement is reported for broad skill areas, although some norm-referenced tests do report achievement for individual skills.

This infographic contains a good overview of different assessment types, including criterion- and norm-referenced assessment: h


  1. University of Melbourne, A comparison of norm-referencing and criterion-referencing methods for determining student grades in higher education.
  2. Jeff Thompson, Perspectives of Criteria Based Assessment in the International Baccalaureate’s Middle Years Programme. Chad Carlson, Colegio Internacional de Educación Integral, 2012, Executive Summary, jtwinners/documents/FinalCarlsonExecutiveReport.pdf.
  3. Wiggins, Grant P. Educative Assessment: Designing Assessments to Inform and Improve Student Performance. San Francisco, Calif.: Jossey-Bass, 1998.
  4. Biggs, J. B. (2003). Teaching for Quality Learning at University. Buckingham, UK: SRHE and Open University Press.
  5. The Pros and Cons of Exams, Office of Learning and Teaching in Economics and Business, Sydney University, 2010.
  6. Gibbs, G., & Simpson, C. (2005). Conditions Under Which Assessment Supports Students’ Learning. Learning and Teaching in Higher Education, Issue 1, pp. 3-31.
  7. Huitt, W. (1996). Measurement and evaluation: Criterion- versus norm-referenced testing. Educational Psychology Interactive. Valdosta, GA: Valdosta State University.
  8. “The 6 Types Of Assessments (And How They’re Changing).” Edudemic, n.d. Web. 17 Dec. 2013.