I had a chance to have a look at the workings of ReView while I was at UNSW. It is an impressive feedback tool with some affordances that aren’t yet available in competing tools, but also some limitations which may be deal breakers for most institutions.
Pros
1 Mapping
ReView is really well suited to the Australian Tertiary Education
sector, which has made significant investments in the identification of
Graduate Attributes (GAs). Institutions are now being (or will soon be)
required to map learning outcomes and student achievement against those
GAs. ReView’s key design feature is its ability to do this as a natural
part of the marking process. Academics identify assessment criteria and
link them directly to GAs, so that when they come to generate feedback,
the criteria automatically map onto the GAs. This constitutes a huge time saving
in what could otherwise have become a very onerous process. In the HE
sector in the UK we are not yet required to do this – but I don’t think
it is far off. Designing in this kind of mapping where we can, to
pre-empt such a requirement, and finding the right tools to help us do it,
may be worth considering.
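By way of illustration only – the structure, names and marks below are my own hypothetical sketch, not ReView’s actual data model – the core idea is that each assessment criterion is linked up front to one or more GAs, so that marks recorded against criteria during ordinary marking can be rolled up against the GAs without any extra data entry:

```python
# Hypothetical sketch of the criterion-to-Graduate-Attribute mapping idea,
# not ReView's actual data model or code. Criteria are linked to GAs once,
# so marks recorded against criteria can be aggregated per GA automatically.
from collections import defaultdict

# Each assessment criterion is linked to one or more Graduate Attributes.
criterion_to_gas = {
    "argument":    ["critical thinking"],
    "referencing": ["scholarship", "academic integrity"],
    "teamwork":    ["collaboration"],
}

# Marks a tutor records against the criteria during normal marking (0-100).
marks = {"argument": 72, "referencing": 58, "teamwork": 80}

# Roll the criterion marks up against the GAs with no extra data entry.
ga_scores = defaultdict(list)
for criterion, mark in marks.items():
    for ga in criterion_to_gas[criterion]:
        ga_scores[ga].append(mark)

for ga, scores in ga_scores.items():
    print(f"{ga}: mean {sum(scores) / len(scores):.1f}")
```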
2 Transparency
As tutors mark using ReView, they generate a grade based on student
achievement against defined Assessment Criteria. This is, I believe, a
great way of improving the transparency of marking for students (making
it clearer to them how their final grade was arrived at). While I
acknowledge the critical scholarship on the use of scored or calculated
rubrics and assessment criteria in this way (particularly Royce Sadler’s
work), I feel that the benefits this approach affords students outweigh
its potential or actual drawbacks in terms of integrity. The tutors using
this tool determine student attainment using ‘sliders’, which I do take
issue with (below) but which work in much the same way as the rubric
calculator in Grademark.
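As a rough illustration of what a scored or calculated rubric does – the criteria, weights and scores below are my own hypothetical example, not ReView’s actual calculation – the final grade is essentially a weighted combination of the marker’s judgement against each criterion:

```python
# Illustrative only: a minimal sketch of a scored/calculated rubric, in which
# per-criterion judgements combine into an overall grade. The criterion names,
# weights and scores are hypothetical, not ReView's (or Grademark's) internals.

def rubric_grade(scores, weights):
    """Weighted average of per-criterion scores, all on a 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

weights = {"argument": 30, "evidence": 30, "structure": 20, "referencing": 20}
scores  = {"argument": 68, "evidence": 72, "structure": 65, "referencing": 60}

print(f"Overall grade: {rubric_grade(scores, weights):.1f}%")  # 67.0%
```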
3 Analytics
Spitting out the back of ReView is a really interesting ‘dashboard’
which shows rich and valuable data on student achievement, harvested
from the marking. I didn’t get to see this in action because (as is
always the case with these tools) unless there is live student activity
in it, it’s difficult to ‘mock up’ demos of things like this. But I saw
enough to get the gist of it, and it looks more advanced than the simple
raw data that Grademark generates.
4 Mobility
This tool is designed to work on mobile devices – particularly
tablets and especially iPads. This makes marking portable: the tool is
very well suited to the marking of studio-based work (such as design,
textiles, fine art), and it’s also fantastic for marking handwritten
exams because work doesn’t need to be submitted to it in order to be
marked and have feedback returned.
5 Self-evaluation
Unlike Grademark, this tool includes a student self-evaluation feature:
students can indicate what grade they think their work deserves. To achieve
the same thing in Grademark requires a workaround and lots of data
entry. However, the student self-evaluation is visible to the tutor as they
mark, and this may influence their judgement in unhelpful ways. I feel that
if there is going to be a student self-evaluation function, it must be
‘blind’ to the tutors as they mark.
Cons
1 Integration
Currently ReView is not well integrated or easily integrable: it is a
stand-alone tool (it’s not yet a building block for any of the major
VLEs) and it is only a feedback tool, not a marking tool. In
other words – students can’t submit their work to it and tutors can’t
comment directly on their work within it. The danger of this is that it
will generate a false economy. So even if it saves tutors time in the
marking of student work, it may cost them or their institutions more
time in terms of mark entry, handling submissions, returning student
work etc. Tutors may find themselves moving between two or even three
different systems to receive, read, annotate, plagiarism-check, return
and enter the marks for a piece of student work. Additionally, the
transparency it achieves through the rubric ‘sliders’ may be
counteracted by the lack of clarity as to precisely where the strengths
and problems in the work are located if comments can’t be made on the work
itself. For instance – a comment saying that some sentences are poorly
constructed is useless to students unless it is clear to them which ones are
poor and which ones aren’t. Integration with VLEs will no doubt
come with time, but it looks unlikely that a marking tool is going
to emerge. As such, while it produces some lovely analytics, it doesn’t reach
the ‘granular’ level that GradeMark achieves.
2 Clarity
I have concerns about the ‘sliders’ themselves. If we are using
rubrics and assessment criteria to improve transparency, we need to take
great care not to then obfuscate what we are doing. The ‘sliders’ allow
tutors to decide whether a piece of work is in the high or low range
within a classification (i.e. that against a single criterion the work can
be a ‘high 2.1’ or a ‘low 2.1’). This to me is one step forward and two
steps back. When you have five or more criteria (averaging 20% or less
of the total each), the whole span of a classification within a single
criterion amounts to 2% or less of the total mark. To be making judgements
within that classification (i.e. within 2%) is marking to a level of accuracy
which is simply not reliable or helpful to students. It is for this
reason that I think the ‘radio button’ approach of the Grademark rubric
calculator is more transparent. In other words, it doesn’t leave students
wondering what makes their achievement a ‘high’ rather than a ‘low’ 2.1
(for instance) against a particular criterion. It does allow tutors to
tweak a final grade away from a borderline (e.g. 69%), but my hunch is that
when rubric calculators are used, students are less inclined to
complain about a mark that ‘comes out in the wash’ to that number than
one which is arrived at holistically by the tutor. As a result – I don’t
think we should shy away from awarding borderline marks if that’s what
the rubric (which has been clearly communicated to the student
beforehand) calculates. Anything else is duplicitous.
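To make the arithmetic concrete – this is my own illustrative calculation, with hypothetical equal weights – with five equally weighted criteria the entire ten-mark span of a 2.1 band (60–69) within a single criterion can move the overall mark by less than two percentage points, so a ‘high’ versus ‘low’ judgement within that band is a sub-2% distinction:

```python
# Illustrative only: how much a within-band ('high 2.1' vs 'low 2.1') judgement
# on one criterion can move the overall mark, assuming five equally weighted
# criteria. The weights and the band boundaries are hypothetical examples.

num_criteria = 5
weight = 1 / num_criteria          # each criterion worth 20% of the total
band_low, band_high = 60, 69       # a UK 2.1 band within a single criterion

# Maximum shift in the overall mark from moving one criterion across the band.
max_shift = (band_high - band_low) * weight
print(f"Maximum effect on the overall mark: {max_shift:.1f} points")  # 1.8
```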
3 Cost
This tool looks like it’s going to be quite expensive in comparison
to its competitors. Given that it would likely need to be used in
conjunction with other marking and submission tools, and that we can
probably achieve many of the affordances it offers with some workarounds
within Grademark, it’s going to prove a hard sell to many cash-strapped
institutions at the moment.
Final Evaluation
This looks like a fine tool that will almost certainly be the right
choice for many marking jobs. I think it will be especially attractive to
colleagues marking physical objects (like artworks, models etc) and
performances (music, drama, presentations etc). I suspect many will find
it useful for marking exams, especially if feedback is required on
them. It won’t replace marking tools like Grademark, and it may be hard
to justify the investment if we can find workarounds which achieve
similar things within the tools for which we already hold site
licences.