At the moment I'm playing with a couple of social networking apps to see if I can find a way of helping students develop social reading strategies that support their engagement with the English Literature texts they are studying on their modules. I'm particularly interested in finding ways to make the behaviours and engagements that high-achieving students have with the primary texts they're studying more transparent to students who do less well. This kind of 'folksonomic' approach to learning is usually only visible in seminar interactions, and that only ever offers a very small glimpse. Social reading apps are always going to be partial as well, but I can't help thinking that they may offer new ways of supporting peer, social learning in Higher Education.
The apps I'm looking at are as follows:
Lemon Tree is our fabulous library game. It links to students' library use (both physical and virtual) and allows students to see the library engagement of their student colleagues. It allows students to share which books they've borrowed, how frequently and when they visit the library, their use of ezproxy and many other things. It also allows them to review books and comment on them and to see who else has borrowed the books they have. It links to Facebook and rewards students with virtual badges and growth on their personal lemon trees.
Benefits: it allows students to see library behaviour clearly and to share thoughts and responses to texts.
Goodreads is a social reading app that allows people to share the books they have read, are reading and want to read in the future. It allows people to rate their reading (out of five stars), join groups, sort their books onto shelves and take part in polls. I've built a private space for people who are keen to join my experiment over the summer, but there is also a facility for student groups to be formed. It's clear to me that this app is already widely used and it's highly likely that students of English Literature are already using it to support their learning and engagement with texts. I'm keen to support this more formally by embedding it into the teaching and learning design of a module.
Benefits: this is more text focussed and allows students to engage in discussions and other types of interactions around the text(s) they are reading in common. Again, reading behaviour is more visible and obvious to others.
Readmill is a social reading app that is more granular than those already described. It allows people to share highlights and notes that they have made within a text and to comment on or 'like' the highlights made by others on that same or other texts. Like the other apps, you can 'follow' like-minded folks and friends (and therefore student colleagues). It allows you to sync with an eReading device (I'm using a Kindle Touch) but it also operates as an eReading device in its own right and has an iPad and iPhone app for this purpose.
Benefits: this gets us to a deeper level than the other apps in that we can see not just what others are reading and their reactions/responses to the text(s) as a whole, but also how they are reading: which aspects of the primary text they consider to be valuable and why.
Kindle is of course an eReading device that allows social highlighting, which can operate in a similar way to Readmill but is device specific. I've recently purchased a Kindle Touch and am enjoying the hardware and software (although it's not as intuitive as I had hoped at times). The fact that so many literary texts are available very cheaply or even free means that an eReading device is a good investment for English Literature students (buying a Kindle and getting the texts so cheaply has got to be better value, I would have thought, than buying all the set texts for a degree second hand). I've been disappointed at the lack of availability of some texts (there is no eCopy of Said's Orientalism, for instance, which I found astonishing) but that will improve with time.
As I say, I'm pretty confident students are already using these social reading apps whether we know about it or not. I'm keen to develop my own social reading skills over the summer so that I can use these tools more proactively in my modules in the future. I've gathered together a few like-minded English Lit and librarian types and we're going to do some social reading experiments over the summer. I'll keep you posted on how it goes!
Friday, 1 June 2012
Wednesday, 7 March 2012
Grademark's new features
One of the fantastic (albeit slightly unnerving) aspects of using Grademark to do my marking is that when new features are added, they simply ‘appear’ in the inbox of the document viewer, ready for me to use. I’m lucky in that I get alerts to some of these changes, but sometimes they feel like they’ve turned up unannounced. This is a great thing in that you always get the new features as soon as they are available without having to wait for an upgrade. But it can be, as I said, unnerving: especially if you are in the middle of conducting some training and there is a new feature there that you’ve not encountered before!
In the last few months a fair few new features have appeared and having just undertaken two rather significant blocks of marking, I thought it was worth reflecting on my experiences and uses of them. These are considered in no particular order.
Response column: This is a new feature in the assignment inbox which shows if and when students have collected their feedback (image right). Of course it can’t tell us if they’ve engaged with their feedback, understood it and/or addressed it, but at least we now know if it’s been collected. I’ve been amazed at how quickly students have accessed their marks and feedback. For instance, a batch of assignments was released at 1pm today and by 1.05pm eight students had collected their feedback. Given I’d sent the email alerting them that the release time had been moved forward only 15 minutes beforehand, I was pretty amazed by this! I checked an hour or so later and the number was up to 27 students: roughly a third of the total.
Rubric viewer: In the move to Turnitin 2, one of the things that a lot of people missed was the ability to see the rubric clearly. The new version of the rubric required you to mouse over each segment in order to see the words. A recent adjustment means that with the click of a single button (the four-arrow button in the image to the left) the whole rubric can be opened out into a full view and can be moved to a second screen. I tend to use two monitors (I dock my laptop to a screen whenever I can) so I’ve now got into the habit of opening the rubric straight away and moving it onto my second screen. This means I can adjust it as I go rather than having to remember everything at the end of the paper. It also means that you can put a lot more detail into the rubric segments and still see it all clearly which is a vast improvement on the old view of the rubric.
Strike through: when you highlight text in the document viewer and press the ‘delete’ or ‘backspace’ key, a red strike-through line appears. This is really handy for correcting spelling errors (I type the correct spelling above the word using a Text Comment), but it’s also really useful for showing students how they can simplify their language by eliminating unnecessary words or turns of phrase. I’ve done this several times with sentences or whole paragraphs and left a comment next to them telling students that I have ‘deleted’ words to show how many unnecessary words they are using.
These three new features are pretty simple but I’ve been amazed at how quickly and easily I’ve been able to incorporate them into my marking practice.
Getting started with Grademark
Tabitha from Turnitin has just sent through this fantastic interactive guide to Grademark. It's a really handy resource for anyone who is keen to find out the basics of the tool.
Tuesday, 14 February 2012
Reviewing ReView
I had a chance to have a look at the workings of ReView while I was at UNSW. It is an impressive feedback tool with some affordances that aren’t yet available in competing tools, but also some limitations which may be deal breakers for most institutions.
Pros
1 Mapping
ReView is really well suited to the Australian Tertiary Education sector, which has made significant investments in the identification of Graduate Attributes (GAs). Institutions are now being (or will soon be) required to map learning outcomes and student achievement against those GAs. ReView’s key design feature is its ability to do this as a natural part of the marking process. Academics identify assessment criteria and link them directly to GAs, so when they come to generate feedback, the marks and comments automatically map onto the GAs. This constitutes a huge time saving in what could otherwise become a very onerous process. In the HE sector in the UK we are not yet required to do this – but I don’t think it is far off. Designing this kind of mapping in now, where we can, to pre-empt such a requirement, and finding the right tools to help us do it, may be worth considering.
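To make the idea concrete, here is a minimal sketch of what this kind of criterion-to-GA mapping amounts to in data terms. The criteria, GA names and marks are my own invented illustration, not ReView's actual data model or code:

```python
# A hypothetical, simplified sketch of linking assessment criteria to
# Graduate Attributes (GAs). All names and numbers are invented for
# illustration; this is not how ReView is actually implemented.
from collections import defaultdict

# Mapping defined up front, when the assessment is designed:
criteria_to_gas = {
    "Argument and analysis": ["Critical thinking"],
    "Use of sources":        ["Information literacy", "Critical thinking"],
    "Written communication": ["Communication"],
}

# Marks the tutor awards against the criteria during ordinary marking:
criterion_marks = {
    "Argument and analysis": 65,
    "Use of sources":        58,
    "Written communication": 72,
}

# Because the criteria were already linked to GAs, achievement against the
# GAs falls out of the marking with no extra data entry:
ga_marks = defaultdict(list)
for criterion, mark in criterion_marks.items():
    for ga in criteria_to_gas[criterion]:
        ga_marks[ga].append(mark)

for ga, marks in ga_marks.items():
    print(f"{ga}: {sum(marks) / len(marks):.1f}")
# Critical thinking: 61.5
# Information literacy: 58.0
# Communication: 72.0
```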
2 Transparency
As tutors mark using ReView, they generate a grade based on student achievement against defined Assessment Criteria. This is, I believe, a great way of improving the transparency of marking for students (making it clearer to them how their final grade was arrived at). While I acknowledge the critical scholarship on the use of scored or calculated rubrics and assessment criteria in this way (particularly Royce Sadler’s work) I feel that the benefits it affords students outweigh the potential or actual drawbacks in terms of integrity. The tutors using this tool determine student attainment using ‘sliders’ which I do take issue with (below) but which work in much the same way as the rubric calculator in Grademark.
3 Analytics
Spitting out of the back of ReView is a really interesting ‘dashboard’ which shows rich and valuable data on student achievement, harvested from the marking. I didn’t get to see this in action because (as is always the case with these tools) unless there is live student activity in it, it’s difficult to ‘mock up’ demos of things like this. But I saw enough to get the gist of it, and it looks more advanced than the raw data which Grademark generates.
4 Mobility
This tool is designed to work on mobile devices – particularly tablets and especially iPads. This makes it portable, and means it is very well suited to the marking of studio-based work (such as design, textiles and fine art). It’s also fantastic for marking handwritten exams because work doesn’t need to be submitted to the tool in order for it to be marked and for feedback to be returned.
5 Self-evaluation
Unlike Grademark, this tool includes a student self-evaluation function. Students can indicate what they think their work deserves; achieving the same thing in Grademark requires a workaround and lots of data entry. However, the student self-evaluation is visible to the tutor as they mark, and this may influence their judgement in unhelpful ways. I feel that if there is going to be a student self-evaluation function, it must be ‘blind’ to tutors as they mark.
Cons
1 Integration
Currently ReView is not well integrated or integrable, in that it is a stand-alone tool (it’s not yet a building block for any of the major VLEs) and in that it is only a feedback tool, not a marking tool. In other words, students can’t submit their work to it and tutors can’t comment directly on their work with it. The danger of this is that it will generate a false economy: even if it saves tutors time in the marking of student work, it may cost them or their institutions more time in terms of mark entry, handling submissions, returning student work and so on. Tutors may find themselves moving between two or even three different systems to receive, read, annotate, plagiarism-check and return a piece of student work and to enter its mark. Additionally, the transparency it achieves through the rubric ‘sliders’ may be counteracted by a lack of clarity as to precisely where the strengths and problems in the work are located, if they can’t be marked on the work itself. For instance, a comment saying that some sentences are poorly constructed is useless to students unless they are clear which ones are poor and which ones aren’t. The integration with VLEs will no doubt come with time, but it’s looking unlikely that the marking tool is going to emerge. As such, while it does some lovely analytics, it doesn’t operate at the ‘granular’ level that Grademark achieves.
2 Clarity
I have concerns about the ‘sliders’ themselves. If we are using rubrics and assessment criteria to improve transparency, we need to take great care not to then obfuscate what we are doing. The ‘sliders’ allow tutors to decide whether a piece of work is in the high or low range within a classification (i.e. against a single criterion it can be a ‘high 2.1’ or a ‘low 2.1’). This to me is one step forward and two steps back. When you have five or more criteria (averaging 20% or less per criterion), the whole span of a classification band within a single criterion is worth 2% or less of the total, so making judgements within that band is marking to a level of accuracy which is simply not reliable or helpful to students. It is for this reason that I think the ‘radio button’ approach of the Grademark rubric calculator is more transparent: it doesn’t leave students wondering what makes their achievement a ‘high’ rather than a ‘low’ 2.1 (for instance) against a particular criterion. Grademark does allow tutors to tweak a final grade away from a borderline (e.g. 69%), but my hunch is that when rubric calculators are used, students are less inclined to complain about a mark that ‘comes out in the wash’ at that number than one which is arrived at holistically by the tutor. As a result, I don’t think we should shy away from awarding borderline marks if that’s what the rubric (which has been clearly communicated to the student beforehand) calculates. Anything else is duplicitous.
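To make the arithmetic behind this concrete, here is a minimal worked example. It assumes five equally weighted criteria and the usual 60–69% span for a 2.1; the numbers are my own illustration, not anything ReView itself calculates:

```python
# Illustrative arithmetic only: how much a whole classification band on a
# single slider is worth in the final mark. Assumes five equally weighted
# criteria (20 points each out of 100) and a 2.1 band running from 60 to 69.
criterion_points = 20      # one criterion's share of the final mark
low_21, high_21 = 60, 69   # bottom and top of the 2.1 band for that criterion

# Span of the whole 2.1 band on one criterion, in final-mark points:
band_span = (high_21 - low_21) * criterion_points / 100

print(band_span)  # 1.8 -> a 'high' vs 'low' 2.1 judgement moves the total by under 2 points
```

In other words, the entire high/low distinction a slider invites the marker to make within a band is worth less than two marks in a hundred.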
3 Cost
This tool looks like it’s going to be quite expensive in comparison to its competitors. Given that this is likely to be a tool that would need to be used in conjunction with other marking and submission tools and that we can probably achieve many of the affordances it offers with some workarounds within Grademark, it’s going to prove a hard sell to many cash-strapped institutions at the moment.
Final Evaluation
This looks like a fine tool that will almost certainly be the right tool for many marking jobs. I think it will be especially attractive to colleagues marking physical objects (like artworks, models etc.) and performances (music, drama, presentations etc.). I suspect many will find it useful for marking exams, especially if feedback is required on them. It won’t replace marking tools like Grademark, though, and it may be hard to justify the investment if we can find workarounds which achieve similar things within the tools for which we already hold site licences.
Labels: diagnostics, eAssessment, eLearning, marking, ReView