SM Grader is simple, easy to understand for both instructors and students, and, most importantly, 100% transparent in its auto-generated feedback. This transparency minimises student complaints, because students can see the justification for any points lost, together with feedback on how to improve their performance.
Once auto-marking is completed, SM Grader generates rich feedback to be pushed back to the students' LMS accounts.
SM Grader allows for hybrid marking between automated and manual marking for all question types. That is, if required, you can always override any automated marking results for any student submission before publishing these results.
SM Grader maintains a balance between automation and human control in the assessment process, ensuring ultimate control remains with the assessor.
Assessors can create unlimited marking criteria and ratings aligned to rubrics, ensuring consistency in assessment scoring and standards.
Assessors can oversee and manage permissions for multiple assistant evaluators, enabling centralised governance and fair grading across the cohort.
Insight reports give an overview of answer distributions and question-by-question analytics, offering valuable information for both students and assessors.
Assessors can check answers for originality across cohorts, encouraging critical thinking in students' answers and discouraging copying.
Assessors can choose to review individual questions side by side (not just individual students) without context switching, improving the efficiency of the assessment process.
Assessors can iteratively adjust criteria and update feedback and scores, significantly improving assessment speed and overall fairness across all student responses.
SM Grader provides full and partial scoring together with rich feedback for answers, ensuring students understand the rationale behind their scores.
An optional capability gives students a deeper understanding of their performance relative to the cohort by showing where their score sits on a histogram of all scores.
SM Grader gives you full control of the marking process through rule-based automation of partial credit and custom student feedback, as well as state-of-the-art artificial intelligence (AI) algorithms for auto-marking essay-type questions.
The marking and feedback rules are simply Python expressions (or Python functions when a single expression is insufficient).
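For instance, a partial-credit rule might look like the sketch below. This is illustrative only: the variable name ans and the (score, feedback) return value are assumptions made for the example, not SM Grader's documented interface.

    # Illustrative sketch only: `ans` and the (score, feedback) return value
    # are assumed placeholders, not SM Grader's documented interface.
    # A single expression awarding partial credit could read:
    #   1.0 if ans.strip() == '0.05' else 0.5 if ans.strip() in ('5', '5%') else 0.0
    # When one expression is not enough, a function can take over:
    def mark_growth_rate(ans):
        """Award full, partial, or no credit with matching feedback."""
        value = ans.strip()
        if value == '0.05':
            return 1.0, 'Correct: the annual growth rate is 5%, i.e. 0.05.'
        if value in ('5', '5%'):
            return 0.5, 'Right idea, but express the rate as a decimal (0.05).'
        return 0.0, 'Revisit how the growth rate is derived from the data.'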
In addition, SM Grader's marking and feedback criteria are case-, space-, and quote-insensitive, which greatly reduces the work required to define them.
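The snippet below is a minimal sketch of what that insensitivity amounts to; SM Grader applies the normalisation for you, and the helper name used here is for illustration only.

    # Minimal sketch of case-, space-, and quote-insensitive matching;
    # the helper name `normalise` is illustrative, not part of SM Grader.
    def normalise(text):
        return ''.join(ch.lower() for ch in text
                       if not ch.isspace() and ch not in '\'"`')

    # A criterion written as "Newton's Law" also matches variants such as:
    assert normalise("Newton's Law") == normalise('newtons  LAW')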
The SM Grader notebook edition has all the functionality of SM Grader and also integrates seamlessly with Jupyter notebooks, allowing for direct submissions.
SM Grader currently supports both Python and R, and enables you to import relevant libraries or files to support the computation and assessment of the answer.
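As a hypothetical example of what such a check might look like for a Python coding question (the function name, data, and tolerance below are assumptions made for illustration, not part of SM Grader itself):

    # Hypothetical checker for a Python coding question: the student submits a
    # fitted slope, and it is compared against a reference fit within a tolerance.
    # The name, data, and tolerance are assumptions for illustration only.
    import numpy as np

    def check_regression_slope(student_slope, tol=1e-2):
        """Accept the student's slope if it matches the reference fit."""
        x = np.array([1.0, 2.0, 3.0, 4.0])
        y = np.array([2.1, 3.9, 6.2, 8.1])
        expected_slope, _intercept = np.polyfit(x, y, 1)
        return abs(student_slope - expected_slope) < tol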
SM Grader is built on three key principles:
As educators, we would like to have complete control over how we conduct and evaluate our assessments. This control and flexibility need to apply across the following four aspects of an assessment:
Simply having complete control over our assessments is not enough: we also want to exercise that control as quickly, effectively, and efficiently as possible: