Handwritten Essay Marking Software

Table of Contents

  1. Introduction
  2. Problem
  3. Aim
  4. Dataset
  5. High Level Solutions Architecture Diagram
  6. Use Case Diagram
  7. Technology Stack
  8. Machine Learning Algorithm
  9. Testing
  10. Project Timeline
  11. Product Owner
  12. Team Members
  13. Links

Introduction

Automated Essay Scoring (AES) is the use of computer programs to assess and score essays written in response to particular prompts. Because it encourages students to make incremental improvements to their writing, automating the evaluation process can benefit both instructors and students.

Problem

An ML-based handwritten essay marking software addresses several significant problems with manual essay grading. First, marking large numbers of handwritten essays by hand is an overwhelming task for teachers and instructors: reviewing each essay thoroughly and providing accurate feedback takes extensive time and effort. This leads to the second problem, the time-consuming nature of manual grading. By automating essay marking with machine learning algorithms, the software can significantly reduce the time required to evaluate a large number of essays.

Another challenge in manual essay marking is inconsistency. Different markers may apply varying interpretations and grading criteria, producing inconsistent scores across essays. ML-based software standardizes the grading process, ensuring that consistent evaluation criteria are applied to every essay and yielding fair, reliable results.

Bias and human error are also common concerns in manual essay marking. Human markers can inadvertently introduce biases based on factors such as handwriting quality, gender, or race, which may affect the fairness and objectivity of the grading process. ML-based software, when trained on a diverse set of essays, can mitigate such biases and provide more impartial evaluations. Additionally, it reduces the likelihood of human error, such as calculation mistakes or overlooking important aspects of an essay, ensuring more accurate and precise grading.

Aim

“Design and develop a web interface to automate the process of grading handwritten essays by leveraging machine learning algorithms while ensuring fairness and consistency in evaluations.”

Dataset

The dataset we are using is ‘The Hewlett Foundation: Automated Essay Scoring’ dataset from the ASAP (Automated Student Assessment Prize) competition, available on Kaggle at https://www.kaggle.com/c/asap-aes.
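
As a minimal sketch of how this dataset might be loaded, the snippet below reads the training file with pandas. The file name (`training_set_rel3.tsv`), the non-UTF-8 encoding, and the column names (`essay_id`, `essay_set`, `essay`, `domain1_score`) are assumptions based on the public Kaggle release and may need adjusting to your copy:

```python
# Minimal sketch of loading the ASAP training data with pandas.
# File name, encoding, and column names are assumptions based on
# the public Kaggle release; adjust them to your local copy.
import pandas as pd

df = pd.read_csv(
    "training_set_rel3.tsv",  # assumed file name from the Kaggle release
    sep="\t",
    encoding="ISO-8859-1",    # the release is not UTF-8 encoded
)

# Keep only the columns a scorer needs: the prompt id, the essay
# text, and the resolved human score for domain 1.
df = df[["essay_id", "essay_set", "essay", "domain1_score"]].dropna()
print(df.shape)
print(df.head())
```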

High Level Solutions Architecture Diagram

Use Case Diagram

Technology Stack

Machine Learning Algorithm
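
To make this section concrete, here is a hedged sketch of one possible scoring approach, not necessarily the model this project uses: TF-IDF features with ridge regression, evaluated with quadratic weighted kappa (the official metric of the ASAP competition). For handwritten input, the pages would presumably first be transcribed to text (e.g., by a handwriting-recognition step) before scoring; that step is out of scope here. Function and variable names are illustrative assumptions:

```python
# Illustrative baseline only: TF-IDF features + ridge regression,
# scored with quadratic weighted kappa. A sketch under assumptions,
# not this project's final model.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

def train_baseline(essays, scores):
    """Fit a simple score regressor on plain essay text."""
    X_train, X_test, y_train, y_test = train_test_split(
        essays, scores, test_size=0.2, random_state=42
    )
    vec = TfidfVectorizer(max_features=10000, ngram_range=(1, 2))
    model = Ridge(alpha=1.0)
    model.fit(vec.fit_transform(X_train), y_train)

    # Round predictions to the nearest valid integer score before
    # computing quadratic weighted kappa against the human scores.
    preds = np.rint(model.predict(vec.transform(X_test)))
    preds = np.clip(preds, min(y_train), max(y_train)).astype(int)
    y_test = np.asarray(y_test).astype(int)
    qwk = cohen_kappa_score(y_test, preds, weights="quadratic")
    print(f"Quadratic weighted kappa: {qwk:.3f}")
    return vec, model
```

With the dataframe from the Dataset section, this would likely be trained once per `essay_set` subset (e.g., `train_baseline(sub["essay"].tolist(), sub["domain1_score"].astype(int).tolist())`), since different ASAP prompts use different score ranges.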

Testing

Tested Features

REST API Test Results
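
As a hedged illustration of how such REST API tests might look, the pytest-style sketch below posts an essay to a hypothetical `/score` endpoint. The base URL, route, and JSON fields are illustrative assumptions, not the project's actual API contract:

```python
# Illustrative REST API test for a hypothetical /score endpoint.
# The base URL, route, and response schema are assumptions; replace
# them with the project's actual API contract.
import requests

BASE_URL = "http://localhost:8000"  # assumed local dev server

def test_score_endpoint_returns_valid_score():
    payload = {"essay": "This is a short sample essay about libraries."}
    resp = requests.post(f"{BASE_URL}/score", json=payload, timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # A numeric "score" field in the response is an assumed schema.
    assert isinstance(body.get("score"), (int, float))
```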

Project Timeline

Product Owner

Team Members