Toward an Autonomous Robot for Real-Time Dysgraphia Diagnosis via Deep Learning
This study aims to design, develop, and implement a novel autonomous diagnostic framework embedded in the NAO humanoid robot, enabling it to perform real-time identification and classification of dysgraphia in children. This extension of robot-assisted therapy provides a truly autonomous diagnosis, performed without human supervision.
Dysgraphia is a handwriting disorder affecting the automation of graphic gestures and the formal presentation of written text. Traditional diagnostic methods require human experts and are often time-consuming. There is a pressing need for intelligent, autonomous tools capable of identifying dysgraphia in educational contexts. Real-time diagnosis is essential because it enables teachers and therapists to respond immediately to signs of difficulty, adapt teaching activities, and prevent the disorder from progressing. Existing approaches rely mainly on delayed assessments, which limit their effectiveness in dynamic school environments.
We developed a client/server architecture in which the NAO robot acts as the client, autonomously guiding students through handwriting tasks, capturing the resulting data, and sending it to a server for processing. Machine learning models hosted on the server analyze the handwriting samples for two purposes: (1) identifying whether the student is affected by dysgraphia, and (2) recognizing and classifying specific signs and severity levels of the disorder. The dataset includes approximately 3,000 handwriting captures for dysgraphia identification and 2,426 samples for sign recognition.
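The client/server exchange described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the message fields, function names, and the stub models standing in for the trained networks are all assumptions made for the example.

```python
import json

def build_client_request(student_id: str, task: str, image_bytes: bytes) -> str:
    """Client side (NAO robot): package a handwriting capture with task
    metadata into a JSON message for the server. The image is hex-encoded
    so the raw bytes survive JSON transport."""
    return json.dumps({
        "student_id": student_id,
        "task": task,
        "image_hex": image_bytes.hex(),
    })

def handle_request(request_json: str, identify, recognize) -> str:
    """Server side: decode the capture and run the two models.

    `identify` and `recognize` are placeholders for the trained models:
    identify(img) -> bool (dysgraphia yes/no),
    recognize(img) -> list of detected sign labels."""
    msg = json.loads(request_json)
    img = bytes.fromhex(msg["image_hex"])
    affected = identify(img)
    # Sign recognition only runs when the identification model flags dysgraphia.
    signs = recognize(img) if affected else []
    return json.dumps({
        "student_id": msg["student_id"],
        "dysgraphia": affected,
        "signs": signs,
    })

# Example round trip with stub models in place of the CNNs.
req = build_client_request("s01", "copy_word", b"\x00\x01")
resp = json.loads(handle_request(req, lambda img: True, lambda img: ["Crooked"]))
```

Keeping the robot as a thin client and the models on a server matches the framework's goal of real-time diagnosis without taxing the robot's onboard compute.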
This research presents an original, intelligent diagnostic framework, implemented in the NAO humanoid robot, that enables the robot to detect and classify dysgraphia autonomously in real time. Going beyond traditional robot-assisted therapy, our system gives the robot the independent ability to perform the analysis itself, providing immediate, relevant diagnosis for educational support. The framework is based on a multi-level classification model that categorizes severity and symptom types, and it builds upon an original set of handwriting data collected from learners of different age groups.
Our dysgraphia identification model achieved an accuracy of 99%, while the sign recognition model achieved an accuracy of 78%. This gap reflects the complexity of the task and the nature of the data: identifying dysgraphia is a binary classification problem (affected vs. non-affected), whereas sign recognition is a multi-label classification problem, as each image may contain several varied and subtle signs (Crooked, Broken, Overlapping, Reversed, Poorly Formed, Too Small, Too Large). The results nonetheless allow the severity level to be estimated from the number of signs detected. An interactive scenario was designed and tested in real educational settings, showing positive and effective outcomes.
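The multi-label sign detection and the sign-count-based severity estimate can be sketched as below. The paper states only that severity is estimated from the number of detected signs; the 0.5 decision threshold and the specific severity cut-offs used here are illustrative assumptions, not the published ones.

```python
# The seven sign labels named in the study.
SIGN_LABELS = ["Crooked", "Broken", "Overlapping", "Reversed",
               "Poorly Formed", "Too Small", "Too Large"]

def detected_signs(probs, threshold=0.5):
    """Multi-label decision: each sign has its own probability, and a sign
    counts as present when its probability clears the threshold (assumed 0.5).
    `probs` is aligned with SIGN_LABELS."""
    return [label for label, p in zip(SIGN_LABELS, probs) if p >= threshold]

def severity(signs):
    """Hypothetical mapping from sign count to severity level. The cut-offs
    (1-2 mild, 3-4 moderate, 5+ severe) are assumptions for illustration."""
    n = len(signs)
    if n == 0:
        return "none"
    if n <= 2:
        return "mild"
    if n <= 4:
        return "moderate"
    return "severe"

# Example: per-sign probabilities from a hypothetical model output.
probs = [0.9, 0.1, 0.7, 0.2, 0.6, 0.1, 0.05]
signs = detected_signs(probs)  # Crooked, Overlapping, Poorly Formed
level = severity(signs)        # three signs -> "moderate" under these cut-offs
```

This also illustrates why the multi-label task is harder than the binary one: the model must make seven independent decisions per image rather than a single affected/non-affected call.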
Educators and therapists can utilize the robot to support early dysgraphia detection in classrooms, enabling timely intervention and personalized learning support.
Further investigations could explore cross-linguistic handwriting variations, extend the datasets, and integrate emotional feedback mechanisms to enhance the quality of robot-student interaction.
This research advances the field of inclusive education by introducing a scalable and fully autonomous technological solution for the early diagnosis of dysgraphia. By enabling timely identification and classification of handwriting disorders, the proposed framework fosters equitable access to tailored educational support, thereby mitigating long-term academic challenges and reducing the potential for social stigmatization among affected learners.
Future work will focus on extending the framework to support multi-language handwriting analysis, real-time progress tracking over multiple sessions, and integration with personalized therapy plans.