Lecturer วริศร์รัตนนิมิตร, Department of Electrical Technology and Automatic Control Systems
Abstract—In the digital era, programming has become an essential skill, and assessing students' understanding of it plays a crucial role in improving educational quality. However, evaluating subjective programming exam responses, especially those involving conceptual explanations or procedural descriptions, remains challenging because of the diversity and complexity of the answers, and manual grading can lead to delays and inconsistent scoring. This research develops an automated system for evaluating Thai-language subjective programming exams, using Natural Language Processing (NLP) techniques to assess the correctness of responses. The system is implemented in Python, using the ThaiNLP library for word tokenization together with machine learning algorithms, namely Naive Bayes, Decision Tree, and Support Vector Machine (SVM), to classify and score the answers. The experimental dataset consists of responses from 120 students who answered subjective questions about Python commands. The results show that SVM achieved the highest accuracy at 85.33%, and expert evaluation indicated high user satisfaction with an average score of 4.05. The system can improve the efficiency and consistency of exam evaluation, reducing grading time and contributing to standardized scoring.
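
As a rough illustration of the pipeline the abstract describes (Thai word tokenization, text vectorization, and the three classifiers), the sketch below is a minimal, assumption-laden example rather than the study's implementation: it uses PyThaiNLP's word_tokenize and scikit-learn with TF-IDF features, although the abstract names only "ThaiNLP" and the algorithm families, and the four toy answers with their correct/incorrect labels are hypothetical placeholders, not data from the 120-student dataset.

from pythainlp.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hypothetical Thai answers to "What does the print() command do?",
# labeled correct (1) or incorrect (0) by an instructor.
answers = [
    "คำสั่ง print ใช้แสดงผลข้อความออกทางหน้าจอ",        # prints text to the screen (correct)
    "print ใช้แสดงค่าของตัวแปรหรือข้อความทางจอภาพ",     # displays variable values or text (correct)
    "print ใช้รับค่าจากผู้ใช้ผ่านคีย์บอร์ด",             # reads keyboard input (incorrect)
    "print ใช้วนซ้ำคำสั่งตามจำนวนรอบที่กำหนด",           # repeats a block of code (incorrect)
]
labels = [1, 1, 0, 0]

# Tokenize Thai text with PyThaiNLP and turn each answer into TF-IDF features.
vectorizer = TfidfVectorizer(tokenizer=word_tokenize, token_pattern=None)
X = vectorizer.fit_transform(answers)

# The three classifier families named in the abstract.
models = {
    "Naive Bayes": MultinomialNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="linear"),
}

# Fit on the toy data and report training accuracy for each model;
# the accuracy and satisfaction figures reported in the abstract come
# from the study's real dataset, not from this sketch.
for name, model in models.items():
    model.fit(X, labels)
    print(name, accuracy_score(labels, model.predict(X)))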
