Scholarship Selection Using the Analytical Hierarchy Process and Rubric Scoring with Explainability and Fairness Auditing
DOI: https://doi.org/10.71456/jimt.v2i2.1588

Keywords: Analytical Hierarchy Process, Decision Support System, Rubric Scoring, Explainability, Fairness Auditing

Abstract
Scholarship selection in higher education is often distorted by subjectivity, inconsistent application of criteria, and low transparency when assessment is carried out manually or through spreadsheets. This study develops and validates a decision support system that combines the Analytical Hierarchy Process for criterion weighting with rubric-based scoring of applicants, complemented by explainability and fairness-auditing features. The system displays per-criterion score contributions, generates deterministic narrative justifications derived from rubric levels and weights, and summarizes group-level impact through an audit dashboard. The artifact was designed, implemented, and evaluated using the Design Science Research Methodology in a sandbox trial with 100 synthetic applicants under three policy configurations. Under the baseline policy, normalized scores ranged from 0.48 to 0.93 with a mean of 0.774, and applicants were grouped into 64 recommended, 34 borderline, and 2 not recommended. Compared with expert reference decisions, the system showed high agreement (accuracy 0.96; precision 0.953; recall 0.983; F1-score 0.96) and strong rank alignment (Spearman 0.97; Kendall 0.86). A fairness audit by semester group indicated disparities under the baseline policy, with recommended rates of 0% for semesters 1-2, 52% for semesters 3-4, and 96% for semesters 5-8. These results show that the hybrid approach improves the consistency and transparency of assessment while providing evidence for reviewing the trade-offs among merit, need, and group impact, with the final decision remaining with the committee.
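Illustrative sketch (not the authors' implementation): the Python fragment below mimics the pipeline the abstract describes, deriving AHP weights from a pairwise comparison matrix with a consistency check, converting rubric levels into normalized per-criterion scores, combining them into a weighted total, assigning a recommendation class, and summarizing recommended rates per semester group. The criteria, rubric scale, thresholds, and sample applicants are assumptions made for illustration only, not values taken from the study.

    # Minimal sketch of the hybrid AHP + rubric scoring pipeline with a
    # group-level fairness summary. All concrete values are illustrative.
    import numpy as np

    # AHP: reciprocal pairwise comparison matrix for three hypothetical
    # criteria (e.g., academic merit, financial need, activity record).
    A = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 3.0],
        [1/5, 1/3, 1.0],
    ])

    def ahp_weights(matrix):
        """Return the priority vector and Saaty's consistency ratio (CR)."""
        eigvals, eigvecs = np.linalg.eig(matrix)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()
        n = matrix.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)                # consistency index
        ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n] # random index
        return w, (ci / ri if ri else 0.0)

    weights, cr = ahp_weights(A)
    assert cr < 0.10, "pairwise judgments are too inconsistent; revise matrix"

    def score(levels):
        """Map rubric levels 1-5 to [0, 1] and take the AHP-weighted sum."""
        normalized = (np.array(levels) - 1) / 4
        return float(np.dot(weights, normalized))

    def classify(s):
        """Assumed thresholds for the three recommendation classes."""
        if s >= 0.75:
            return "recommended"
        if s >= 0.55:
            return "borderline"
        return "not recommended"

    # Toy applicants: (rubric levels per criterion, semester group).
    applicants = [
        ([5, 4, 4], "5-8"),
        ([4, 3, 3], "3-4"),
        ([2, 3, 2], "1-2"),
    ]

    # Fairness audit: recommended rate per semester group.
    groups = {}
    for levels, group in applicants:
        groups.setdefault(group, []).append(classify(score(levels)))
    for group, decisions in sorted(groups.items()):
        rate = decisions.count("recommended") / len(decisions)
        print(f"semester {group}: recommended rate = {rate:.0%}")

The per-criterion contributions (weights[i] * normalized[i]) in such a scheme can be reported directly, which is what makes a deterministic, rubric-derived justification of each score possible.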
Copyright (c) 2026 Amanda Listiana Puspanagara, Fathoni Mahardika, Dani Indra Junaedi, & Asep Saeppani

This article is licensed under a Creative Commons Attribution 4.0 International License.





