BABUL BIJAN MANDAL, DR. NAGARAJAN G, DR. A. MAHALAKSHMI, AKANSH GARG, P. SAHAYA SUGANYA PRINCES, SUSHIL DOHARE

DOI: https://doi.org/

Explainable Artificial Intelligence (XAI) has become central to modern psychometric modelling as organizations increasingly rely on machine learning for high-stakes personality assessments. Yet traditional black-box predictive systems introduce opacity, bias, and limited accountability, creating concerns in recruitment, promotion, and behavioural screening workflows. This study examines how XAI techniques can be integrated into psychometric algorithms to enhance interpretability, fairness, and trustworthiness without compromising predictive performance. The research analyses key XAI methods such as SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), counterfactual reasoning, feature-attribution maps, and rule-based surrogates, and evaluates their suitability for personality trait prediction grounded in Five-Factor Model indicators, item-response patterns, and behavioural analytics. A hybrid methodological framework is proposed that combines supervised learning models with transparent post-hoc and intrinsic interpretability layers. Experimental simulations demonstrate that XAI explanations significantly improve transparency by identifying influential behavioural variables and surfacing hidden model dependencies. Fairness diagnostics reveal that XAI tools can detect subgroup bias earlier in the model lifecycle, enabling corrective re-weighting, debiasing, and algorithmic auditing. The study argues that integrating XAI into psychometric pipelines creates more ethical, accountable, and evidence-based decision systems that align with organizational governance standards. These findings contribute to responsible AI deployment in human resources and strengthen the reliability of algorithmic personality prediction models.
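
The abstract itself includes no code; purely as an illustration of the kind of post-hoc attribution and fairness layer described above, the Python sketch below fits a supervised trait predictor on synthetic item-response data, attaches a SHAP explanation layer, and runs a simple subgroup prediction-gap check. The dataset, feature semantics, model choice, and subgroup variable are all hypothetical assumptions, not the study's actual pipeline.

```python
# A minimal, hypothetical sketch (not the study's implementation): a supervised
# trait predictor on synthetic Five-Factor Model item responses, a post-hoc
# SHAP attribution layer, and a simple subgroup prediction-gap diagnostic.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical item-response matrix: 500 respondents x 10 Likert-scale items (1-5).
X = rng.integers(1, 6, size=(500, 10)).astype(float)
# Hypothetical trait score (e.g. conscientiousness) driven by two items plus noise.
y = 0.6 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(0.0, 0.5, size=500)

# Black-box supervised learner standing in for the psychometric model.
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Post-hoc interpretability layer: per-respondent, per-item SHAP attributions
# surface which behavioural indicators drive each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape (500, 10)
print("most influential item overall:", np.abs(shap_values).mean(axis=0).argmax())

# Fairness diagnostic: compare mean predicted scores across a hypothetical
# binary subgroup label; a large gap flags the model for re-weighting and auditing.
group = rng.integers(0, 2, size=500)
preds = model.predict(X)
print("subgroup prediction gap:", preds[group == 1].mean() - preds[group == 0].mean())
```

In a real deployment, the synthetic items would be replaced by validated assessment data and the gap check by formal subgroup fairness metrics, but the structure mirrors the paper's proposal: a predictive model wrapped in transparent attribution and bias-detection layers.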