
Mental health strongly influences students' academic performance and practical outcomes, yet traditional analytic approaches often struggle to capture the intricate, multifaceted nature of psychological well-being. Interest has recently grown in applying Explainable AI (XAI) methods to increase transparency and interpretability in mental health research. This research introduces a transformer-based model, Tabular BERT (TaBERT), designed to analyze and integrate contextual, psychological, personal, and social factors for comprehensive prediction of mental health outcomes. By incorporating deep contextual embeddings, bidirectional attention, and a memorization mechanism, TaBERT captures complex feature interactions that conventional machine learning and ensemble methods may miss. Comparative studies validated the model's effectiveness, with TaBERT achieving an accuracy of 96%. The empirical analyses were further reinforced through feature ranking techniques such as information gain, gain ratio, and entropy, which highlighted the significance of mental health-related features, with an information value of 0.129, for prediction. To enhance transparency and credibility, the explainable AI techniques Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) were employed to provide detailed insights into global and local feature contributions. Finally, statistical tests based on *p*-values offered additional evidence for the reliability and significance of the findings.
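To make the entropy-based feature ranking concrete, the following is a minimal sketch of how information gain can be computed for a single categorical feature against a class label. The toy data (a hypothetical binary mental-health indicator versus a pass/fail outcome) and all names are illustrative, not taken from the study's dataset.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Reduction in label entropy from splitting on the feature's values."""
    n = len(labels)
    total = entropy(labels)
    groups = {}
    for f, y in zip(feature, labels):
        groups.setdefault(f, []).append(y)
    # Weighted average entropy of the subsets induced by the split.
    weighted = sum(len(g) / n * entropy(g) for g in groups.values())
    return total - weighted

# Hypothetical toy data: the feature perfectly separates the classes,
# so the information gain equals the full label entropy (1.0 bit).
feature = ["high", "high", "low", "low", "high", "low"]
labels = ["fail", "fail", "pass", "pass", "fail", "pass"]
print(information_gain(feature, labels))  # → 1.0
```

The gain ratio reported in the study is a normalized variant of this quantity (information gain divided by the split's intrinsic entropy), which penalizes features with many distinct values.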