Hybrid Deep Learning and Information Flow-Based Fuzzy Cognitive Maps for Explainable Predictive Maintenance in Collaborative Robotics
DOI:
https://doi.org/10.26629/

Keywords:
Collaborative Robotics, Fuzzy Cognitive Maps, Deep Learning, Transfer Entropy, Industrial Cyber-Physical Systems

Abstract
Predictive maintenance (PdM) in collaborative robotics (cobots) faces a critical dilemma: deep learning models offer high accuracy but lack interpretability, while rule-based systems are transparent but insufficiently adaptive, posing a serious challenge in safety-critical Industry 5.0 environments where both performance and explainability are non-negotiable. To resolve this trade-off, this paper proposes a novel hybrid architecture that synergistically combines a Convolutional Recurrent Neural Network (CRNN) for high-fidelity fault prediction with an Information Flow-based Fuzzy Cognitive Map (IF-FCM) for human-interpretable causal reasoning. Unlike prior approaches that rely on heuristic or static FCM weights, the IF-FCM in this work is automatically calibrated using the CRNN's latent representations and data-driven causal discovery: edge weights are derived from transfer entropy (for directional influence) and mutual information (for co-variability), eliminating expert bias and enabling dynamic, physics-grounded explanations. Evaluated on the real-world UR3 CobotOps dataset from the UCI repository, the model achieves state-of-the-art performance with 97.8% accuracy, a 0.983 F1-score, and a 0.991 AUC, while generating expert-validated explanations with 89% consistency (inter-rater κ = 0.81). A key advantage is a 34% reduction in false alarms through context-aware reasoning (e.g., ignoring isolated thermal spikes without corroborating electrical anomalies). Furthermore, domain-constrained min-max normalization, aligned with manufacturer-specified physical thresholds, ensures semantic fidelity and model stability. The framework outperforms leading baselines, including CNN-LSTM, Attention-LSTM, XGBoost+SHAP, and static FCMs, across all metrics.
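To illustrate the information-theoretic edge weighting described above, the following is a minimal sketch of a plug-in transfer entropy estimator over discretized sensor symbols. It is not the paper's implementation: the history length of one, the discrete alphabet, and the example signals `x` and `y` are all assumptions made for illustration (the companion mutual-information term is omitted for brevity).

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Empirical transfer entropy TE(X -> Y) in bits, history length 1.

    x, y: equal-length sequences of discrete symbols. Estimates
    I(y_next ; x_prev | y_prev) under the empirical distribution,
    which is how a directional FCM edge weight X -> Y could be scored.
    """
    triples = list(zip(y[1:], y[:-1], x[:-1]))            # (y_next, y_prev, x_prev)
    n = len(triples)
    p_xyz = Counter(triples)                              # counts of (y_next, y_prev, x_prev)
    p_yz = Counter((yp, xp) for _, yp, xp in triples)     # counts of (y_prev, x_prev)
    p_yy = Counter((yn, yp) for yn, yp, _ in triples)     # counts of (y_next, y_prev)
    p_y = Counter(yp for _, yp, _ in triples)             # counts of y_prev
    te = 0.0
    for (yn, yp, xp), c in p_xyz.items():
        p_cond_full = c / p_yz[(yp, xp)]                  # p(y_next | y_prev, x_prev)
        p_cond_self = p_yy[(yn, yp)] / p_y[yp]            # p(y_next | y_prev)
        te += (c / n) * log2(p_cond_full / p_cond_self)
    return te

# Hypothetical driver signal x; y simply copies x with a one-step lag,
# so information should flow from x to y but not the other way.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1]
y = [0] + x[:-1]
```

On these toy signals, TE(x → y) comes out clearly larger than TE(y → x), matching the intended reading of transfer entropy as a directional influence score.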
This work’s primary contributions are (1) a closed-loop hybrid architecture that unifies deep learning and causal interpretability; (2) the first integration of information-theoretic measures into FCM learning for robotic PdM; and (3) a trustworthy, scalable solution that meets regulatory and operational demands for transparent AI in human-robot collaboration.
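The domain-constrained min-max normalization mentioned in the abstract can be sketched as scaling each sensor reading against fixed, manufacturer-specified physical limits rather than against the observed data extremes, so that a normalized value keeps a stable physical meaning across datasets. The bounds below (a 20–80 °C joint-temperature range) are purely hypothetical placeholders, not UR3 specifications.

```python
def domain_minmax(value, phys_lo, phys_hi):
    """Min-max scale against fixed physical limits, not data extremes.

    Clipping out-of-range readings keeps outliers from distorting the
    scale, and 0.0/1.0 always mean "at the manufacturer's limit".
    """
    clipped = min(max(value, phys_lo), phys_hi)
    return (clipped - phys_lo) / (phys_hi - phys_lo)

# Hypothetical joint-temperature bounds in deg C (illustrative only).
TEMP_LO, TEMP_HI = 20.0, 80.0

mid_reading = domain_minmax(50.0, TEMP_LO, TEMP_HI)    # 0.5: halfway through range
spike = domain_minmax(100.0, TEMP_LO, TEMP_HI)         # 1.0: clipped at the limit
```

Because the denominator is a physical constant, the same normalized value means the same operating condition on every robot, which is what the abstract's "semantic fidelity" refers to.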
License
Copyright (c) 2026 Journal of Technology Research

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.