A Comparative Analysis of CNN and RNN Architectures for Deep Learning-Based Arabic Text Classification

Authors

  • Abdulmawla Najih, Higher Institute of Sciences and Technology, Computer Department, Gharian, Libya
  • Ramzi Alshagif, Department of Computer Science, School of Basic Sciences, Libyan Academy, Tripoli, Libya
  • Albahlool Abood, Faculty of Information Technology, Gharyan University, Libya
  • Salem Enajeh, Higher Institute of Sciences and Technology, Computer Department, Tripoli, Libya

DOI:

https://doi.org/10.26629/jtr.2025.46

Keywords:

Arabic Natural Language Processing, Text Classification, Recurrent Neural Networks (RNNs), Comparative Analysis

Abstract

The proliferation of digital Arabic content has created a pressing need for efficient text classification systems. However, the Arabic language's complex morphological structure, including its root-based derivation and agglutinative nature, poses significant challenges for automated processing. While deep learning models like Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have shown promise, their comparative effectiveness for Arabic text remains inadequately explored. This study presents a comprehensive empirical comparison of CNN and RNN models for multi-class Arabic text classification. We curated a heterogeneous dataset spanning seven distinct domains—including sports, politics, and economics—to ensure model robustness. A rigorous Arabic-specific preprocessing pipeline was implemented, involving stemming, stop-word removal, and tokenization. The CNN model utilized GloVe word embeddings for feature representation, whereas the RNN model employed TF-IDF vectors. Our results demonstrate a significant performance disparity: the RNN model achieved a remarkable 98% accuracy, substantially outperforming the CNN model, which reached 79% accuracy. Analysis of learning curves revealed that the CNN model suffered from overfitting, failing to generalize beyond the training data. In contrast, the RNN model effectively captured sequential dependencies and contextual information, which are crucial for understanding Arabic syntax and morphology. The findings strongly indicate that RNN architectures are superior for Arabic text classification tasks due to their innate ability to model long-range semantic relationships. This research provides valuable insights for selecting and developing optimal deep-learning architectures for Arabic NLP applications.
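The abstract describes an Arabic-specific preprocessing pipeline of tokenization, stop-word removal, and stemming. The following is a minimal sketch of such a pipeline; the library choices (NLTK's ISRI stemmer and its Arabic stop-word list) and the regex-based tokenizer are illustrative assumptions, not the authors' exact toolchain.

```python
# Sketch of an Arabic preprocessing pipeline: tokenize, remove stop-words, stem.
# Assumes NLTK with the 'stopwords' corpus downloaded (nltk.download('stopwords')).
import re
from nltk.corpus import stopwords
from nltk.stem.isri import ISRIStemmer  # root-based Arabic stemmer

ARABIC_STOPWORDS = set(stopwords.words('arabic'))
stemmer = ISRIStemmer()

def preprocess(text: str) -> list[str]:
    """Tokenize Arabic text, drop stop-words, and stem the remaining tokens."""
    # Keep contiguous runs of Arabic letters; this drops digits, Latin text,
    # punctuation, and diacritics (a simplification for illustration).
    tokens = re.findall(r'[\u0621-\u064A]+', text)
    return [stemmer.stem(t) for t in tokens if t not in ARABIC_STOPWORDS]
```

The resulting token lists would then be converted to GloVe-style embeddings for the CNN and TF-IDF vectors for the RNN, as described above.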

Published

2025-12-28

How to Cite

A Comparative Analysis of CNN and RNN Architectures for Deep Learning-Based Arabic Text Classification. (2025). Journal of Technology Research, 494-504. https://doi.org/10.26629/jtr.2025.46