Image restoration is a fundamental challenge in computer vision, aiming to recover high-quality images from their degraded, low-quality counterparts. The problem spans domains including photography, medical imaging, and autonomous systems. The rise of deep learning has brought substantial advances to this area, introducing techniques such as Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Transformers, and Diffusion Models (DMs). Each of these approaches, however, has its own limitations: CNNs often fail to capture long-range dependencies effectively, DMs depend on resource-intensive iterative denoising, and Transformers incur quadratic complexity as input size grows. Recently, state-space models, particularly Mamba, have attracted considerable interest as promising alternatives owing to their balance between computational efficiency and global receptive fields. However, Mamba's inherently causal modeling restricts its ability to capture spatial relationships in image data. Prior work has attempted to alleviate this shortcoming through multi-directional scanning, but at the cost of increased computational complexity. To address this challenge, we propose Graph Vision Mamba (GVMambaIR), a novel framework that integrates a Graph Neural Network (GNN) into the Mamba architecture. By leveraging GNNs, our model enhances spatial information flow and enables interaction among image features while preserving computational efficiency. Extensive evaluations on the UAV-Rain1k, RainDrop, and Rain200L datasets demonstrate that GVMambaIR delivers superior quantitative results, surpassing the current state of the art by 1.7 dB on UAV-Rain1k, 0.85 dB on RainDrop, and 0.12 dB on Rain200L, establishing it as a robust solution for image restoration.