Vol. 5, No. 1, January 2026

A Self-Supervised Damage-Aware Vision Transformer for Image Enhancement in Low-Quality and Post-Conflict Environments

    Authors

    MSc, Computer Science, University of Anbar, Ramadi, Iraq

    [email protected]

    MSc, Computer Science, University of Anbar, Ramadi, Iraq

    [email protected]

    Abstract

    In environments marked by both conflict and scarcity of resources, images captured by surveillance cameras, mobile phones, digital cameras, or unmanned aerial vehicles (UAVs) often suffer multiple types of degradation, such as noise, blur, poor illumination, and transmission artifacts. Conventional image enhancement techniques and supervised deep learning models require large paired (low/high-quality) datasets, which are rarely obtainable under the restricted conditions of post-conflict environments. This paper proposes a new damage-aware, self-supervised image enhancement framework built on the Vision Transformer (ViT) architecture. The proposed model implicitly learns different forms of degradation and enhances image quality autonomously, without requiring high-resolution reference images as ground truth. A multi-task self-supervised objective enables simultaneous recognition and restoration of degradation. Experimental results show that our technique substantially improves perceptual quality, structural integrity, and robustness compared with conventional CNN-based methods. Notably, the framework remains highly effective on unseen real-world and post-conflict degradation scenarios.
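    The multi-task self-supervised idea in the abstract can be sketched minimally as follows: a pseudo-label is created by synthetically degrading an input patch, and a model is scored jointly on (a) classifying the degradation type and (b) restoring the patch. The degradation taxonomy, the loss weighting `lam`, and all helper names here are illustrative assumptions for exposition, not the paper's actual implementation.

    ```python
    import math, random

    # Assumed degradation taxonomy (illustrative; the paper lists noise, blur,
    # deficient illumination, and transmission artifacts).
    DEGRADATIONS = ["noise", "blur", "low_light"]

    def degrade(patch, kind, rng):
        """Apply a synthetic degradation to a flat list of pixels in [0, 1]."""
        if kind == "noise":
            return [min(1.0, max(0.0, p + rng.gauss(0, 0.1))) for p in patch]
        if kind == "blur":  # simple 1-D box blur as a stand-in for real blur
            return [(patch[max(0, i - 1)] + p + patch[min(len(patch) - 1, i + 1)]) / 3
                    for i, p in enumerate(patch)]
        return [p * 0.4 for p in patch]  # low_light: global dimming

    def multi_task_loss(restored, clean, cls_probs, true_kind, lam=0.5):
        """L = reconstruction MSE + lam * cross-entropy over degradation type."""
        mse = sum((r - c) ** 2 for r, c in zip(restored, clean)) / len(clean)
        ce = -math.log(max(cls_probs[DEGRADATIONS.index(true_kind)], 1e-9))
        return mse + lam * ce

    rng = random.Random(0)
    clean = [rng.random() for _ in range(16)]   # stand-in "clean" patch
    kind = rng.choice(DEGRADATIONS)             # sampled pseudo-label
    degraded = degrade(clean, kind, rng)

    # An ideal model outputs the clean patch and a one-hot on the true kind,
    # driving the joint loss to zero.
    ideal = multi_task_loss(clean, clean,
                            [1.0 if k == kind else 0.0 for k in DEGRADATIONS],
                            kind)
    print(ideal)  # -> 0.0
    ```

    Because the supervision signal is manufactured from the input itself, no paired low/high-quality dataset is needed, which is the property the abstract emphasizes for post-conflict settings.
    
    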