Diffusion-aided deep joint source–channel coding (DeepJSCC) has recently shown strong potential for semantic communication (SemCom), offering high perceptual quality under challenging wireless conditions. However, existing diffusion-based JSCC schemes lack adaptivity in three key aspects: (i) task-specific fine-tuning, (ii) channel and rate adaptation, and (iii) instance-level generalization. We propose FPS-SemCom, a feedback-guided diffusion posterior sampling framework that unifies posterior-based decoding and feedback-driven progressive encoding. FPS-SemCom achieves instance-, channel-, and rate-adaptive image transmission through a training-free sampler and a lightweight, feedback-guided linear encoder. Built entirely upon frozen pre-trained diffusion models, FPS-SemCom refines the posterior toward optimal reconstruction without retraining, achieving robust channel adaptability and semantic consistency. Remarkably, our results reveal that under strong generative priors, even a simple linear encoder achieves competitive performance, highlighting the power of the diffusion prior. Extensive experiments on the pre-trained Stable Diffusion model show that FPS-SemCom outperforms the existing diffusion-based JSCC baseline, achieving up to a 33.3% improvement in LPIPS and a 1.56 dB gain in MS-SSIM.
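To make the core idea concrete, below is a minimal, training-free diffusion posterior sampling (DPS-style) sketch on a toy problem. All specifics here are illustrative assumptions, not the paper's actual system: a 2-D "image" is sent through a linear encoder `A` over an AWGN channel, and the receiver runs reverse diffusion under a simple analytic prior (standing in for a frozen pre-trained model), corrected each step by the gradient of a measurement-consistency term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in (illustrative assumptions, not the paper's setup): a 2-D
# source x is mapped by a linear encoder A over an AWGN channel, so the
# receiver observes y = A x + n.
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])
x_true = np.array([1.0, -0.5])
sigma_y = 0.1
y = A @ x_true + sigma_y * rng.normal(size=2)

# Variance-preserving diffusion schedule.
T = 200
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
abar = np.cumprod(alphas)

def prior_score(x, t):
    # Stand-in for a frozen pre-trained diffusion model. For a standard
    # Gaussian prior the noisy marginal stays N(0, I), so the score is -x.
    return -x

def dps_sample(y, zeta=1.0):
    """Training-free posterior sampling: ancestral reverse diffusion under
    the prior, corrected each step by the channel data-fit gradient."""
    x = rng.normal(size=2)
    for t in reversed(range(T)):
        s = prior_score(x, t)
        # Tweedie estimate of the clean sample from the noisy iterate.
        x0_hat = (x + (1.0 - abar[t]) * s) / np.sqrt(abar[t])
        # Gradient of the data-fit term ||y - A x0_hat||^2 / (2 sigma^2).
        grad = A.T @ (A @ x0_hat - y) / sigma_y**2
        # Ancestral step under the prior ...
        x = (x + betas[t] * s) / np.sqrt(alphas[t])
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.normal(size=2)
        # ... then the posterior (measurement-consistency) correction.
        x = x - zeta * betas[t] * grad
    return x
```

Averaging a handful of posterior samples approximates the MMSE reconstruction; on this toy problem it recovers `x_true` up to the channel noise, without any retraining of the prior — the same property the framework exploits at scale.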
Fig. 1: Illustration of the FPS framework.
Fig. 2: (a) LPIPS vs. SNR; (b) MS-SSIM vs. SNR.
Fig. 3: (a) LPIPS vs. N; (b) PSNR vs. N.
@article{FPS-SemCom,
  title={Diffusion Posterior Sampling with Channel Feedback for Adaptive Semantic Communication},
  author={Bingxuan Xu and Haotian Wu and Xiaodong Xu and Deniz G{\"u}nd{\"u}z},
  journal={arXiv preprint},
  year={2026}
}