
Diffusion Posterior Sampling with Channel Feedback for Adaptive Semantic Communication

Bingxuan Xu*, Haotian Wu†, Xiaodong Xu*, and Deniz Gündüz†

*State Key Laboratory of Networking and Switching Technology, BUPT, Beijing, China

† Imperial College London

Abstract

Diffusion-aided deep joint source–channel coding (DeepJSCC) has recently shown strong potential for semantic communication (SemCom), offering high perceptual quality under challenging wireless conditions. However, existing diffusion-based JSCC schemes lack adaptivity in three key aspects: (i) task-specific fine-tuning, (ii) channel and rate adaptation, and (iii) instance-level generalization. We propose FPS-SemCom, a feedback-guided diffusion posterior sampling framework that unifies posterior-based decoding and feedback-driven progressive encoding. FPS-SemCom achieves instance-, channel-, and rate-adaptive image transmission through a training-free sampler and a lightweight, feedback-guided linear encoder. Built entirely upon frozen pre-trained diffusion models, FPS-SemCom refines the posterior toward optimal reconstruction without retraining, achieving robust channel adaptability and semantic consistency. Remarkably, our results reveal that under strong generative priors, even a simple linear encoder achieves competitive performance, highlighting the power of the diffusion prior. Extensive experiments on the pre-trained Stable Diffusion model show that FPS-SemCom outperforms the existing diffusion-based JSCC baseline, achieving up to a 33.3% improvement in LPIPS and a 1.56 dB gain in MS-SSIM.
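To give a rough feel for the decoding idea, the numpy sketch below pairs a linear encoder with a DPS-style posterior sampling loop over an AWGN channel: at each step, a denoiser estimate is nudged by the gradient of a channel-consistency term. This is a toy stand-in under stated assumptions, not the paper's implementation — the simple shrinkage `denoise`, the random projection `A`, the step size, and the schedule are all hypothetical placeholders for the frozen Stable Diffusion prior and the feedback-guided linear encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 8  # toy source dimension and number of channel uses

# Hypothetical linear encoder: a fixed random projection standing in for
# the paper's lightweight feedback-guided linear encoder.
A = rng.standard_normal((k, d)) / np.sqrt(d)

def awgn(z, snr_db):
    """Add white Gaussian noise at the requested SNR (in dB)."""
    power = np.mean(z ** 2)
    sigma = np.sqrt(power / 10 ** (snr_db / 10))
    return z + sigma * rng.standard_normal(z.shape)

def denoise(x, t):
    """Toy stand-in for the frozen diffusion denoiser: shrink toward the prior mean."""
    return (1.0 - t) * x

def dps_step(x_t, y, t, step=0.5):
    """One DPS-style update: denoiser estimate, then a data-consistency pull."""
    x0_hat = denoise(x_t, t)
    grad = A.T @ (A @ x0_hat - y)  # gradient of 0.5 * ||A x0_hat - y||^2
    return x0_hat - step * grad

x_true = rng.standard_normal(d)
y = awgn(A @ x_true, snr_db=20)      # received channel output

x = rng.standard_normal(d)           # initialize from the prior
r0 = np.linalg.norm(A @ x - y)       # initial data-consistency residual
for t in np.linspace(0.9, 0.0, 50):  # coarse "noise schedule"
    x = dps_step(x, y, t)
r1 = np.linalg.norm(A @ x - y)       # residual after posterior sampling
```

The point of the sketch is the division of labor the abstract describes: the (frozen) denoiser carries the prior, while the channel-consistency gradient injects the received signal, so no retraining is needed when the channel or rate changes.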


Fig. 1: Illustration of the FPS framework.

Performance Comparison

Fig. 2: Performance versus channel SNR. (a) LPIPS vs. SNR; (b) MS-SSIM vs. SNR.

Fig. 3: Performance versus the number of transmitted blocks N. (a) LPIPS vs. N; (b) PSNR vs. N.

Number of Blocks N vs. Reconstruction

Reconstructions as the number of transmitted blocks N varies over {10, 100, 300, 500, 700, 1000, 1300, 1600, 1800, 2048} (shown: N = 10).

BibTeX

@article{FPS-SemCom,
  title={Diffusion Posterior Sampling with Channel Feedback for Adaptive Semantic Communication},
  author={Xu, Bingxuan and Wu, Haotian and Xu, Xiaodong and G{\"u}nd{\"u}z, Deniz},
  journal={arXiv preprint},
  year={2026}
}