This paper addresses the challenge of Source-free Domain Adaptation (SFDA), in which knowledge is transferred from a labeled source domain to an unlabeled target domain without access to the source data during adaptation. Traditional Unsupervised Domain Adaptation (UDA) methods typically require the source data to be available during training, which raises concerns about privacy, security, and scalability. Our proposed approach removes this dependency by relying only on a pre-trained source model for adaptation to the target domain. We introduce a comprehensive framework that combines iterative centroid refinement for pseudo-labeling, enhanced self-supervised learning strategies, advanced regularization techniques, and dynamic loss weighting mechanisms. Together, these components improve feature alignment and classification performance in the target domain. Extensive experiments on diverse datasets, including digit and object recognition benchmarks, show that our method consistently outperforms state-of-the-art techniques in both accuracy and robustness. Additionally, we examine the theoretical foundations of SFDA, offering insight into why the approach is effective and discussing its practical applications across various domains.
Published in: International Journal of Pattern Recognition and Artificial Intelligence 39(7), 2552007 (2025)
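The abstract mentions iterative centroid refinement for pseudo-labeling but gives no implementation details. Below is a minimal sketch, assuming a SHOT-style scheme in which class centroids are initialized from prediction-weighted target features and then alternately used to re-assign pseudo-labels and re-estimated from those hard assignments. The function name `refine_pseudo_labels`, the cosine-distance assignment, and the two-iteration default are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def refine_pseudo_labels(features, probs, n_iters=2, eps=1e-8):
    """Hypothetical sketch of SHOT-style iterative centroid refinement.

    features: (N, D) target-domain features extracted by the source model.
    probs:    (N, K) softmax predictions of the frozen source classifier.
    Returns an array of shape (N,) with refined pseudo-labels.
    """
    # Work with L2-normalized features so cosine similarity is a dot product.
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)

    # Initial class centroids: prediction-weighted means of target features.
    centroids = probs.T @ feats / (probs.sum(axis=0)[:, None] + eps)

    for _ in range(n_iters):
        centroids /= np.linalg.norm(centroids, axis=1, keepdims=True) + eps
        # Assign each sample to its nearest centroid (highest cosine similarity).
        labels = np.argmax(feats @ centroids.T, axis=1)
        # Re-estimate centroids from the hard pseudo-label assignments.
        one_hot = np.eye(probs.shape[1])[labels]
        centroids = one_hot.T @ feats / (one_hot.sum(axis=0)[:, None] + eps)

    return labels
```

In an SFDA pipeline of the kind described in the abstract, the resulting pseudo-labels would typically serve as supervision for fine-tuning the feature extractor on the target domain, alongside the self-supervised and regularization objectives mentioned above.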