Aiming to improve the performance of visual classification in a cost-effective manner, this paper proposes an incremental semi-supervised learning paradigm called Deep Co-Space (DCS). Unlike many conventional semi-supervised learning methods, which usually operate within a fixed feature space, our DCS gradually propagates information from labeled samples to unlabeled ones along with deep feature learning. We regard deep feature learning as a series of feature transformation steps, i.e., projecting the samples from a previous space into a new one, and select reliable unlabeled samples with respect to this setting. Specifically, for each unlabeled image instance, we measure its reliability by calculating the category variation under the feature transformation from two different neighborhood perspectives, and merge them into a unified sample mining criterion derived from the Hellinger distance. Then, those samples that keep a stable correlation with their neighboring samples (i.e., have small category variation in distribution) across successive feature space transformations automatically receive labels and are incorporated into the model for incremental training of the classifier. Our extensive experiments on standard image classification benchmarks (e.g., Caltech-256 and SUN-397) demonstrate that the proposed framework is capable of effectively mining from large-scale unlabeled images, which boosts image classification performance and achieves promising results compared to other semi-supervised learning methods.
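To make the sample mining criterion concrete, here is a minimal sketch of the Hellinger-distance-based stability test described above. It assumes features have already been extracted in the two successive spaces, uses plain Euclidean k-nearest-neighbor search, and a single threshold `tau`; the function names, the value of `k`, and the threshold are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete category distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def neighbor_label_distribution(feat_u, labeled_feats, labels, n_classes, k=5):
    """Category distribution over the k nearest labeled neighbors of one
    unlabeled feature vector (Euclidean distance)."""
    dists = np.linalg.norm(labeled_feats - feat_u, axis=1)
    nn = np.argsort(dists)[:k]
    hist = np.bincount(labels[nn], minlength=n_classes).astype(float)
    return hist / hist.sum()

def select_stable_samples(unlab_old, unlab_new, lab_old, lab_new,
                          labels, n_classes, k=5, tau=0.2):
    """Keep unlabeled samples whose neighborhood category distribution
    changes little between the old and new feature spaces, and assign
    pseudo-labels from the new space."""
    keep, pseudo = [], []
    for i in range(len(unlab_old)):
        p = neighbor_label_distribution(unlab_old[i], lab_old, labels, n_classes, k)
        q = neighbor_label_distribution(unlab_new[i], lab_new, labels, n_classes, k)
        if hellinger(p, q) < tau:  # stable neighborhood -> reliable sample
            keep.append(i)
            pseudo.append(int(np.argmax(q)))
    return keep, pseudo
```

Samples passing the test would then be added to the labeled pool with their pseudo-labels for the next round of fine-tuning.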
Figure: The pipeline of the proposed Deep Co-Space framework. At the beginning, we have limited labeled data and abundant unlabeled data for training. The labeled data are used to fine-tune a pre-trained CNN-based deep model, yielding a new model. After that, features of all labeled and unlabeled data are extracted by the old and the new models to construct two successive feature spaces (the Co-Space). We measure the distribution variation of the labeled neighbors of each unlabeled sample in the Co-Space, and assign pseudo-labels to those samples whose neighborhood structures remain stable. These selected samples are then employed to update the model for the next iteration.
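The iterative loop in the figure can be sketched end to end with a toy stand-in: class centroids play the role of the CNN (so "fine-tuning" means recomputing centroids from the current labeled pool), and softmax over negative centroid distances plays the role of the category distribution induced by each feature space. Everything here — the centroid model, the `tau` threshold, the iteration count — is a simplifying assumption for illustration, not the paper's actual architecture.

```python
import numpy as np

def hellinger(p, q):
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def class_probs(X, centroids):
    """Softmax over negative distances to class centroids: a toy stand-in
    for the category distribution induced by one feature space."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

def deep_co_space_toy(X_lab, y_lab, X_unlab, n_classes, iters=3, tau=0.1):
    """Simplified Deep Co-Space loop: each iteration 'fine-tunes' the model
    (recomputes centroids), compares category distributions under the old
    and new models, pseudo-labels the stable samples, and grows the pool."""
    centroids = np.stack([X_lab[y_lab == c].mean(0) for c in range(n_classes)])
    pool_X, pool_y = X_lab.copy(), y_lab.copy()
    remaining = X_unlab.copy()
    for _ in range(iters):
        if len(remaining) == 0:
            break
        new_centroids = np.stack([pool_X[pool_y == c].mean(0)
                                  for c in range(n_classes)])
        p_old = class_probs(remaining, centroids)
        p_new = class_probs(remaining, new_centroids)
        stable = np.array([hellinger(p, q) < tau
                           for p, q in zip(p_old, p_new)])
        if stable.any():
            pseudo = p_new[stable].argmax(1)  # pseudo-label from new space
            pool_X = np.vstack([pool_X, remaining[stable]])
            pool_y = np.concatenate([pool_y, pseudo])
            remaining = remaining[~stable]
        centroids = new_centroids
    return pool_X, pool_y
```

In the real framework the two spaces come from the CNN before and after fine-tuning, but the control flow — extract twice, test stability, pseudo-label, retrain — is the same.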