IEEE Trans. Image Process. 2017
Content-Adaptive Sketch Portrait Generation by Decompositional Representation Learning
Dongyu Zhang, Liang Lin*, Tianshui Chen, Xian Wu, Wenwei Tan, Ebroul Izquierdo



Fig. 1. Illustration of results from existing methods and the proposed approach. (a) Photos. (b) Ours. (c) MRF [1]. (d) SSD [2]. (e) SRGS [3].

As shown in Fig. 1, these example-based methods produce unsatisfactory results on non-facial elements such as hairpins and glasses [1], [3]. Because of the great variation in the appearance and geometry of these decorations, artifacts are easily introduced into the synthesized sketches. Besides, some methods [2], [3] average candidate sketches to generate smoothed results; they may produce acceptable sketches for the face region, but often fail to preserve textural details, such as those in the hair region. Finally, the performance of these example-based methods is only acceptable when the training and test samples originate from the same dataset, a condition that rarely holds in practice.




Aiming to alleviate the aforementioned problems, we propose to learn sketch representations directly from the raw pixels of input photos, and develop a decompositional representation learning framework that establishes an end-to-end photo-sketch mapping through structural and textural decomposition. Given an input photo, our method first roughly decomposes it into different regions according to their content, such as face, hair, and background. We then learn a structural representation and a textural representation from the respective parts: structural representation learning focuses mainly on the facial region, while textural representation learning targets preserving the fine-grained details of the hair region. Finally, the two representations are fused via a probabilistic method to generate the final sketch portrait. The pipeline of the proposed method is illustrated in Fig. 2.
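The fusion step described above can be sketched numerically. A minimal illustration, assuming the parsing map has already been converted into a per-pixel probability of belonging to the structural (face/background) region; the function name `fuse_sketches` and the linear soft-weighting scheme are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fuse_sketches(structural: np.ndarray,
                  textural: np.ndarray,
                  face_prob: np.ndarray) -> np.ndarray:
    """Blend two candidate sketches pixel-wise using a parsing-map prior.

    structural : sketch from the structural branch (face-oriented)
    textural   : sketch from the textural branch (hair/detail-oriented)
    face_prob  : per-pixel probability (in [0, 1]) that a pixel belongs
                 to the region handled by the structural branch
    """
    # Pixels confidently parsed as face take the structural prediction;
    # pixels parsed as hair (low face_prob) take the textural one.
    return face_prob * structural + (1.0 - face_prob) * textural
```

With a hard parsing map (probabilities of exactly 0 or 1) this reduces to cutting and pasting the two sketches along the region boundary; soft probabilities instead blend them smoothly near boundaries, which avoids visible seams.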


Fig. 2. Illustration of the pipeline of sketch portraits generation via the proposed framework. Our approach feeds an input photo into the branched fully convolutional network to produce a structural sketch and a textural sketch, respectively. Guided by the parsing maps, the two sketches are fused to get the final result via a probability fusion method.





Fig. 3. Comparison of sketches generated by different methods. (a) Input photo. (b) MR [4]. (c) MRF [1]. (d) MWF [5]. (e) SRGS [3]. (f) SSD [2]. (g) Our method.


Fig. 4. Comparison on the Rank-1 and Rank-10 Cumulative Match Score of sketch-based face recognition task. Left: Rank-1 Cumulative Match Score. Right: Rank-10 Cumulative Match Score.




[1] X. Wang and X. Tang, “Face photo-sketch synthesis and recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 11, pp. 1955–1967, Nov. 2009.

[2] Y. Song, L. Bao, Q. Yang, and M.-H. Yang, “Real-time exemplar-based face sketch synthesis,” in Proc. Eur. Conf. Comput. Vis., 2014, pp. 800–813.

[3] S. Zhang, X. Gao, N. Wang, J. Li, and M. Zhang, “Face sketch synthesis via sparse representation-based greedy search,” IEEE Trans. Image Process., vol. 24, no. 8, pp. 2466–2477, Aug. 2015.

[4] C. Peng, X. Gao, N. Wang, D. Tao, X. Li, and J. Li, “Multiple representations-based face sketch-photo synthesis,” IEEE Trans. Neural Netw. Learn. Syst., vol. 27, no. 11, pp. 2201–2215, Nov. 2016.

[5] H. Zhou, Z. Kuang, and K.-Y. Wong, “Markov weight fields for face sketch synthesis,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2012, pp. 1091–1097.