I am currently a second-year master's student in the Intelligent and Multimedia Science Laboratory at Sun Yat-sen University (SYSU), supervised by Prof. Chengying GAO and Prof. Ning LIU. My research interests cover Computer Vision and Computer Graphics, particularly sketch understanding and sketch-based applications. I am fortunate to also work closely with Prof. Changqing ZOU at Huawei Noah's Ark Lab and Prof. Edgar Simo-Serra at Waseda University.
News
Nov. 2019: Attended SIGGRAPH Asia 2019 in Brisbane and gave a talk on our paper.
Aug. 2019: One paper accepted to SIGGRAPH Asia 2019.
May–July 2019: I had a wonderful time in the Simo-Serra Lab. Many thanks to Prof. Edgar Simo-Serra.
July 2018: One paper accepted to ECCV 2018.
Education
2018~2020: M.Sc. at Sun Yat-sen University
2014~2018: B.Eng. at Sun Yat-sen University
Publications
Language-based Colorization of Scene Sketches
Changqing Zou#, Haoran Mo# (# joint first authors), Chengying Gao*, Ruofei Du and Hongbo Fu
ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH Asia 2019)
Project Page | Paper | Supplementary | Code | Fast Forward Video | Slide | Abstract | Bibtex
Being natural, touchless, and fun-embracing, language-based inputs have been demonstrated effective for various tasks from image generation to literacy education for children. This paper presents, for the first time, a language-based system for interactive colorization of scene sketches, based on semantic comprehension. The proposed system is built upon deep neural networks trained on a large-scale repository of scene sketches and cartoon-style color images with text descriptions. Given a scene sketch, our system allows users, via language-based instructions, to interactively localize and colorize specific foreground object instances to meet various colorization requirements in a progressive way. We demonstrate the effectiveness of our approach via comprehensive experimental results, including alternative studies, comparison with state-of-the-art methods, and generalization user studies. Given the unique characteristics of language-based inputs, we envision a combination of our interface with a traditional scribble-based interface for a practical multimodal colorization system, benefiting various applications.
SketchyScene: Richly-Annotated Scene Sketches
Changqing Zou#, Qian Yu#, Ruofei Du, Haoran Mo, Yi-Zhe Song, Tao Xiang, Chengying Gao, Baoquan Chen* and Hao Zhang
European Conference on Computer Vision (ECCV), 2018
Project Page | Paper | Poster | Code | Abstract | Bibtex
We contribute the first large-scale dataset of scene sketches, SketchyScene, with the goal of advancing research on sketch understanding at both the object and scene level. The dataset is created through a novel and carefully designed crowdsourcing pipeline, enabling users to efficiently generate large quantities of realistic and diverse scene sketches. SketchyScene contains more than 29,000 scene-level sketches, 7,000+ pairs of scene templates and photos, and 11,000+ object sketches. All objects in the scene sketches have ground-truth semantic and instance masks. The dataset is also highly scalable and extensible, easily allowing augmenting and/or changing scene composition. We demonstrate the potential impact of SketchyScene by training new computational models for semantic segmentation of scene sketches and showing how the new dataset enables several applications, including image retrieval, sketch colorization, editing, and captioning.
Internships
Simo-Serra Lab., Waseda University (Tokyo, Japan) 2019.05 - 2019.07
Research Intern, advised by Prof. Edgar Simo-Serra.
Talks
"Language-based Colorization of Scene Sketches" at SIGGRAPH Asia 2019.
Academic Service
Paper reviewer for CGI 2019.
Gallery
See my gallery :)