Our goal is to generate photo-realistic images from given texts and freehand sketches: the texts provide the content, while the sketches control the shape. Freehand sketches can be highly abstract (examples shown below), and learning representations of sketches is not trivial. In contrast to other cross-domain learning approaches such as pix2pix and CycleGAN, which learn a mapping from representations in one domain to representations in another, we propose to learn a joint representation of text, sketch, and image.
face | bird | shoe
---|---|---
![]() | ![]() | ![]()
* A few freehand sketches were collected from volunteers.
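To make the idea of a joint representation concrete, here is a minimal, hypothetical sketch of one common conditioning scheme (broadcasting the text/attribute vector over the sketch's spatial grid); the actual architecture in this repo may differ:

```python
import numpy as np

def joint_input(sketch, attrs):
    """Hypothetical fusion of sketch and text for a conv encoder.

    sketch: (H, W, 1) array holding a freehand sketch
    attrs:  (D,) attribute/text vector (D=18 in the CelebA experiments)
    Returns an (H, W, 1 + D) array in which the attribute vector is
    tiled over every spatial location and stacked with the sketch,
    so a convolutional encoder sees shape and content jointly.
    """
    h, w, _ = sketch.shape
    tiled = np.tile(attrs.reshape(1, 1, -1), (h, w, 1))
    return np.concatenate([sketch, tiled], axis=-1)
```

Tiling the conditioning vector this way lets every convolutional receptive field see both the local sketch strokes and the global text attributes.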
- Major Contributors: Shangzhe Wu (HKUST), Yongyi Lu (HKUST)
- Supervisors: Yu-wing Tai (Tencent), Chi-Keung Tang (HKUST)
- Mentor in MLJejuCamp2017: Hyungjoo Cho
Part of the project was developed during Machine Learning Camp Jeju 2017. More interesting projects can be found in the project descriptions and the program's GitHub.
- Python 3.5
- TensorFlow 0.12.1
- SciPy
- Clone this repo:
```sh
git clone https://github.com/elliottwu/sText2Image.git
cd sText2Image
```
- Download the preprocessed CelebA data (~3GB):
```sh
sh ./datasets/download_dataset.sh
```
- Train the model:
```sh
sh train.sh
```
- To monitor training with TensorBoard, run the command below and open `localhost:8888` in your browser:
```sh
tensorboard --logdir=logs_face --port=8888
```
- Test the model:
```sh
sh test.sh
```
- Download the pretrained model:
```sh
sh download_pretrained_model.sh
```
- Test the pretrained model on the CelebA dataset:
```sh
python test.py ./datasets/celeba/test/* --checkpointDir checkpoints_face_pretrained --maskType right --batchSize 64 --lam1 100 --lam2 1 --lam3 0.1 --lr 0.001 --nIter 1000 --outDir results_face_pretrained --text_vector_dim 18 --text_path datasets/celeba/imAttrs.pkl
```
We test our framework on three kinds of data: faces (CelebA), birds (CUB), and flowers (Oxford-102). So far, we have experimented only with face images, using attribute vectors as the text information. Here are some preliminary results:
We used the CelebA dataset, which provides 40 attributes for each image. Similar to text information, the attributes control specific details of the generated images. We chose 18 attributes for training.
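As an illustration of how such attribute vectors could be assembled, here is a minimal, hypothetical parser for CelebA's `list_attr_celeba.txt` (line 1: image count; line 2: the 40 attribute names; then one +1/-1 row per image). The 18 attribute names below are placeholders; the actual subset used for training is the one baked into `datasets/celeba/imAttrs.pkl`:

```python
# Illustrative 18-attribute subset; the repo's preprocessing defines the real one.
CHOSEN = ["Male", "Smiling", "Eyeglasses", "Black_Hair", "Blond_Hair",
          "Brown_Hair", "Gray_Hair", "Bald", "Bangs", "Mustache",
          "No_Beard", "Young", "Pale_Skin", "Wearing_Hat",
          "Wearing_Lipstick", "Big_Nose", "Big_Lips", "Bushy_Eyebrows"]

def load_attr_vectors(path="list_attr_celeba.txt"):
    vectors = {}
    with open(path) as f:
        f.readline()                      # line 1: number of images
        names = f.readline().split()      # line 2: the 40 attribute names
        idx = [names.index(a) for a in CHOSEN]
        for line in f:
            parts = line.split()
            img, flags = parts[0], parts[1:]
            # map {-1, +1} annotations to {0, 1} for the selected attributes
            vectors[img] = [(int(flags[i]) + 1) // 2 for i in idx]
    return vectors
```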
The following images were generated given sketches and the corresponding attributes.
The following images were generated given sketches and random attributes. The controlling effect of the attributes is still being improved.
The following images were generated given freehand sketches and random attributes. Again, the controlling effect of the attributes is still being improved.
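For the random-attribute results, conditioning vectors can simply be sampled; a minimal illustration, assuming 18 binary attributes as in the setup above:

```python
import numpy as np

# Illustrative only: sample a batch of 64 random 18-dim binary attribute
# vectors to condition the generator on (matching --text_vector_dim 18).
random_attrs = np.random.randint(0, 2, size=(64, 18)).astype(np.float32)
```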
The code is based on DCGAN and dcgan-completion.