Stable Diffusion for Remote Sensing Image Generation
Author: Zhiqiang Yuan @ AIR CAS, send an email
A simple project for text-to-image remote sensing image generation.
We will later release the code for using text to control regions in super-large RS image generation.
You are also welcome to check out our project on image-conditioned fake sample generation, published in TGRS, 2023.
Environment configuration
Follow the original training repo.
Pretrained weights
We used RSITMD as training data and fine-tuned Stable Diffusion for 10 epochs on 1 x A100 GPU. With a batch size of 4, GPU memory consumption is about 40+ GB during training and about 20+ GB during sampling. The pretrained weights are released as last-pruned.ckpt.
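The `-pruned` suffix on the released checkpoint usually means that training-only state (optimizer buffers, EMA bookkeeping, and so on) has been stripped, leaving only the weights needed for sampling. A minimal sketch of that kind of pruning, using plain dicts in place of tensors; the key layout here is an assumption based on standard PyTorch Lightning checkpoints, not a description of this repo's exact format:

```python
def prune_checkpoint(ckpt: dict) -> dict:
    """Keep only the weights needed for inference; drop training-only state."""
    # Lightning nests model weights under "state_dict"; everything else at the
    # top level (optimizer states, epoch counters, callbacks) is training-only.
    state = ckpt.get("state_dict", ckpt)
    # Also drop EMA bookkeeping entries inside the state dict itself.
    pruned = {k: v for k, v in state.items() if not k.startswith("model_ema.")}
    return {"state_dict": pruned}

# Toy checkpoint standing in for a real torch.load() result.
ckpt = {
    "state_dict": {"model.diffusion_model.w": [0.1], "model_ema.decay": 0.999},
    "optimizer_states": [{"step": 1000}],
    "epoch": 10,
}
slim = prune_checkpoint(ckpt)
```

The pruned file loads the same way for sampling but is much smaller on disk, which is why released inference checkpoints are commonly distributed in this form.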
Usage
Download the pretrained weights to the current directory, and run:
bash sample.sh
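For reference, a script like `sample.sh` typically wraps the standard CompVis `scripts/txt2img.py` entry point, pointed at the fine-tuned checkpoint. This is a hedged sketch, not the actual contents of this repo's script: the config path and prompt are assumptions, though the flag names follow the original Stable Diffusion repo:

```shell
# Hypothetical expansion of sample.sh (paths and prompt are placeholders).
python scripts/txt2img.py \
    --ckpt last-pruned.ckpt \
    --config configs/stable-diffusion/v1-inference.yaml \
    --prompt "an aerial view of a harbor with cargo ships" \
    --n_samples 4 \
    --ddim_steps 50 \
    --scale 7.5 \
    --outdir outputs/
```

Adjusting `--n_samples` down is the usual first step if the ~20 GB sampling memory footprint exceeds your GPU.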
We will release the training code as soon as possible.

