
Stable Diffusion for Remote Sensing Image Generation

Author: Zhiqiang Yuan @ AIRCAS, Send an Email

A simple project for text-to-image remote sensing (RS) image generation. The code for using text to control regions in super-large RS image generation will be released later.

Environment configuration

Follow the original training repo.

Pretrained weights

We used RSITMD as the training data and fine-tuned Stable Diffusion for 10 epochs on 1 x A100 GPU. With a batch size of 4, GPU memory consumption is about 40+ GB during training and about 20+ GB during sampling. The pretrained weights are released as last-pruned.ckpt.
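The checkpoint name suggests the released file is a pruned checkpoint: the full training checkpoint with optimizer and scheduler state dropped and the weights cast to half precision, which roughly halves the file size. A minimal sketch of that pruning step (the function name and the exact checkpoint layout are assumptions for illustration, not taken from this repo):

```python
import torch

def prune_checkpoint(path_in: str, path_out: str) -> None:
    """Keep only the model weights from a training checkpoint.

    Drops optimizer/scheduler state and casts floating-point
    tensors to fp16 to shrink the file for release.
    """
    ckpt = torch.load(path_in, map_location="cpu")
    state_dict = ckpt["state_dict"]
    pruned = {
        k: (v.half() if v.is_floating_point() else v)
        for k, v in state_dict.items()
    }
    # Only the weights survive; everything else in the checkpoint is dropped.
    torch.save({"state_dict": pruned}, path_out)
```

A checkpoint pruned this way can still be loaded for sampling, but not for resuming training, since the optimizer state is gone.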

Usage

Download the pretrained weights to the current directory, and run:

bash sample.sh
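For reference, a script like sample.sh typically wraps the standard latent-diffusion text-to-image entry point. The sketch below shows what such an invocation usually looks like; the script path, flag names, and values are assumptions based on the CompVis codebase and are not verified against this repo's sample.sh:

```shell
# Hypothetical launch command; check sample.sh in this repo for the
# actual script path and flags.
python scripts/txt2img.py \
  --ckpt last-pruned.ckpt \
  --prompt "some boats drived in the sea" \
  --ddim_steps 50 \
  --n_samples 4 \
  --outdir outputs/
```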

We will release the training code as soon as possible.

Examples

Caption: "some boats drived in the sea" (image: ./assets/shows1.png)