Update README.md

This commit is contained in:
Abi Raja 2024-03-22 13:51:23 -04:00 committed by GitHub
parent 48d2ae9cfd
commit 04cb502be9


@@ -1,23 +1,37 @@
# screenshot-to-code
This simple app converts a screenshot to code (HTML/Tailwind CSS, React, Bootstrap, or Vue). It uses GPT-4 Vision (or Claude 3) to generate the code and DALL-E 3 to generate similar-looking images. You can now also enter a URL to clone a live website.
🆕 Now supporting Claude 3!
A simple tool to convert screenshots, mockups and Figma designs into clean, functional code using AI.
https://github.com/abi/screenshot-to-code/assets/23818/6cebadae-2fe3-4986-ac6a-8fb9db030045
Supported stacks:
- HTML + Tailwind
- React + Tailwind
- Vue + Tailwind
- Bootstrap
- Ionic + Tailwind
- SVG
Supported AI models:
- GPT-4 Vision
- Claude 3 Sonnet (faster, and on par with or better than GPT-4 Vision for many inputs)
- DALL-E 3 for image generation
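The core flow the models above enable is simple: encode the screenshot, attach it to a prompt asking for code in the chosen stack, and send it to a vision model. A minimal sketch of building such a request payload (the `build_vision_request` helper and prompt wording are illustrative, not the app's actual code):

```python
import base64

def build_vision_request(image_bytes: bytes, stack: str = "HTML + Tailwind") -> dict:
    """Build a chat-completion payload asking a vision model to emit code.

    Hypothetical helper for illustration; the real app's prompts,
    parameters, and model names differ.
    """
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4-vision-preview",
        "max_tokens": 4096,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": f"Convert this screenshot into {stack} code."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }
        ],
    }

# The payload would then be posted to the chat completions endpoint.
payload = build_vision_request(b"\x89PNG fake image bytes")
```

Swapping the model string (and endpoint) is all that distinguishes the GPT-4 Vision and Claude 3 paths at this level of abstraction.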
See the [Examples](#-examples) section below for more demos.
We also just added experimental support for taking a video/screen recording of a website in action and turning that into a functional prototype.
![google in app quick 3](https://github.com/abi/screenshot-to-code/assets/23818/8758ffa4-9483-4b9b-bb66-abd6d1594c33)
[Learn more about video here](https://github.com/abi/screenshot-to-code/wiki/Screen-Recording-to-Code).
[Follow me on Twitter for updates](https://twitter.com/_abi_).
## 🚀 Try It Out!
🆕 [Try it here](https://screenshottocode.com) (bring your own OpenAI key - **your key must have access to GPT-4 Vision. See [FAQ](#%EF%B8%8F-faqs) section below for details**). Or see [Getting Started](#-getting-started) below for local install instructions.
## 🌟 Recent Updates
- Mar 8 - 🔥🎉🎁 Video-to-app: turn videos/screen recordings into functional apps
- Mar 5 - Added support for Claude 3 Sonnet (as capable as or better than GPT-4 Vision, and faster!)
🆕 [Try it live on the hosted version](https://screenshottocode.com) (bring your own OpenAI key - **your key must have access to GPT-4 Vision. See [FAQ](#%EF%B8%8F-faqs) section below for details**). Or see [Getting Started](#-getting-started) below for local install instructions.
## 🛠 Getting Started
@@ -53,15 +67,6 @@ For debugging purposes, if you don't want to waste GPT4-Vision credits, you can
```bash
MOCK=true poetry run uvicorn main:app --reload --port 7001
```
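Mock mode presumably gates the model call behind an environment flag and returns a canned response instead of spending credits. A minimal sketch of that pattern (the function name and mock payload are illustrative, not the app's actual code):

```python
import os

MOCK_RESPONSE = "<html><body><h1>Mock output</h1></body></html>"

def generate_code(prompt: str) -> str:
    """Return a canned response when MOCK=true, so no API credits are spent."""
    if os.environ.get("MOCK", "").lower() == "true":
        return MOCK_RESPONSE
    # The real path would call the vision model here (omitted in this sketch).
    raise RuntimeError("set MOCK=true for this sketch, or wire up a model call")

os.environ["MOCK"] = "true"
print(generate_code("screenshot -> code"))  # prints the canned HTML
```

Keeping the flag check at the entry point means the rest of the pipeline (streaming, rendering) can be exercised unchanged.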
## Screen Recording to Prototype (experimental)
https://github.com/abi/screenshot-to-code/assets/23818/1468bef4-164f-4046-a6c8-4cfc40a5cdff
Record yourself using any website, app, or even a Figma prototype, drag and drop the video in, and in a few minutes get a functional, similar-looking app.
[You need an Anthropic API key for this functionality. Follow instructions here.](https://github.com/abi/screenshot-to-code/blob/main/blog/video-to-app.md)
## Docker
If you have Docker installed on your system, in the root directory, run: