Merge branch 'main' of https://github.com/abi/screenshot-to-code
commit 9465b6780b

README.md
@@ -1,27 +1,41 @@
# screenshot-to-code
This simple app converts a screenshot to code (HTML/Tailwind CSS, React, Bootstrap, or Vue). It uses GPT-4 Vision (or Claude 3) to generate the code and DALL-E 3 to generate similar-looking images. You can now also enter a URL to clone a live website.
🆕 Now supporting Claude 3!
A simple tool to convert screenshots, mockups and Figma designs into clean, functional code using AI.
https://github.com/abi/screenshot-to-code/assets/23818/6cebadae-2fe3-4986-ac6a-8fb9db030045
Supported stacks:
- HTML + Tailwind
- React + Tailwind
- Vue + Tailwind
- Bootstrap
- Ionic + Tailwind
- SVG
Supported AI models:
- GPT-4 Vision
- Claude 3 Sonnet (faster, and on par with or better than GPT-4 Vision for many inputs)
- DALL-E 3 for image generation
See the [Examples](#-examples) section below for more demos.
We also just added experimental support for taking a video/screen recording of a website in action and turning that into a functional prototype.

[Learn more about video here](https://github.com/abi/screenshot-to-code/wiki/Screen-Recording-to-Code).
[Follow me on Twitter for updates](https://twitter.com/_abi_).
## 🚀 Try It Out!
🆕 [Try it here](https://screenshottocode.com) (bring your own OpenAI key - **your key must have access to GPT-4 Vision. See [FAQ](#%EF%B8%8F-faqs) section below for details**). Or see [Getting Started](#-getting-started) below for local install instructions.
## 🌟 Recent Updates
- Mar 8 - 🔥🎉🎁 Video-to-app: turn videos/screen recordings into functional apps
- Mar 5 - Added support for Claude 3 Sonnet (as capable as or better than GPT-4 Vision, and faster!)
🆕 [Try it live on the hosted version](https://screenshottocode.com) (bring your own OpenAI key - **your key must have access to GPT-4 Vision. See [FAQ](#%EF%B8%8F-faqs) section below for details**). Or see [Getting Started](#-getting-started) below for local install instructions.
## 🛠 Getting Started
The app has a React/Vite frontend and a FastAPI backend. You will need an OpenAI API key with access to the GPT-4 Vision API.
The app has a React/Vite frontend and a FastAPI backend. You will need an OpenAI API key with access to the GPT-4 Vision API, or an Anthropic API key if you want to use Claude 3 Sonnet or the experimental video support.
Run the backend (I use Poetry for package management - `pip install poetry` if you don't have it):
@@ -33,6 +47,8 @@ poetry shell
poetry run uvicorn main:app --reload --port 7001
```
If you want to use Anthropic, add the `ANTHROPIC_API_KEY` to `backend/.env` with your API key from Anthropic.
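For example, `backend/.env` might look like the sketch below (the key values are placeholders, not real keys):

```bash
# backend/.env — placeholder values; substitute your own keys
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
```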
Run the frontend:
```bash
@@ -51,25 +67,6 @@ For debugging purposes, if you don't want to waste GPT4-Vision credits, you can
MOCK=true poetry run uvicorn main:app --reload --port 7001
```
## Video to app (experimental)
https://github.com/abi/screenshot-to-code/assets/23818/1468bef4-164f-4046-a6c8-4cfc40a5cdff
Record yourself using any website, app, or even a Figma prototype, then drag and drop the video in, and in a few minutes you get a functional, similar-looking app.
[You need an Anthropic API key for this functionality. Follow instructions here.](https://github.com/abi/screenshot-to-code/blob/main/blog/video-to-app.md)
## Configuration
- You can configure the OpenAI base URL if you need to use a proxy: set `OPENAI_BASE_URL` in `backend/.env` or directly in the settings dialog in the UI.
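For instance, assuming a hypothetical proxy endpoint, the `.env` entry might look like:

```bash
# backend/.env — hypothetical proxy URL; replace with your own endpoint
OPENAI_BASE_URL=https://my-openai-proxy.example.com/v1
```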
## Using Claude 3
We recently added support for Claude 3 Sonnet. It performs well, on par with or better than GPT-4 Vision for many inputs, and it tends to be faster.
1. Add an env var `ANTHROPIC_API_KEY` to `backend/.env` with your API key from Anthropic
2. When using the frontend, select "Claude 3 Sonnet" from the model dropdown
## Docker
If you have Docker installed on your system, in the root directory, run:
@@ -85,6 +82,8 @@ The app will be up and running at http://localhost:5173. Note that you can't dev
- **I'm running into an error when setting up the backend. How can I fix it?** [Try this](https://github.com/abi/screenshot-to-code/issues/3#issuecomment-1814777959). If that still doesn't work, open an issue.
- **How do I get an OpenAI API key?** See https://github.com/abi/screenshot-to-code/blob/main/Troubleshooting.md
- **How can I configure an OpenAI proxy?** You can configure the OpenAI base URL if you need to use a proxy: set `OPENAI_BASE_URL` in `backend/.env` or directly in the settings dialog in the UI.
- **How can I update the backend host that my frontend connects to?** Configure `VITE_HTTP_BACKEND_URL` and `VITE_WS_BACKEND_URL` in `frontend/.env.local`. For example, set `VITE_HTTP_BACKEND_URL=http://124.10.20.1:7001`. See the sketch after this list.
- **How can I provide feedback?** For feedback, feature requests and bug reports, open an issue or ping me on [Twitter](https://twitter.com/_abi_).
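
As a sketch of the backend-host setting above (assuming the backend from that example is reachable at `124.10.20.1:7001`, with the WebSocket URL mirroring it on the `ws://` scheme):

```bash
# frontend/.env.local — point the frontend at a remote backend
VITE_HTTP_BACKEND_URL=http://124.10.20.1:7001
VITE_WS_BACKEND_URL=ws://124.10.20.1:7001
```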
## 📚 Examples