From 6069c2a118592905fd8c0bc2b406b2e93891dfb1 Mon Sep 17 00:00:00 2001
From: Abi Raja
Date: Wed, 20 Mar 2024 15:54:42 -0400
Subject: [PATCH 1/4] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index f8b6795..b1dc247 100644
--- a/README.md
+++ b/README.md
@@ -51,7 +51,7 @@ For debugging purposes, if you don't want to waste GPT4-Vision credits, you can
 MOCK=true poetry run uvicorn main:app --reload --port 7001
 ```
 
-## Video to app (experimental)
+## Screen Recording to prototype (experimental)
 
 https://github.com/abi/screenshot-to-code/assets/23818/1468bef4-164f-4046-a6c8-4cfc40a5cdff
 

From 48d2ae9cfdb48b06198085c9257325be2d199c76 Mon Sep 17 00:00:00 2001
From: Abi Raja
Date: Fri, 22 Mar 2024 13:43:33 -0400
Subject: [PATCH 2/4] Update README.md

---
 README.md | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/README.md b/README.md
index b1dc247..c58cc81 100644
--- a/README.md
+++ b/README.md
@@ -21,7 +21,7 @@ See the [Examples](#-examples) section below for more demos.
 
 ## 🛠 Getting Started
 
-The app has a React/Vite frontend and a FastAPI backend. You will need an OpenAI API key with access to the GPT-4 Vision API.
+The app has a React/Vite frontend and a FastAPI backend. You will need an OpenAI API key with access to the GPT-4 Vision API, or an Anthropic API key if you want to use Claude Sonnet or the experimental video support.
 
 Run the backend (I use Poetry for package management - `pip install poetry` if you don't have it):
 
@@ -33,6 +33,8 @@ poetry shell
 poetry run uvicorn main:app --reload --port 7001
 ```
 
+If you want to use Claude, add `ANTHROPIC_API_KEY` to `backend/.env` with your API key from Anthropic.
+
 Run the frontend:
 
 ```bash
@@ -59,16 +61,6 @@ Record yourself using any website or app or even a Figma prototype, drag & drop in a video and in a few minutes, get a functional, similar-looking app.
 
 [You need an Anthropic API key for this functionality. Follow instructions here.](https://github.com/abi/screenshot-to-code/blob/main/blog/video-to-app.md)
 
-## Configuration
-
-- You can configure the OpenAI base URL if you need to use a proxy: Set OPENAI_BASE_URL in the `backend/.env` or directly in the UI in the settings dialog
-
-## Using Claude 3
-
-We recently added support for Claude 3 Sonnet. It performs well, on par or better than GPT-4 vision for many inputs, and it tends to be faster.
-
-1. Add an env var `ANTHROPIC_API_KEY` to `backend/.env` with your API key from Anthropic
-2. When using the front-end, select "Claude 3 Sonnet" from the model dropdown
 
 ## Docker
 
@@ -85,6 +77,7 @@ The app will be up and running at http://localhost:5173. Note that you can't dev
 - **I'm running into an error when setting up the backend. How can I fix it?** [Try this](https://github.com/abi/screenshot-to-code/issues/3#issuecomment-1814777959). If that still doesn't work, open an issue.
 - **How do I get an OpenAI API key?** See https://github.com/abi/screenshot-to-code/blob/main/Troubleshooting.md
+- **How can I configure an OpenAI proxy?** - you can configure the OpenAI base URL if you need to use a proxy: Set OPENAI_BASE_URL in the `backend/.env` or directly in the UI in the settings dialog
 - **How can I provide feedback?** For feedback, feature requests and bug reports, open an issue or ping me on [Twitter](https://twitter.com/_abi_).
 
 ## 📚 Examples

From 04cb502be9a0a5deac3738c4c8e30cf3e5b20c67 Mon Sep 17 00:00:00 2001
From: Abi Raja
Date: Fri, 22 Mar 2024 13:51:23 -0400
Subject: [PATCH 3/4] Update README.md

---
 README.md | 41 +++++++++++++++++++++++------------------
 1 file changed, 23 insertions(+), 18 deletions(-)

diff --git a/README.md b/README.md
index c58cc81..a6b9edc 100644
--- a/README.md
+++ b/README.md
@@ -1,23 +1,37 @@
 # screenshot-to-code
 
-This simple app converts a screenshot to code (HTML/Tailwind CSS, or React or Bootstrap or Vue).
It uses GPT-4 Vision (or Claude 3) to generate the code and DALL-E 3 to generate similar-looking images. You can now also enter a URL to clone a live website.
-
-🆕 Now, supporting Claude 3!
+A simple tool to convert screenshots, mockups and Figma designs into clean, functional code using AI.
 
 https://github.com/abi/screenshot-to-code/assets/23818/6cebadae-2fe3-4986-ac6a-8fb9db030045
 
+Supported stacks:
+
+- HTML + Tailwind
+- React + Tailwind
+- Vue + Tailwind
+- Bootstrap
+- Ionic + Tailwind
+- SVG
+
+Supported AI models:
+
+- GPT-4 Vision
+- Claude 3 Sonnet (faster, and on par with or better than GPT-4 Vision for many inputs)
+- DALL-E 3 for image generation
+
 See the [Examples](#-examples) section below for more demos.
 
+We also just added experimental support for taking a video/screen recording of a website in action and turning that into a functional prototype.
+
+![google in app quick 3](https://github.com/abi/screenshot-to-code/assets/23818/8758ffa4-9483-4b9b-bb66-abd6d1594c33)
+
+[Learn more about video here](https://github.com/abi/screenshot-to-code/wiki/Screen-Recording-to-Code).
+
 [Follow me on Twitter for updates](https://twitter.com/_abi_).
 
 ## 🚀 Try It Out!
 
-🆕 [Try it here](https://screenshottocode.com) (bring your own OpenAI key - **your key must have access to GPT-4 Vision. See [FAQ](#%EF%B8%8F-faqs) section below for details**). Or see [Getting Started](#-getting-started) below for local install instructions.
-
-## 🌟 Recent Updates
-
-- Mar 8 - 🔥🎉🎁 Video-to-app: turn videos/screen recordings into functional apps
-- Mar 5 - Added support for Claude Sonnet 3 (as capable as or better than GPT-4 Vision, and faster!)
+🆕 [Try it live on the hosted version](https://screenshottocode.com) (bring your own OpenAI key - **your key must have access to GPT-4 Vision. See [FAQ](#%EF%B8%8F-faqs) section below for details**). Or see [Getting Started](#-getting-started) below for local install instructions.
 ## 🛠 Getting Started
 
@@ -53,15 +67,6 @@ For debugging purposes, if you don't want to waste GPT4-Vision credits, you can
 MOCK=true poetry run uvicorn main:app --reload --port 7001
 ```
 
-## Screen Recording to prototype (experimental)
-
-https://github.com/abi/screenshot-to-code/assets/23818/1468bef4-164f-4046-a6c8-4cfc40a5cdff
-
-Record yourself using any website or app or even a Figma prototype, drag & drop in a video and in a few minutes, get a functional, similar-looking app.
-
-[You need an Anthropic API key for this functionality. Follow instructions here.](https://github.com/abi/screenshot-to-code/blob/main/blog/video-to-app.md)
-
-
 ## Docker
 
 If you have Docker installed on your system, in the root directory, run:

From fc9b2e0530c413235b2b7996e5b6215303d54eb3 Mon Sep 17 00:00:00 2001
From: Abi Raja
Date: Mon, 25 Mar 2024 11:44:54 -0400
Subject: [PATCH 4/4] Update README.md

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index a6b9edc..42ac2aa 100644
--- a/README.md
+++ b/README.md
@@ -83,6 +83,7 @@ The app will be up and running at http://localhost:5173. Note that you can't dev
 - **I'm running into an error when setting up the backend. How can I fix it?** [Try this](https://github.com/abi/screenshot-to-code/issues/3#issuecomment-1814777959). If that still doesn't work, open an issue.
 - **How do I get an OpenAI API key?** See https://github.com/abi/screenshot-to-code/blob/main/Troubleshooting.md
 - **How can I configure an OpenAI proxy?** - you can configure the OpenAI base URL if you need to use a proxy: Set OPENAI_BASE_URL in the `backend/.env` or directly in the UI in the settings dialog
+- **How can I update the backend host that my front-end connects to?** - Configure `VITE_HTTP_BACKEND_URL` and `VITE_WS_BACKEND_URL` in `frontend/.env.local`. For example, set `VITE_HTTP_BACKEND_URL=http://124.10.20.1:7001`.
 - **How can I provide feedback?** For feedback, feature requests and bug reports, open an issue or ping me on [Twitter](https://twitter.com/_abi_).
 
 ## 📚 Examples
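Taken together, the patches above leave the README describing three pieces of local configuration: API keys in `backend/.env`, mock mode via `MOCK=true`, and a backend-URL override for the frontend. A consolidated sketch of those env files, assuming the `backend/` and `frontend/` directory layout the patches reference (the key values, and the `ws://` scheme for the websocket URL, are placeholders, not values stated in the patches):

```shell
# Sketch of the env setup described in the patches; mkdir is used here only
# so the sketch is self-contained when run outside a repo checkout.
mkdir -p backend frontend

# Backend keys: OPENAI_API_KEY is required; ANTHROPIC_API_KEY only if you use
# Claude Sonnet or the experimental video support. Values are placeholders.
printf 'OPENAI_API_KEY=sk-your-key\n' > backend/.env
printf 'ANTHROPIC_API_KEY=sk-ant-your-key\n' >> backend/.env

# Frontend override for a non-default backend host; the ws:// scheme for the
# websocket URL is an assumption.
printf 'VITE_HTTP_BACKEND_URL=http://124.10.20.1:7001\n' > frontend/.env.local
printf 'VITE_WS_BACKEND_URL=ws://124.10.20.1:7001\n' >> frontend/.env.local

# Then, per the README: install and start the backend (MOCK=true skips real
# GPT-4 Vision calls while debugging), and run the frontend separately.
# pip install poetry && cd backend && poetry install
# MOCK=true poetry run uvicorn main:app --reload --port 7001
```

The commented-out commands at the end mirror the backend instructions quoted in the patches and are not executed by the sketch itself.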