diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
new file mode 100644
index 0000000..386e7e3
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,21 @@
+---
+name: Bug report
+about: Create a report to help us improve
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+**Describe the bug**
+A clear and concise description of what the bug is.
+
+**To Reproduce**
+Steps to reproduce the behavior:
+1. Go to '...'
+2. Click on '....'
+3. Scroll down to '....'
+4. See error
+
+**Screenshots of backend AND frontend terminal logs**
+If applicable, add screenshots of your backend and frontend terminal logs to help explain the problem.
diff --git a/.github/ISSUE_TEMPLATE/custom.md b/.github/ISSUE_TEMPLATE/custom.md
new file mode 100644
index 0000000..48d5f81
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/custom.md
@@ -0,0 +1,10 @@
+---
+name: Custom issue template
+about: Describe this issue template's purpose here.
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+
diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md
new file mode 100644
index 0000000..bbcbbe7
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/feature_request.md
@@ -0,0 +1,20 @@
+---
+name: Feature request
+about: Suggest an idea for this project
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+**Is your feature request related to a problem? Please describe.**
+A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
+
+**Describe the solution you'd like**
+A clear and concise description of what you want to happen.
+
+**Describe alternatives you've considered**
+A clear and concise description of any alternative solutions or features you've considered.
+
+**Additional context**
+Add any other context or screenshots about the feature request here.
diff --git a/README.md b/README.md
index a56c528..4b681b1 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
# screenshot-to-code
-A simple tool to convert screenshots, mockups and Figma designs into clean, functional code using AI.
+A simple tool to convert screenshots, mockups and Figma designs into clean, functional code using AI. **Now supporting GPT-4o!**
https://github.com/abi/screenshot-to-code/assets/23818/6cebadae-2fe3-4986-ac6a-8fb9db030045
@@ -15,8 +15,10 @@ Supported stacks:
Supported AI models:
-- GPT-4 Vision
-- Claude 3 Sonnet (faster, and on par or better than GPT-4 vision for many inputs)
+- GPT-4o - Best model!
+- GPT-4 Turbo (Apr 2024)
+- GPT-4 Vision (Nov 2023)
+- Claude 3 Sonnet
- DALL-E 3 for image generation
See the [Examples](#-examples) section below for more demos.
@@ -82,7 +84,7 @@ The app will be up and running at http://localhost:5173. Note that you can't dev
- **I'm running into an error when setting up the backend. How can I fix it?** [Try this](https://github.com/abi/screenshot-to-code/issues/3#issuecomment-1814777959). If that still doesn't work, open an issue.
- **How do I get an OpenAI API key?** See https://github.com/abi/screenshot-to-code/blob/main/Troubleshooting.md
-- **How can I configure an OpenAI proxy?** - you can configure the OpenAI base URL if you need to use a proxy: Set OPENAI_BASE_URL in the `backend/.env` or directly in the UI in the settings dialog
+- **How can I configure an OpenAI proxy?** - If you're not able to access the OpenAI API directly (due to e.g. country restrictions), you can try a VPN or configure the OpenAI base URL to point at a proxy: set OPENAI_BASE_URL in `backend/.env` or directly in the UI in the settings dialog. Make sure the URL includes "v1" in the path, so it looks like this: `https://xxx.xxxxx.xxx/v1`.
- **How can I update the backend host that my front-end connects to?** - Configure VITE_HTTP_BACKEND_URL and VITE_WS_BACKEND_URL in front/.env.local For example, set VITE_HTTP_BACKEND_URL=http://124.10.20.1:7001
- **Seeing UTF-8 errors when running the backend?** - On windows, open the .env file with notepad++, then go to Encoding and select UTF-8.
- **How can I provide feedback?** For feedback, feature requests and bug reports, open an issue or ping me on [Twitter](https://twitter.com/_abi_).
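The proxy FAQ in the README hunk above can be sanity-checked in code. Below is a minimal sketch of resolving and validating the base URL; the `resolve_openai_base_url` helper is a hypothetical name for illustration, not a function in this repo, which reads OPENAI_BASE_URL through its own settings:

```python
import os
from urllib.parse import urlparse


def resolve_openai_base_url(default: str = "https://api.openai.com/v1") -> str:
    """Return the OpenAI base URL, preferring the OPENAI_BASE_URL env var.

    Proxy URLs must include the "v1" path segment, e.g. a (hypothetical)
    https://proxy.example.com/v1 -- otherwise API calls may fail.
    """
    url = os.environ.get("OPENAI_BASE_URL", "").strip() or default
    # Normalize a trailing slash, then require the path to end in /v1.
    if not urlparse(url).path.rstrip("/").endswith("/v1"):
        raise ValueError(f"OPENAI_BASE_URL should end in /v1, got: {url}")
    return url
```

Validating the URL up front yields a clear error at startup instead of an opaque failure from the API client later.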
diff --git a/Troubleshooting.md b/Troubleshooting.md
index 3891db3..89aa3ba 100644
--- a/Troubleshooting.md
+++ b/Troubleshooting.md
@@ -11,7 +11,8 @@ You don't need a ChatGPT Pro account. Screenshot to code uses API keys from your
5. Go to Settings > Limits and check at the bottom of the page, your current tier has to be "Tier 1" to have GPT4 access
-6. Go to Screenshot to code and paste it in the Settings dialog under OpenAI key (gear icon). Your key is only stored in your browser. Never stored on our servers.
+6. Navigate to the OpenAI [API keys](https://platform.openai.com/api-keys) page, then create and copy a new secret key.
+7. Go to Screenshot to code and paste it in the Settings dialog under OpenAI key (gear icon). Your key is only stored in your browser. Never stored on our servers.
## Still not working?
diff --git a/backend/llm.py b/backend/llm.py
index 3d653b2..e541046 100644
--- a/backend/llm.py
+++ b/backend/llm.py
@@ -13,6 +13,7 @@ from utils import pprint_prompt
class Llm(Enum):
GPT_4_VISION = "gpt-4-vision-preview"
GPT_4_TURBO_2024_04_09 = "gpt-4-turbo-2024-04-09"
+ GPT_4O_2024_05_13 = "gpt-4o-2024-05-13"
CLAUDE_3_SONNET = "claude-3-sonnet-20240229"
CLAUDE_3_OPUS = "claude-3-opus-20240229"
CLAUDE_3_HAIKU = "claude-3-haiku-20240307"
@@ -47,7 +48,11 @@ async def stream_openai_response(
}
# Add 'max_tokens' only if the model is a GPT4 vision or Turbo model
- if model == Llm.GPT_4_VISION or model == Llm.GPT_4_TURBO_2024_04_09:
+ if (
+ model == Llm.GPT_4_VISION
+ or model == Llm.GPT_4_TURBO_2024_04_09
+ or model == Llm.GPT_4O_2024_05_13
+ ):
params["max_tokens"] = 4096
stream = await client.chat.completions.create(**params) # type: ignore
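The chained `==` checks in the hunk above grow with each new model; one common alternative is set membership. A sketch under that assumption (the `MODELS_WITH_MAX_TOKENS` set and `build_params` helper are illustrative names, not code from `backend/llm.py`):

```python
from enum import Enum


class Llm(Enum):
    GPT_4_VISION = "gpt-4-vision-preview"
    GPT_4_TURBO_2024_04_09 = "gpt-4-turbo-2024-04-09"
    GPT_4O_2024_05_13 = "gpt-4o-2024-05-13"
    CLAUDE_3_SONNET = "claude-3-sonnet-20240229"


# OpenAI models that get an explicit completion cap in this sketch.
MODELS_WITH_MAX_TOKENS = {
    Llm.GPT_4_VISION,
    Llm.GPT_4_TURBO_2024_04_09,
    Llm.GPT_4O_2024_05_13,
}


def build_params(model: Llm) -> dict:
    """Build keyword arguments for a chat completion request."""
    params: dict = {"model": model.value}
    if model in MODELS_WITH_MAX_TOKENS:
        params["max_tokens"] = 4096
    return params
```

Adding a future model then means one new enum member and, if needed, one set entry, rather than extending the `or` chain.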
diff --git a/backend/routes/evals.py b/backend/routes/evals.py
index 798a9d8..22262cd 100644
--- a/backend/routes/evals.py
+++ b/backend/routes/evals.py
@@ -7,10 +7,13 @@ from evals.config import EVALS_DIR
router = APIRouter()
+# Update this if the number of outputs generated per input changes
+N = 1
+
class Eval(BaseModel):
input: str
- output: str
+ outputs: list[str]
@router.get("/evals")
@@ -25,21 +28,27 @@ async def get_evals():
input_file_path = os.path.join(input_dir, file)
input_file = await image_to_data_url(input_file_path)
- # Construct the corresponding output file name
- output_file_name = file.replace(".png", ".html")
- output_file_path = os.path.join(output_dir, output_file_name)
+ # Construct the corresponding output file names
+ output_file_names = [
+ file.replace(".png", f"_{i}.html") for i in range(0, N)
+        ]
- # Check if the output file exists
- if os.path.exists(output_file_path):
- with open(output_file_path, "r") as f:
- output_file_data = f.read()
- else:
- output_file_data = "Output file not found."
+ output_files_data: list[str] = []
+ for output_file_name in output_file_names:
+ output_file_path = os.path.join(output_dir, output_file_name)
+ # Check if the output file exists
+ if os.path.exists(output_file_path):
+ with open(output_file_path, "r") as f:
+ output_files_data.append(f.read())
+ else:
+                output_files_data.append("Output file not found.")
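The evals change above maps each input `foo.png` to `N` output files `foo_0.html` through `foo_{N-1}.html`, substituting a placeholder string when a file is missing. A self-contained sketch of that convention (the helper names are illustrative, not functions in `backend/routes/evals.py`):

```python
import os

# Update this if the number of outputs generated per input changes.
N = 1


def output_names(input_file: str, n: int = N) -> list[str]:
    """Map an input screenshot name to its expected output file names."""
    return [input_file.replace(".png", f"_{i}.html") for i in range(n)]


def read_outputs(output_dir: str, input_file: str, n: int = N) -> list[str]:
    """Read each expected output, substituting a placeholder if missing."""
    data: list[str] = []
    for name in output_names(input_file, n):
        path = os.path.join(output_dir, name)
        if os.path.exists(path):
            with open(path, "r") as f:
                data.append(f.read())
        else:
            data.append("Output file not found.")
    return data
```

Keeping the placeholder in the list (rather than skipping missing files) preserves positional alignment between inputs and outputs in the eval UI.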