Commit Graph

180 Commits

Author SHA1 Message Date
clean99
02c3178b6b feat: update api to support image comparison 2023-11-21 21:14:45 +08:00
clean99
4290c70548 feat: update prompt to support image comparison 2023-11-21 21:14:25 +08:00
Nothing1024
e69269d844 Feat: OpenAI Base URL supported 2023-11-21 16:05:23 +08:00
Abi Raja
bf23d3198a support screenshotting a URL with ScreenshotOne 2023-11-20 11:48:33 -05:00
Abi Raja
87bbe4987a Merge branch 'main' into pr/7 2023-11-19 21:44:25 -05:00
Abi Raja
4ae04553df simplify dockerfile and update docker-compose to use 7001 2023-11-19 21:39:10 -05:00
Abi Raja
dbf89928ec add env var for where to store logs 2023-11-17 12:15:38 -05:00
Abi Raja
1cf4d544d5 fix bug with print statement when no key is sent in 2023-11-16 22:45:36 -05:00
Abi Raja
ba6a6828c4 update error message 2023-11-16 22:16:34 -05:00
Abi Raja
3adec32ef1 add build script for render backend deployment 2023-11-16 19:03:05 -05:00
Abi Raja
ee9b40d990 support setting openai api key on the client side 2023-11-16 18:12:07 -05:00
Abi Raja
371dddda3f add a settings dialog to allow you to disable the image generation 2023-11-16 17:37:03 -05:00
Abi Raja
68dd33cf06 return early if there are no images to replace 2023-11-16 15:11:54 -05:00
Abi Raja
640f41619c prettify code after image generation 2023-11-16 14:43:50 -05:00
Jonathan Adly
fdd3880162 Add dockerfile & docker compose 2023-11-16 13:08:18 -05:00
Abi Raja
0dcefb34fb remove readme in pyproject.toml 2023-11-16 10:51:46 -05:00
Abi Raja
6b1c5aecef fix bad gitignore and missing file 2023-11-15 19:42:47 -05:00
Abi Raja
c061c9b610 improve image cache so we don't re-generate the same images on update 2023-11-15 17:31:01 -05:00
Abi Raja
7ed13c5fd4 only need to generate images that use placeholders from https://placehold.co 2023-11-15 16:19:06 -05:00
Abi Raja
db56dbd3e6 support updating the initially generated version 2023-11-15 15:08:59 -05:00
Abi Raja
51c7334c0e add image generation for placeholder images 2023-11-14 23:05:41 -05:00
Abi Raja
8be1b42fb4 add a console to show progress 2023-11-14 17:43:51 -05:00
Abi Raja
beff936e4e update prompt to clarify font awesome <link> 2023-11-14 17:23:02 -05:00
Abi Raja
593eb5ba63 fix toasts, error capture and get end to end working 2023-11-14 15:41:04 -05:00
Abi Raja
3f4cf2895e better UX for different app states 2023-11-14 15:31:24 -05:00
Abi Raja
86dd82e358 write logs to file 2023-11-14 15:26:14 -05:00
Abi Raja
82fc80b3fc aider: Changed file writing operation to write into a new file in the run_logs directory for each run using the current timestamp as the filename.
# Aider chat conversation:

ASSISTANT: To achieve this, we need to modify the `backend/main.py` file. We will change the file writing operation to write into a new file in the `run_logs` directory for each run. We will use the current timestamp to create a unique filename for each run.

Here is the step-by-step plan:

1. Import the `datetime` and `os` modules.
2. Create a `run_logs` directory if it doesn't exist.
3. Generate a unique filename using the current timestamp.
4. Change the file writing operation to write into the new file in the `run_logs` directory.

Here is the *edit block* for these changes:

```python
backend/main.py
<<<<<<< HEAD
# Load environment variables first
from dotenv import load_dotenv

from prompts import assemble_prompt

load_dotenv()

from fastapi import FastAPI, WebSocket
from llm import stream_openai_response

app = FastAPI()

@app.websocket("/generate-code")
async def stream_code_test(websocket: WebSocket):
    await websocket.accept()

    result = await websocket.receive_json()

    async def process_chunk(content):
        await websocket.send_json({"type": "chunk", "value": content})

    messages = assemble_prompt("")
    print(messages)

    # Write the messages dict into a file for debugging
    with open("messages.json", "w") as f:
        f.write(str(messages))

    await stream_openai_response(
        messages,
        lambda x: process_chunk(x),
    )
=======
# Load environment variables first
from dotenv import load_dotenv
import os
from datetime import datetime

from prompts import assemble_prompt

load_dotenv()

from fastapi import FastAPI, WebSocket
from llm import stream_openai_response

app = FastAPI()

@app.websocket("/generate-code")
async def stream_code_test(websocket: WebSocket):
    await websocket.accept()

    result = await websocket.receive_json()

    async def process_chunk(content):
        await websocket.send_json({"type": "chunk", "value": content})

    messages = assemble_prompt("")
    print(messages)

    # Create run_logs directory if it doesn't exist
    if not os.path.exists('run_logs'):
        os.makedirs('run_logs')

    # Generate a unique filename using the current timestamp
    filename = datetime.now().strftime('run_logs/messages_%Y%m%d_%H%M%S.json')

    # Write the messages dict into a new file for each run
    with open(filename, "w") as f:
        f.write(str(messages))

    await stream_openai_response(
        messages,
        lambda x: process_chunk(x),
    )
>>>>>>> updated
```
2023-11-14 15:00:23 -05:00
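The run-log pattern introduced in the edit block above can be written more compactly: `os.makedirs` accepts `exist_ok=True`, which replaces the explicit existence check. A minimal sketch, standalone and outside the FastAPI handler (`write_run_log` is a hypothetical helper name, not part of the repository):

```python
import os
from datetime import datetime


def write_run_log(messages, log_dir="run_logs"):
    # Create the log directory if needed; exist_ok avoids a separate check.
    os.makedirs(log_dir, exist_ok=True)
    # Unique filename per run, keyed by the current timestamp.
    filename = datetime.now().strftime(f"{log_dir}/messages_%Y%m%d_%H%M%S.json")
    # Write the messages for debugging, as in the commit above.
    with open(filename, "w") as f:
        f.write(str(messages))
    return filename
```

Note that `str(messages)` produces Python-repr output rather than strict JSON despite the `.json` extension; `json.dump` would be the stricter choice if the logs are consumed by other tools.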
Abi Raja
4bb012922b Write messages dict into a file for debugging. 2023-11-14 15:00:22 -05:00
Abi Raja
54ddde2820 add a prompt library 2023-11-14 14:56:46 -05:00
Abi Raja
ce763782c4 add backend with openai calling 2023-11-14 14:40:17 -05:00