Commit Graph

180 Commits

Author SHA1 Message Date
Abi Raja
2ab95eb2c0 upgrade uvicorn dependency and add a backend start python script 2024-01-08 09:35:02 -08:00
Abi Raja
9b728d034b add experimental support for SVG 2023-12-14 10:18:41 -05:00
Abi Raja
f1a9859384 fix bug with history not being updated if image generation fails 2023-12-14 08:29:04 -05:00
Abi Raja
d23cec9bc0 add a front-end for scoring eval results 2023-12-13 16:38:26 -05:00
Abi Raja
896ac66ac5 add a simple eval script to batch run inferences 2023-12-13 12:50:34 -05:00
Abi Raja
dd65a6e1e4 add tests for the new prompts 2023-12-10 15:49:21 -05:00
Abi Raja
356dd428b8 re-org files and improve tests 2023-12-10 10:50:02 -05:00
Abi Raja
dcb95431ea update prompts for all stack combos of imported code 2023-12-10 10:34:03 -05:00
Abi Raja
e8e3d4cb6f make user set the stack when importing code 2023-12-10 10:22:05 -05:00
Abi Raja
52fee9e49b initial implementation of importing from code 2023-12-09 21:00:18 -05:00
Abi Raja
435402bc85 split main.py into appropriate routes files 2023-12-09 15:46:42 -05:00
Abi Raja
6a28ee2d3c strictly type python backend 2023-12-09 15:34:16 -05:00
Abi Raja
0301f24fd8 fix up pretty printing 2023-12-07 11:26:48 -05:00
Abi Raja
f0101036a0 return failure reason to front-end 2023-12-06 10:50:22 -05:00
Abi Raja
70094ea765 handle openai.RateLimitError 2023-12-04 16:43:31 -05:00
Abi Raja
e3240511ab handle model not found error better 2023-12-04 16:37:49 -05:00
Abi Raja
26447ce15d handle openai.AuthenticationError 2023-12-04 16:32:17 -05:00
Abi Raja
96d45cf1c2 add default backend home route so people know it's running correctly 2023-12-03 19:58:22 -05:00
Abi Raja
a98b9d83f0 close OpenAI client 2023-12-03 14:45:39 -05:00
Abi Raja
0cc2c78f55 move websocket close to fix memory leak 2023-12-03 14:04:40 -05:00
Abi Raja
3e34c40e73 update libraries 2023-12-03 14:03:38 -05:00
Abi Raja
f39bd285bd remove comment 2023-12-03 13:08:32 -05:00
Abi Raja
dc28bd5e6b fix and add unit test 2023-11-30 16:07:08 -05:00
Abi Raja
414d83c458 catch prompt assembly error 2023-11-30 16:04:03 -05:00
Abi Raja
f51e70d701 redo how output settings is configured 2023-11-30 15:58:01 -05:00
Abi Raja
45a64326f6 Merge branch 'main' into pr/122 2023-11-30 13:56:38 -05:00
Abi Raja
7a76621e80 Disable user-specified OpenAI Base URL in prod 2023-11-30 12:07:35 -05:00
Abi Raja
aae7c0e468 fix type annotation 2023-11-30 11:56:51 -05:00
Abi Raja
572dedbd01 fix typo 2023-11-30 11:51:37 -05:00
Abi Raja
fa6f57914b fix issue with openAiBaseURL not being set 2023-11-30 11:49:59 -05:00
Abi Raja
339f07373a Merge branch 'main' into pr/62 2023-11-30 11:36:39 -05:00
Abi Raja
13888db9b5 update URL to prod URL 2023-11-29 14:28:36 -05:00
Abi Raja
b904710cfc add secret to access the backend 2023-11-29 14:00:45 -05:00
Abi Raja
7889e0c3d5 call backend to validate the access token 2023-11-29 12:00:33 -05:00
Abi Raja
8a242103fc use platform api key if the access code is correct 2023-11-29 11:38:53 -05:00
Abi Raja
b46d08b416 read access code from params 2023-11-29 11:35:04 -05:00
dialmedu
9bee5c79b8 Add prompt ionic support 2023-11-28 23:50:04 -05:00
Abi Raja
8c08d992fd print system prompt info in the backend logs 2023-11-28 14:33:03 -05:00
Abi Raja
dfe8ae4164 add support for React 2023-11-28 14:05:59 -05:00
Abi Raja
d6b3bf8573 add unit tests for prompts 2023-11-28 12:23:08 -05:00
Abi Raja
397fa0630f add support for outputs in bootstrap 2023-11-28 09:47:57 -05:00
Abi Raja
4c71db12b4 add some more backend logs to help with debugging 2023-11-27 21:22:20 -05:00
Abi Raja
e8874fdbd4 coerce to bool 2023-11-23 11:30:19 -05:00
Abi Raja
daa6218f2b Merge branch 'main' into pr/79 2023-11-23 11:26:57 -05:00
Abi Raja
c20b2015ee remove newline 2023-11-23 11:20:58 -05:00
Abi Raja
4d3c16defb improve names 2023-11-23 11:20:30 -05:00
vagusx
ba9aa55041 feat: add MOCK env variable for debugging purposes 2023-11-22 14:17:03 +08:00
clean99
cc22292941 Merge branch 'main' into feat/add-image-comparison 2023-11-22 10:14:22 +08:00
Abi Raja
49ce6dcc88 rename method 2023-11-21 10:52:33 -05:00
clean99
44479a7c84 fix: prompt 2023-11-21 21:22:54 +08:00
clean99
02c3178b6b feat: update api to support image comparison 2023-11-21 21:14:45 +08:00
clean99
4290c70548 feat: update prompt to support image comparison 2023-11-21 21:14:25 +08:00
Nothing1024
e69269d844 Feat: OpenAI Base URL supported 2023-11-21 16:05:23 +08:00
Abi Raja
bf23d3198a support screenshotting a URL with ScreenshotOne 2023-11-20 11:48:33 -05:00
Abi Raja
87bbe4987a Merge branch 'main' into pr/7 2023-11-19 21:44:25 -05:00
Abi Raja
4ae04553df simplify dockerfile and update docker-compose to use 7001 2023-11-19 21:39:10 -05:00
Abi Raja
dbf89928ec add env var for where to store logs 2023-11-17 12:15:38 -05:00
Abi Raja
1cf4d544d5 fix bug with print statement when no key is sent in 2023-11-16 22:45:36 -05:00
Abi Raja
ba6a6828c4 update error message 2023-11-16 22:16:34 -05:00
Abi Raja
3adec32ef1 add build script for render backend deployment 2023-11-16 19:03:05 -05:00
Abi Raja
ee9b40d990 support setting openai api key on the client side 2023-11-16 18:12:07 -05:00
Abi Raja
371dddda3f add a settings dialog to allow you to disable the image generation 2023-11-16 17:37:03 -05:00
Abi Raja
68dd33cf06 return early if there are no images to replace 2023-11-16 15:11:54 -05:00
Abi Raja
640f41619c prettify code after image generation 2023-11-16 14:43:50 -05:00
Jonathan Adly
fdd3880162 Add Dockerfile & docker compose 2023-11-16 13:08:18 -05:00
Abi Raja
0dcefb34fb remove readme in pyproject.toml 2023-11-16 10:51:46 -05:00
Abi Raja
6b1c5aecef fix bad gitignore and missing file 2023-11-15 19:42:47 -05:00
Abi Raja
c061c9b610 improve image cache so we don't re-generate the same images on update 2023-11-15 17:31:01 -05:00
Abi Raja
7ed13c5fd4 only need to generate images that use placeholders from https://placehold.co 2023-11-15 16:19:06 -05:00
Abi Raja
db56dbd3e6 support updating the initially generated version 2023-11-15 15:08:59 -05:00
Abi Raja
51c7334c0e add image generation for placeholder images 2023-11-14 23:05:41 -05:00
Abi Raja
8be1b42fb4 add a console to show progress 2023-11-14 17:43:51 -05:00
Abi Raja
beff936e4e update prompt to clarify font awesome <link> 2023-11-14 17:23:02 -05:00
Abi Raja
593eb5ba63 fix toasts, error capture and get end to end working 2023-11-14 15:41:04 -05:00
Abi Raja
3f4cf2895e better UX for different app states 2023-11-14 15:31:24 -05:00
Abi Raja
86dd82e358 write logs to file 2023-11-14 15:26:14 -05:00
Abi Raja
82fc80b3fc aider: Changed file writing operation to write into a new file in the run_logs directory for each run using the current timestamp as the filename.
# Aider chat conversation:

ASSISTANT: To achieve this, we need to modify the `backend/main.py` file. We will change the file writing operation to write into a new file in the `run_logs` directory for each run. We will use the current timestamp to create a unique filename for each run.

Here is the step-by-step plan:

1. Import the `datetime` and `os` modules.
2. Create a `run_logs` directory if it doesn't exist.
3. Generate a unique filename using the current timestamp.
4. Change the file writing operation to write into the new file in the `run_logs` directory.

Here is the *edit block* for these changes:

```python
backend/main.py
<<<<<<< HEAD
# Load environment variables first
from dotenv import load_dotenv

from prompts import assemble_prompt

load_dotenv()

from fastapi import FastAPI, WebSocket
from llm import stream_openai_response

app = FastAPI()

@app.websocket("/generate-code")
async def stream_code_test(websocket: WebSocket):
    await websocket.accept()

    result = await websocket.receive_json()

    async def process_chunk(content):
        await websocket.send_json({"type": "chunk", "value": content})

    messages = assemble_prompt("")
    print(messages)

    # Write the messages dict into a file for debugging
    with open("messages.json", "w") as f:
        f.write(str(messages))

    await stream_openai_response(
        messages,
        lambda x: process_chunk(x),
    )
=======
# Load environment variables first
from dotenv import load_dotenv
import os
from datetime import datetime

from prompts import assemble_prompt

load_dotenv()

from fastapi import FastAPI, WebSocket
from llm import stream_openai_response

app = FastAPI()

@app.websocket("/generate-code")
async def stream_code_test(websocket: WebSocket):
    await websocket.accept()

    result = await websocket.receive_json()

    async def process_chunk(content):
        await websocket.send_json({"type": "chunk", "value": content})

    messages = assemble_prompt("")
    print(messages)

    # Create run_logs directory if it doesn't exist
    if not os.path.exists('run_logs'):
        os.makedirs('run_logs')

    # Generate a unique filename using the current timestamp
    filename = datetime.now().strftime('run_logs/messages_%Y%m%d_%H%M%S.json')

    # Write the messages dict into a new file for each run
    with open(filename, "w") as f:
        f.write(str(messages))

    await stream_openai_response(
        messages,
        lambda x: process_chunk(x),
    )
>>>>>>> updated
```
2023-11-14 15:00:23 -05:00
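The four steps described in the aider commit above (import `os` and `datetime`, create `run_logs` if missing, build a timestamped filename, write the messages there) can be sketched as a standalone helper, stripped of the websocket context. The function name `write_run_log` is hypothetical; the directory name and filename pattern follow the commit's edit block:

```python
import os
from datetime import datetime


def write_run_log(messages, log_dir="run_logs"):
    """Write the messages object to a unique timestamped file in log_dir.

    Returns the path of the file that was written.
    """
    # Create the run_logs directory if it doesn't exist
    os.makedirs(log_dir, exist_ok=True)

    # Generate a unique filename using the current timestamp
    filename = datetime.now().strftime(
        os.path.join(log_dir, "messages_%Y%m%d_%H%M%S.json")
    )

    # Write the messages dict into a new file for this run
    with open(filename, "w") as f:
        f.write(str(messages))

    return filename
```

Note that `str(messages)` (as in the original edit block) produces a Python repr rather than valid JSON despite the `.json` extension; `json.dump` would be the stricter choice, but the sketch mirrors the commit as written.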
Abi Raja
4bb012922b Write messages dict into a file for debugging. 2023-11-14 15:00:22 -05:00
Abi Raja
54ddde2820 add a prompt library 2023-11-14 14:56:46 -05:00
Abi Raja
ce763782c4 add backend with openai calling 2023-11-14 14:40:17 -05:00