Commit Graph

53 Commits

Author SHA1 Message Date
Khajaameen455@
d20f402d89 Update main.py 2024-02-23 14:31:15 +05:30
If load_dotenv() is not called before importing modules that read environment variables from the .env file, those modules will see the variables as unset, which can lead to errors or unexpected behavior.
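The ordering pitfall this commit message describes can be shown without python-dotenv itself: any code that snapshots an environment variable before it is set (which is what happens when a module is imported before load_dotenv() runs) captures the missing value. A minimal sketch; `API_KEY` and `make_config` are illustrative stand-ins, not names from the repository:

```python
import os

# Stand-in for a module that reads configuration at import time
# (in the real codebase, e.g. a module reading OPENAI_API_KEY).
def make_config():
    return os.environ.get("API_KEY", "missing")

# Wrong order: the dependent code runs before the variable exists,
# so it captures the fallback value.
config_before = make_config()

# Right order: populate the environment first (this is what
# load_dotenv() does for .env entries), then read it.
os.environ["API_KEY"] = "sk-test"
config_after = make_config()

print(config_before, config_after)
```

This is why the fix is to call load_dotenv() at the very top of main.py, before any imports that depend on .env values.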
Abi Raja
d23cec9bc0 add a front-end for scoring eval results 2023-12-13 16:38:26 -05:00
Abi Raja
435402bc85 split main.py into appropriate routes files 2023-12-09 15:46:42 -05:00
Abi Raja
6a28ee2d3c strictly type python backend 2023-12-09 15:34:16 -05:00
Abi Raja
0301f24fd8 fix up pretty printing 2023-12-07 11:26:48 -05:00
Abi Raja
f0101036a0 return failure reason to front-end 2023-12-06 10:50:22 -05:00
Abi Raja
70094ea765 handle openai.RateLimitError 2023-12-04 16:43:31 -05:00
Abi Raja
e3240511ab handle model not found error better 2023-12-04 16:37:49 -05:00
Abi Raja
26447ce15d handle openai.AuthenticationError 2023-12-04 16:32:17 -05:00
Abi Raja
96d45cf1c2 add default backend home route so people know it's running correctly 2023-12-03 19:58:22 -05:00
Abi Raja
0cc2c78f55 move websocket close to fix memory leak 2023-12-03 14:04:40 -05:00
Abi Raja
f39bd285bd remove comment 2023-12-03 13:08:32 -05:00
Abi Raja
414d83c458 catch prompt assembly error 2023-11-30 16:04:03 -05:00
Abi Raja
f51e70d701 redo how output settings is configured 2023-11-30 15:58:01 -05:00
Abi Raja
45a64326f6 Merge branch 'main' into pr/122 2023-11-30 13:56:38 -05:00
Abi Raja
7a76621e80 Disable user-specified OpenAI Base URL in prod 2023-11-30 12:07:35 -05:00
Abi Raja
572dedbd01 fix typo 2023-11-30 11:51:37 -05:00
Abi Raja
fa6f57914b fix issue with openAiBaseURL not being set 2023-11-30 11:49:59 -05:00
Abi Raja
339f07373a Merge branch 'main' into pr/62 2023-11-30 11:36:39 -05:00
Abi Raja
7889e0c3d5 call backend to validate the access token 2023-11-29 12:00:33 -05:00
Abi Raja
8a242103fc use platform api key if the access code is correct 2023-11-29 11:38:53 -05:00
Abi Raja
b46d08b416 read access code from params 2023-11-29 11:35:04 -05:00
dialmedu
9bee5c79b8 Add prompt ionic support 2023-11-28 23:50:04 -05:00
Abi Raja
dfe8ae4164 add support for React 2023-11-28 14:05:59 -05:00
Abi Raja
397fa0630f add support for outputs in bootstrap 2023-11-28 09:47:57 -05:00
Abi Raja
4c71db12b4 add some more backend logs to help with debugging 2023-11-27 21:22:20 -05:00
Abi Raja
e8874fdbd4 coerce to bool 2023-11-23 11:30:19 -05:00
Abi Raja
daa6218f2b Merge branch 'main' into pr/79 2023-11-23 11:26:57 -05:00
Abi Raja
c20b2015ee remove newline 2023-11-23 11:20:58 -05:00
Abi Raja
4d3c16defb improve names 2023-11-23 11:20:30 -05:00
vagusx
ba9aa55041 feat: add MOCK env variable for debugging purposes 2023-11-22 14:17:03 +08:00
clean99
cc22292941 Merge branch 'main' into feat/add-image-comparison 2023-11-22 10:14:22 +08:00
Abi Raja
49ce6dcc88 rename method 2023-11-21 10:52:33 -05:00
clean99
02c3178b6b feat: update api to support image comparison 2023-11-21 21:14:45 +08:00
Nothing1024
e69269d844 Feat: OpenAI Base URL supported 2023-11-21 16:05:23 +08:00
Abi Raja
bf23d3198a support screenshotting a URL with ScreenshotOne 2023-11-20 11:48:33 -05:00
Abi Raja
dbf89928ec add env var for where to store logs 2023-11-17 12:15:38 -05:00
Abi Raja
1cf4d544d5 fix bug with print statement when no key is sent in 2023-11-16 22:45:36 -05:00
Abi Raja
ba6a6828c4 update error message 2023-11-16 22:16:34 -05:00
Abi Raja
ee9b40d990 support setting openai api key on the client side 2023-11-16 18:12:07 -05:00
Abi Raja
371dddda3f add a settings dialog to allow you to disable the image generation 2023-11-16 17:37:03 -05:00
Abi Raja
68dd33cf06 return early if there are no images to replace 2023-11-16 15:11:54 -05:00
Abi Raja
c061c9b610 improve image cache so we don't re-generate the same images on update 2023-11-15 17:31:01 -05:00
Abi Raja
db56dbd3e6 support updating the initially generated version 2023-11-15 15:08:59 -05:00
Abi Raja
51c7334c0e add image generation for placeholder images 2023-11-14 23:05:41 -05:00
Abi Raja
8be1b42fb4 add a console to show progress 2023-11-14 17:43:51 -05:00
Abi Raja
593eb5ba63 fix toasts, error capture and get end to end working 2023-11-14 15:41:04 -05:00
Abi Raja
3f4cf2895e better UX for different app states 2023-11-14 15:31:24 -05:00
Abi Raja
86dd82e358 write logs to file 2023-11-14 15:26:14 -05:00
Abi Raja
82fc80b3fc aider: Changed file writing operation to write into a new file in the run_logs directory for each run using the current timestamp as the filename.
# Aider chat conversation:

ASSISTANT: To achieve this, we need to modify the `backend/main.py` file. We will change the file writing operation to write into a new file in the `run_logs` directory for each run. We will use the current timestamp to create a unique filename for each run.

Here is the step-by-step plan:

1. Import the `datetime` and `os` modules.
2. Create a `run_logs` directory if it doesn't exist.
3. Generate a unique filename using the current timestamp.
4. Change the file writing operation to write into the new file in the `run_logs` directory.

Here is the *edit block* for these changes:

```python
backend/main.py
<<<<<<< HEAD
# Load environment variables first
from dotenv import load_dotenv

from prompts import assemble_prompt

load_dotenv()

from fastapi import FastAPI, WebSocket
from llm import stream_openai_response

app = FastAPI()

@app.websocket("/generate-code")
async def stream_code_test(websocket: WebSocket):
    await websocket.accept()

    result = await websocket.receive_json()

    async def process_chunk(content):
        await websocket.send_json({"type": "chunk", "value": content})

    messages = assemble_prompt("")
    print(messages)

    # Write the messages dict into a file for debugging
    with open("messages.json", "w") as f:
        f.write(str(messages))

    await stream_openai_response(
        messages,
        lambda x: process_chunk(x),
    )
=======
# Load environment variables first
from dotenv import load_dotenv
import os
from datetime import datetime

from prompts import assemble_prompt

load_dotenv()

from fastapi import FastAPI, WebSocket
from llm import stream_openai_response

app = FastAPI()

@app.websocket("/generate-code")
async def stream_code_test(websocket: WebSocket):
    await websocket.accept()

    result = await websocket.receive_json()

    async def process_chunk(content):
        await websocket.send_json({"type": "chunk", "value": content})

    messages = assemble_prompt("")
    print(messages)

    # Create run_logs directory if it doesn't exist
    if not os.path.exists('run_logs'):
        os.makedirs('run_logs')

    # Generate a unique filename using the current timestamp
    filename = datetime.now().strftime('run_logs/messages_%Y%m%d_%H%M%S.json')

    # Write the messages dict into a new file for each run
    with open(filename, "w") as f:
        f.write(str(messages))

    await stream_openai_response(
        messages,
        lambda x: process_chunk(x),
    )
>>>>>>> updated
```
2023-11-14 15:00:23 -05:00
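The "updated" side of the conflict block above boils down to a small timestamped-log helper. A standalone sketch of that logic (the `run_logs` directory and `messages_%Y%m%d_%H%M%S.json` filename pattern come from the commit; the `write_run_log` function name is illustrative), using `os.makedirs(..., exist_ok=True)` in place of the explicit existence check:

```python
import os
from datetime import datetime

def write_run_log(messages) -> str:
    """Write `messages` to a uniquely named file under run_logs/ and return the path."""
    # Idempotent: creates the directory only if it does not already exist.
    os.makedirs("run_logs", exist_ok=True)
    # One file per run, named by the current timestamp.
    filename = datetime.now().strftime("run_logs/messages_%Y%m%d_%H%M%S.json")
    with open(filename, "w") as f:
        f.write(str(messages))
    return filename

path = write_run_log([{"role": "system", "content": "..."}])
```

Note that second-resolution timestamps can collide if two runs start within the same second; the commit accepts that trade-off for debugging logs.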