readme tweaks

karpathy
2025-11-22 15:35:19 -08:00
parent 8affe1d794
commit 92e1fccb1b


@@ -2,22 +2,13 @@
![llmcouncil](header.jpg)
The idea of this repo is that instead of asking a question to your favorite LLM provider (e.g. OpenAI GPT 5.1, Google Gemini 3.0 Pro, Anthropic Claude Sonnet 4.5, xAI Grok 4), you can group them into your "LLM Council".
The idea of this repo is that instead of asking a question to your favorite LLM provider (e.g. OpenAI GPT 5.1, Google Gemini 3.0 Pro, Anthropic Claude Sonnet 4.5, xAI Grok 4, etc.), you can group them into your "LLM Council". This repo is a simple, local web app that essentially looks like ChatGPT, except that it uses OpenRouter to send your query to multiple LLMs, then asks them to review and rank each other's work, and finally has a Chairman LLM produce the final response.
This is a fully local, lightweight web app. The code has to be minimal, readable, simple, and not bloated with complexity. The idea is that the user runs the web app locally and accesses it via a localhost URL/port.
In a bit more detail, here is what happens when you submit a query:
UI/UX-wise, the app looks like ChatGPT: there is a sidebar on the left storing conversations, the user can create a new conversation or access past conversations, and the conversations persist.
The life of a single conversation looks as follows. The user writes up a query. Then:
1. **Stage 1: First opinions**. The user query is given to all LLMs individually, and the responses are collected. The individual responses are shown in a "tab view", so that the user can inspect them.
2. **Stage 2: Review**. Each individual LLM is given the responses of the other LLMs. The LLM identities are anonymized so that the LLM can't play favorites when judging their outputs. The LLM is asked to rank them in accuracy and insight.
3. **Stage 3: Final response**. The designated Chairman of the LLM Council (the strongest model) takes all of the models' responses and compiles them into a single final answer that is presented to the user.
Implementation details:
- The project uses [uv](https://docs.astral.sh/uv/) for project management.
- The project uses [OpenRouter](https://openrouter.ai/) to easily call models across all API providers with a single API key. The API key is stored in a `.env` file in the current repo.
1. **Stage 1: First opinions**. The user query is given to all LLMs individually, and the responses are collected. The individual responses are shown in a "tab view", so that the user can inspect them all one by one.
2. **Stage 2: Review**. Each individual LLM is given the responses of the other LLMs. Under the hood, the LLM identities are anonymized so that the LLM can't play favorites when judging their outputs. The LLM is asked to rank them in accuracy and insight.
3. **Stage 3: Final response**. The designated Chairman of the LLM Council takes all of the models' responses and compiles them into a single final answer that is presented to the user.
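To make the three stages above concrete, here is a minimal sketch of how such a council loop could be orchestrated with async httpx against OpenRouter (the stack the README describes below). The model IDs, prompts, `OPENROUTER_API_KEY` variable name, and the `ask`/`council` helpers are illustrative assumptions, not code copied from this repo.

```python
# Hypothetical sketch of the three council stages; not the repo's actual code.
# Assumes OPENROUTER_API_KEY is set; model IDs and prompts are examples only.
import asyncio
import os

import httpx

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
COUNCIL = ["openai/gpt-5.1", "google/gemini-3-pro-preview",
           "anthropic/claude-sonnet-4.5", "x-ai/grok-4"]   # illustrative IDs
CHAIRMAN = COUNCIL[1]                                       # illustrative choice

async def ask(client: httpx.AsyncClient, model: str, prompt: str) -> str:
    """One chat completion via OpenRouter's OpenAI-compatible endpoint."""
    resp = await client.post(
        OPENROUTER_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

async def council(query: str) -> str:
    async with httpx.AsyncClient() as client:
        # Stage 1: every council member answers the query independently.
        answers = await asyncio.gather(*(ask(client, m, query) for m in COUNCIL))

        # Stage 2: each member ranks the anonymized answers.
        anonymized = "\n\n".join(f"Response {i + 1}:\n{a}" for i, a in enumerate(answers))
        review_prompt = (f"Question: {query}\n\nAnonymized responses:\n{anonymized}\n\n"
                         "Rank these responses by accuracy and insight.")
        reviews = await asyncio.gather(*(ask(client, m, review_prompt) for m in COUNCIL))

        # Stage 3: the Chairman compiles everything into one final answer.
        final_prompt = (f"Question: {query}\n\nResponses:\n{anonymized}\n\n"
                        "Rankings:\n" + "\n\n".join(reviews) +
                        "\n\nWrite the single best final answer.")
        return await ask(client, CHAIRMAN, final_prompt)

if __name__ == "__main__":
    print(asyncio.run(council("Why is the sky blue?")))
```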
## Vibe Code Alert
@@ -27,6 +18,8 @@ This project was 99% vibe coded as a fun Saturday hack because I wanted to explo
### 1. Install Dependencies
The project uses [uv](https://docs.astral.sh/uv/) for project management.
**Backend:**
```bash
uv sync
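# (Assumption, not part of this hunk): the OpenRouter key is expected in a .env
# file at the repo root; the variable name below is a common convention and may
# differ from what this repo actually reads.
echo "OPENROUTER_API_KEY=sk-or-..." > .env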
@@ -91,4 +84,4 @@ Then open http://localhost:5173 in your browser.
- **Backend:** FastAPI (Python 3.10+), async httpx, OpenRouter API
- **Frontend:** React + Vite, react-markdown for rendering
- **Storage:** JSON files in `data/conversations/`
- **Package Management:** uv for Python, npm for JavaScript
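As a small illustration of the storage choice above (plain JSON files under `data/conversations/`), persistence can be as simple as the sketch below; the file naming and schema are assumptions for illustration, not necessarily the repo's actual format.

```python
# Hypothetical sketch of JSON-file conversation storage; schema and file
# naming are assumptions, not necessarily what this repo does.
import json
import uuid
from pathlib import Path

DATA_DIR = Path("data/conversations")

def save_conversation(messages: list[dict]) -> str:
    """Write one conversation to its own JSON file and return its id."""
    DATA_DIR.mkdir(parents=True, exist_ok=True)
    conv_id = str(uuid.uuid4())
    path = DATA_DIR / f"{conv_id}.json"
    path.write_text(json.dumps({"id": conv_id, "messages": messages}, indent=2))
    return conv_id

def load_conversation(conv_id: str) -> dict:
    """Read one conversation back from disk."""
    return json.loads((DATA_DIR / f"{conv_id}.json").read_text())
```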