Compare commits

vk/f591-im ... main (38 commits)
Commits:

79766d279d, 1570f79b62, cbedcd67f6, da044be68c, a7421b9350, acedc01e83,
bca4d6898f, 10588a18b9, 611de57bf8, add685a326, ff66bc9e1f, 49aaed5dec,
8c091b1e6d, 438e561da0, 06735bdfb1, f21abd6299, 2140c5facf, 0718a06c19,
34ee49410d, 8e0be6ca89, 9646931181, 506ea8a3b8, 8ea2afb259, 18351abb00,
2898d9e832, 9c14e51b43, d6bd56dace, 56c620473f, e1de6d016d, 6986c1b5ab,
00588ed1b8, b94219fc5b, a9f5f3f75d, b3e183eb0a, d22924f6aa, 4521755344,
0f36b2dde7, 65cd647f01
.dockerignore (new file, 9 lines)

@@ -0,0 +1,9 @@
.git
.podman
artifacts
.container-home
db.sqlite3
docker/data
signal-cli-config
__pycache__
*.pyc
.gitignore (vendored, 3 lines added)

@@ -166,3 +166,6 @@ oom
 node_modules/
 .podman/
+.beads/
+.sisyphus/
+.container-home/
.pre-commit-config.yaml

@@ -1,22 +1,22 @@
 repos:
   - repo: https://github.com/psf/black
-    rev: 23.1.0
+    rev: 26.3.0
     hooks:
       - id: black
         exclude: ^core/migrations
   - repo: https://github.com/PyCQA/isort
-    rev: 5.11.5
+    rev: 8.0.1
     hooks:
       - id: isort
         args: ["--profile", "black"]
   - repo: https://github.com/PyCQA/flake8
-    rev: 6.0.0
+    rev: 7.3.0
     hooks:
       - id: flake8
         args: [--max-line-length=88]
         exclude: ^core/migrations
   - repo: https://github.com/rtts/djhtml
-    rev: v2.0.0
+    rev: 3.0.10
     hooks:
       - id: djhtml
         args: [-t 2]

@@ -25,6 +25,6 @@ repos:
   - id: djjs
     exclude: ^core/static/js  # slow
   - repo: https://github.com/sirwart/ripsecrets.git
-    rev: v0.1.5
+    rev: v0.1.11
     hooks:
       - id: ripsecrets
AGENTS.md (17 lines changed)

@@ -2,7 +2,7 @@

 ## Overview

-GIA is a multi-transport communication platform bridging Signal, WhatsApp, XMPP, and Instagram through a Django web interface. It provides message relay, AI-powered workspace analysis, compose UX, and OSINT search. Stack: Python 3.11, Django 4.x, HTMX, Bulma CSS, SQLite, Redis, Docker Compose. Async runtime uses asyncio + uvloop.
+GIA is a multi-transport communication platform bridging Signal, WhatsApp, XMPP, and Instagram through a Django web interface. It provides message relay, AI-powered workspace analysis, compose UX, and OSINT search. Stack: Python 3.11, Django 4.x, HTMX, Bulma CSS, SQLite, Redis, Podman. Async runtime uses asyncio + uvloop.

 ## Structure

@@ -45,8 +45,8 @@ GIA/
 │   ├── templates/             # Django templates (75 files, partials/ heavy)
 │   ├── management/commands/   # ur (unified router), scheduling
 │   └── util/logs.py           # Custom colored logger — use logs.get_logger("name")
-├── Makefile                   # Docker Compose orchestration commands
-├── docker-compose.yml         # Services: app, asgi, ur, scheduling, redis, signal-cli
+├── Makefile                   # Podman + quadlet orchestration commands
+├── scripts/quadlet/           # Podman lifecycle scripts and unit rendering
 ├── Dockerfile                 # Python 3.11, venv at /venv
 ├── requirements.txt           # Pinned deps (django, openai, neonize, slixmpp, etc.)
 ├── stack.env                  # Runtime env vars (from stack.env.example)

@@ -56,14 +56,11 @@ GIA/

 ## Commands

 ```bash
-# All commands run via Docker Compose with stack.env
+# All commands run via Podman + quadlet with stack.env
 make build      # Build Docker images
 make run        # Start all services (quadlet manager)
 make stop       # Stop all services
 make log        # Tail logs
-make compose-run   # Start via docker-compose directly
-make compose-stop  # Stop via docker-compose
-make compose-log   # Tail via docker-compose

 # Database
 make migrate    # Run Django migrations

@@ -80,8 +77,8 @@ python manage.py test core.tests.test_foo.TestBar -v 2  # Single class
 python manage.py test core.tests.test_foo.TestBar.test_method -v 2  # Single test

 # Service restarts after code changes
-docker-compose restart ur          # Restart unified router
-docker-compose restart scheduling  # Restart scheduler
+podman restart ur_gia              # Restart unified router
+podman restart scheduling_gia      # Restart scheduler
 # uWSGI auto-reloads for app/core code changes
 ```

@@ -169,5 +166,5 @@ docker-compose restart scheduling  # Restart scheduler
 - **Unified Router** (`core/modules/router.py`): Management command `python manage.py ur` runs the event loop with all transport clients. Each client inherits `ClientBase` ABC.
 - **Transport Layer** (`core/clients/transport.py`): Shared cache-backed runtime state, command queuing, and attachment prep. All outbound media goes through `prepare_outbound_attachments()`.
 - **Settings Chain**: `app/settings.py` → imports `app/local_settings.py` (wildcard `*`) → env vars from `stack.env`. Feature flags: `WHATSAPP_ENABLED`, `INSTAGRAM_ENABLED`, `COMPOSE_WS_ENABLED`.
-- **Services in docker-compose**: `app` (uWSGI), `asgi` (uvicorn for WebSockets), `ur` (unified router), `scheduling` (APScheduler), `redis`, `signal-cli-rest-api`.
+- **Services in podman stack**: `app` (uWSGI), `asgi` (uvicorn for WebSockets), `ur` (unified router), `scheduling` (APScheduler), `redis`, `signal-cli-rest-api`.
 - **No test suite currently**: `core/tests.py` is empty scaffold; `core/tests/` has only `__init__.py`. Tests run via `make test MODULES=...` but need to be written.
CLAUDE.md (new file, 61 lines)

@@ -0,0 +1,61 @@
# GIA — Claude Code Rules

## Privacy: No Real Contact Data in Code

**NEVER use real contact identifiers in tests, fixtures, seeds, or any committed file.**

Real contact data includes: phone numbers, JIDs, email addresses, usernames, or any identifier belonging to an actual person in the user's contacts.

### Use fictitious data instead

| Type | Safe fictitious examples |
|---|---|
| UK mobile (E.164) | `+447700900001`, `+447700900002` (Ofcom-reserved range 07700 900000–900999) |
| UK mobile (no +) | `447700900001`, `447700900002` |
| US phone | `+15550001234`, `+15550009999` (555-0xxx NANP reserved range) |
| Email | `test@example.com`, `user@example.invalid` |
| WhatsApp JID | `447700900001@s.whatsapp.net`, `447700900001@g.us` |

### Why this matters

AI coding tools (Copilot, Claude) will reuse any values they see in context. A real number placed in a test becomes training signal and will be suggested in future completions — potentially leaking it further.

### Quick check

Before committing test files, verify no identifier matches a real person:

- No number outside the reserved fictitious ranges above
- No name that corresponds to a real contact used as a literal identifier

## Naming: Avoid Ambiguous Role Labels

**Never use "User", "Bot", "Us", or "Them" as role labels without qualification — these terms are context-dependent and misleading in this codebase.**

GIA acts in multiple roles simultaneously:

- It is a Django **User** (account holder) from the perspective of external services (XMPP, WhatsApp, Signal).
- It is a **component** (gateway/bot) from the perspective of contacts.
- The human who owns and operates the GIA instance is the **account holder** or **operator** (not "user", which collides with the `User` model).
- Remote people the system communicates with are **contacts**.

Preferred terms:

| Avoid | Prefer |
| ------------------ | --------------------------------------------------------------- |
| "User" (ambiguous) | "account holder" or "operator" (for the Django `User`) |
| "Bot" | "component" or "gateway" (for the XMPP/transport layer) |
| "Us" | name the specific actor: "GIA", "the component", "the operator" |
| "Them" | "contact" or "remote party" |

Apply this in: comments, template labels, log messages, and variable names.

## Runtime: uWSGI Reload File

The app container uses uWSGI with a single touch-based reload sentinel:

- Reload file: `/code/.uwsgi-reload`
- Config: [docker/uwsgi.ini](/code/xf/GIA/docker/uwsgi.ini)

Rules:

- Never run `python manage.py ...` on the host. Run Django management commands inside Podman, for example with `podman exec`.
- After changing templates or app code that should be picked up by the `gia` uWSGI service, touch `/code/.uwsgi-reload`.
- If the uWSGI config itself changes, touch `/code/.uwsgi-reload` and restart the `gia` container so the new config is loaded.
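The quick check above can be automated. A minimal sketch, assuming the reserved ranges from the table; the helper name and regexes are illustrative, not part of GIA:

```python
import re

# Ofcom-reserved UK drama range: 07700 900000-900999 (E.164 +447700900xxx).
UK_FICTITIOUS = re.compile(r"^\+?447700900\d{3}$")
# The 555 NANP numbers used in the table above (+1555xxxxxxx).
US_FICTITIOUS = re.compile(r"^\+?1555\d{7}$")


def is_fictitious_number(identifier: str) -> bool:
    """True if the identifier sits inside a reserved fictitious range.

    Strips a JID domain (e.g. @s.whatsapp.net) before matching.
    """
    digits = identifier.split("@", 1)[0]
    return bool(UK_FICTITIOUS.match(digits) or US_FICTITIOUS.match(digits))
```

A pre-commit hook or test fixture linter could call this on every string literal that looks like a phone number or JID and fail the commit on any non-match.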
Containerfile.dev (new file, 56 lines)

@@ -0,0 +1,56 @@
FROM python:3.11-bookworm

ARG USER_ID=1000
ARG GROUP_ID=1000
ARG USER_NAME=dev

RUN apt-get update && apt-get install -y --no-install-recommends \
    bash-completion \
    build-essential \
    cargo \
    ca-certificates \
    curl \
    fd-find \
    fzf \
    git \
    golang \
    jq \
    less \
    libffi-dev \
    libssl-dev \
    make \
    neovim \
    nodejs \
    npm \
    procps \
    ripgrep \
    rsync \
    rustc \
    sqlite3 \
    tar \
    tmux \
    unzip \
    wget \
    which \
    zip && \
    rm -rf /var/lib/apt/lists/* && \
    ln -sf /usr/bin/fdfind /usr/local/bin/fd

RUN groupadd -g "${GROUP_ID}" "${USER_NAME}" 2>/dev/null || true && \
    useradd -m -u "${USER_ID}" -g "${GROUP_ID}" -s /bin/bash "${USER_NAME}"

USER ${USER_NAME}
WORKDIR /home/${USER_NAME}

# Build a project virtualenv and preinstall dependencies.
COPY --chown=${USER_NAME}:${USER_NAME} requirements.txt /tmp/requirements.txt
RUN bash -lc 'set -e; \
    python3.11 -m venv /home/${USER_NAME}/.venv/gia && \
    /home/${USER_NAME}/.venv/gia/bin/pip install --upgrade pip setuptools wheel && \
    grep -Ev "^(git\\+https://git\\.zm\\.is/|aiograpi$)" /tmp/requirements.txt > /tmp/requirements.build.txt && \
    /home/${USER_NAME}/.venv/gia/bin/pip install -r /tmp/requirements.build.txt'

ENV VIRTUAL_ENV=/home/${USER_NAME}/.venv/gia
ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"

CMD ["/bin/bash"]
Dockerfile

@@ -14,17 +14,13 @@ RUN chown xf:xf /venv

RUN apt-get update && apt-get install -y cargo rustc

USER xf
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN python -m venv /venv
RUN . /venv/bin/activate && pip install -r requirements.txt
RUN chown -R xf:xf /code /venv /conf

# CMD . /venv/bin/activate && uwsgi --ini /conf/uwsgi.ini

USER xf
CMD if [ "$OPERATION" = "uwsgi" ] ; then . /venv/bin/activate && uwsgi --ini /conf/uwsgi.ini ; else . /venv/bin/activate && exec python manage.py runserver 0.0.0.0:8000; fi

# CMD . /venv/bin/activate && uvicorn --reload --reload-include *.html --workers 2 --uds /var/run/socks/app.sock app.asgi:application
# CMD . /venv/bin/activate && gunicorn -b 0.0.0.0:8000 --reload app.asgi:application -k uvicorn.workers.UvicornWorker
INSTALL.md (175 lines)

@@ -6,7 +6,7 @@ Use this first. Then read `README.md` for feature and operation-mode details.

 ## 1) Prerequisites

-- Linux host with either Podman + podman-compose wrapper or Docker Compose compatibility.
+- Linux host with Podman.
 - Git.
 - Network access for service images and Python dependencies.

@@ -47,6 +47,24 @@ XMPP bridge settings:

 - `XMPP_PORT`
 - `XMPP_SECRET`

+Prosody container helpers:
+
+- `QUADLET_PROSODY_CONFIG_FILE`
+- `QUADLET_PROSODY_CERTS_DIR`
+- `QUADLET_PROSODY_DATA_DIR`
+- `QUADLET_PROSODY_LOGS_DIR`
+
+Memory/wiki search helpers:
+
+- `MEMORY_SEARCH_BACKEND` (`django` or `manticore`)
+- `MANTICORE_HTTP_URL`
+- `MANTICORE_MEMORY_TABLE`
+- `MANTICORE_EVENT_TABLE`
+- `MANTICORE_METRIC_TABLE`
+- `COMPOSING_ABANDONED_WINDOW_SECONDS`
+- `CONVERSATION_EVENT_RETENTION_DAYS`
+- `MANTICORE_HTTP_TIMEOUT`
+
 For XMPP media upload, configure one of:

 - `XMPP_UPLOAD_SERVICE`

@@ -71,9 +89,18 @@ make auth

 Optional static token helper:

 ```bash
-make token
+make token TOKEN_USER=<your_username>
 ```

+Local code-quality checks:
+
+```bash
+make pre-commit
+make pre-commit-glibc
+```
+
+`make pre-commit-glibc` selects `env` on musl systems and `genv` on glibc systems.
+
 ## 6) Logs and health checks

 Tail logs:

@@ -85,7 +112,7 @@ make log

 Basic stack status:

 ```bash
-docker-compose --env-file=stack.env ps
+make status
 ```

 ## 7) Restart conventions

@@ -101,7 +128,7 @@ Use the explicit `make stop && make run` command sequence when a full recycle is

 ### Single service restart

 ```bash
-docker-compose --env-file=stack.env restart <service>
+podman restart <container-name>
 ```

 If single-service restart fails due to dependency/pod state, use full recycle above.

@@ -113,7 +140,7 @@ After changing UR/runtime code (`core/clients/*`, transport, relay paths), restart:

 Minimum target:

 ```bash
-docker-compose --env-file=stack.env restart ur
+podman restart ur_gia
 ```

 If blocked, use full recycle.

@@ -127,8 +154,21 @@ If blocked, use full recycle.

 - Manual compose: `/compose/page/`
 - AI workspace: `/ai/workspace/`
 - OSINT search: `/search/page/`
+- Security encryption settings: `/settings/security/encryption/`
+- Security permissions settings: `/settings/security/permissions/`
+- Command routing settings: `/settings/command-routing/`
+- Task automation settings: `/settings/tasks/`
+- Task inbox / manual task creation: `/tasks/`

-## 10) Common troubleshooting
+## 10) Security and capability controls
+
+- `Require OMEMO encryption` rejects plaintext XMPP messages before command routing.
+- `Encrypt gateway component chat replies with OMEMO` only affects gateway/component conversations.
+- `Encrypt contact relay messages to your XMPP client with OMEMO` only affects relayed contact chats.
+- Fine-grained capability policy is configured in `/settings/security/permissions/` and applies by scope, service, and optional channel pattern.
+- Trusted OMEMO key enforcement depends on trusted key records, not only the most recently observed client key.
+
+## 11) Common troubleshooting

 ### A) Compose restart errors / dependency improper state

@@ -144,8 +184,131 @@ make stop && make run

 - Confirm `XMPP_UPLOAD_SERVICE`/`XMPP_UPLOAD_JID` is set, or discovery works.
 - Check runtime logs for slot request and upload errors.

+### D) Prosody container (lightweight, no systemd)
+
+Use:
+
+```bash
+./utilities/prosody/manage_prosody_container.sh up
+./utilities/prosody/manage_prosody_container.sh status
+./utilities/prosody/manage_prosody_container.sh logs
+```
+
+Auth script path for Prosody config:
+
+```lua
+external_auth_command = "/code/utilities/prosody/auth_django.sh"
+```
+
+Certificate renewal helper (run as root on host):
+
+```bash
+./utilities/prosody/renew_prosody_cert.sh
+```
+
+### E) Manticore container for memory/wiki retrieval
+
+```bash
+./utilities/memory/manage_manticore_container.sh up
+./utilities/memory/manage_manticore_container.sh status
+./utilities/memory/manage_manticore_container.sh logs
+```
+
+Reindex memory into configured backend:
+
+```bash
+podman exec ur_gia /venv/bin/python manage.py memory_search_reindex --user-id 1 --statuses active
+```
+
+Query memory backend:
+
+```bash
+podman exec ur_gia /venv/bin/python manage.py memory_search_query --user-id 1 --query "reply style"
+```
+
+Generate proposed memories from recent inbound messages:
+
+```bash
+podman exec ur_gia /venv/bin/python manage.py memory_suggest_from_messages --user-id 1 --limit-messages 300 --max-items 30
+```
+
+Run memory hygiene (expiry decay + contradiction queueing):
+
+```bash
+podman exec ur_gia /venv/bin/python manage.py memory_hygiene --user-id 1
+```
+
+Performance defaults now applied in GIA:
+
+- Batched Manticore reindex writes (`REPLACE ... VALUES (...)` in chunks) for lower ingest latency.
+- Cached table-ensure checks to avoid `CREATE TABLE IF NOT EXISTS` overhead on every query.
+- Behavioral event dual-write uses `MANTICORE_EVENT_TABLE` (default `gia_events`) when event ledger flags are enabled.
+- Behavioral metrics are written by `python manage.py gia_analysis` into `MANTICORE_METRIC_TABLE` (default `gia_metrics`).
+- ORM shadow copies can be pruned with `python manage.py prune_behavioral_orm_data`; defaults are driven by `CONVERSATION_EVENT_RETENTION_DAYS`.
+- Runtime table maintenance available through MCP (`FLUSH RAMCHUNK`, `OPTIMIZE TABLE`) for steady query responsiveness.
+
+### F) MCP server for task + memory tooling (VS Code)
+
+The workspace includes an MCP config at `/code/xf/.vscode/mcp.json` for server `manticore`.
+
+It launches inside the running `ur_gia` container and forces:
+
+- `MEMORY_SEARCH_BACKEND=manticore`
+
+`MANTICORE_HTTP_URL` is read from the container environment (`stack.env` / app settings).
+
+Start requirements first:
+
+```bash
+make run
+./utilities/memory/manage_manticore_container.sh up
+```
+
+Then approve/enable the `manticore` MCP server in VS Code when prompted.
+
+The MCP task surface now supports canonical task creation/completion in GIA:
+
+- `tasks.create`
+- `tasks.complete`
+- `tasks.create_note`
+- `tasks.link_artifact`
+
+Optional ultra-light Rust MCP worker:
+
+```bash
+cd /code/xf/GIA
+make mcp-rust-build
+```
+
+Then enable `manticore-rust-worker` in `/code/xf/.vscode/mcp.json`.
+It is intentionally `disabled: true` by default so the existing Python MCP server remains the baseline.
+
+### H) Optional browser MCP for visual validation
+
+To validate compose/tasks/settings flows visually, add a browser-capable MCP server in your editor workspace alongside `manticore`. A Playwright-style browser MCP is the intended integration point for GIA UI checks.
+
+Recommended usage:
+
+- keep browser MCP outside host-network mode
+- point it at the local GIA app URL/port from the running stack
+- use it for page-load, form-flow, and visual regression checks on compose/tasks/settings pages
+
+### I) Task command shortcuts
+
+Gateway / XMPP / chat task commands now include:
+
+- `.l` -> list open tasks
+- `.tasks add <project> :: <title>` -> create a canonical task in a named project
+- `.task add <title>` -> create a task inside the current mapped chat scope
+
 ### C) Signal or WhatsApp send failures

 - Verify account/link status in service pages.
 - Verify `ur` service is running.
 - Inspect `ur` logs for transport-specific errors.

+### G) XMPP reconnect loop in logs
+
+- Confirm `XMPP_ADDRESS`, `XMPP_JID`, `XMPP_PORT`, and `XMPP_SECRET` are populated in `stack.env`.
+- `XMPP_PORT` is parsed as an integer in settings; invalid values can cause repeated reconnect failures.
+- The runtime now uses a single reconnect loop with exponential backoff to avoid overlapping reconnect churn.
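The batched `REPLACE ... VALUES (...)` pattern described under "Performance defaults" amounts to assembling one statement per chunk instead of one per row. A sketch of that assembly; the table and column names are illustrative, and GIA's actual helper and escaping rules are not shown:

```python
from itertools import islice


def chunked(rows, size):
    """Yield successive lists of at most `size` rows."""
    it = iter(rows)
    while chunk := list(islice(it, size)):
        yield chunk


def build_replace_statements(table, rows, chunk_size=100):
    """One REPLACE per chunk instead of one per row cuts HTTP round trips."""
    statements = []
    for chunk in chunked(rows, chunk_size):
        values = ", ".join(
            "({id}, '{text}')".format(
                id=row["id"], text=row["text"].replace("'", "\\'")
            )
            for row in chunk
        )
        statements.append(f"REPLACE INTO {table} (id, content) VALUES {values}")
    return statements
```

Each resulting statement would then be posted to the Manticore HTTP endpoint configured via `MANTICORE_HTTP_URL`; the chunk size trades memory per request against the number of round trips.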
Makefile (74 lines)

@@ -1,10 +1,18 @@
 QUADLET_MGR := ./scripts/quadlet/manage.sh
 MODULES ?= core.tests
+TOKEN_USER ?= m
+STACK_ID_CLEAN := $(shell sid="$${GIA_STACK_ID:-$${STACK_ID:-}}"; sid=$$(printf "%s" "$$sid" | tr -cs 'a-zA-Z0-9._-' '-' | sed 's/^-*//; s/-*$$//'); printf "%s" "$$sid")
+STACK_SUFFIX := $(if $(STACK_ID_CLEAN),_$(STACK_ID_CLEAN),)
+APP_CONTAINER := gia$(STACK_SUFFIX)
+LOCAL_LIBC := $(shell if ldd --version 2>&1 | head -n1 | tr '[:upper:]' '[:lower:]' | grep -q musl; then printf musl; else printf glibc; fi)
+LOCAL_VENV := $(if $(filter musl,$(LOCAL_LIBC)),env,genv)
+PRE_COMMIT_BIN := $(firstword $(wildcard $(LOCAL_VENV)/bin/pre-commit) $(wildcard genv/bin/pre-commit) $(wildcard env/bin/pre-commit))

 run:
 	bash $(QUADLET_MGR) up

 build:
-	docker-compose --env-file=stack.env build
+	OPERATION=uwsgi podman build --build-arg OPERATION=uwsgi -t localhost/xf/gia:prod -f Dockerfile .

 stop:
 	bash $(QUADLET_MGR) down

@@ -18,26 +26,62 @@ status:

 quadlet-install:
 	bash $(QUADLET_MGR) install

-compose-run:
-	docker-compose --env-file=stack.env up -d
-
-compose-stop:
-	docker-compose --env-file=stack.env down
-
-compose-log:
-	docker-compose --env-file=stack.env logs -f --names
-
 test:
-	docker-compose --env-file=stack.env run --rm app sh -c ". /venv/bin/activate && python manage.py test $(MODULES) -v 2"
+	@if podman ps --format '{{.Names}}' | grep -qx "$(APP_CONTAINER)"; then \
+		podman exec "$(APP_CONTAINER)" sh -lc "cd /code && . /venv/bin/activate && python manage.py test $(MODULES) -v 2"; \
+	else \
+		echo "Container '$(APP_CONTAINER)' is not running. Start the stack first with 'make run'." >&2; \
+		exit 125; \
+	fi
+
+pre-commit:
+	@if [ -x "$(PRE_COMMIT_BIN)" ]; then \
+		"$(PRE_COMMIT_BIN)" run -a; \
+	else \
+		echo "No local pre-commit executable found in $(LOCAL_VENV)/bin, genv/bin, or env/bin." >&2; \
+		exit 127; \
+	fi
+
+pre-commit-glibc:
+	@if [ -x "$(PRE_COMMIT_BIN)" ]; then \
+		echo "Using $(LOCAL_VENV) ($(LOCAL_LIBC))"; \
+		"$(PRE_COMMIT_BIN)" run -a; \
+	else \
+		echo "No local pre-commit executable found in $(LOCAL_VENV)/bin, genv/bin, or env/bin." >&2; \
+		exit 127; \
+	fi

 migrate:
-	docker-compose --env-file=stack.env run --rm app sh -c ". /venv/bin/activate && python manage.py migrate"
+	@if podman ps --format '{{.Names}}' | grep -qx "$(APP_CONTAINER)"; then \
+		podman exec "$(APP_CONTAINER)" sh -lc "cd /code && . /venv/bin/activate && python manage.py migrate"; \
+	else \
+		echo "Container '$(APP_CONTAINER)' is not running. Start the stack first with 'make run'." >&2; \
+		exit 125; \
+	fi

 makemigrations:
-	docker-compose --env-file=stack.env run --rm app sh -c ". /venv/bin/activate && python manage.py makemigrations"
+	@if podman ps --format '{{.Names}}' | grep -qx "$(APP_CONTAINER)"; then \
+		podman exec "$(APP_CONTAINER)" sh -lc "cd /code && . /venv/bin/activate && python manage.py makemigrations"; \
+	else \
+		echo "Container '$(APP_CONTAINER)' is not running. Start the stack first with 'make run'." >&2; \
+		exit 125; \
+	fi

 auth:
-	docker-compose --env-file=stack.env run --rm app sh -c ". /venv/bin/activate && python manage.py createsuperuser"
+	@if podman ps --format '{{.Names}}' | grep -qx "$(APP_CONTAINER)"; then \
+		podman exec "$(APP_CONTAINER)" sh -lc "cd /code && . /venv/bin/activate && python manage.py createsuperuser"; \
+	else \
+		echo "Container '$(APP_CONTAINER)' is not running. Start the stack first with 'make run'." >&2; \
+		exit 125; \
+	fi

 token:
-	docker-compose --env-file=stack.env run --rm app sh -c ". /venv/bin/activate && python manage.py addstatictoken m"
+	@if podman ps --format '{{.Names}}' | grep -qx "$(APP_CONTAINER)"; then \
+		podman exec "$(APP_CONTAINER)" sh -lc "cd /code && . /venv/bin/activate && python manage.py addstatictoken $(TOKEN_USER)"; \
+	else \
+		echo "Container '$(APP_CONTAINER)' is not running. Start the stack first with 'make run'." >&2; \
+		exit 125; \
+	fi
+
+mcp-rust-build:
+	cd rust/manticore-mcp-worker && cargo build --release
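The `STACK_ID_CLEAN` pipeline above (`tr -cs 'a-zA-Z0-9._-' '-'` followed by `sed 's/^-*//; s/-*$//'`) squeezes every run of disallowed characters to a single `-` and trims edge dashes, so any branch or stack id becomes a safe container-name suffix. An equivalent rendering in Python, for reference; the function name is illustrative:

```python
import re


def clean_stack_id(raw: str) -> str:
    """Mirror the Makefile pipeline: squeeze each run of characters
    outside [a-zA-Z0-9._-] to one '-', then strip leading/trailing '-'."""
    squeezed = re.sub(r"[^a-zA-Z0-9._-]+", "-", raw)
    return squeezed.strip("-")
```

With this rule, `GIA_STACK_ID=feat/f591 im` yields container names like `gia_feat-f591-im`, keeping parallel stacks distinguishable while staying valid for Podman.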
README.md (18 lines changed)

@@ -9,9 +9,15 @@ GIA is a multi-transport communication workspace that unifies Signal, WhatsApp,

 - Unifies chats from multiple protocols in one interface.
 - Keeps conversation history in a shared model (`Person`, `PersonIdentifier`, `ChatSession`, `Message`).
 - Supports manual, queue-driven, and AI-assisted outbound messaging.
+- Supports canonical task creation from chat commands, web UI, and MCP tooling.
 - Bridges messages across transports (including XMPP) with attachment handling.
 - Tracks delivery/read metadata and typing state events.
+- Can dual-write canonical behavioral events to Manticore for time-series analysis.
+- Includes `gia_analysis` for rolling behavioral metric aggregation into Manticore.
+- Includes `prune_behavioral_orm_data` to keep Django event shadow tables bounded once Manticore is primary.
 - Provides AI workspace analytics, mitigation plans, and insight visualizations.
+- Exposes fine-grained capability policy controls for gateway commands, task intake, and command execution.
+- Separates XMPP encryption controls into plaintext rejection, component-chat encryption, and relayed-contact encryption.

 ## Operation Modes

@@ -104,6 +110,14 @@ Core behavior:

 - XMPP bridge supports text, attachments, typing, and chat-state paths.
 - Signal and WhatsApp media relay paths are normalized via shared transport/media logic.

+## Settings Model
+
+- `Security > Encryption`: transport-level XMPP/OMEMO controls, observed client state, and discovered key trust management.
+- `Security > Permissions`: fine-grained capability policy by scope, service, and channel pattern.
+- `Modules > Commands`: command profiles, bindings, and delivery behavior.
+- `Modules > Task Automation`: task extraction defaults, channel overrides, and provider approval routing.
+- `Modules > Business Plans`: generated document inbox and editor.
+
 Key design points:

 - Prefer shared media preparation over per-service duplicated logic.

@@ -119,7 +133,9 @@ Core components:

 - `core/clients/signal.py`, `core/clients/signalapi.py`: Signal event + REST transport handling.
 - `core/clients/whatsapp.py`: Neonize-backed runtime transport.
 - `core/clients/xmpp.py`: XMPP component bridge and media upload relay.
+- `rust/manticore-mcp-worker`: optional ultra-light MCP frontend for direct Manticore status/query/maintenance.
 - `core/views/compose.py`: Manual compose UX, polling/ws, send pipeline, media blob endpoint.
+- `core/tasks/engine.py`: Canonical task creation/completion helpers used by chat commands and UI.
 - `core/views/workspace.py`: AI workspace operations and insight surfaces.
 - `core/views/osint.py`: Search/workspace OSINT interactions.

@@ -143,6 +159,7 @@ After environment setup from `INSTALL.md`:

 4. Open manual compose and test per-service send/receive.
 5. Open AI workspace for analysis/mitigation workflows.
 6. Verify queue workflows if approval mode is used.
+7. Verify task creation from `/tasks/`, `.tasks add <project> :: <title>`, or scoped `.task add <title>`.

 Recommended functional smoke test:

@@ -156,6 +173,7 @@ Recommended functional smoke test:

 - After runtime code changes, restart runtime services before validation.
 - Full environment recycle convention: `make stop && make run`.
 - If single-service restart fails due to dependency state, use full recycle.
+- Local repository checks are available via `make pre-commit`; use `make pre-commit-glibc` when you want libc-based `env`/`genv` selection.

 ## Security & Reliability Notes
app/local_settings.py

@@ -1,16 +1,39 @@
 from os import getenv
 from urllib.parse import urlparse

 trues = ("t", "true", "yes", "y", "1")


+def _csv_env(name: str, default: str) -> list[str]:
+    return [item.strip() for item in getenv(name, default).split(",") if item.strip()]
+
+
 # URLs
 DOMAIN = getenv("DOMAIN", "example.com")
 URL = getenv("URL", f"https://{DOMAIN}")
+URL_HOST = urlparse(URL).hostname or ""
+DEBUG = getenv("DEBUG", "false").lower() in trues

 # Access control
-ALLOWED_HOSTS = getenv("ALLOWED_HOSTS", f"127.0.0.1,{DOMAIN}").split(",")
+ALLOWED_HOSTS = _csv_env(
+    "ALLOWED_HOSTS",
+    ",".join(
+        item
+        for item in (
+            "localhost",
+            "127.0.0.1",
+            DOMAIN,
+            URL_HOST,
+        )
+        if item
+    ),
+)
+if DEBUG:
+    # Local/dev stack runs behind varying hostnames/tunnels.
+    ALLOWED_HOSTS = ["*"]

 # CSRF
-CSRF_TRUSTED_ORIGINS = getenv("CSRF_TRUSTED_ORIGINS", URL).split(",")
+CSRF_TRUSTED_ORIGINS = _csv_env("CSRF_TRUSTED_ORIGINS", URL)

 # Stripe
 BILLING_ENABLED = getenv("BILLING_ENABLED", "false").lower() in trues

@@ -23,7 +46,10 @@ STRIPE_PUBLIC_API_KEY_PROD = getenv("STRIPE_PUBLIC_API_KEY_PROD", "")

 STRIPE_ENDPOINT_SECRET = getenv("STRIPE_ENDPOINT_SECRET", "")
 STATIC_ROOT = getenv("STATIC_ROOT", "")
-SECRET_KEY = getenv("SECRET_KEY", "")
+SECRET_KEY = (getenv("SECRET_KEY", "") or "").strip()
+if not SECRET_KEY:
+    # Keep local developer stacks usable when stack.env is uninitialized.
+    SECRET_KEY = "gia-dev-secret-key"

 STRIPE_ADMIN_COUPON = getenv("STRIPE_ADMIN_COUPON", "")

@@ -33,17 +59,20 @@ LAGO_API_KEY = getenv("LAGO_API_KEY", "")
 LAGO_ORG_ID = getenv("LAGO_ORG_ID", "")
 LAGO_URL = getenv("LAGO_URL", "")

 DEBUG = getenv("DEBUG", "false") in trues
 PROFILER = getenv("PROFILER", "false") in trues

 if DEBUG:
     import socket  # only if you haven't already imported this

     hostname, _, ips = socket.gethostbyname_ex(socket.gethostname())
-    INTERNAL_IPS = [ip[: ip.rfind(".")] + ".1" for ip in ips] + [
-        "127.0.0.1",
-        "10.0.2.2",
-    ]
+    INTERNAL_IPS = [ip[: ip.rfind(".")] + ".1" for ip in ips]
+    INTERNAL_IPS.extend(
+        [
+            item.strip()
+            for item in getenv("DEBUG_INTERNAL_IPS", "localhost").split(",")
+            if item.strip()
+        ]
+    )

 SETTINGS_EXPORT = ["BILLING_ENABLED"]

@@ -58,5 +87,42 @@ INSTAGRAM_HTTP_URL = getenv("INSTAGRAM_HTTP_URL", "http://instagram:8080")

 XMPP_ADDRESS = getenv("XMPP_ADDRESS")
 XMPP_JID = getenv("XMPP_JID")
-XMPP_PORT = getenv("XMPP_PORT")
+XMPP_USER_DOMAIN = getenv("XMPP_USER_DOMAIN", "")
+XMPP_PORT = int(getenv("XMPP_PORT", "8888") or 8888)
 XMPP_SECRET = getenv("XMPP_SECRET")
+XMPP_OMEMO_DATA_DIR = getenv("XMPP_OMEMO_DATA_DIR", "")
+XMPP_UPLOAD_SERVICE = getenv("XMPP_UPLOAD_SERVICE", "").strip()
+XMPP_UPLOAD_JID = getenv("XMPP_UPLOAD_JID", "").strip()
+if not XMPP_UPLOAD_SERVICE and XMPP_UPLOAD_JID:
+    XMPP_UPLOAD_SERVICE = XMPP_UPLOAD_JID
+if not XMPP_UPLOAD_SERVICE and XMPP_USER_DOMAIN:
+    XMPP_UPLOAD_SERVICE = XMPP_USER_DOMAIN
+
+EVENT_LEDGER_DUAL_WRITE = getenv("EVENT_LEDGER_DUAL_WRITE", "false").lower() in trues
+CAPABILITY_ENFORCEMENT_ENABLED = (
+    getenv("CAPABILITY_ENFORCEMENT_ENABLED", "true").lower() in trues
+)
+TRACE_PROPAGATION_ENABLED = getenv("TRACE_PROPAGATION_ENABLED", "true").lower() in trues
+EVENT_PRIMARY_WRITE_PATH = getenv("EVENT_PRIMARY_WRITE_PATH", "false").lower() in trues
+
+MEMORY_SEARCH_BACKEND = getenv("MEMORY_SEARCH_BACKEND", "django")
+MANTICORE_HTTP_URL = getenv("MANTICORE_HTTP_URL", "http://localhost:9308")
+MANTICORE_MEMORY_TABLE = getenv("MANTICORE_MEMORY_TABLE", "gia_memory_items")
+MANTICORE_EVENT_TABLE = getenv("MANTICORE_EVENT_TABLE", "gia_events")
+MANTICORE_METRIC_TABLE = getenv("MANTICORE_METRIC_TABLE", "gia_metrics")
+COMPOSING_ABANDONED_WINDOW_SECONDS = int(
+    getenv("COMPOSING_ABANDONED_WINDOW_SECONDS", "300")
+)
+CONVERSATION_EVENT_RETENTION_DAYS = int(
+    getenv("CONVERSATION_EVENT_RETENTION_DAYS", "90") or 90
+)
+MANTICORE_HTTP_TIMEOUT = int(getenv("MANTICORE_HTTP_TIMEOUT", "5") or 5)
|
||||
|
||||
# Attachment security defaults for transport adapters.
|
||||
ATTACHMENT_MAX_BYTES = int(getenv("ATTACHMENT_MAX_BYTES", str(25 * 1024 * 1024)) or 0)
|
||||
ATTACHMENT_ALLOW_PRIVATE_URLS = (
|
||||
getenv("ATTACHMENT_ALLOW_PRIVATE_URLS", "false").lower() in trues
|
||||
)
|
||||
ATTACHMENT_ALLOW_UNKNOWN_MIME = (
|
||||
getenv("ATTACHMENT_ALLOW_UNKNOWN_MIME", "false").lower() in trues
|
||||
)
@@ -13,6 +13,8 @@ https://docs.djangoproject.com/en/4.0/ref/settings/
import os
from pathlib import Path

TRUE_VALUES = {"1", "true", "yes", "on"}

# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent

@@ -36,8 +38,6 @@ INSTALLED_APPS = [
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "debug_toolbar",
    "template_profiler_panel",
    "django_htmx",
    "crispy_forms",
    "crispy_bulma",
@@ -57,6 +57,16 @@ INSTALLED_APPS = [
    "cachalot",
]

DEBUG_TOOLBAR_ENABLED = os.getenv("DEBUG_TOOLBAR_ENABLED", "false").lower() in TRUE_VALUES

if DEBUG_TOOLBAR_ENABLED:
    INSTALLED_APPS.extend(
        [
            "debug_toolbar",
            "template_profiler_panel",
        ]
    )

# Performance optimisations
CACHES = {
    "default": {
@@ -81,7 +91,6 @@ CRISPY_ALLOWED_TEMPLATE_PACKS = ("bulma",)
DJANGO_TABLES2_TEMPLATE = "django-tables2/bulma.html"

MIDDLEWARE = [
    "debug_toolbar.middleware.DebugToolbarMiddleware",
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    # 'django.middleware.cache.UpdateCacheMiddleware',
@@ -95,6 +104,9 @@ MIDDLEWARE = [
    "django_htmx.middleware.HtmxMiddleware",
]

if DEBUG_TOOLBAR_ENABLED:
    MIDDLEWARE.insert(0, "debug_toolbar.middleware.DebugToolbarMiddleware")

ROOT_URLCONF = "app.urls"
ASGI_APPLICATION = "app.asgi.application"
COMPOSE_WS_ENABLED = os.environ.get("COMPOSE_WS_ENABLED", "false").lower() in {
@@ -104,19 +116,24 @@ COMPOSE_WS_ENABLED = os.environ.get("COMPOSE_WS_ENABLED", "false").lower() in {
    "on",
}

TEMPLATE_CONTEXT_PROCESSORS = [
    "django.template.context_processors.request",
    "django.contrib.auth.context_processors.auth",
    "django.contrib.messages.context_processors.messages",
    "core.util.django_settings_export.settings_export",
    "core.context_processors.settings_hierarchy_nav",
]

if DEBUG_TOOLBAR_ENABLED:
    TEMPLATE_CONTEXT_PROCESSORS.insert(0, "django.template.context_processors.debug")

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [os.path.join(BASE_DIR, "core/templates")],
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.debug",
                "django.template.context_processors.request",
                "django.contrib.auth.context_processors.auth",
                "django.contrib.messages.context_processors.messages",
                "core.util.django_settings_export.settings_export",
            ],
            "context_processors": TEMPLATE_CONTEXT_PROCESSORS,
        },
    },
]
@@ -130,7 +147,7 @@ WSGI_APPLICATION = "app.wsgi.application"
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "/conf/db.sqlite3",
        "NAME": os.getenv("APP_DATABASE_PATH", "/conf/db.sqlite3"),
    }
}

@@ -189,8 +206,9 @@ REST_FRAMEWORK = {
}

INTERNAL_IPS = [
    "127.0.0.1",
    "10.1.10.11",
    item.strip()
    for item in os.getenv("INTERNAL_IPS", "localhost").split(",")
    if item.strip()
]

DEBUG_TOOLBAR_PANELS = [
@@ -228,7 +246,7 @@ if PROFILER:  # noqa - trust me its there


def show_toolbar(request):
    return DEBUG  # noqa: from local imports
    return DEBUG and DEBUG_TOOLBAR_ENABLED  # noqa: from local imports


DEBUG_TOOLBAR_CONFIG = {

11  app/test_settings.py  Normal file
@@ -0,0 +1,11 @@
from app.settings import *  # noqa


CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
        "LOCATION": "gia-test-cache",
    }
}

CACHALOT_ENABLED = False
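The settings hunks above lean on one boolean-flag convention throughout (`getenv(...).lower() in trues` / `in TRUE_VALUES`). A standalone sketch of that pattern, with `env_flag` as a hypothetical helper name not taken from the diff:

```python
import os

# Mirrors TRUE_VALUES = {"1", "true", "yes", "on"} from the settings hunk.
TRUE_VALUES = {"1", "true", "yes", "on"}


def env_flag(name: str, default: str = "false") -> bool:
    # Case-insensitive: "Yes", "TRUE", and "1" all enable the flag;
    # anything else (including an unset variable) disables it.
    return os.getenv(name, default).lower() in TRUE_VALUES


os.environ["DEMO_FLAG"] = "Yes"
print(env_flag("DEMO_FLAG"))        # → True
print(env_flag("UNSET_DEMO_FLAG"))  # → False
```

This is why leaving `DEBUG_TOOLBAR_ENABLED` unset (or setting it to anything but a truthy token) cleanly skips the toolbar apps, middleware, and context processor in the hunks above.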
257  app/urls.py
@@ -13,15 +13,20 @@ Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""

from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
from django.contrib.auth.views import LogoutView
from django.urls import include, path
from django.views.generic import RedirectView
from two_factor.urls import urlpatterns as tf_urls
from two_factor.views.profile import ProfileView

from core.views import (
    ais,
    automation,
    availability,
    base,
    compose,
    groups,
@@ -33,33 +38,175 @@ from core.views import (
    osint,
    people,
    personas,
    prosody,
    queues,
    sessions,
    signal,
    system,
    tasks,
    whatsapp,
    workspace,
)

urlpatterns = [
    path("__debug__/", include("debug_toolbar.urls")),
    path(
        "favicon.ico",
        RedirectView.as_view(url=f"{settings.STATIC_URL}favicon.ico", permanent=False),
    ),
    path("", base.Home.as_view(), name="home"),
    path("admin/", admin.site.urls),
    path(
        "account/two_factor/",
        RedirectView.as_view(pattern_name="security_2fa", permanent=False),
    ),
    # 2FA login urls
    path("", include(tf_urls)),
    path("accounts/signup/", base.Signup.as_view(), name="signup"),
    path("accounts/logout/", LogoutView.as_view(), name="logout"),
    # Notifications
    path(
        "notifications/page/update/",
        RedirectView.as_view(pattern_name="notifications_settings", permanent=False),
    ),
    path(
        "notifications/<str:type>/update/",
        notifications.NotificationsUpdate.as_view(),
        name="notifications_update",
    ),
    path(
        "settings/notifications/",
        notifications.NotificationsUpdate.as_view(),
        {"type": "page"},
        name="notifications_settings",
    ),
    path(
        "settings/security/",
        system.SecurityPage.as_view(page_mode="encryption"),
        name="security_settings",
    ),
    path(
        "settings/security/encryption/",
        system.SecurityPage.as_view(page_mode="encryption"),
        name="encryption_settings",
    ),
    path(
        "settings/security/permissions/",
        system.SecurityPage.as_view(page_mode="permission"),
        name="permission_settings",
    ),
    path(
        "settings/security/permission/",
        RedirectView.as_view(pattern_name="permission_settings", permanent=False),
    ),
    path(
        "settings/security/2fa/",
        ProfileView.as_view(),
        name="security_2fa",
    ),
    path(
        "settings/accessibility/",
        system.AccessibilitySettings.as_view(),
        name="accessibility_settings",
    ),
    path(
        "settings/ai/",
        system.AISettingsPage.as_view(),
        name="ai_settings",
    ),
    path(
        "settings/modules/",
        system.ModulesSettingsPage.as_view(),
        name="modules_settings",
    ),
    path(
        "settings/system/",
        system.SystemSettings.as_view(),
        name="system_settings",
    ),
    path(
        "settings/system/capabilities/",
        system.ServiceCapabilitySnapshotAPI.as_view(),
        name="system_capabilities",
    ),
    path(
        "settings/system/adapter-health/",
        system.AdapterHealthSummaryAPI.as_view(),
        name="system_adapter_health",
    ),
    path(
        "settings/system/trace/",
        system.TraceDiagnosticsAPI.as_view(),
        name="system_trace_diagnostics",
    ),
    path(
        "settings/system/projection-shadow/",
        system.EventProjectionShadowAPI.as_view(),
        name="system_projection_shadow",
    ),
    path(
        "settings/system/event-ledger-smoke/",
        system.EventLedgerSmokeAPI.as_view(),
        name="system_event_ledger_smoke",
    ),
    path(
        "settings/system/memory-search/status/",
        system.MemorySearchStatusAPI.as_view(),
        name="system_memory_search_status",
    ),
    path(
        "settings/system/memory-search/query/",
        system.MemorySearchQueryAPI.as_view(),
        name="system_memory_search_query",
    ),
    path(
        "internal/prosody/auth/",
        prosody.ProsodyAuthBridge.as_view(),
        name="prosody_auth_bridge",
    ),
    path(
        "settings/command-routing/",
        automation.CommandRoutingSettings.as_view(),
        name="command_routing",
    ),
    path(
        "settings/ai/traces/",
        automation.AIExecutionLogSettings.as_view(),
        name="ai_execution_log",
    ),
    path(
        "settings/ai/traces/run/<int:run_id>/",
        automation.AIExecutionRunDetailView.as_view(),
        name="ai_execution_run_detail",
    ),
    path(
        "settings/ai/traces/run/<int:run_id>/tab/<str:tab_slug>/",
        automation.AIExecutionRunDetailTabView.as_view(),
        name="ai_execution_run_detail_tab",
    ),
    path(
        "settings/ai-execution/",
        RedirectView.as_view(pattern_name="ai_execution_log", permanent=False),
    ),
    path(
        "settings/translation/",
        automation.TranslationSettings.as_view(),
        name="translation_settings",
    ),
    path(
        "settings/business-plans/",
        automation.BusinessPlanInbox.as_view(),
        name="business_plan_inbox",
    ),
    path(
        "settings/business-plan/<str:doc_id>/",
        automation.BusinessPlanEditor.as_view(),
        name="business_plan_editor",
    ),
    path(
        "settings/translation/preview/",
        automation.TranslationPreview.as_view(),
        name="translation_preview",
    ),
    path(
        "services/signal/",
        signal.Signal.as_view(),
@@ -120,6 +267,11 @@ urlpatterns = [
        signal.SignalAccountAdd.as_view(),
        name="signal_account_add",
    ),
    path(
        "services/signal/<str:type>/unlink/<path:account>/",
        signal.SignalAccountUnlink.as_view(),
        name="signal_account_unlink",
    ),
    path(
        "services/whatsapp/<str:type>/add/",
        whatsapp.WhatsAppAccountAdd.as_view(),
@@ -160,6 +312,11 @@ urlpatterns = [
        compose.ComposeSend.as_view(),
        name="compose_send",
    ),
    path(
        "compose/react/",
        compose.ComposeReact.as_view(),
        name="compose_react",
    ),
    path(
        "compose/cancel-send/",
        compose.ComposeCancelSend.as_view(),
@@ -201,9 +358,14 @@ urlpatterns = [
        name="compose_thread",
    ),
    path(
        "compose/history-sync/",
        compose.ComposeHistorySync.as_view(),
        name="compose_history_sync",
        "compose/commands/bp/bind/",
        compose.ComposeBindBP.as_view(),
        name="compose_bind_bp",
    ),
    path(
        "compose/commands/toggle/",
        compose.ComposeToggleCommand.as_view(),
        name="compose_toggle_command",
    ),
    path(
        "compose/media/blob/",
@@ -220,6 +382,76 @@ urlpatterns = [
        compose.ComposeContactMatch.as_view(),
        name="compose_contact_match",
    ),
    path(
        "compose/contacts/create/",
        compose.ComposeContactCreate.as_view(),
        name="compose_contact_create",
    ),
    path(
        "compose/contacts/create-all/",
        compose.ComposeContactCreateAll.as_view(),
        name="compose_contact_create_all",
    ),
    path(
        "compose/answer-suggestion/send/",
        tasks.AnswerSuggestionSend.as_view(),
        name="compose_answer_suggestion_send",
    ),
    path(
        "tasks/",
        tasks.TasksHub.as_view(),
        name="tasks_hub",
    ),
    path(
        "tasks/projects/<str:project_id>/",
        tasks.TaskProjectDetail.as_view(),
        name="tasks_project",
    ),
    path(
        "tasks/epics/<str:epic_id>/",
        tasks.TaskEpicDetail.as_view(),
        name="tasks_epic",
    ),
    path(
        "tasks/groups/<str:service>/<path:identifier>/",
        tasks.TaskGroupDetail.as_view(),
        name="tasks_group",
    ),
    path(
        "tasks/task/<str:task_id>/",
        tasks.TaskDetail.as_view(),
        name="tasks_task",
    ),
    path(
        "tasks/codex/submit/",
        tasks.TaskCodexSubmit.as_view(),
        name="tasks_codex_submit",
    ),
    path(
        "settings/tasks/",
        tasks.TaskSettings.as_view(),
        name="tasks_settings",
    ),
    path(
        "settings/codex/",
        tasks.CodexSettingsPage.as_view(),
        name="codex_settings",
    ),
    path(
        "settings/codex/approval/",
        tasks.CodexApprovalAction.as_view(),
        name="codex_approval",
    ),
    path(
        "settings/behavioral/",
        availability.AvailabilitySettingsPage.as_view(),
        name="behavioral_signals_settings",
    ),
    path(
        "settings/availability/",
        RedirectView.as_view(pattern_name="behavioral_signals_settings", permanent=False),
        name="availability_settings",
    ),
    # AIs
    path(
        "ai/workspace/",
@@ -341,6 +573,20 @@ urlpatterns = [
        workspace.AIWorkspaceUpdatePlanMeta.as_view(),
        name="ai_workspace_mitigation_meta_save",
    ),
    path(
        "settings/ai/models/",
        ais.AIList.as_view(),
        {"type": "page"},
        name="ai_models",
    ),
    path(
        "ai/models/",
        RedirectView.as_view(pattern_name="ai_models", permanent=False),
    ),
    path(
        "ai/page/",
        RedirectView.as_view(pattern_name="ai_models", permanent=False),
    ),
    path(
        "ai/<str:type>/",
        ais.AIList.as_view(),
@@ -556,3 +802,6 @@ urlpatterns = [
        name="queue_delete",
    ),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)

if getattr(settings, "DEBUG_TOOLBAR_ENABLED", False):
    urlpatterns.insert(0, path("__debug__/", include("debug_toolbar.urls")))

1488  artifacts/audits/1-initial.json  Normal file
File diff suppressed because it is too large
674  artifacts/audits/2-first-pass-fix.json  Normal file
@@ -0,0 +1,674 @@
{
  "score": 58,
  "grade": "D",
  "gradeLabel": "Significant security risks",
  "totalFindings": 22,
  "totalDepVulns": 0,
  "categories": {
    "secrets": {
      "label": "Secrets",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "injection": {
      "label": "Code Vulnerabilities",
      "findingCount": 3,
      "deduction": 9,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 3,
        "low": 0
      }
    },
    "deps": {
      "label": "Dependencies",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "auth": {
      "label": "Auth & Access Control",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "config": {
      "label": "Configuration",
      "findingCount": 4,
      "deduction": 10,
      "counts": {
        "critical": 0,
        "high": 4,
        "medium": 0,
        "low": 0
      }
    },
    "supply-chain": {
      "label": "Supply Chain",
      "findingCount": 1,
      "deduction": 3,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 1,
        "low": 0
      }
    },
    "api": {
      "label": "API Security",
      "findingCount": 3,
      "deduction": 10,
      "counts": {
        "critical": 0,
        "high": 3,
        "medium": 0,
        "low": 0
      }
    },
    "llm": {
      "label": "AI/LLM Security",
      "findingCount": 11,
      "deduction": 10,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 11,
        "low": 0
      }
    }
  },
  "findings": [
    {
      "file": "/code/xf/GIA/Dockerfile",
      "line": 25,
      "severity": "high",
      "category": "config",
      "rule": "DOCKER_RUN_AS_ROOT",
      "title": "Docker: Running as Root",
      "description": "No USER instruction found. Container runs as root by default.",
      "fix": "Add USER nonroot before CMD/ENTRYPOINT",
      "cwe": "CWE-250",
      "owasp": "A05:2021"
    },
    {
      "file": "/code/xf/GIA/Dockerfile",
      "line": 28,
      "severity": "high",
      "category": "config",
      "rule": "DOCKER_RUN_AS_ROOT",
      "title": "Docker: Running as Root",
      "description": "No USER instruction found. Container runs as root by default.",
      "fix": "Add USER nonroot before CMD/ENTRYPOINT",
      "cwe": "CWE-250",
      "owasp": "A05:2021"
    },
    {
      "file": "/code/xf/GIA/Dockerfile",
      "line": 30,
      "severity": "high",
      "category": "config",
      "rule": "DOCKER_RUN_AS_ROOT",
      "title": "Docker: Running as Root",
      "description": "No USER instruction found. Container runs as root by default.",
      "fix": "Add USER nonroot before CMD/ENTRYPOINT",
      "cwe": "CWE-250",
      "owasp": "A05:2021"
    },
    {
      "file": "/code/xf/GIA/Dockerfile",
      "line": 31,
      "severity": "high",
      "category": "config",
      "rule": "DOCKER_RUN_AS_ROOT",
      "title": "Docker: Running as Root",
      "description": "No USER instruction found. Container runs as root by default.",
      "fix": "Add USER nonroot before CMD/ENTRYPOINT",
      "cwe": "CWE-250",
      "owasp": "A05:2021"
    },
    {
      "file": "/code/xf/GIA/core/clients/whatsapp.py",
      "line": 3398,
      "severity": "high",
      "category": "api",
      "rule": "API_UPLOAD_NO_TYPE_CHECK",
      "title": "API: File Upload Without Type Validation",
      "description": "File upload using original filename without type validation.",
      "fix": "Validate file extension and MIME type. Generate random filenames for storage.",
      "cwe": "CWE-434",
      "owasp": "A04:2021"
    },
    {
      "file": "/code/xf/GIA/core/clients/xmpp.py",
      "line": 57,
      "severity": "high",
      "category": "api",
      "rule": "API_UPLOAD_NO_TYPE_CHECK",
      "title": "API: File Upload Without Type Validation",
      "description": "File upload using original filename without type validation.",
      "fix": "Validate file extension and MIME type. Generate random filenames for storage.",
      "cwe": "CWE-434",
      "owasp": "A04:2021"
    },
    {
      "file": "/code/xf/GIA/core/security/attachments.py",
      "line": 83,
      "severity": "high",
      "category": "api",
      "rule": "API_UPLOAD_NO_TYPE_CHECK",
      "title": "API: File Upload Without Type Validation",
      "description": "File upload using original filename without type validation.",
      "fix": "Validate file extension and MIME type. Generate random filenames for storage.",
      "cwe": "CWE-434",
      "owasp": "A04:2021"
    },
    {
      "file": "/code/xf/GIA/core/tests/test_attachment_security.py",
      "line": 29,
      "severity": "medium",
      "category": "ssrf",
      "rule": "SSRF_INTERNAL_IP",
      "title": "SSRF: Internal IP Pattern",
      "description": "Internal IP address in code. Verify it is not reachable via user-controlled URLs.",
      "fix": "Block private IP ranges in URL validation for user-supplied URLs",
      "cwe": "CWE-918",
      "owasp": "A10:2021"
    },
    {
      "file": "/code/xf/GIA/core/tests/test_attachment_security.py",
      "line": 34,
      "severity": "medium",
      "category": "ssrf",
      "rule": "SSRF_INTERNAL_IP",
      "title": "SSRF: Internal IP Pattern",
      "description": "Internal IP address in code. Verify it is not reachable via user-controlled URLs.",
      "fix": "Block private IP ranges in URL validation for user-supplied URLs",
      "cwe": "CWE-918",
      "owasp": "A10:2021"
    },
    {
      "file": "/code/xf/GIA/core/tests/test_attachment_security.py",
      "line": 35,
      "severity": "medium",
      "category": "ssrf",
      "rule": "SSRF_INTERNAL_IP",
      "title": "SSRF: Internal IP Pattern",
      "description": "Internal IP address in code. Verify it is not reachable via user-controlled URLs.",
      "fix": "Block private IP ranges in URL validation for user-supplied URLs",
      "cwe": "CWE-918",
      "owasp": "A10:2021"
    },
    {
      "file": "/code/xf/GIA/requirements.txt",
      "line": 23,
      "severity": "medium",
      "category": "supply-chain",
      "rule": "UNPINNED_PYTHON_DEP",
      "title": "Unpinned Python Dependency: ./vendor/django-crud-mixins",
      "description": "Python dependency without version pin. Pin to a specific version for reproducible builds.",
      "fix": "Pin version: ./vendor/django-crud-mixins==x.y.z",
      "cwe": null,
      "owasp": null
    },
    {
      "file": "/code/xf/GIA/core/clients/signalapi.py",
      "line": 411,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_NO_OUTPUT_FILTER",
      "title": "LLM Output Without Filtering",
      "description": "LLM output used directly without filtering. May contain sensitive info or hallucinations.",
      "fix": "Filter LLM output before displaying: remove PII, validate against expected format",
      "cwe": "CWE-200",
      "owasp": "LLM02"
    },
    {
      "file": "/code/xf/GIA/core/views/osint.py",
      "line": 739,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_RAG_NO_VALIDATION",
      "title": "RAG Pipeline Without Input Validation",
      "description": "User input passed directly to vector search/embedding without validation.",
      "fix": "Validate and sanitize input before embedding. Limit query length.",
      "cwe": "CWE-20",
      "owasp": "LLM08"
    },
    {
      "file": "/code/xf/GIA/core/views/osint.py",
      "line": 744,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_RAG_NO_VALIDATION",
      "title": "RAG Pipeline Without Input Validation",
      "description": "User input passed directly to vector search/embedding without validation.",
      "fix": "Validate and sanitize input before embedding. Limit query length.",
      "cwe": "CWE-20",
      "owasp": "LLM08"
    },
    {
      "file": "/code/xf/GIA/core/views/osint.py",
      "line": 758,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_RAG_NO_VALIDATION",
      "title": "RAG Pipeline Without Input Validation",
      "description": "User input passed directly to vector search/embedding without validation.",
      "fix": "Validate and sanitize input before embedding. Limit query length.",
      "cwe": "CWE-20",
      "owasp": "LLM08"
    },
    {
      "file": "/code/xf/GIA/core/views/osint.py",
      "line": 850,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_RAG_NO_VALIDATION",
      "title": "RAG Pipeline Without Input Validation",
      "description": "User input passed directly to vector search/embedding without validation.",
      "fix": "Validate and sanitize input before embedding. Limit query length.",
      "cwe": "CWE-20",
      "owasp": "LLM08"
    },
    {
      "file": "/code/xf/GIA/core/views/osint.py",
      "line": 1377,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_RAG_NO_VALIDATION",
      "title": "RAG Pipeline Without Input Validation",
      "description": "User input passed directly to vector search/embedding without validation.",
      "fix": "Validate and sanitize input before embedding. Limit query length.",
      "cwe": "CWE-20",
      "owasp": "LLM08"
    },
    {
      "file": "/code/xf/GIA/core/views/osint.py",
      "line": 1382,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_RAG_NO_VALIDATION",
      "title": "RAG Pipeline Without Input Validation",
      "description": "User input passed directly to vector search/embedding without validation.",
      "fix": "Validate and sanitize input before embedding. Limit query length.",
      "cwe": "CWE-20",
      "owasp": "LLM08"
    },
    {
      "file": "/code/xf/GIA/core/views/osint.py",
      "line": 1396,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_RAG_NO_VALIDATION",
      "title": "RAG Pipeline Without Input Validation",
      "description": "User input passed directly to vector search/embedding without validation.",
      "fix": "Validate and sanitize input before embedding. Limit query length.",
      "cwe": "CWE-20",
      "owasp": "LLM08"
    },
    {
      "file": "/code/xf/GIA/core/views/signal.py",
      "line": 189,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_NO_OUTPUT_FILTER",
      "title": "LLM Output Without Filtering",
      "description": "LLM output used directly without filtering. May contain sensitive info or hallucinations.",
      "fix": "Filter LLM output before displaying: remove PII, validate against expected format",
      "cwe": "CWE-200",
      "owasp": "LLM02"
    },
    {
      "file": "/code/xf/GIA/core/views/signal.py",
      "line": 197,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_NO_OUTPUT_FILTER",
      "title": "LLM Output Without Filtering",
      "description": "LLM output used directly without filtering. May contain sensitive info or hallucinations.",
      "fix": "Filter LLM output before displaying: remove PII, validate against expected format",
      "cwe": "CWE-200",
      "owasp": "LLM02"
    },
    {
      "file": "/code/xf/GIA/core/views/signal.py",
      "line": 206,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_NO_OUTPUT_FILTER",
      "title": "LLM Output Without Filtering",
      "description": "LLM output used directly without filtering. May contain sensitive info or hallucinations.",
      "fix": "Filter LLM output before displaying: remove PII, validate against expected format",
      "cwe": "CWE-200",
      "owasp": "LLM02"
    }
  ],
  "depVulns": [],
  "remediationPlan": [
    {
      "priority": 1,
      "severity": "high",
      "category": "config",
      "categoryLabel": "CONFIGURATION",
      "title": "Docker: Running as Root",
      "file": "Dockerfile:25",
      "action": "Add USER nonroot before CMD/ENTRYPOINT",
      "effort": "low"
    },
    {
      "priority": 2,
      "severity": "high",
      "category": "config",
      "categoryLabel": "CONFIGURATION",
      "title": "Docker: Running as Root",
      "file": "Dockerfile:28",
      "action": "Add USER nonroot before CMD/ENTRYPOINT",
      "effort": "low"
    },
    {
      "priority": 3,
      "severity": "high",
      "category": "config",
      "categoryLabel": "CONFIGURATION",
      "title": "Docker: Running as Root",
      "file": "Dockerfile:30",
      "action": "Add USER nonroot before CMD/ENTRYPOINT",
      "effort": "low"
    },
    {
      "priority": 4,
      "severity": "high",
      "category": "config",
      "categoryLabel": "CONFIGURATION",
      "title": "Docker: Running as Root",
      "file": "Dockerfile:31",
      "action": "Add USER nonroot before CMD/ENTRYPOINT",
      "effort": "low"
    },
    {
      "priority": 5,
      "severity": "high",
      "category": "api",
      "categoryLabel": "API SECURITY",
      "title": "API: File Upload Without Type Validation",
      "file": "core/clients/whatsapp.py:3398",
      "action": "Validate file extension and MIME type. Generate random filenames for storage.",
      "effort": "medium"
    },
    {
      "priority": 6,
      "severity": "high",
      "category": "api",
      "categoryLabel": "API SECURITY",
      "title": "API: File Upload Without Type Validation",
      "file": "core/clients/xmpp.py:57",
      "action": "Validate file extension and MIME type. Generate random filenames for storage.",
      "effort": "medium"
    },
    {
      "priority": 7,
      "severity": "high",
      "category": "api",
      "categoryLabel": "API SECURITY",
      "title": "API: File Upload Without Type Validation",
      "file": "core/security/attachments.py:83",
      "action": "Validate file extension and MIME type. Generate random filenames for storage.",
      "effort": "medium"
    },
    {
      "priority": 8,
      "severity": "medium",
      "category": "ssrf",
      "categoryLabel": "SSRF",
      "title": "SSRF: Internal IP Pattern",
      "file": "core/tests/test_attachment_security.py:29",
      "action": "Block private IP ranges in URL validation for user-supplied URLs",
      "effort": "medium"
    },
    {
      "priority": 9,
      "severity": "medium",
      "category": "ssrf",
      "categoryLabel": "SSRF",
      "title": "SSRF: Internal IP Pattern",
      "file": "core/tests/test_attachment_security.py:34",
      "action": "Block private IP ranges in URL validation for user-supplied URLs",
      "effort": "medium"
    },
    {
      "priority": 10,
      "severity": "medium",
      "category": "ssrf",
      "categoryLabel": "SSRF",
      "title": "SSRF: Internal IP Pattern",
      "file": "core/tests/test_attachment_security.py:35",
      "action": "Block private IP ranges in URL validation for user-supplied URLs",
      "effort": "medium"
    },
    {
      "priority": 11,
      "severity": "medium",
      "category": "supply-chain",
      "categoryLabel": "SUPPLY CHAIN",
      "title": "Unpinned Python Dependency: ./vendor/django-crud-mixins",
      "file": "requirements.txt:23",
      "action": "Pin version: ./vendor/django-crud-mixins==x.y.z",
      "effort": "medium"
    },
    {
      "priority": 12,
      "severity": "medium",
      "category": "llm",
      "categoryLabel": "AI/LLM SECURITY",
      "title": "LLM Output Without Filtering",
      "file": "core/clients/signalapi.py:411",
      "action": "Filter LLM output before displaying: remove PII, validate against expected format",
      "effort": "high"
    },
    {
      "priority": 13,
      "severity": "medium",
      "category": "llm",
      "categoryLabel": "AI/LLM SECURITY",
      "title": "RAG Pipeline Without Input Validation",
      "file": "core/views/osint.py:739",
      "action": "Validate and sanitize input before embedding. Limit query length.",
      "effort": "high"
    },
    {
      "priority": 14,
      "severity": "medium",
      "category": "llm",
      "categoryLabel": "AI/LLM SECURITY",
|
||||
"title": "RAG Pipeline Without Input Validation",
|
||||
"file": "core/views/osint.py:744",
|
||||
"action": "Validate and sanitize input before embedding. Limit query length.",
|
||||
"effort": "high"
|
||||
},
|
||||
{
|
||||
"priority": 15,
|
||||
"severity": "medium",
|
||||
"category": "llm",
|
||||
"categoryLabel": "AI/LLM SECURITY",
|
||||
"title": "RAG Pipeline Without Input Validation",
|
||||
"file": "core/views/osint.py:758",
|
||||
"action": "Validate and sanitize input before embedding. Limit query length.",
|
||||
"effort": "high"
|
||||
},
|
||||
{
|
||||
"priority": 16,
|
||||
"severity": "medium",
|
||||
"category": "llm",
|
||||
"categoryLabel": "AI/LLM SECURITY",
|
||||
"title": "RAG Pipeline Without Input Validation",
|
||||
"file": "core/views/osint.py:850",
|
||||
"action": "Validate and sanitize input before embedding. Limit query length.",
|
||||
"effort": "high"
|
||||
},
|
||||
{
|
||||
"priority": 17,
|
||||
"severity": "medium",
|
||||
"category": "llm",
|
||||
"categoryLabel": "AI/LLM SECURITY",
|
||||
"title": "RAG Pipeline Without Input Validation",
|
||||
"file": "core/views/osint.py:1377",
|
||||
"action": "Validate and sanitize input before embedding. Limit query length.",
|
||||
"effort": "high"
|
||||
},
|
||||
{
|
||||
"priority": 18,
|
||||
"severity": "medium",
|
||||
"category": "llm",
|
||||
"categoryLabel": "AI/LLM SECURITY",
|
||||
"title": "RAG Pipeline Without Input Validation",
|
||||
"file": "core/views/osint.py:1382",
|
||||
"action": "Validate and sanitize input before embedding. Limit query length.",
|
||||
"effort": "high"
|
||||
},
|
||||
{
|
||||
"priority": 19,
|
||||
"severity": "medium",
|
||||
"category": "llm",
|
||||
"categoryLabel": "AI/LLM SECURITY",
|
||||
"title": "RAG Pipeline Without Input Validation",
|
||||
"file": "core/views/osint.py:1396",
|
||||
"action": "Validate and sanitize input before embedding. Limit query length.",
|
||||
"effort": "high"
|
||||
},
|
||||
{
|
||||
"priority": 20,
|
||||
"severity": "medium",
|
||||
"category": "llm",
|
||||
"categoryLabel": "AI/LLM SECURITY",
|
||||
"title": "LLM Output Without Filtering",
|
||||
"file": "core/views/signal.py:189",
|
||||
"action": "Filter LLM output before displaying: remove PII, validate against expected format",
|
||||
"effort": "high"
|
||||
},
|
||||
{
|
||||
"priority": 21,
|
||||
"severity": "medium",
|
||||
"category": "llm",
|
||||
"categoryLabel": "AI/LLM SECURITY",
|
||||
"title": "LLM Output Without Filtering",
|
||||
"file": "core/views/signal.py:197",
|
||||
"action": "Filter LLM output before displaying: remove PII, validate against expected format",
|
||||
"effort": "high"
|
||||
},
|
||||
{
|
||||
"priority": 22,
|
||||
"severity": "medium",
|
||||
"category": "llm",
|
||||
"categoryLabel": "AI/LLM SECURITY",
|
||||
"title": "LLM Output Without Filtering",
|
||||
"file": "core/views/signal.py:206",
|
||||
"action": "Filter LLM output before displaying: remove PII, validate against expected format",
|
||||
"effort": "high"
|
||||
}
|
||||
],
|
||||
"recon": {
|
||||
"frameworks": [
|
||||
"django"
|
||||
],
|
||||
"languages": [
|
||||
"python"
|
||||
],
|
||||
"apiRoutes": [
|
||||
"app/urls.py",
|
||||
"core/management/commands/backfill_xmpp_attachment_urls.py"
|
||||
],
|
||||
"authPatterns": [],
|
||||
"databases": [],
|
||||
"cloudProviders": [],
|
||||
"frontendExposure": [],
|
||||
"packageManagers": [
|
||||
"pip"
|
||||
],
|
||||
"cicd": [],
|
||||
"hasDockerfile": true,
|
||||
"hasTerraform": false,
|
||||
"hasKubernetes": false,
|
||||
"envFiles": [],
|
||||
"configFiles": []
|
||||
},
|
||||
"agents": [
|
||||
{
|
||||
"agent": "InjectionTester",
|
||||
"category": "injection",
|
||||
"findingCount": 0,
|
||||
"success": true
|
||||
},
|
||||
{
|
||||
"agent": "AuthBypassAgent",
|
||||
"category": "auth",
|
||||
"findingCount": 0,
|
||||
"success": true
|
||||
},
|
||||
{
|
||||
"agent": "SSRFProber",
|
||||
"category": "ssrf",
|
||||
"findingCount": 3,
|
||||
"success": true
|
||||
},
|
||||
{
|
||||
"agent": "SupplyChainAudit",
|
||||
"category": "supply-chain",
|
||||
"findingCount": 1,
|
||||
"success": true
|
||||
},
|
||||
{
|
||||
"agent": "ConfigAuditor",
|
||||
"category": "config",
|
||||
"findingCount": 4,
|
||||
"success": true
|
||||
},
|
||||
{
|
||||
"agent": "LLMRedTeam",
|
||||
"category": "llm",
|
||||
"findingCount": 11,
|
||||
"success": true
|
||||
},
|
||||
{
|
||||
"agent": "MobileScanner",
|
||||
"category": "mobile",
|
||||
"findingCount": 0,
|
||||
"success": true
|
||||
},
|
||||
{
|
||||
"agent": "GitHistoryScanner",
|
||||
"category": "history",
|
||||
"findingCount": 0,
|
||||
"success": true
|
||||
},
|
||||
{
|
||||
"agent": "CICDScanner",
|
||||
"category": "cicd",
|
||||
"findingCount": 0,
|
||||
"success": true
|
||||
},
|
||||
{
|
||||
"agent": "APIFuzzer",
|
||||
"category": "api",
|
||||
"findingCount": 3,
|
||||
"success": true
|
||||
}
|
||||
]
|
||||
}
|
||||
410
artifacts/audits/3-second-pass-fix.json
Normal file
@@ -0,0 +1,410 @@
{
  "score": 74,
  "grade": "C",
  "gradeLabel": "Fix before shipping",
  "totalFindings": 10,
  "totalDepVulns": 0,
  "categories": {
    "secrets": {
      "label": "Secrets",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "injection": {
      "label": "Code Vulnerabilities",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "deps": {
      "label": "Dependencies",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "auth": {
      "label": "Auth & Access Control",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "config": {
      "label": "Configuration",
      "findingCount": 1,
      "deduction": 8,
      "counts": {
        "critical": 0,
        "high": 1,
        "medium": 0,
        "low": 0
      }
    },
    "supply-chain": {
      "label": "Supply Chain",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "api": {
      "label": "API Security",
      "findingCount": 1,
      "deduction": 8,
      "counts": {
        "critical": 0,
        "high": 1,
        "medium": 0,
        "low": 0
      }
    },
    "llm": {
      "label": "AI/LLM Security",
      "findingCount": 8,
      "deduction": 10,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 8,
        "low": 0
      }
    }
  },
  "findings": [
    {
      "file": "/code/xf/GIA/Dockerfile",
      "line": 26,
      "severity": "high",
      "category": "config",
      "rule": "DOCKER_RUN_AS_ROOT",
      "title": "Docker: Running as Root",
      "description": "No USER instruction found. Container runs as root by default.",
      "fix": "Add USER nonroot before CMD/ENTRYPOINT",
      "cwe": "CWE-250",
      "owasp": "A05:2021"
    },
    {
      "file": "/code/xf/GIA/core/security/attachments.py",
      "line": 113,
      "severity": "high",
      "category": "api",
      "rule": "API_UPLOAD_NO_TYPE_CHECK",
      "title": "API: File Upload Without Type Validation",
      "description": "File upload using original filename without type validation.",
      "fix": "Validate file extension and MIME type. Generate random filenames for storage.",
      "cwe": "CWE-434",
      "owasp": "A04:2021"
    },
    {
      "file": "/code/xf/GIA/core/views/osint.py",
      "line": 775,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_RAG_NO_VALIDATION",
      "title": "RAG Pipeline Without Input Validation",
      "description": "User input passed directly to vector search/embedding without validation.",
      "fix": "Validate and sanitize input before embedding. Limit query length.",
      "cwe": "CWE-20",
      "owasp": "LLM08"
    },
    {
      "file": "/code/xf/GIA/core/views/osint.py",
      "line": 781,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_RAG_NO_VALIDATION",
      "title": "RAG Pipeline Without Input Validation",
      "description": "User input passed directly to vector search/embedding without validation.",
      "fix": "Validate and sanitize input before embedding. Limit query length.",
      "cwe": "CWE-20",
      "owasp": "LLM08"
    },
    {
      "file": "/code/xf/GIA/core/views/osint.py",
      "line": 795,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_RAG_NO_VALIDATION",
      "title": "RAG Pipeline Without Input Validation",
      "description": "User input passed directly to vector search/embedding without validation.",
      "fix": "Validate and sanitize input before embedding. Limit query length.",
      "cwe": "CWE-20",
      "owasp": "LLM08"
    },
    {
      "file": "/code/xf/GIA/core/views/osint.py",
      "line": 1418,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_RAG_NO_VALIDATION",
      "title": "RAG Pipeline Without Input Validation",
      "description": "User input passed directly to vector search/embedding without validation.",
      "fix": "Validate and sanitize input before embedding. Limit query length.",
      "cwe": "CWE-20",
      "owasp": "LLM08"
    },
    {
      "file": "/code/xf/GIA/core/views/osint.py",
      "line": 1424,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_RAG_NO_VALIDATION",
      "title": "RAG Pipeline Without Input Validation",
      "description": "User input passed directly to vector search/embedding without validation.",
      "fix": "Validate and sanitize input before embedding. Limit query length.",
      "cwe": "CWE-20",
      "owasp": "LLM08"
    },
    {
      "file": "/code/xf/GIA/core/views/osint.py",
      "line": 1438,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_RAG_NO_VALIDATION",
      "title": "RAG Pipeline Without Input Validation",
      "description": "User input passed directly to vector search/embedding without validation.",
      "fix": "Validate and sanitize input before embedding. Limit query length.",
      "cwe": "CWE-20",
      "owasp": "LLM08"
    },
    {
      "file": "/code/xf/GIA/core/views/signal.py",
      "line": 202,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_NO_OUTPUT_FILTER",
      "title": "LLM Output Without Filtering",
      "description": "LLM output used directly without filtering. May contain sensitive info or hallucinations.",
      "fix": "Filter LLM output before displaying: remove PII, validate against expected format",
      "cwe": "CWE-200",
      "owasp": "LLM02"
    },
    {
      "file": "/code/xf/GIA/core/views/signal.py",
      "line": 211,
      "severity": "medium",
      "category": "llm",
      "rule": "LLM_NO_OUTPUT_FILTER",
      "title": "LLM Output Without Filtering",
      "description": "LLM output used directly without filtering. May contain sensitive info or hallucinations.",
      "fix": "Filter LLM output before displaying: remove PII, validate against expected format",
      "cwe": "CWE-200",
      "owasp": "LLM02"
    }
  ],
  "depVulns": [],
  "remediationPlan": [
    {
      "priority": 1,
      "severity": "high",
      "category": "config",
      "categoryLabel": "CONFIGURATION",
      "title": "Docker: Running as Root",
      "file": "Dockerfile:26",
      "action": "Add USER nonroot before CMD/ENTRYPOINT",
      "effort": "low"
    },
    {
      "priority": 2,
      "severity": "high",
      "category": "api",
      "categoryLabel": "API SECURITY",
      "title": "API: File Upload Without Type Validation",
      "file": "core/security/attachments.py:113",
      "action": "Validate file extension and MIME type. Generate random filenames for storage.",
      "effort": "medium"
    },
    {
      "priority": 3,
      "severity": "medium",
      "category": "llm",
      "categoryLabel": "AI/LLM SECURITY",
      "title": "RAG Pipeline Without Input Validation",
      "file": "core/views/osint.py:775",
      "action": "Validate and sanitize input before embedding. Limit query length.",
      "effort": "high"
    },
    {
      "priority": 4,
      "severity": "medium",
      "category": "llm",
      "categoryLabel": "AI/LLM SECURITY",
      "title": "RAG Pipeline Without Input Validation",
      "file": "core/views/osint.py:781",
      "action": "Validate and sanitize input before embedding. Limit query length.",
      "effort": "high"
    },
    {
      "priority": 5,
      "severity": "medium",
      "category": "llm",
      "categoryLabel": "AI/LLM SECURITY",
      "title": "RAG Pipeline Without Input Validation",
      "file": "core/views/osint.py:795",
      "action": "Validate and sanitize input before embedding. Limit query length.",
      "effort": "high"
    },
    {
      "priority": 6,
      "severity": "medium",
      "category": "llm",
      "categoryLabel": "AI/LLM SECURITY",
      "title": "RAG Pipeline Without Input Validation",
      "file": "core/views/osint.py:1418",
      "action": "Validate and sanitize input before embedding. Limit query length.",
      "effort": "high"
    },
    {
      "priority": 7,
      "severity": "medium",
      "category": "llm",
      "categoryLabel": "AI/LLM SECURITY",
      "title": "RAG Pipeline Without Input Validation",
      "file": "core/views/osint.py:1424",
      "action": "Validate and sanitize input before embedding. Limit query length.",
      "effort": "high"
    },
    {
      "priority": 8,
      "severity": "medium",
      "category": "llm",
      "categoryLabel": "AI/LLM SECURITY",
      "title": "RAG Pipeline Without Input Validation",
      "file": "core/views/osint.py:1438",
      "action": "Validate and sanitize input before embedding. Limit query length.",
      "effort": "high"
    },
    {
      "priority": 9,
      "severity": "medium",
      "category": "llm",
      "categoryLabel": "AI/LLM SECURITY",
      "title": "LLM Output Without Filtering",
      "file": "core/views/signal.py:202",
      "action": "Filter LLM output before displaying: remove PII, validate against expected format",
      "effort": "high"
    },
    {
      "priority": 10,
      "severity": "medium",
      "category": "llm",
      "categoryLabel": "AI/LLM SECURITY",
      "title": "LLM Output Without Filtering",
      "file": "core/views/signal.py:211",
      "action": "Filter LLM output before displaying: remove PII, validate against expected format",
      "effort": "high"
    }
  ],
  "recon": {
    "frameworks": [
      "django"
    ],
    "languages": [
      "python"
    ],
    "apiRoutes": [
      "app/urls.py",
      "core/management/commands/backfill_xmpp_attachment_urls.py"
    ],
    "authPatterns": [],
    "databases": [],
    "cloudProviders": [],
    "frontendExposure": [],
    "packageManagers": [
      "pip"
    ],
    "cicd": [],
    "hasDockerfile": true,
    "hasTerraform": false,
    "hasKubernetes": false,
    "envFiles": [],
    "configFiles": []
  },
  "agents": [
    {
      "agent": "InjectionTester",
      "category": "injection",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "AuthBypassAgent",
      "category": "auth",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "SSRFProber",
      "category": "ssrf",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "SupplyChainAudit",
      "category": "supply-chain",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "ConfigAuditor",
      "category": "config",
      "findingCount": 1,
      "success": true
    },
    {
      "agent": "LLMRedTeam",
      "category": "llm",
      "findingCount": 8,
      "success": true
    },
    {
      "agent": "MobileScanner",
      "category": "mobile",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "GitHistoryScanner",
      "category": "history",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "CICDScanner",
      "category": "cicd",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "APIFuzzer",
      "category": "api",
      "findingCount": 1,
      "success": true
    }
  ]
}
212
artifacts/audits/4-third-pass-fix.json
Normal file
@@ -0,0 +1,212 @@
{
  "score": 92,
  "grade": "A",
  "gradeLabel": "Ship it!",
  "totalFindings": 1,
  "totalDepVulns": 0,
  "categories": {
    "secrets": {
      "label": "Secrets",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "injection": {
      "label": "Code Vulnerabilities",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "deps": {
      "label": "Dependencies",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "auth": {
      "label": "Auth & Access Control",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "config": {
      "label": "Configuration",
      "findingCount": 1,
      "deduction": 8,
      "counts": {
        "critical": 0,
        "high": 1,
        "medium": 0,
        "low": 0
      }
    },
    "supply-chain": {
      "label": "Supply Chain",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "api": {
      "label": "API Security",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "llm": {
      "label": "AI/LLM Security",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    }
  },
  "findings": [
    {
      "file": "/code/xf/GIA/Dockerfile",
      "line": 26,
      "severity": "high",
      "category": "config",
      "rule": "DOCKER_RUN_AS_ROOT",
      "title": "Docker: Running as Root",
      "description": "No USER instruction found. Container runs as root by default.",
      "fix": "Add USER nonroot before CMD/ENTRYPOINT",
      "cwe": "CWE-250",
      "owasp": "A05:2021"
    }
  ],
  "depVulns": [],
  "remediationPlan": [
    {
      "priority": 1,
      "severity": "high",
      "category": "config",
      "categoryLabel": "CONFIGURATION",
      "title": "Docker: Running as Root",
      "file": "Dockerfile:26",
      "action": "Add USER nonroot before CMD/ENTRYPOINT",
      "effort": "low"
    }
  ],
  "recon": {
    "frameworks": [
      "django"
    ],
    "languages": [
      "python"
    ],
    "apiRoutes": [
      "app/urls.py",
      "core/management/commands/backfill_xmpp_attachment_urls.py"
    ],
    "authPatterns": [],
    "databases": [],
    "cloudProviders": [],
    "frontendExposure": [],
    "packageManagers": [
      "pip"
    ],
    "cicd": [],
    "hasDockerfile": true,
    "hasTerraform": false,
    "hasKubernetes": false,
    "envFiles": [],
    "configFiles": []
  },
  "agents": [
    {
      "agent": "InjectionTester",
      "category": "injection",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "AuthBypassAgent",
      "category": "auth",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "SSRFProber",
      "category": "ssrf",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "SupplyChainAudit",
      "category": "supply-chain",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "ConfigAuditor",
      "category": "config",
      "findingCount": 1,
      "success": true
    },
    {
      "agent": "LLMRedTeam",
      "category": "llm",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "MobileScanner",
      "category": "mobile",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "GitHistoryScanner",
      "category": "history",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "CICDScanner",
      "category": "cicd",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "APIFuzzer",
      "category": "api",
      "findingCount": 0,
      "success": true
    }
  ]
}
234
artifacts/audits/5-final-pass-fix.json
Normal file
@@ -0,0 +1,234 @@
{
  "score": 90,
  "grade": "A",
  "gradeLabel": "Ship it!",
  "totalFindings": 2,
  "totalDepVulns": 0,
  "categories": {
    "secrets": {
      "label": "Secrets",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "injection": {
      "label": "Code Vulnerabilities",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "deps": {
      "label": "Dependencies",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "auth": {
      "label": "Auth & Access Control",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "config": {
      "label": "Configuration",
      "findingCount": 2,
      "deduction": 10,
      "counts": {
        "critical": 0,
        "high": 2,
        "medium": 0,
        "low": 0
      }
    },
    "supply-chain": {
      "label": "Supply Chain",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "api": {
      "label": "API Security",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    },
    "llm": {
      "label": "AI/LLM Security",
      "findingCount": 0,
      "deduction": 0,
      "counts": {
        "critical": 0,
        "high": 0,
        "medium": 0,
        "low": 0
      }
    }
  },
  "findings": [
    {
      "file": "/code/xf/GIA/Dockerfile",
      "line": 26,
      "severity": "high",
      "category": "config",
      "rule": "DOCKER_RUN_AS_ROOT",
      "title": "Docker: Running as Root",
      "description": "No USER instruction found. Container runs as root by default.",
      "fix": "Add USER nonroot before CMD/ENTRYPOINT",
      "cwe": "CWE-250",
      "owasp": "A05:2021"
    },
    {
      "file": "/code/xf/GIA/Dockerfile",
      "line": 1,
      "severity": "high",
      "category": "config",
      "rule": "DOCKER_NO_USER",
      "title": "Dockerfile: No Non-Root USER",
      "description": "No USER instruction found. Container runs as root, enabling escape attacks.",
      "fix": "Add before CMD: RUN addgroup -S app && adduser -S app -G app\nUSER app",
      "cwe": null,
      "owasp": null
    }
  ],
  "depVulns": [],
  "remediationPlan": [
    {
      "priority": 1,
      "severity": "high",
      "category": "config",
      "categoryLabel": "CONFIGURATION",
      "title": "Docker: Running as Root",
      "file": "Dockerfile:26",
      "action": "Add USER nonroot before CMD/ENTRYPOINT",
      "effort": "low"
    },
    {
      "priority": 2,
      "severity": "high",
      "category": "config",
      "categoryLabel": "CONFIGURATION",
      "title": "Dockerfile: No Non-Root USER",
      "file": "Dockerfile:1",
      "action": "Add before CMD: RUN addgroup -S app && adduser -S app -G app\nUSER app",
      "effort": "low"
    }
  ],
  "recon": {
    "frameworks": [
      "django"
    ],
    "languages": [
      "python"
    ],
    "apiRoutes": [
      "app/urls.py",
      "core/management/commands/backfill_xmpp_attachment_urls.py"
    ],
    "authPatterns": [],
    "databases": [],
    "cloudProviders": [],
    "frontendExposure": [],
    "packageManagers": [
      "pip"
    ],
    "cicd": [],
    "hasDockerfile": true,
    "hasTerraform": false,
    "hasKubernetes": false,
    "envFiles": [],
    "configFiles": []
  },
  "agents": [
    {
      "agent": "InjectionTester",
      "category": "injection",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "AuthBypassAgent",
      "category": "auth",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "SSRFProber",
      "category": "ssrf",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "SupplyChainAudit",
      "category": "supply-chain",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "ConfigAuditor",
      "category": "config",
      "findingCount": 2,
      "success": true
    },
    {
      "agent": "LLMRedTeam",
      "category": "llm",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "MobileScanner",
      "category": "mobile",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "GitHistoryScanner",
      "category": "history",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "CICDScanner",
      "category": "cicd",
      "findingCount": 0,
      "success": true
    },
    {
      "agent": "APIFuzzer",
      "category": "api",
      "findingCount": 0,
      "success": true
    }
  ]
}
87
artifacts/frontend_asset_playbook.md
Normal file
@@ -0,0 +1,87 @@
# Frontend Asset Vendoring Playbook

This is the repeatable process used in GIA to self-host third-party frontend assets, pin versions, record provenance, and keep template SRI hashes maintainable.

## Goals

- Avoid runtime CDN dependencies for core UI assets.
- Pin exact upstream versions and package integrity in source control.
- Record provenance, license, and SRI for each third-party asset.
- Keep a simple manifest-driven workflow that can be copied into other projects.

## Files

- `tools/frontend_assets/package.json`
  - Exact npm dependency pins for vendored frontend libraries.
- `tools/frontend_assets/package-lock.json`
  - npm lockfile integrity and tarball resolution source of truth.
- `tools/frontend_assets/asset-manifest.json`
  - Maps upstream package files and URL bundles into local `core/static/` targets.
- `scripts/vendor_frontend_assets.py`
  - Copies npm assets, downloads non-npm bundles, computes local SHA-256 and SRI SHA-512, and emits inventory reports.
- `artifacts/frontend_libraries.md`
  - Human-readable inventory with versions, licenses, official URLs, tarballs, upstream integrity, and local SHA/SRI.
- `artifacts/frontend_libraries.json`
  - Machine-readable inventory for template updates and audits.

## Standard Workflow

1. Add or update exact versions in `tools/frontend_assets/package.json`.
2. Confirm each package is the official distribution, not a typosquat or community fork.
3. Update `tools/frontend_assets/asset-manifest.json` with the exact files to vendor.
4. Run `npm install --ignore-scripts` in `tools/frontend_assets/`.
5. Run `./genv/bin/python scripts/vendor_frontend_assets.py`.
6. Copy the emitted `sri_sha512` values into template tags in `core/templates/base.html`.
7. Restart the stack so `collectstatic` republishes the new files.
8. Verify in the browser that assets load from local static URLs with no CDN fallbacks.
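The `sri_sha512` values emitted in step 5 follow the standard Subresource Integrity format: the algorithm name, a dash, then the base64-encoded digest of the file bytes. A minimal sketch of the hashing step (the function names here are illustrative, not the actual API of `scripts/vendor_frontend_assets.py`):

```python
import base64
import hashlib


def sri_sha512(data: bytes) -> str:
    """Return an SRI value ("sha512-<base64 digest>") for an asset's raw bytes."""
    digest = hashlib.sha512(data).digest()
    return "sha512-" + base64.b64encode(digest).decode("ascii")


def local_sha256(data: bytes) -> str:
    """Hex SHA-256, as recorded alongside each vendored file in the inventory."""
    return hashlib.sha256(data).hexdigest()


asset = b'console.log("hello");\n'
print(sri_sha512(asset))   # paste this into the template's integrity="" attribute
print(local_sha256(asset))
```

The SRI string goes into the `integrity` attribute of the `<script>`/`<link>` tag; browsers refuse to execute the asset if the served bytes no longer match.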
|
||||
## Provenance Rules

- Prefer official project docs or official extension listings before trusting npm search results.
- For Bulma extensions, treat `https://bulma.io/extensions/` as the discovery source and verify that the linked package/repo matches the npm package name.
- Confirm the npm metadata matches the expected project:
  - homepage
  - repository
  - maintainers
  - license
  - publish recency
- Keep exact versions in `package.json`; do not use ranges.
- Use `package-lock.json` as the immutable record of tarball URLs and upstream integrity.
- If a library is not safely available from npm, vendor it from the official upstream URL as a `url_bundle`.
- Do not silently substitute a “free” or differently-scoped package if the templates rely on another asset family.

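Most of the metadata check can be driven from the npm CLI. Illustrative commands, using `bulma-tooltip` as the example package (they need network access to the registry):

```shell
# Compare homepage/repository/maintainers against the official project site
# before pinning; a mismatch suggests a typosquat or community fork.
npm view bulma-tooltip homepage repository.url license maintainers

# Publish history, to judge recency and spot abandoned packages.
npm view bulma-tooltip time --json
```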
## GIA Decisions

- Font Awesome:
  - GIA templates use icon classes that are not safely replaceable with `@fortawesome/fontawesome-free`.
  - The existing Font Awesome Pro `site-assets` v6.1.1 bundle is self-hosted under `core/static/vendor/fontawesome/` instead.
- jQuery:
  - The latest npm release is 4.x, but GIA stays on `3.7.1` to avoid breaking older plugins.
- Bulma extensions:
  - `bulma-calendar`, `bulma-tagsinput`, `bulma-switch`, `bulma-slider`, and `bulma-tooltip` were matched against Bulma's official extensions page before pinning.
  - `bulma-calendar` and `bulma-tooltip` are deprecated on npm, but Bulma still points to the Wikiki ecosystem for these extensions, so they were kept and documented instead of being replaced ad hoc.

## Theme Strategy

- `core/static/css/gia-theme.css` is now the project override layer instead of an inline `<style>` block in `base.html`.
- This is the low-friction step that works with the current stack immediately.
- If we want a true Bulma theme build later, add `sass` to `tools/frontend_assets`, create a `gia-bulma.scss` entrypoint that sets Bulma theme variables before importing Bulma, and vendor the compiled output through the same manifest process.

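Such an entrypoint might look roughly like this (a sketch only: it assumes Bulma 1.x's `@use ... with` Sass API, and the variable values are hypothetical placeholders):

```scss
// gia-bulma.scss -- hypothetical theme entrypoint.
// Set Bulma theme variables first, then pull in all of Bulma.
@use "bulma/sass" with (
  $family-primary: '"Inter", sans-serif',
  $primary: #2e63b8
);
```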
## Commands

```bash
cd /code/xf/GIA/tools/frontend_assets
npm install --ignore-scripts

cd /code/xf/GIA
./genv/bin/python scripts/vendor_frontend_assets.py
bash ./scripts/quadlet/manage.sh restart
./genv/bin/python manage.py check
```

## Verification Checklist

- `core/templates/base.html` only references local static URLs for vendored libraries.
- Each included stylesheet/script in `base.html` has an `integrity="sha512-..."` attribute.
- Browser network panel shows local asset hits and no unexpected CDN requests.
- `artifacts/frontend_libraries.md` reflects the exact deployed asset set.
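A vendored include in `base.html` then looks roughly like this (illustrative: the tag shape is an assumption, but the hash is the real `sri_sha512` recorded for `bulma.min.css` in the inventory below):

```html
<link rel="stylesheet"
      href="{% static 'css/bulma.min.css' %}"
      integrity="sha512-yh2RE0wZCVZeysGiqTwDTO/dKelCbS9bP2L94UvOFtl/FKXcNAje3Y2oBg/ZMZ3LS1sicYk4dYVGtDex75fvvA==">
```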
469 artifacts/frontend_libraries.json Normal file
@@ -0,0 +1,469 @@
{
  "entries": [
    {
      "id": "bulma_css",
      "kind": "npm_file",
      "package": "bulma",
      "version": "1.0.4",
      "license": "MIT",
      "homepage": "https://bulma.io",
      "official_url": "https://bulma.io",
      "purpose": "Bulma core stylesheet",
      "notes": "",
      "resolved": "https://registry.npmjs.org/bulma/-/bulma-1.0.4.tgz",
      "dist_integrity": "sha512-Ffb6YGXDiZYX3cqvSbHWqQ8+LkX6tVoTcZuVB3lm93sbAVXlO0D6QlOTMnV6g18gILpAXqkG2z9hf9z4hCjz2g==",
      "source_path": "css/bulma.min.css",
      "targets": [
        {
          "path": "core/static/css/bulma.min.css",
          "sha256": "67fa26df1ca9e95d8f2adc7c04fa1b15fa3d24257470ebc10cc68b9aab914bee",
          "sri_sha512": "sha512-yh2RE0wZCVZeysGiqTwDTO/dKelCbS9bP2L94UvOFtl/FKXcNAje3Y2oBg/ZMZ3LS1sicYk4dYVGtDex75fvvA=="
        }
      ]
    },
    {
      "id": "bulma_tooltip_css",
      "kind": "npm_file",
      "package": "bulma-tooltip",
      "version": "3.0.2",
      "license": "MIT",
      "homepage": "https://github.com/Wikiki/bulma-tooltip#readme",
      "official_url": "https://wikiki.github.io/elements/tooltip",
      "purpose": "Official Bulma tooltip extension from Bulma's extensions page",
      "notes": "",
      "resolved": "https://registry.npmjs.org/bulma-tooltip/-/bulma-tooltip-3.0.2.tgz",
      "dist_integrity": "sha512-CsT3APjhlZScskFg38n8HYL8oYNUHQtcu4sz6ERarxkUpBRbk9v0h/5KAvXeKapVSn2dp9l7bOGit5SECP8EWQ==",
      "source_path": "dist/css/bulma-tooltip.min.css",
      "targets": [
        {
          "path": "core/static/css/bulma-tooltip.min.css",
          "sha256": "5c79d12a40b3532aaec159faa0b85fd3d500e192467761b71e0bda0fd04f3076",
          "sri_sha512": "sha512-SNDNIUvSYhnqDV9FFXaH/e0xZ6NzkG4Qm5dafLLf0PCMkzICKaOmMTgI3y2t2jZK+hAtP6A7UBcFqjWMhsujIg=="
        }
      ]
    },
    {
      "id": "bulma_slider_css",
      "kind": "npm_file",
      "package": "bulma-slider",
      "version": "2.0.5",
      "license": "MIT",
      "homepage": "https://github.com/Wikiki/bulma-slider#readme",
      "official_url": "https://wikiki.github.io/form/slider",
      "purpose": "Official Bulma slider extension from Bulma's extensions page",
      "notes": "",
      "resolved": "https://registry.npmjs.org/bulma-slider/-/bulma-slider-2.0.5.tgz",
      "dist_integrity": "sha512-6woD/1E7q1o5bfEaQjNqpWZaCItC1oHe9bN15WYB2ELqz2gDaJYZkf+rlozGpAYOXQGDQGCCv3y+QuKjx6sQuw==",
      "source_path": "dist/css/bulma-slider.min.css",
      "targets": [
        {
          "path": "core/static/css/bulma-slider.min.css",
          "sha256": "f9d952627d388b8ba267e1388d6923274cf9e62e758d459c5a045f3933e9dc8a",
          "sri_sha512": "sha512-9o5SkCRCA9thttRH3Gb5QXLxKdRiuRLdO6ToEPwRHGLXjrhTZwFj0rEHjrCcJvDN9/aNaWMpGOIEA2vZsHmEqw=="
        }
      ]
    },
    {
      "id": "bulma_slider_js",
      "kind": "npm_file",
      "package": "bulma-slider",
      "version": "2.0.5",
      "license": "MIT",
      "homepage": "https://github.com/Wikiki/bulma-slider#readme",
      "official_url": "https://wikiki.github.io/form/slider",
      "purpose": "Official Bulma slider extension runtime",
      "notes": "",
      "resolved": "https://registry.npmjs.org/bulma-slider/-/bulma-slider-2.0.5.tgz",
      "dist_integrity": "sha512-6woD/1E7q1o5bfEaQjNqpWZaCItC1oHe9bN15WYB2ELqz2gDaJYZkf+rlozGpAYOXQGDQGCCv3y+QuKjx6sQuw==",
      "source_path": "dist/js/bulma-slider.min.js",
      "targets": [
        {
          "path": "core/static/js/bulma-slider.min.js",
          "sha256": "db68ebe154a25597913c5635f31500fe7a32e5a205fb9a98c9642d0c2de47d9e",
          "sri_sha512": "sha512-WLKXHCsMXTSIPsmQShJRE6K4IzwvNkhwxr/Oo8N3z+kzjhGleHibspmWLTawNMdl2z9E23XK20+yvUTDZ+zeNQ=="
        }
      ]
    },
    {
      "id": "bulma_calendar_css",
      "kind": "npm_file",
      "package": "bulma-calendar",
      "version": "7.1.1",
      "license": "MIT",
      "homepage": "https://doc.mh-s.de/bulma-calendar",
      "official_url": "https://wikiki.github.io/components/calendar",
      "purpose": "Official Bulma calendar extension from Bulma's extensions page",
      "notes": "",
      "resolved": "https://registry.npmjs.org/bulma-calendar/-/bulma-calendar-7.1.1.tgz",
      "dist_integrity": "sha512-E08i25KOfqMKBndgDF3y3eoQ0dUzVkgV9R53EDRM65GQUQKLzt8gcXVJYs3mYnpq6L3DiLuUt47Fl09tSv9OpA==",
      "source_path": "src/demo/assets/css/bulma-calendar.min.css",
      "targets": [
        {
          "path": "core/static/css/bulma-calendar.min.css",
          "sha256": "d18b488ca52584bcd6ea3fb84bf06380e47a3cd18660a235617da017d13ab269",
          "sri_sha512": "sha512-IOnJQkgQpezPDPTJcRiWD7YVI3sF2RYzYDl4isbDT2geSaEHRQ615UN/8GhJbSkvqkKRZu8SBCQ7XwKMqsqLFQ=="
        }
      ]
    },
    {
      "id": "bulma_calendar_js",
      "kind": "npm_file",
      "package": "bulma-calendar",
      "version": "7.1.1",
      "license": "MIT",
      "homepage": "https://doc.mh-s.de/bulma-calendar",
      "official_url": "https://wikiki.github.io/components/calendar",
      "purpose": "Official Bulma calendar extension runtime",
      "notes": "",
      "resolved": "https://registry.npmjs.org/bulma-calendar/-/bulma-calendar-7.1.1.tgz",
      "dist_integrity": "sha512-E08i25KOfqMKBndgDF3y3eoQ0dUzVkgV9R53EDRM65GQUQKLzt8gcXVJYs3mYnpq6L3DiLuUt47Fl09tSv9OpA==",
      "source_path": "src/demo/assets/js/bulma-calendar.min.js",
      "targets": [
        {
          "path": "core/static/js/bulma-calendar.min.js",
          "sha256": "58160c87c4d17f9d98ec366fe019492acde50efbc0297af7045547952b306680",
          "sri_sha512": "sha512-kkEtEtypXzruevjkoxhyEkqkZBtlhK7s8zt7IV2yPabgBwy5xbKL9uWeCS37ldS9AaNTSnveWTu4ivUvGMJUWA=="
        }
      ]
    },
    {
      "id": "bulma_tagsinput_css",
      "kind": "npm_file",
      "package": "bulma-tagsinput",
      "version": "2.0.0",
      "license": "MIT",
      "homepage": "https://github.com/Wikiki/bulma-tagsinput#readme",
      "official_url": "https://wikiki.github.io/form/tagsinput",
      "purpose": "Official Bulma tagsinput extension from Bulma's extensions page",
      "notes": "",
      "resolved": "https://registry.npmjs.org/bulma-tagsinput/-/bulma-tagsinput-2.0.0.tgz",
      "dist_integrity": "sha512-BFvd0oaxgeWHOEh3d4cgETy5vpSSjRRBA9w+8TWEuhjFQg38Rb+3vjDCavL+udpdjf+dRV0SK5T4kYCXTOrz5A==",
      "source_path": "dist/css/bulma-tagsinput.min.css",
      "targets": [
        {
          "path": "core/static/css/bulma-tagsinput.min.css",
          "sha256": "8d1de24619c05ddf9045638b52059ab492d4887ce74119eed545d66af859da89",
          "sri_sha512": "sha512-NWTkcDRubZ3pyXbZZLQBILuVsRFs8c6QGgnfe4dm5/d6yp50U+xdoCDLIcSo51fFy/GXH0O2Oed1Z1sF1faxDA=="
        }
      ]
    },
    {
      "id": "bulma_tagsinput_js",
      "kind": "npm_file",
      "package": "bulma-tagsinput",
      "version": "2.0.0",
      "license": "MIT",
      "homepage": "https://github.com/Wikiki/bulma-tagsinput#readme",
      "official_url": "https://wikiki.github.io/form/tagsinput",
      "purpose": "Official Bulma tagsinput extension runtime",
      "notes": "",
      "resolved": "https://registry.npmjs.org/bulma-tagsinput/-/bulma-tagsinput-2.0.0.tgz",
      "dist_integrity": "sha512-BFvd0oaxgeWHOEh3d4cgETy5vpSSjRRBA9w+8TWEuhjFQg38Rb+3vjDCavL+udpdjf+dRV0SK5T4kYCXTOrz5A==",
      "source_path": "dist/js/bulma-tagsinput.min.js",
      "targets": [
        {
          "path": "core/static/js/bulma-tagsinput.min.js",
          "sha256": "b355aa94ec519e374d7edf569e3dbde8bbe30ff3a193cb96f2930ee7815939d6",
          "sri_sha512": "sha512-Je6J++MjmmpxF30JCmRwM2KiK3uWQBQtqiNCjwzEMJKExLaa0BqerlYNa/fJAl5Rra4hMgRZF2fzg+V2vjE4Kw=="
        }
      ]
    },
    {
      "id": "bulma_switch_css",
      "kind": "npm_file",
      "package": "bulma-switch",
      "version": "2.0.4",
      "license": "MIT",
      "homepage": "https://github.com/Wikiki/bulma-switch#readme",
      "official_url": "https://wikiki.github.io/form/switch",
      "purpose": "Official Bulma switch extension from Bulma's extensions page",
      "notes": "",
      "resolved": "https://registry.npmjs.org/bulma-switch/-/bulma-switch-2.0.4.tgz",
      "dist_integrity": "sha512-kMu4H0Pr0VjvfsnT6viRDCgptUq0Rvy7y7PX6q+IHg1xUynsjszPjhAdal5ysAlCG5HNO+5YXxeiu92qYGQolw==",
      "source_path": "dist/css/bulma-switch.min.css",
      "targets": [
        {
          "path": "core/static/css/bulma-switch.min.css",
          "sha256": "f0460ddebdd95425a50590908503a170f5ff08b28bd53573c71791fc7cd1e6f5",
          "sri_sha512": "sha512-zjrHYubQoNgDVqVKTyGjKcvIeQlduZTvXCvcBwQ0iqJYKLKiz9cuFAN7e98zfKqCTpI/EgFRBRcTwJw20yAFuw=="
        }
      ]
    },
    {
      "id": "gridstack_css",
      "kind": "npm_file",
      "package": "gridstack",
      "version": "12.4.2",
      "license": "MIT",
      "homepage": "http://gridstackjs.com/",
      "official_url": "https://gridstackjs.com/",
      "purpose": "GridStack stylesheet",
      "notes": "",
      "resolved": "https://registry.npmjs.org/gridstack/-/gridstack-12.4.2.tgz",
      "dist_integrity": "sha512-aXbJrQpi3LwpYXYOr4UriPM5uc/dPcjK01SdOE5PDpx2vi8tnLhU7yBg/1i4T59UhNkG/RBfabdFUObuN+gMnw==",
      "source_path": "dist/gridstack.min.css",
      "targets": [
        {
          "path": "core/static/css/gridstack.min.css",
          "sha256": "55e9d4ea6d8c6f8f1ea8a449b8af18d8571487c0afc6b433cccf877047cb8457",
          "sri_sha512": "sha512-ttQfsDTO64bamkJHeLDf0kzMP1NKfkootudPWS2V8Pwy+9z1wexSYjIT6/HXGg/bmtD+DRwsUnQoYEB0yePjbw=="
        }
      ]
    },
    {
      "id": "gridstack_js",
      "kind": "npm_file",
      "package": "gridstack",
      "version": "12.4.2",
      "license": "MIT",
      "homepage": "http://gridstackjs.com/",
      "official_url": "https://gridstackjs.com/",
      "purpose": "GridStack bundle used by the dashboard",
      "notes": "",
      "resolved": "https://registry.npmjs.org/gridstack/-/gridstack-12.4.2.tgz",
      "dist_integrity": "sha512-aXbJrQpi3LwpYXYOr4UriPM5uc/dPcjK01SdOE5PDpx2vi8tnLhU7yBg/1i4T59UhNkG/RBfabdFUObuN+gMnw==",
      "source_path": "dist/gridstack-all.js",
      "targets": [
        {
          "path": "core/static/js/gridstack-all.js",
          "sha256": "e055ca4eb8bdc65f14f38c966ec31960c4c8dc6dc42e91f421bd629808185518",
          "sri_sha512": "sha512-djBPxwvBhDep1SvOhliatweHMORhVO3HabrfBjaW6nYsa7UcJYHty31x42m4HBSJXcJSQdoEgRPLVYGGIuIaDQ=="
        }
      ]
    },
    {
      "id": "jquery_js",
      "kind": "npm_file",
      "package": "jquery",
      "version": "3.7.1",
      "license": "MIT",
      "homepage": "https://jquery.com",
      "official_url": "https://jquery.com",
      "purpose": "Latest jQuery 3.x release for compatibility with legacy plugins",
      "notes": "The latest npm release is jQuery 4.x, but this project still vendors 3.7.1 to avoid breaking older plugins.",
      "resolved": "https://registry.npmjs.org/jquery/-/jquery-3.7.1.tgz",
      "dist_integrity": "sha512-m4avr8yL8kmFN8psrbFFFmB/If14iN5o9nw/NgnnM+kybDJpRsAynV2BsfpTYrTRysYUdADVD7CkUUizgkpLfg==",
      "source_path": "dist/jquery.min.js",
      "targets": [
        {
          "path": "core/static/js/jquery.min.js",
          "sha256": "fc9a93dd241f6b045cbff0481cf4e1901becd0e12fb45166a8f17f95823f0b1a",
          "sri_sha512": "sha512-v2CJ7UaYy4JwqLDIrZUI/4hqeoQieOmAZNXBeQyjo21dadnwR+8ZaIJVT8EE2iyI61OV8e6M8PP2/4hpQINQ/g=="
        }
      ]
    },
    {
      "id": "htmx_js",
      "kind": "npm_file",
      "package": "htmx.org",
      "version": "2.0.8",
      "license": "0BSD",
      "homepage": "https://htmx.org/",
      "official_url": "https://htmx.org/",
      "purpose": "htmx runtime",
      "notes": "",
      "resolved": "https://registry.npmjs.org/htmx.org/-/htmx.org-2.0.8.tgz",
      "dist_integrity": "sha512-fm297iru0iWsNJlBrjvtN7V9zjaxd+69Oqjh4F/Vq9Wwi2kFisLcrLCiv5oBX0KLfOX/zG8AUo9ROMU5XUB44Q==",
      "source_path": "dist/htmx.min.js",
      "targets": [
        {
          "path": "core/static/js/htmx.min.js",
          "sha256": "22283ef68cb7545914f0a88a1bdedc7256a703d1d580c1d255217d0a50d31313",
          "sri_sha512": "sha512-CGXFnDNv5q48ciFeIyWFcfZhqYW0sSBiPO+HZDO3XLM+p8xjhezz5CCxtkXVDKfCbvF+iUhel7xoeSp19o7x7g=="
        }
      ]
    },
    {
      "id": "hyperscript_js",
      "kind": "npm_file",
      "package": "hyperscript.org",
      "version": "0.9.14",
      "license": "BSD 2-Clause",
      "homepage": "https://hyperscript.org/",
      "official_url": "https://hyperscript.org/",
      "purpose": "_hyperscript runtime",
      "notes": "",
      "resolved": "https://registry.npmjs.org/hyperscript.org/-/hyperscript.org-0.9.14.tgz",
      "dist_integrity": "sha512-ugmojsQQUMmXcnwaXYiYf8L3GbeANy/m59EmE/0Z6C5eQ52fOuSrvFkuEIejG9BdpbYB4iTtoYGqV99eYqDVMA==",
      "source_path": "dist/_hyperscript.min.js",
      "targets": [
        {
          "path": "core/static/js/hyperscript.min.js",
          "sha256": "3e834a3ffc0334fee54ecff4e37a6ae951cd83e6daa96651ca7cfd8f751ad4d2",
          "sri_sha512": "sha512-l43sZzpnAddmYhJyfPrgv46XhJvA95gsA28/+eW4XZLSekQ8wlP68i9f22KGkRjY0HNiZrLc5MXGo4z/tM2QNA=="
        }
      ]
    },
    {
      "id": "magnet_js",
      "kind": "npm_file",
      "package": "@lf2com/magnet.js",
      "version": "2.0.1",
      "license": "MIT",
      "homepage": "https://github.com/lf2com/magnet.js",
      "official_url": "https://github.com/lf2com/magnet.js",
      "purpose": "Magnet.js drag attraction component",
      "notes": "",
      "resolved": "https://registry.npmjs.org/@lf2com/magnet.js/-/magnet.js-2.0.1.tgz",
      "dist_integrity": "sha512-MDgv1s0aNOuftuhY9c9Ve6Yadkmn7G+Ww91cVciyHHMhPPdxTxX3XUSJXFYD3VraGFzcnI4uilik9/I76AsJEg==",
      "source_path": "dist/magnet.min.js",
      "targets": [
        {
          "path": "core/static/js/magnet.min.js",
          "sha256": "05ff8858b5fb7b3ad2a618212571162e2108f580b08527716f4e63c648dcccb1",
          "sri_sha512": "sha512-aoQ3V4iCM8zTcdMDSUTRG1K9wqZzmDSisuaCLQexk9DdFy92oWvTUoAfCVLnGzzJClst8PmtasZg219REwyNkw=="
        }
      ]
    },
    {
      "id": "fontawesome_bundle",
      "kind": "url_bundle",
      "version": "6.1.1",
      "license": "Font Awesome Pro commercial terms via site-assets.fontawesome.com",
      "homepage": "https://fontawesome.com",
      "official_url": "https://fontawesome.com",
      "purpose": "Existing Font Awesome asset family, self-hosted locally to preserve the currently used icon set",
      "notes": "",
      "entry_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/all.css",
      "downloaded_files": [
        {
          "relative_path": "css/all.css",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/all.css"
        },
        {
          "relative_path": "css/../webfonts/fa-brands-400.woff2",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-brands-400.woff2"
        },
        {
          "relative_path": "css/../webfonts/fa-brands-400.ttf",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-brands-400.ttf"
        },
        {
          "relative_path": "css/../webfonts/fa-duotone-900.woff2",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-duotone-900.woff2"
        },
        {
          "relative_path": "css/../webfonts/fa-duotone-900.ttf",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-duotone-900.ttf"
        },
        {
          "relative_path": "css/../webfonts/fa-light-300.woff2",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-light-300.woff2"
        },
        {
          "relative_path": "css/../webfonts/fa-light-300.ttf",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-light-300.ttf"
        },
        {
          "relative_path": "css/../webfonts/fa-regular-400.woff2",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-regular-400.woff2"
        },
        {
          "relative_path": "css/../webfonts/fa-regular-400.ttf",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-regular-400.ttf"
        },
        {
          "relative_path": "css/../webfonts/fa-solid-900.woff2",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-solid-900.woff2"
        },
        {
          "relative_path": "css/../webfonts/fa-solid-900.ttf",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-solid-900.ttf"
        },
        {
          "relative_path": "css/../webfonts/fa-thin-100.woff2",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-thin-100.woff2"
        },
        {
          "relative_path": "css/../webfonts/fa-thin-100.ttf",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-thin-100.ttf"
        },
        {
          "relative_path": "css/../webfonts/fa-v4compatibility.woff2",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-v4compatibility.woff2"
        },
        {
          "relative_path": "css/../webfonts/fa-v4compatibility.ttf",
          "download_url": "https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-v4compatibility.ttf"
        }
      ],
      "targets": [
        {
          "path": "core/static/vendor/fontawesome/css/all.css",
          "sha256": "a35f901d01118e5649091bd03ac5784a7db52e111fb3806524c412f3d1dcfc5d",
          "sri_sha512": "sha512-UKBBxJ5N3/MYiSsYTlEsARsp4vELKVRIklED4Mb6wpuVFOgy5Blt+sXUdz1TDReqWsm64xxBA2QoBJRCxI0x5Q=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-brands-400.woff2",
          "sha256": "3701cbff3acccd80b1f2eede4311050514f7a64c2039eb77a77368fcd6e3de28",
          "sri_sha512": "sha512-50+1yWldN03if6k/4jyncfSZyT44ev20Q7jmIGEiKG7v2qeB1nBOcVF9HD0mjSvAxmpS3+RDzoPfqbB4GfCkJg=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-brands-400.ttf",
          "sha256": "06f6dc20f535066f0e069105329ad69d526d7a16215e7ff8af3dbc2cd0894186",
          "sri_sha512": "sha512-klNly6qz+BwYPusuoZ0BIE01WW0CbrJszi5GrgZ6yFXQCrxAQRus6vMMATIxwPNYOYXu+UOd2KbRAsenhIkAGQ=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-duotone-900.woff2",
          "sha256": "6f28dce91f45bc4687582137bb5d82d9771efc774e3b2b83c30018469d191ad8",
          "sri_sha512": "sha512-Sz0g1G5m7symrIy2LfJOdASOrJQUo+IDIMpLKhFKm4RM7MLjV6wqlVGtDf2lEqE/tSzU5LQTtsc3djfZwDXCFQ=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-duotone-900.ttf",
          "sha256": "6687687c70d502ff006750978f55d6d5108c364eec78db9b845adcfc838b78a4",
          "sri_sha512": "sha512-1301N0n8/h7Kjp6E7LbEe7LWCyDPkfvYn9COlQ6ISbQTP/uPThs8NLhSr+nQskAPzjDoMqfojg0EyNfsD84s/w=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-light-300.woff2",
          "sha256": "515954fe1dc163277d36b51f79fe56265f6b6cf79f99e307bbf6e52b477b9c87",
          "sri_sha512": "sha512-xdrnQ7rYHIz80KJgGizTu80jCcWF4tGt/inACAoWT3dl3BleiIjq/g90RA42wJNcLpz3n8JAM1Z0ayUGROP5RQ=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-light-300.ttf",
          "sha256": "7c41f14e1e8bbe7780049512c631b5301936a985dc6bbadd74a0cbc05549769c",
          "sri_sha512": "sha512-i3tSu80Wfsx/12vQQziHhYQLjBNYLGQZfeXfyurZhCBibO+lq6kX1zARiktltXxBszu513BIKbnfzLTRYamoMQ=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-regular-400.woff2",
          "sha256": "121b176974226dbc9b1ab227becb657d40b88d2bb7010a746c2360c31d7c373e",
          "sri_sha512": "sha512-qioT43fXB5q4Bbpn8sPQE9OIZLjKD0c0lVmpm6KmT8k34LM6gkRcOOMi1BOl2lohFG/7p9tzKfTP5G563BQq1g=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-regular-400.ttf",
          "sha256": "daa07950214eae5603ebb5a582a694da7b31a4c93f3bf38e9f616122860d83a5",
          "sri_sha512": "sha512-1Ttr+eWt+Z7Na3gRZklbYEnBYGbQUATusZ4oeYLamI4iIjN8ILNV/upwk/h8qIJ2lf/Mukkj5zBVhUdXWJWLwQ=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-solid-900.woff2",
          "sha256": "f350c708b5e7748a452b4b98600fa49127166d995686e260ccafb58d51a4ea62",
          "sri_sha512": "sha512-Ph1xTLhfMycYSW+wUN8oL3Ggl56nGIS95EHiKWggcL/GbMNjPdib1Hreb1D4COlMxdiGCkk43nspQnpDuTjgQg=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-solid-900.ttf",
          "sha256": "bbab2beb558f176d68d3e6002e4ea608633f7e6347dc6245dc67f8ad1c9ca18a",
          "sri_sha512": "sha512-c/IBq1JAYcB6z9QnrVqSJREvhrp3NudGg8phmemZ1P41RTIc/z2txqbzP/dWVZYzZl9mPxDnDNLGTFWIQsAfPQ=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-thin-100.woff2",
          "sha256": "92fb7777eb1a6a9c8e94048403db3e197e5e541bfd8142255e74ac69141081b2",
          "sri_sha512": "sha512-U61hRcHKBdwKMsPzon6yDsO7ReY5gCHX3uztO+D48ynrqy4ZESWdoBtKohmJuXxry3epUhLQLsMd62nb6zmzhA=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-thin-100.ttf",
          "sha256": "b8ef1ff99503a5611531d09bf5815261c1fb2f8291ae562711a7d7b651f92b9f",
          "sri_sha512": "sha512-kFuJup+kEGELeelH6CXGKPaQ+cJfpLbOtbFFae6CwZfAidwsoM74wYu1Kzwb+pgeK9ODZc7SQMXpFWj5Mn7KtQ=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-v4compatibility.woff2",
          "sha256": "4468a4ec439fecc93d86936bc3b5b5db7d0b98ce173faefd18938b2cef79245b",
          "sri_sha512": "sha512-DzAaC0v4WatRDbN7LVoGXD5pmuOmveGoBimQLfnQFz74+L+hhZVatKhENPHAVkeNtRTBQ63B8w0fOlrQ3bERuw=="
        },
        {
          "path": "core/static/vendor/fontawesome/css/../webfonts/fa-v4compatibility.ttf",
          "sha256": "0f40286f817bb931f65fc1a5963f1450af855547fa9d2c02a009ad389fcc7d95",
          "sri_sha512": "sha512-emfG/sIso1lhi9tNNmqIGP2xNU8SP47qPspS7g8V7ltbotbznl/wy00DxyjrtfxhYw3OfYYlxtyiGyoyRAFXTg=="
        }
      ]
    }
  ]
}
315 artifacts/frontend_libraries.md Normal file
@@ -0,0 +1,315 @@
# Frontend Library Inventory
|
||||
|
||||
This report is generated by `scripts/vendor_frontend_assets.py` from `tools/frontend_assets/asset-manifest.json`.
|
||||
|
||||
## bulma_css
|
||||
|
||||
- Source: `bulma`
|
||||
- Version: `1.0.4`
|
||||
- Official URL: https://bulma.io
|
||||
- Homepage: https://bulma.io
|
||||
- License: `MIT`
|
||||
- Purpose: Bulma core stylesheet
|
||||
- Resolved tarball: `https://registry.npmjs.org/bulma/-/bulma-1.0.4.tgz`
|
||||
- Upstream package integrity: `sha512-Ffb6YGXDiZYX3cqvSbHWqQ8+LkX6tVoTcZuVB3lm93sbAVXlO0D6QlOTMnV6g18gILpAXqkG2z9hf9z4hCjz2g==`
|
||||
- Local targets:
|
||||
- `core/static/css/bulma.min.css`
|
||||
- SHA-256: `67fa26df1ca9e95d8f2adc7c04fa1b15fa3d24257470ebc10cc68b9aab914bee`
|
||||
- SRI sha512: `sha512-yh2RE0wZCVZeysGiqTwDTO/dKelCbS9bP2L94UvOFtl/FKXcNAje3Y2oBg/ZMZ3LS1sicYk4dYVGtDex75fvvA==`
|
||||
|
||||
|
||||
## bulma_tooltip_css
|
||||
|
||||
- Source: `bulma-tooltip`
|
||||
- Version: `3.0.2`
|
||||
- Official URL: https://wikiki.github.io/elements/tooltip
|
||||
- Homepage: https://github.com/Wikiki/bulma-tooltip#readme
|
||||
- License: `MIT`
|
||||
- Purpose: Official Bulma tooltip extension from Bulma's extensions page
|
||||
- Resolved tarball: `https://registry.npmjs.org/bulma-tooltip/-/bulma-tooltip-3.0.2.tgz`
|
||||
- Upstream package integrity: `sha512-CsT3APjhlZScskFg38n8HYL8oYNUHQtcu4sz6ERarxkUpBRbk9v0h/5KAvXeKapVSn2dp9l7bOGit5SECP8EWQ==`
|
||||
- Local targets:
|
||||
- `core/static/css/bulma-tooltip.min.css`
|
||||
- SHA-256: `5c79d12a40b3532aaec159faa0b85fd3d500e192467761b71e0bda0fd04f3076`
|
||||
- SRI sha512: `sha512-SNDNIUvSYhnqDV9FFXaH/e0xZ6NzkG4Qm5dafLLf0PCMkzICKaOmMTgI3y2t2jZK+hAtP6A7UBcFqjWMhsujIg==`
|
||||
|
||||
|
||||
## bulma_slider_css
|
||||
|
||||
- Source: `bulma-slider`
|
||||
- Version: `2.0.5`
|
||||
- Official URL: https://wikiki.github.io/form/slider
|
||||
- Homepage: https://github.com/Wikiki/bulma-slider#readme
|
||||
- License: `MIT`
|
||||
- Purpose: Official Bulma slider extension from Bulma's extensions page
|
||||
- Resolved tarball: `https://registry.npmjs.org/bulma-slider/-/bulma-slider-2.0.5.tgz`
|
||||
- Upstream package integrity: `sha512-6woD/1E7q1o5bfEaQjNqpWZaCItC1oHe9bN15WYB2ELqz2gDaJYZkf+rlozGpAYOXQGDQGCCv3y+QuKjx6sQuw==`
|
||||
- Local targets:
|
||||
- `core/static/css/bulma-slider.min.css`
|
||||
- SHA-256: `f9d952627d388b8ba267e1388d6923274cf9e62e758d459c5a045f3933e9dc8a`
|
||||
- SRI sha512: `sha512-9o5SkCRCA9thttRH3Gb5QXLxKdRiuRLdO6ToEPwRHGLXjrhTZwFj0rEHjrCcJvDN9/aNaWMpGOIEA2vZsHmEqw==`
|
||||
|
||||
|
||||
## bulma_slider_js
|
||||
|
||||
- Source: `bulma-slider`
|
||||
- Version: `2.0.5`
|
||||
- Official URL: https://wikiki.github.io/form/slider
|
||||
- Homepage: https://github.com/Wikiki/bulma-slider#readme
|
||||
- License: `MIT`
|
||||
- Purpose: Official Bulma slider extension runtime
|
||||
- Resolved tarball: `https://registry.npmjs.org/bulma-slider/-/bulma-slider-2.0.5.tgz`
|
||||
- Upstream package integrity: `sha512-6woD/1E7q1o5bfEaQjNqpWZaCItC1oHe9bN15WYB2ELqz2gDaJYZkf+rlozGpAYOXQGDQGCCv3y+QuKjx6sQuw==`
|
||||
- Local targets:
|
||||
- `core/static/js/bulma-slider.min.js`
|
||||
- SHA-256: `db68ebe154a25597913c5635f31500fe7a32e5a205fb9a98c9642d0c2de47d9e`
|
||||
- SRI sha512: `sha512-WLKXHCsMXTSIPsmQShJRE6K4IzwvNkhwxr/Oo8N3z+kzjhGleHibspmWLTawNMdl2z9E23XK20+yvUTDZ+zeNQ==`
|
||||
|
||||
|
||||
## bulma_calendar_css
|
||||
|
||||
- Source: `bulma-calendar`
|
||||
- Version: `7.1.1`
|
||||
- Official URL: https://wikiki.github.io/components/calendar
|
||||
- Homepage: https://doc.mh-s.de/bulma-calendar
|
||||
- License: `MIT`
|
||||
- Purpose: Official Bulma calendar extension from Bulma's extensions page
|
||||
- Resolved tarball: `https://registry.npmjs.org/bulma-calendar/-/bulma-calendar-7.1.1.tgz`
|
||||
- Upstream package integrity: `sha512-E08i25KOfqMKBndgDF3y3eoQ0dUzVkgV9R53EDRM65GQUQKLzt8gcXVJYs3mYnpq6L3DiLuUt47Fl09tSv9OpA==`
|
||||
- Local targets:
|
||||
- `core/static/css/bulma-calendar.min.css`
|
||||
- SHA-256: `d18b488ca52584bcd6ea3fb84bf06380e47a3cd18660a235617da017d13ab269`
|
||||
- SRI sha512: `sha512-IOnJQkgQpezPDPTJcRiWD7YVI3sF2RYzYDl4isbDT2geSaEHRQ615UN/8GhJbSkvqkKRZu8SBCQ7XwKMqsqLFQ==`
|
||||
|
||||
|
||||
## bulma_calendar_js
|
||||
|
||||
- Source: `bulma-calendar`
|
||||
- Version: `7.1.1`
|
||||
- Official URL: https://wikiki.github.io/components/calendar
|
||||
- Homepage: https://doc.mh-s.de/bulma-calendar
|
||||
- License: `MIT`
|
||||
- Purpose: Official Bulma calendar extension runtime
|
||||
- Resolved tarball: `https://registry.npmjs.org/bulma-calendar/-/bulma-calendar-7.1.1.tgz`
|
||||
- Upstream package integrity: `sha512-E08i25KOfqMKBndgDF3y3eoQ0dUzVkgV9R53EDRM65GQUQKLzt8gcXVJYs3mYnpq6L3DiLuUt47Fl09tSv9OpA==`
|
||||
- Local targets:
|
||||
- `core/static/js/bulma-calendar.min.js`
|
||||
- SHA-256: `58160c87c4d17f9d98ec366fe019492acde50efbc0297af7045547952b306680`
|
||||
- SRI sha512: `sha512-kkEtEtypXzruevjkoxhyEkqkZBtlhK7s8zt7IV2yPabgBwy5xbKL9uWeCS37ldS9AaNTSnveWTu4ivUvGMJUWA==`
|
||||
|
||||
|
||||
## bulma_tagsinput_css
|
||||
|
||||
- Source: `bulma-tagsinput`
|
||||
- Version: `2.0.0`
|
||||
- Official URL: https://wikiki.github.io/form/tagsinput
|
||||
- Homepage: https://github.com/Wikiki/bulma-tagsinput#readme
|
||||
- License: `MIT`
|
||||
- Purpose: Official Bulma tagsinput extension from Bulma's extensions page
|
||||
- Resolved tarball: `https://registry.npmjs.org/bulma-tagsinput/-/bulma-tagsinput-2.0.0.tgz`
|
||||
- Upstream package integrity: `sha512-BFvd0oaxgeWHOEh3d4cgETy5vpSSjRRBA9w+8TWEuhjFQg38Rb+3vjDCavL+udpdjf+dRV0SK5T4kYCXTOrz5A==`
|
||||
- Local targets:
|
||||
- `core/static/css/bulma-tagsinput.min.css`
|
||||
- SHA-256: `8d1de24619c05ddf9045638b52059ab492d4887ce74119eed545d66af859da89`
|
||||
- SRI sha512: `sha512-NWTkcDRubZ3pyXbZZLQBILuVsRFs8c6QGgnfe4dm5/d6yp50U+xdoCDLIcSo51fFy/GXH0O2Oed1Z1sF1faxDA==`
|
||||
|
||||
|
||||
## bulma_tagsinput_js
|
||||
|
||||
- Source: `bulma-tagsinput`
|
||||
- Version: `2.0.0`
|
||||
- Official URL: https://wikiki.github.io/form/tagsinput
|
||||
- Homepage: https://github.com/Wikiki/bulma-tagsinput#readme
|
||||
- License: `MIT`
|
||||
- Purpose: Official Bulma tagsinput extension runtime
|
||||
- Resolved tarball: `https://registry.npmjs.org/bulma-tagsinput/-/bulma-tagsinput-2.0.0.tgz`
|
||||
- Upstream package integrity: `sha512-BFvd0oaxgeWHOEh3d4cgETy5vpSSjRRBA9w+8TWEuhjFQg38Rb+3vjDCavL+udpdjf+dRV0SK5T4kYCXTOrz5A==`
|
||||
- Local targets:
|
||||
- `core/static/js/bulma-tagsinput.min.js`
|
||||
- SHA-256: `b355aa94ec519e374d7edf569e3dbde8bbe30ff3a193cb96f2930ee7815939d6`
|
||||
- SRI sha512: `sha512-Je6J++MjmmpxF30JCmRwM2KiK3uWQBQtqiNCjwzEMJKExLaa0BqerlYNa/fJAl5Rra4hMgRZF2fzg+V2vjE4Kw==`
|
||||
|
||||
|
||||
## bulma_switch_css

- Source: `bulma-switch`
- Version: `2.0.4`
- Official URL: https://wikiki.github.io/form/switch
- Homepage: https://github.com/Wikiki/bulma-switch#readme
- License: `MIT`
- Purpose: Official Bulma switch extension from Bulma's extensions page
- Resolved tarball: `https://registry.npmjs.org/bulma-switch/-/bulma-switch-2.0.4.tgz`
- Upstream package integrity: `sha512-kMu4H0Pr0VjvfsnT6viRDCgptUq0Rvy7y7PX6q+IHg1xUynsjszPjhAdal5ysAlCG5HNO+5YXxeiu92qYGQolw==`
- Local targets:
  - `core/static/css/bulma-switch.min.css`
    - SHA-256: `f0460ddebdd95425a50590908503a170f5ff08b28bd53573c71791fc7cd1e6f5`
    - SRI sha512: `sha512-zjrHYubQoNgDVqVKTyGjKcvIeQlduZTvXCvcBwQ0iqJYKLKiz9cuFAN7e98zfKqCTpI/EgFRBRcTwJw20yAFuw==`

## gridstack_css

- Source: `gridstack`
- Version: `12.4.2`
- Official URL: https://gridstackjs.com/
- Homepage: http://gridstackjs.com/
- License: `MIT`
- Purpose: GridStack stylesheet
- Resolved tarball: `https://registry.npmjs.org/gridstack/-/gridstack-12.4.2.tgz`
- Upstream package integrity: `sha512-aXbJrQpi3LwpYXYOr4UriPM5uc/dPcjK01SdOE5PDpx2vi8tnLhU7yBg/1i4T59UhNkG/RBfabdFUObuN+gMnw==`
- Local targets:
  - `core/static/css/gridstack.min.css`
    - SHA-256: `55e9d4ea6d8c6f8f1ea8a449b8af18d8571487c0afc6b433cccf877047cb8457`
    - SRI sha512: `sha512-ttQfsDTO64bamkJHeLDf0kzMP1NKfkootudPWS2V8Pwy+9z1wexSYjIT6/HXGg/bmtD+DRwsUnQoYEB0yePjbw==`

## gridstack_js

- Source: `gridstack`
- Version: `12.4.2`
- Official URL: https://gridstackjs.com/
- Homepage: http://gridstackjs.com/
- License: `MIT`
- Purpose: GridStack bundle used by the dashboard
- Resolved tarball: `https://registry.npmjs.org/gridstack/-/gridstack-12.4.2.tgz`
- Upstream package integrity: `sha512-aXbJrQpi3LwpYXYOr4UriPM5uc/dPcjK01SdOE5PDpx2vi8tnLhU7yBg/1i4T59UhNkG/RBfabdFUObuN+gMnw==`
- Local targets:
  - `core/static/js/gridstack-all.js`
    - SHA-256: `e055ca4eb8bdc65f14f38c966ec31960c4c8dc6dc42e91f421bd629808185518`
    - SRI sha512: `sha512-djBPxwvBhDep1SvOhliatweHMORhVO3HabrfBjaW6nYsa7UcJYHty31x42m4HBSJXcJSQdoEgRPLVYGGIuIaDQ==`

## jquery_js

- Source: `jquery`
- Version: `3.7.1`
- Official URL: https://jquery.com
- Homepage: https://jquery.com
- License: `MIT`
- Purpose: Latest jQuery 3.x release for compatibility with legacy plugins
- Resolved tarball: `https://registry.npmjs.org/jquery/-/jquery-3.7.1.tgz`
- Upstream package integrity: `sha512-m4avr8yL8kmFN8psrbFFFmB/If14iN5o9nw/NgnnM+kybDJpRsAynV2BsfpTYrTRysYUdADVD7CkUUizgkpLfg==`
- Notes: The latest npm release is jQuery 4.x, but this project still vendors 3.7.1 to avoid breaking older plugins.
- Local targets:
  - `core/static/js/jquery.min.js`
    - SHA-256: `fc9a93dd241f6b045cbff0481cf4e1901becd0e12fb45166a8f17f95823f0b1a`
    - SRI sha512: `sha512-v2CJ7UaYy4JwqLDIrZUI/4hqeoQieOmAZNXBeQyjo21dadnwR+8ZaIJVT8EE2iyI61OV8e6M8PP2/4hpQINQ/g==`

## htmx_js

- Source: `htmx.org`
- Version: `2.0.8`
- Official URL: https://htmx.org/
- Homepage: https://htmx.org/
- License: `0BSD`
- Purpose: htmx runtime
- Resolved tarball: `https://registry.npmjs.org/htmx.org/-/htmx.org-2.0.8.tgz`
- Upstream package integrity: `sha512-fm297iru0iWsNJlBrjvtN7V9zjaxd+69Oqjh4F/Vq9Wwi2kFisLcrLCiv5oBX0KLfOX/zG8AUo9ROMU5XUB44Q==`
- Local targets:
  - `core/static/js/htmx.min.js`
    - SHA-256: `22283ef68cb7545914f0a88a1bdedc7256a703d1d580c1d255217d0a50d31313`
    - SRI sha512: `sha512-CGXFnDNv5q48ciFeIyWFcfZhqYW0sSBiPO+HZDO3XLM+p8xjhezz5CCxtkXVDKfCbvF+iUhel7xoeSp19o7x7g==`

## hyperscript_js

- Source: `hyperscript.org`
- Version: `0.9.14`
- Official URL: https://hyperscript.org/
- Homepage: https://hyperscript.org/
- License: `BSD 2-Clause`
- Purpose: _hyperscript runtime
- Resolved tarball: `https://registry.npmjs.org/hyperscript.org/-/hyperscript.org-0.9.14.tgz`
- Upstream package integrity: `sha512-ugmojsQQUMmXcnwaXYiYf8L3GbeANy/m59EmE/0Z6C5eQ52fOuSrvFkuEIejG9BdpbYB4iTtoYGqV99eYqDVMA==`
- Local targets:
  - `core/static/js/hyperscript.min.js`
    - SHA-256: `3e834a3ffc0334fee54ecff4e37a6ae951cd83e6daa96651ca7cfd8f751ad4d2`
    - SRI sha512: `sha512-l43sZzpnAddmYhJyfPrgv46XhJvA95gsA28/+eW4XZLSekQ8wlP68i9f22KGkRjY0HNiZrLc5MXGo4z/tM2QNA==`

## magnet_js

- Source: `@lf2com/magnet.js`
- Version: `2.0.1`
- Official URL: https://github.com/lf2com/magnet.js
- Homepage: https://github.com/lf2com/magnet.js
- License: `MIT`
- Purpose: Magnet.js drag attraction component
- Resolved tarball: `https://registry.npmjs.org/@lf2com/magnet.js/-/magnet.js-2.0.1.tgz`
- Upstream package integrity: `sha512-MDgv1s0aNOuftuhY9c9Ve6Yadkmn7G+Ww91cVciyHHMhPPdxTxX3XUSJXFYD3VraGFzcnI4uilik9/I76AsJEg==`
- Local targets:
  - `core/static/js/magnet.min.js`
    - SHA-256: `05ff8858b5fb7b3ad2a618212571162e2108f580b08527716f4e63c648dcccb1`
    - SRI sha512: `sha512-aoQ3V4iCM8zTcdMDSUTRG1K9wqZzmDSisuaCLQexk9DdFy92oWvTUoAfCVLnGzzJClst8PmtasZg219REwyNkw==`

## fontawesome_bundle

- Source: `https://site-assets.fontawesome.com/releases/v6.1.1/css/all.css`
- Version: `6.1.1`
- Official URL: https://fontawesome.com
- Homepage: https://fontawesome.com
- License: `Font Awesome Pro commercial terms via site-assets.fontawesome.com`
- Purpose: Existing Font Awesome asset family, self-hosted locally to preserve the currently used icon set
- Downloaded bundle files:
  - `css/all.css` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/all.css`
  - `css/../webfonts/fa-brands-400.woff2` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-brands-400.woff2`
  - `css/../webfonts/fa-brands-400.ttf` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-brands-400.ttf`
  - `css/../webfonts/fa-duotone-900.woff2` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-duotone-900.woff2`
  - `css/../webfonts/fa-duotone-900.ttf` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-duotone-900.ttf`
  - `css/../webfonts/fa-light-300.woff2` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-light-300.woff2`
  - `css/../webfonts/fa-light-300.ttf` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-light-300.ttf`
  - `css/../webfonts/fa-regular-400.woff2` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-regular-400.woff2`
  - `css/../webfonts/fa-regular-400.ttf` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-regular-400.ttf`
  - `css/../webfonts/fa-solid-900.woff2` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-solid-900.woff2`
  - `css/../webfonts/fa-solid-900.ttf` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-solid-900.ttf`
  - `css/../webfonts/fa-thin-100.woff2` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-thin-100.woff2`
  - `css/../webfonts/fa-thin-100.ttf` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-thin-100.ttf`
  - `css/../webfonts/fa-v4compatibility.woff2` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-v4compatibility.woff2`
  - `css/../webfonts/fa-v4compatibility.ttf` from `https://site-assets.fontawesome.com/releases/v6.1.1/css/webfonts/fa-v4compatibility.ttf`
- Local targets:
  - `core/static/vendor/fontawesome/css/all.css`
    - SHA-256: `a35f901d01118e5649091bd03ac5784a7db52e111fb3806524c412f3d1dcfc5d`
    - SRI sha512: `sha512-UKBBxJ5N3/MYiSsYTlEsARsp4vELKVRIklED4Mb6wpuVFOgy5Blt+sXUdz1TDReqWsm64xxBA2QoBJRCxI0x5Q==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-brands-400.woff2`
    - SHA-256: `3701cbff3acccd80b1f2eede4311050514f7a64c2039eb77a77368fcd6e3de28`
    - SRI sha512: `sha512-50+1yWldN03if6k/4jyncfSZyT44ev20Q7jmIGEiKG7v2qeB1nBOcVF9HD0mjSvAxmpS3+RDzoPfqbB4GfCkJg==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-brands-400.ttf`
    - SHA-256: `06f6dc20f535066f0e069105329ad69d526d7a16215e7ff8af3dbc2cd0894186`
    - SRI sha512: `sha512-klNly6qz+BwYPusuoZ0BIE01WW0CbrJszi5GrgZ6yFXQCrxAQRus6vMMATIxwPNYOYXu+UOd2KbRAsenhIkAGQ==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-duotone-900.woff2`
    - SHA-256: `6f28dce91f45bc4687582137bb5d82d9771efc774e3b2b83c30018469d191ad8`
    - SRI sha512: `sha512-Sz0g1G5m7symrIy2LfJOdASOrJQUo+IDIMpLKhFKm4RM7MLjV6wqlVGtDf2lEqE/tSzU5LQTtsc3djfZwDXCFQ==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-duotone-900.ttf`
    - SHA-256: `6687687c70d502ff006750978f55d6d5108c364eec78db9b845adcfc838b78a4`
    - SRI sha512: `sha512-1301N0n8/h7Kjp6E7LbEe7LWCyDPkfvYn9COlQ6ISbQTP/uPThs8NLhSr+nQskAPzjDoMqfojg0EyNfsD84s/w==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-light-300.woff2`
    - SHA-256: `515954fe1dc163277d36b51f79fe56265f6b6cf79f99e307bbf6e52b477b9c87`
    - SRI sha512: `sha512-xdrnQ7rYHIz80KJgGizTu80jCcWF4tGt/inACAoWT3dl3BleiIjq/g90RA42wJNcLpz3n8JAM1Z0ayUGROP5RQ==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-light-300.ttf`
    - SHA-256: `7c41f14e1e8bbe7780049512c631b5301936a985dc6bbadd74a0cbc05549769c`
    - SRI sha512: `sha512-i3tSu80Wfsx/12vQQziHhYQLjBNYLGQZfeXfyurZhCBibO+lq6kX1zARiktltXxBszu513BIKbnfzLTRYamoMQ==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-regular-400.woff2`
    - SHA-256: `121b176974226dbc9b1ab227becb657d40b88d2bb7010a746c2360c31d7c373e`
    - SRI sha512: `sha512-qioT43fXB5q4Bbpn8sPQE9OIZLjKD0c0lVmpm6KmT8k34LM6gkRcOOMi1BOl2lohFG/7p9tzKfTP5G563BQq1g==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-regular-400.ttf`
    - SHA-256: `daa07950214eae5603ebb5a582a694da7b31a4c93f3bf38e9f616122860d83a5`
    - SRI sha512: `sha512-1Ttr+eWt+Z7Na3gRZklbYEnBYGbQUATusZ4oeYLamI4iIjN8ILNV/upwk/h8qIJ2lf/Mukkj5zBVhUdXWJWLwQ==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-solid-900.woff2`
    - SHA-256: `f350c708b5e7748a452b4b98600fa49127166d995686e260ccafb58d51a4ea62`
    - SRI sha512: `sha512-Ph1xTLhfMycYSW+wUN8oL3Ggl56nGIS95EHiKWggcL/GbMNjPdib1Hreb1D4COlMxdiGCkk43nspQnpDuTjgQg==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-solid-900.ttf`
    - SHA-256: `bbab2beb558f176d68d3e6002e4ea608633f7e6347dc6245dc67f8ad1c9ca18a`
    - SRI sha512: `sha512-c/IBq1JAYcB6z9QnrVqSJREvhrp3NudGg8phmemZ1P41RTIc/z2txqbzP/dWVZYzZl9mPxDnDNLGTFWIQsAfPQ==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-thin-100.woff2`
    - SHA-256: `92fb7777eb1a6a9c8e94048403db3e197e5e541bfd8142255e74ac69141081b2`
    - SRI sha512: `sha512-U61hRcHKBdwKMsPzon6yDsO7ReY5gCHX3uztO+D48ynrqy4ZESWdoBtKohmJuXxry3epUhLQLsMd62nb6zmzhA==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-thin-100.ttf`
    - SHA-256: `b8ef1ff99503a5611531d09bf5815261c1fb2f8291ae562711a7d7b651f92b9f`
    - SRI sha512: `sha512-kFuJup+kEGELeelH6CXGKPaQ+cJfpLbOtbFFae6CwZfAidwsoM74wYu1Kzwb+pgeK9ODZc7SQMXpFWj5Mn7KtQ==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-v4compatibility.woff2`
    - SHA-256: `4468a4ec439fecc93d86936bc3b5b5db7d0b98ce173faefd18938b2cef79245b`
    - SRI sha512: `sha512-DzAaC0v4WatRDbN7LVoGXD5pmuOmveGoBimQLfnQFz74+L+hhZVatKhENPHAVkeNtRTBQ63B8w0fOlrQ3bERuw==`
  - `core/static/vendor/fontawesome/css/../webfonts/fa-v4compatibility.ttf`
    - SHA-256: `0f40286f817bb931f65fc1a5963f1450af855547fa9d2c02a009ad389fcc7d95`
    - SRI sha512: `sha512-emfG/sIso1lhi9tNNmqIGP2xNU8SP47qPspS7g8V7ltbotbznl/wy00DxyjrtfxhYw3OfYYlxtyiGyoyRAFXTg==`

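The SHA-256 and SRI sha512 values in this manifest can be recomputed to verify a vendored file on disk. A minimal sketch; the `digests` helper is hypothetical, not part of the repo:

```python
import base64
import hashlib
from pathlib import Path


def digests(path: str) -> dict:
    """Return the hex SHA-256 and the SRI sha512 string for a local file."""
    data = Path(path).read_bytes()
    return {
        "sha256": hashlib.sha256(data).hexdigest(),
        # SRI format: algorithm prefix + base64 of the raw digest bytes.
        "sri_sha512": "sha512-" + base64.b64encode(hashlib.sha512(data).digest()).decode(),
    }


# e.g. compare digests("core/static/js/htmx.min.js") against the manifest entry
```

The SRI string is the base64 of the raw digest, not of the hex string, which is why the two columns above look unrelated for the same file.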
71 artifacts/mcp/manticore-mcp-server.md Normal file
@@ -0,0 +1,71 @@

# Manticore MCP Server (GIA)

This document describes the MCP server wired for task + memory operations in GIA.

## Server entrypoint

- Django management command: `python manage.py mcp_manticore_server`
- Python module: `core.mcp.server`
- Tool handlers: `core.mcp.tools`

## Rust worker frontend (optional)

For low-overhead direct Manticore operations, a Rust stdio MCP worker is included:

- Project: `rust/manticore-mcp-worker`
- Build: `make mcp-rust-build`
- Binary: `rust/manticore-mcp-worker/target/release/manticore-mcp-worker`
- VS Code server name: `manticore-rust-worker` (disabled by default in `/code/xf/.vscode/mcp.json`)

This worker exposes fast table/status/query/maintenance operations and can be enabled when you want a minimal MCP process in front of Manticore.

## VS Code wiring

Workspace config is in `/code/xf/.vscode/mcp.json`:

- Server name: `manticore`
- Launch method: `podman exec -i ur_gia /venv/bin/python manage.py mcp_manticore_server`
- Forced env:
  - `MEMORY_SEARCH_BACKEND=manticore`

`MANTICORE_HTTP_URL` is inherited from the container environment so each deployment can set the correct reachable address.

This allows MCP tool calls from VS Code to run against the live GIA container without requiring local Django dependencies.

## Implemented MCP tools

- `manticore.status`
- `manticore.query`
- `manticore.reindex`
- `memory.list`
- `memory.propose`
- `memory.pending`
- `memory.review`
- `memory.suggest_from_messages`
- `tasks.list`
- `tasks.search`
- `tasks.get`
- `tasks.events`
- `tasks.create_note`
- `tasks.link_artifact`
- `wiki.create_article`
- `wiki.update_article`
- `wiki.list`
- `wiki.get`
- `project.get_guidelines`
- `project.get_layout`
- `project.get_runbook`
- `docs.append_run_note`

`docs.append_run_note` appends markdown notes to `/tmp/gia-mcp-run-notes.md` by default (or a project path you pass explicitly).

All MCP tool invocations are audit-logged in `core_mcptoolauditlog` (`MCPToolAuditLog` model).

## Runtime notes

1. Ensure GIA services are running (`make run`).
2. Start the Manticore container:
   - `./utilities/memory/manage_manticore_container.sh up`
3. Optional initial index:
   - `podman exec ur_gia /venv/bin/python manage.py memory_search_reindex --user-id <id> --statuses active`
4. In VS Code, approve/enable the workspace MCP server when prompted.

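For ad-hoc testing outside VS Code, a request for one of the tools above can be built by hand. This is a hedged sketch: the `tools/call` method name and newline-delimited JSON-RPC framing follow the generic MCP stdio convention and are not confirmed by this repo's server implementation.

```python
import json


def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize one MCP tool invocation as a newline-delimited JSON-RPC line."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }) + "\n"


# A line like this would be piped to the server started via:
#   podman exec -i ur_gia /venv/bin/python manage.py mcp_manticore_server
line = build_tool_call(1, "manticore.status", {})
```

A real client must also perform the MCP `initialize` handshake first; this sketch only shows the request shape.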
28 artifacts/plans/03-media-asset-normalization.md Normal file
@@ -0,0 +1,28 @@

# Feature Plan: Media Asset Normalization

## Goal
Normalize media into reusable assets so messages reference media IDs rather than duplicate transport blobs.

## Why This Fits GIA
- GIA already has shared media prep and cross-transport relay.
- Normalized assets reduce duplicate downloads/uploads and improve traceability.

## Scope
- New `MediaAsset` entity with checksum + metadata.
- New link table between `Message` and `MediaAsset`.
- De-dup by `sha256` and file size.

## Implementation
1. Add models/migrations: `MediaAsset`, `MessageMediaRef`.
2. Update inbound media ingestion to upsert `MediaAsset`.
3. Update outbound transport prep to consume `MediaAsset` references.
4. Add background backfill for existing attachments.
5. Add retention/GC strategy for orphaned assets.

## Acceptance Criteria
- Same file across transports resolves to one `MediaAsset`.
- Message records keep only pointers and display metadata.
- Relay path can reuse existing asset without re-download.

## Out of Scope
- External CDN migration in this phase.

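The de-dup rule above can be sketched in isolation. This in-memory stand-in keys assets by `(sha256, size)`; the class and field names are illustrative, and the real implementation would be Django model upserts:

```python
import hashlib
from dataclasses import dataclass


@dataclass
class MediaAsset:
    sha256: str
    size: int
    ref_count: int = 0


class AssetRegistry:
    """De-duplicate media blobs by (sha256, size), as the plan specifies."""

    def __init__(self):
        self._assets: dict[tuple[str, int], MediaAsset] = {}

    def upsert(self, blob: bytes) -> MediaAsset:
        key = (hashlib.sha256(blob).hexdigest(), len(blob))
        # setdefault gives upsert semantics: create on first sight, reuse after.
        asset = self._assets.setdefault(key, MediaAsset(*key))
        asset.ref_count += 1  # one ref per message linking this asset
        return asset
```

Tracking `ref_count` per link makes the retention/GC step (implementation item 5) a simple scan for zero-reference assets.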
27 artifacts/plans/04-identity-resolution-and-merge.md Normal file
@@ -0,0 +1,27 @@

# Feature Plan: Identity Resolution and Merge

## Goal
Improve person graph quality by suggesting and applying safe merges across transport identifiers.

## Why This Fits GIA
- The core value of the universal inbox depends on a clean `Person` identity graph.

## Scope
- Heuristic scoring for candidate merges.
- Manual review queue with approve/reject.
- Merge operation with audit trail and undo window.

## Implementation
1. Add a scoring service using these signals: normalized phone, username similarity, name overlap, shared chat co-occurrence.
2. Add `IdentityMergeSuggestion` model with score + reasons.
3. Add UI panel to review/approve merges.
4. Implement safe merge transaction (`PersonIdentifier` reassignment, metadata merge rules).
5. Emit audit events and rollback snapshots.

## Acceptance Criteria
- Suggestions are generated deterministically and explain their reasons.
- Merge is idempotent and reversible within the configured window.
- No identifier is lost or orphaned.

## Out of Scope
- Fully automatic merge without human approval.

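Implementation step 1's scoring service might combine the listed signals as in this minimal sketch; the weights, thresholds, and input shape are assumptions, not the shipped heuristics:

```python
from difflib import SequenceMatcher


def merge_score(a: dict, b: dict) -> tuple[float, list[str]]:
    """Deterministic candidate-merge score with human-readable reasons."""
    score, reasons = 0.0, []
    # Exact normalized-phone match is the strongest signal.
    if a.get("phone") and a.get("phone") == b.get("phone"):
        score += 0.6
        reasons.append("normalized phone match")
    # Fuzzy username similarity.
    sim = SequenceMatcher(None, a.get("username", ""), b.get("username", "")).ratio()
    if sim >= 0.8:
        score += 0.25
        reasons.append(f"username similarity {sim:.2f}")
    # Shared chat co-occurrence, capped so it cannot dominate.
    shared = set(a.get("chats", ())) & set(b.get("chats", ()))
    if shared:
        score += min(0.15, 0.05 * len(shared))
        reasons.append(f"{len(shared)} shared chats")
    return min(score, 1.0), reasons
```

Returning the reasons alongside the score satisfies the acceptance criterion that suggestions explain themselves.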
27 artifacts/plans/05-adapter-resilience-supervisor.md Normal file
@@ -0,0 +1,27 @@

# Feature Plan: Adapter Resilience Supervisor

## Goal
Make adapters self-healing and observable under disconnects, API drift, and transient faults.

## Why This Fits GIA
- Bridge reliability is a product requirement for multi-network ops.

## Scope
- Health probes per adapter.
- Reconnect/backoff supervisor.
- Circuit breaker for repeated failure classes.

## Implementation
1. Add adapter health state model (healthy/degraded/down).
2. Add watchdog jobs for Signal/WhatsApp/Instagram/XMPP.
3. Implement exponential backoff + jitter reconnect policies.
4. Emit structured adapter health events + alerts.
5. Add status surface in settings/services pages.

## Acceptance Criteria
- Adapter restarts automatically on transient failures.
- Repeated failures degrade the adapter's state and stop retry spam.
- Operators can view current health, last error, and retry schedule.

## Out of Scope
- Full protocol hot-reload without process restart.

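Implementation step 3's exponential backoff + jitter policy can be sketched as full jitter with a cap; the base and cap defaults here are illustrative, not the plan's values:

```python
import random


def reconnect_delay(attempt: int, base: float = 1.0, cap: float = 300.0,
                    rng=None) -> float:
    """Full-jitter backoff: uniform in [0, min(cap, base * 2**attempt)].

    Jitter prevents a fleet of adapters from reconnecting in lockstep
    after a shared outage; the cap bounds the worst-case wait.
    """
    rng = rng or random.Random()
    return rng.uniform(0, min(cap, base * (2 ** attempt)))
```

Passing an explicit seeded `rng` keeps the schedule reproducible in tests while staying random in production.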
27 artifacts/plans/06-end-to-end-observability.md Normal file
@@ -0,0 +1,27 @@

# Feature Plan: End-to-End Observability and Traceability

## Goal
Provide trace-level visibility from ingress transport event to UI delivery/ack.

## Why This Fits GIA
- Multi-hop messaging systems require correlation IDs to debug reliably.

## Scope
- Global trace IDs for the message lifecycle.
- Structured logs and a timeline diagnostics view.
- Basic metrics and SLA dashboards.

## Implementation
1. Inject `trace_id` at ingress/send initiation.
2. Propagate it through router, persistence, websocket, and command/task flows.
3. Standardize a structured log schema across services.
4. Add a timeline diagnostics page by trace ID and session.
5. Add core metrics: ingress latency, send latency, drop rate, retry counts.

## Acceptance Criteria
- One trace ID can reconstruct the full message path.
- At least 95% of critical paths emit structured trace logs.
- Operators can isolate a bottleneck stage in under 2 minutes.

## Out of Scope
- Full distributed tracing vendor integration.

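Implementation steps 1 and 2 can be sketched with a context variable carrying `trace_id` across stages within one request flow; all names here are hypothetical:

```python
import contextvars
import uuid

# Holds the current trace id for whatever flow is executing.
trace_id_var = contextvars.ContextVar("trace_id", default=None)


def start_trace() -> str:
    """Inject a trace_id at ingress/send initiation."""
    tid = uuid.uuid4().hex
    trace_id_var.set(tid)
    return tid


def log_event(stage: str, **fields) -> dict:
    """Structured log record carrying the current trace_id through each stage."""
    return {"trace_id": trace_id_var.get(), "stage": stage, **fields}
```

Because `ContextVar` values follow async tasks and threads started from the same context, the router, persistence, and websocket stages all see the same id without explicit plumbing.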
25 artifacts/plans/07-unified-conversation-graph-ui.md Normal file
@@ -0,0 +1,25 @@

# Feature Plan: Unified Conversation Graph UI

## Goal
Expose the one-person, many-identifiers graph in the UI so users work per person, not per transport fragment.

## Why This Fits GIA
- GIA already models `Person` + `PersonIdentifier`; this surfaces it clearly.

## Scope
- Person profile panel with linked identifiers and active sessions.
- Cross-transport thread pivoting from one card.
- Merge/split controls linked to identity suggestions.

## Implementation
1. Add a graph view endpoint and serializer for the person graph.
2. Update compose sidebar/person pages with linked transport pills.
3. Add quick actions: open thread, relink identifier, propose merge.
4. Integrate with the identity merge queue from feature 04.

## Acceptance Criteria
- User can see and navigate all transport identities for a person from one place.
- Switching transport context preserves person-centric history access.

## Out of Scope
- Force-merged unified thread rendering across all transports.

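Implementation step 1's serializer could group identifiers per transport for the profile panel; a minimal sketch with hypothetical field names, not the project's serializer API:

```python
def serialize_person_graph(person: dict, identifiers: list[dict]) -> dict:
    """Group a person's transport identifiers for the profile panel."""
    by_transport: dict[str, list[str]] = {}
    for ident in identifiers:
        # One "pill" group per transport (signal, xmpp, whatsapp, ...).
        by_transport.setdefault(ident["transport"], []).append(ident["value"])
    return {
        "person": person["name"],
        "transports": by_transport,
        "identifier_count": len(identifiers),
    }
```

Grouping by transport in the serializer keeps the UI a dumb renderer of pills, which makes the pivoting actions in step 3 straightforward.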
27 artifacts/plans/08-conversation-intelligence-suite.md Normal file
@@ -0,0 +1,27 @@

# Feature Plan: Conversation Intelligence Suite

## Goal
Add actionable conversation analytics and AI-assisted summaries with transparent outputs.

## Why This Fits GIA
- The AI workspace already exists; this extends it with communication-specific signals.

## Scope
- Metrics: response latency, engagement symmetry, activity trend.
- AI summary for a recent window with action-item extraction.
- Draft mediation suggestions with the raw/original text always visible.

## Implementation
1. Add an analytics service computing per-thread/person metrics.
2. Add a summary generation endpoint with cached recent windows.
3. Add action-item extraction that can feed the tasks pipeline.
4. Compose UI card: metrics + summary + suggested next reply.
5. Add a "show raw" indicator whenever mediation is displayed.

## Acceptance Criteria
- Metrics refresh reliably on new events.
- Summaries are attributable to explicit message windows.
- Mediation never hides the original user text.

## Out of Scope
- Fully autonomous outbound sending.

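The response-latency metric in Scope can be computed deterministically from an ordered event stream; the event shape here is an assumption for illustration:

```python
from statistics import median


def response_latency(events):
    """Median reply latency from an ordered (direction, timestamp) stream.

    direction is "in" or "out"; latency is measured from each inbound
    message to the next outbound reply. Returns None if no pairs exist.
    """
    gaps, pending_in = [], None
    for direction, ts in events:
        if direction == "in":
            pending_in = ts  # latest unanswered inbound message
        elif direction == "out" and pending_in is not None:
            gaps.append(ts - pending_in)
            pending_in = None
    return median(gaps) if gaps else None
```

Using the median rather than the mean keeps one long overnight gap from dominating the metric.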
27 artifacts/plans/10-policy-engine-middleware.md Normal file
@@ -0,0 +1,27 @@

# Feature Plan: Policy Engine Middleware

## Goal
Implement deterministic middleware policies for inbound/outbound message transforms and automations.

## Why This Fits GIA
- Existing command/task/approval flows provide natural control points.

## Scope
- Per-chat and per-person policy rules.
- Rule actions: notify, suggest rewrite, create task, rate-limit send, require approval.
- Dry-run/audit mode for every policy decision.

## Implementation
1. Add a policy schema and evaluator (`if conditions -> actions`).
2. Hook the evaluator into compose send and inbound task intelligence paths.
3. Add policy decision logs with trace IDs.
4. Add UI for policy creation/testing with a preview mode.
5. Add safe defaults: all transform rules start as suggest-only.

## Acceptance Criteria
- Policy decisions are deterministic and replayable.
- Any transform is visibly annotated in the UI.
- Users can disable a policy per chat instantly.

## Out of Scope
- Natural-language-to-policy compiler in the first release.

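Implementation step 1's evaluator can be sketched as deterministic `conditions -> actions` matching with a replayable decision log; the rule shape is illustrative, not the shipped schema:

```python
def evaluate_policies(rules, message):
    """Deterministic `if conditions -> actions` evaluation with a decision log.

    Rules are checked in order. Dry-run rules record their decision but
    contribute no actions, so every outcome is auditable and replayable.
    """
    actions, decisions = [], []
    for rule in rules:
        matched = all(message.get(k) == v for k, v in rule["conditions"].items())
        decisions.append({
            "rule": rule["name"],
            "matched": matched,
            "dry_run": rule.get("dry_run", False),
        })
        if matched and not rule.get("dry_run", False):
            actions.extend(rule["actions"])
    return actions, decisions
```

Replaying the same rules against the same message always yields the same decision list, which is what makes the audit log meaningful.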
95 artifacts/plans/17-person-enrichment-without-llm.md Normal file
@@ -0,0 +1,95 @@

# Feature Plan: Person Model Enrichment (Non-LLM First)

## Goal
Populate `Person` fields from existing message history without spending OpenAI tokens by default:
- `summary`
- `profile`
- `revealed`
- `likes`
- `dislikes`
- `sentiment`
- `timezone`
- `last_interaction`

## Problem We Are Solving
- We have high-volume message data but limited durable person intelligence.
- LLM analysis is expensive for continuous/background processing.
- We need fast, deterministic extraction first, with optional semantic ranking.

## Design Decisions
1. Config scope:
   - global defaults
   - optional group-level overrides
   - per-user overrides
2. Resolution order:
   - `user > group > global`
3. Global toggle:
   - hard kill-switch (`PERSON_ENRICHMENT_ENABLED`)
4. Per-user/group controls:
   - enable/disable enrichment
   - write mode (`proposal_required` or `direct`)
   - confidence threshold
   - max messages scanned per run
   - semantic-ranking toggle

## Proposed Data Additions
- `PersonEnrichmentSettings`:
  - scope fields (`user`, optional `group`)
  - toggle/threshold/runtime limits
- `PersonSignal`:
  - normalized extracted clue
  - source references (message ids/events)
  - confidence and detector name
- `PersonUpdateProposal`:
  - pending/approved/rejected person field updates
  - reason and provenance
- Optional `PersonFieldRevision`:
  - before/after snapshots for auditability

## Processing Flow
1. Select message window:
   - recent inbound/outbound messages per person/service
   - bounded by configurable caps
2. Fast extraction:
   - deterministic rules/regex for:
     - timezone cues
     - explicit likes/dislikes
     - self-revealed facts
     - interaction-derived sentiment hints
3. Semantic ranking (optional):
   - use Manticore-backed similarity search for classifier labels
   - rank candidate signals; do not call OpenAI in the default path
4. Signal aggregation:
   - merge repeated evidence
   - decay stale evidence
   - detect contradictions
5. Apply update:
   - `proposal_required`: create `PersonUpdateProposal`
   - `direct`: write only above the confidence threshold and with no conflict
6. Persist audit trail:
   - record detector/classifier source and exact message provenance

## Field-Specific Policy
- `summary/profile`: generated from stable high-confidence aggregates only.
- `revealed`: only explicit self-disclosures.
- `likes/dislikes`: require an explicit statement or a repeated pattern.
- `sentiment`: rolling value with recency decay; never an absolute truth label.
- `timezone`: explicit declaration preferred; behavioral inference secondary.
- `last_interaction`: deterministic from the most recent message timestamps.

## Rollout
1. Schema and settings models.
2. Deterministic extractor pipeline and commands.
3. Proposal queue + review flow.
4. Optional Manticore semantic ranking layer.
5. Backfill job for existing persons with safe rate limits.

## Acceptance Criteria
- Default enrichment path runs with zero OpenAI usage.
- Person updates are traceable to concrete message evidence.
- Config hierarchy behaves predictably (`user > group > global`).
- Operators can switch between proposal and direct write modes per scope.

## Out of Scope
- Cross-user shared person graph.
- Autonomous LLM-generated profile writing as default.

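The fast-extraction step of the processing flow can be sketched with regex detectors emitting `PersonSignal`-like dicts; the patterns and confidence values are illustrative only, not the plan's detectors:

```python
import re

# Explicit self-stated preferences and timezone declarations only;
# inference-based detectors would live alongside these.
LIKE_RE = re.compile(r"\bI (?:really )?(like|love|enjoy) ([\w ]+)", re.I)
TZ_RE = re.compile(r"\bmy timezone is ([\w/+-]+)", re.I)


def extract_signals(text: str) -> list[dict]:
    """Zero-LLM pass: regex detectors emit normalized clues with a detector name."""
    signals = []
    for _verb, thing in LIKE_RE.findall(text):
        signals.append({"field": "likes", "value": thing.strip().lower(),
                        "detector": "regex_likes", "confidence": 0.7})
    for tz in TZ_RE.findall(text):
        signals.append({"field": "timezone", "value": tz,
                        "detector": "regex_timezone", "confidence": 0.9})
    return signals
```

Recording the detector name on every signal is what lets the later proposal and audit steps trace each `Person` update back to concrete evidence.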
@@ -0,0 +1,158 @@

# Plan: Settings Integrity and Controls Reorganization
|
||||
|
||||
## Objective
|
||||
Create a single coherent configuration model for Security, Commands, and Tasks so UI labels, enforcement behavior, docs, and navigation all match actual runtime behavior.
|
||||
|
||||
## Current Integrity Findings
|
||||
|
||||
### 1) Scope Registry and Enforcement Are Out of Sync
|
||||
- Gateway command routes enforce scopes that are not exposed in Fine-Grained Security Scopes:
|
||||
- `gateway.contacts`
|
||||
- `gateway.help`
|
||||
- `gateway.whoami`
|
||||
- Fine-Grained Security Scopes currently expose only:
|
||||
- `gateway.tasks`, `gateway.approval`, `tasks.submit`, `tasks.commands`, `command.bp`, `command.codex`, `command.claude`
|
||||
- Result: users cannot configure all enforced gateway capabilities from the UI.
|
||||
|
||||
### 2) “Require Trusted Fingerprint” Semantics Are Incorrect
|
||||
- UI and labels imply trust-list based enforcement.
|
||||
- Runtime policy enforcement checks `UserXmppOmemoState.latest_client_key` equality, not `UserXmppOmemoTrustedKey` trust records.
|
||||
- Result: behavior is “match latest observed key,” not “require trusted fingerprint.”
|
||||
|
||||
### 3) Command Surfaces Are Split Across Inconsistent Places
|
||||
- Command Routing UI create flow exposes command slugs: `bp`, `codex`.
|
||||
- Runtime command engine auto-bootstraps `claude` profile and bindings.
|
||||
- Security scopes include `command.claude`, but Command Routing create UI does not.
|
||||
- Result: commands are partially configurable depending on entrypoint.
|
||||
|
||||
### 4) Task and Command Control Planes Interlock Implicitly, Not Explicitly
|
||||
- Task settings contain provider approval routing (Codex/Claude approver service/identifier).
|
||||
- Security permissions contain policy gates (`tasks.*`, `command.*`, `gateway.*`).
|
||||
- Command Routing controls profile/binding/variant policy.
- These are tightly coupled but not represented as one layered model in the UI/docs.

### 5) Settings Shell Coverage Is Incomplete

- The shared settings hierarchy nav is context-processor driven and route-name based.
- Settings routes missing from modules/general/security group coverage include:
  - `codex_settings`
  - `codex_approval`
  - `translation_preview`
- Result: some settings pages can miss the expected local tabs/title context.

### 6) Documentation Drift

- Undocumented or under-documented features now in production behavior:
  - Fine-Grained Security Scopes + Global Scope Override
  - OMEMO trust management and per-direction encryption toggles
  - Business Plan Inbox under Settings Modules
- Potentially misleading documentation:
  - Security wording implies trusted-key enforcement that is not implemented.

## Reorganization Principles

1. One capability registry, reused by:
   - the Security Permissions UI
   - command/task/gateway dispatch
   - documentation generation
2. One settings-shell contract for every `/settings/*` page:
   - title
   - category tabs
   - breadcrumb
3. An explicit layered model:
   - Layer A: transport encryption/security
   - Layer B: capability permissions (scope policy)
   - Layer C: feature configuration (tasks/commands/providers)
4. No hardcoded scope lists duplicated across multiple files.

## Target Information Architecture

### Security

- `Encryption`: OMEMO transport controls + trust management.
- `Permissions`: Fine-Grained Security Scopes (capability policy only).
- `2FA`: account factor settings.

### Modules

- `Commands`: command profiles, bindings, variant policies.
- `Tasks`: extraction/defaults/overrides/provider pipelines.
- `Translation`: translation bridge settings.
- `Availability`: adapter availability controls.
- `Business Plans`: inbox/editor for generated artifacts.

### General

- `Notifications`
- `System`
- `Accessibility`

### AI

- `Models`
- `Traces`

## Phased Execution Plan

## Phase 1: Canonical Capability Registry

1. Add a central capability registry module (single source of truth) holding:
   - key
   - label
   - description
   - group (`gateway`, `tasks`, `commands`, `agentic`, etc.)
   - owning feature page URL
2. Migrate SecurityPage scope rendering to this registry.
3. Migrate gateway/command/task dispatchers to reference registry keys.
4. Add an automated integrity test:
   - every enforced scope key must exist in the registry
   - every registry key marked user-configurable must appear in the Permissions UI
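A minimal sketch of what the Phase 1 registry and integrity test could look like. All names here (`Capability`, `CAPABILITY_REGISTRY`, `check_integrity`, and the example scope keys) are illustrative assumptions, not the project's actual code.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Capability:
    key: str
    label: str
    description: str
    group: str            # "gateway", "tasks", "commands", "agentic", ...
    settings_url: str     # owning feature page
    user_configurable: bool = True


# Single source of truth; dispatchers and the Permissions UI both read from here.
CAPABILITY_REGISTRY: dict[str, Capability] = {
    c.key: c
    for c in (
        Capability("gateway.send", "Send messages", "Allow outbound sends.",
                   "gateway", "/settings/security/"),
        Capability("commands.bp", "Business plan command", "Run the bp command.",
                   "commands", "/settings/commands/"),
    )
}


def check_integrity(enforced_keys: set[str], ui_keys: set[str]) -> list[str]:
    """Return human-readable integrity violations (empty list means OK)."""
    problems = []
    for key in sorted(enforced_keys - CAPABILITY_REGISTRY.keys()):
        problems.append(f"enforced scope {key!r} missing from registry")
    for key, cap in CAPABILITY_REGISTRY.items():
        if cap.user_configurable and key not in ui_keys:
            problems.append(f"registry key {key!r} not shown in Permissions UI")
    return problems
```

The integrity test in item 4 then reduces to asserting `check_integrity(...)` returns an empty list.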

## Phase 2: Trusted Key Enforcement Correction

1. Define the authoritative trust policy behavior:
   - `require_trusted_fingerprint` must validate against `UserXmppOmemoTrustedKey`.
2. Preserve backwards compatibility via a migration path:
   - the existing latest-key behavior can temporarily be kept as an optional fallback mode.
3. Update labels/help text to match the exact behavior.
4. Add tests:
   - trusted key allows
   - untrusted key denies
   - unknown key denies
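The three test cases above pin down a small decision function. The sketch below assumes hypothetical names (`is_sender_allowed`, plain fingerprint sets standing in for `UserXmppOmemoTrustedKey` rows); it is not the project's actual API.

```python
def is_sender_allowed(
    fingerprint: str,
    trusted: set[str],          # fingerprints recorded as trusted
    known: set[str],            # fingerprints ever observed (latest-key pool)
    require_trusted: bool,
    legacy_fallback: bool = False,
) -> bool:
    """Phase 2 policy: trust records decide, not merely the latest observed key."""
    if not require_trusted:
        return True
    if fingerprint in trusted:
        return True                      # trusted key allows
    if legacy_fallback and fingerprint in known:
        return True                      # optional compatibility mode only
    return False                         # untrusted or unknown key denies
```

The three Phase 2 tests map directly onto trusted, untrusted, and unknown fingerprints with `require_trusted=True`.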

## Phase 3: Commands/Tasks Control Plane Alignment

1. Unify command surface definitions:
   - Command Routing create/edit options include all supported command slugs (`bp`, `codex`, `claude`).
2. Add explicit cross-links:
   - Tasks settings references Command Routing and the Permissions scopes directly.
   - Command Routing references the Permissions scopes affecting each profile.
3. Introduce a capability-impact preview panel:
   - for each command/task action, show the effective allow/deny by scope and channel.
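A hypothetical helper behind the preview panel in item 3: given a per-scope, per-channel policy mapping, compute the effective allow/deny for each action. The names (`effective_access`, the policy dict shape) are assumptions for illustration.

```python
def effective_access(
    actions: dict[str, str],                 # action name -> scope key
    scope_policy: dict[str, dict[str, bool]],  # scope key -> {channel: allowed}
    channel: str,
) -> dict[str, bool]:
    """Map each action to its effective allow/deny for one channel."""
    result = {}
    for action, scope in actions.items():
        rules = scope_policy.get(scope, {})
        # A channel-specific rule wins over the scope's default; absent means deny.
        result[action] = bool(rules.get(channel, rules.get("default", False)))
    return result
```

The panel would render one such table per command profile, once per channel.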

## Phase 4: Settings Shell Normalization

1. Replace the route-name allowlists in `settings_hierarchy_nav` with a category mapping table.
2. Ensure all `/settings/*` pages declare category + tab metadata.
3. Include the missing routes (`codex_settings`, `codex_approval`, `translation_preview`) in the shell.
4. Add a test that fails when a `/settings/*` route lacks shell metadata.
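The coverage test in item 4 only needs the route list and the mapping table. A minimal, framework-free sketch (the function name and the `(path, name)` tuple shape are assumptions):

```python
def missing_shell_metadata(routes, shell):
    """Return route names under /settings/ that have no shell metadata entry.

    routes: iterable of (url_path, route_name) pairs.
    shell: mapping of route_name -> {"category": ..., "tab": ...}.
    """
    return sorted(
        name
        for path, name in routes
        if path.startswith("/settings/") and name not in shell
    )
```

In the Django test this would be fed from the URL resolver, and the assertion is simply that the returned list is empty.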
## Phase 5: Documentation Synchronization

1. Add a settings matrix doc generated from (or validated against) the capability registry:
   - capability key
   - UI location
   - enforcing code path
2. Update the `README.md` and `INSTALL.md` security/modules sections.
3. Add a "policy semantics" section clarifying:
   - encryption-required vs per-scope OMEMO requirements
   - trusted key behavior
   - global override precedence

## Acceptance Criteria

- Every enforced scope is user-visible/configurable (or intentionally internal and documented).
- “Require Trusted Fingerprint” enforcement uses trust records, not only the latest observed key.
- Command Routing and the runtime-supported command slugs are aligned.
- All `/settings/*` pages show consistent settings shell navigation.
- Security/tasks/commands docs reflect real behavior and pass integrity checks.

## Risks and Mitigations

- Risk: the policy behavior change blocks existing workflows.
  - Mitigation: add a compatibility flag and stage the rollout.
- Risk: the registry migration introduces missing scope mappings.
  - Mitigation: an integrity test that compares runtime-enforced keys against the registry.
- Risk: increased UI complexity.
  - Mitigation: keep the layered model with concise, context-aware summaries.

## Implementation Notes

- Keep the migration incremental; avoid a big-bang rewrite.
- Prioritize Phases 1 and 2 first, because they are correctness and security semantics issues.
- Do not add new transport-specific branches; keep the service-agnostic evaluation path in the policy engine.
@@ -1,85 +0,0 @@
-# Create a debug log to confirm script execution
-
-import os
-import sys
-
-import django
-
-LOG_PATH = os.environ.get("AUTH_DEBUG_LOG", "/tmp/auth_debug.log")
-
-
-def log(data):
-    try:
-        with open(LOG_PATH, "a") as f:
-            f.write(f"{data}\n")
-    except Exception:
-        pass
-
-
-# Set up Django environment
-os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings")  # Adjust if needed
-django.setup()
-
-from django.contrib.auth import authenticate  # noqa: E402
-from django.contrib.auth.models import User  # noqa: E402
-
-
-def check_credentials(username, password):
-    """Authenticate user via Django"""
-    user = authenticate(username=username, password=password)
-    return user is not None and user.is_active
-
-
-def main():
-    """Process authentication requests from Prosody"""
-    while True:
-        try:
-            # Read a single line from stdin
-            line = sys.stdin.readline().strip()
-            if not line:
-                break  # Exit if input is empty (EOF)
-
-            # Log received command (for debugging)
-            # log(f"Received: {line}")
-
-            parts = line.split(":")
-            if len(parts) < 3:
-                log("Sending 0")
-                print("0", flush=True)  # Invalid format, return failure
-                continue
-
-            command, username, domain = parts[:3]
-            password = (
-                ":".join(parts[3:]) if len(parts) > 3 else None
-            )  # Reconstruct password
-
-            if command == "auth":
-                if password and check_credentials(username, password):
-                    log("Authentication success")
-                    log("Sent 1")
-                    print("1", flush=True)  # Success
-                else:
-                    log("Authentication failure")
-                    log("Sent 0")
-                    print("0", flush=True)  # Failure
-
-            elif command == "isuser":
-                if User.objects.filter(username=username).exists():
-                    print("1", flush=True)  # User exists
-                else:
-                    print("0", flush=True)  # User does not exist
-
-            elif command == "setpass":
-                print("0", flush=True)  # Not supported
-
-            else:
-                print("0", flush=True)  # Unknown command, return failure
-
-        except Exception as e:
-            # Log any unexpected errors
-            log(f"Error: {str(e)}\n")
-            print("0", flush=True)  # Return failure for any error
-
-
-if __name__ == "__main__":
-    main()
@@ -1,10 +0,0 @@
-#!/bin/sh
-set -eu
-
-# Prosody external auth expects a single long-lived stdin/stdout process.
-# Keep one stable process chain and hand off with exec.
-exec podman exec -i gia sh -lc '
-  cd /code &&
-  . /venv/bin/activate &&
-  exec python -u auth_django.py
-'
0 core/assist/__init__.py Normal file
13 core/assist/engine.py Normal file
@@ -0,0 +1,13 @@
+from __future__ import annotations
+
+from core.assist.repeat_answer import find_repeat_answer, learn_from_message
+from core.models import Message
+from core.tasks.engine import process_inbound_task_intelligence
+
+
+async def process_inbound_assist(message: Message) -> None:
+    if message is None:
+        return
+    await learn_from_message(message)
+    await find_repeat_answer(message.user, message)
+    await process_inbound_task_intelligence(message)
153 core/assist/repeat_answer.py Normal file
@@ -0,0 +1,153 @@
+from __future__ import annotations
+
+import hashlib
+import re
+from dataclasses import dataclass
+
+from asgiref.sync import sync_to_async
+from django.utils import timezone
+
+from core.models import AnswerMemory, AnswerSuggestionEvent, Message
+
+_WORD_RE = re.compile(r"[^a-z0-9\s]+", re.IGNORECASE)
+
+
+@dataclass(slots=True)
+class RepeatAnswerSuggestion:
+    answer_memory_id: str
+    answer_text: str
+    score: float
+
+
+def _normalize_question(text: str) -> str:
+    body = str(text or "").strip().lower()
+    body = _WORD_RE.sub(" ", body)
+    body = re.sub(r"\s+", " ", body).strip()
+    return body
+
+
+def _fingerprint(text: str) -> str:
+    norm = _normalize_question(text)
+    if not norm:
+        return ""
+    return hashlib.sha1(norm.encode("utf-8")).hexdigest()
+
+
+def _is_question(text: str) -> bool:
+    body = str(text or "").strip()
+    if not body:
+        return False
+    low = body.lower()
+    return body.endswith("?") or low.startswith(
+        (
+            "what",
+            "why",
+            "how",
+            "when",
+            "where",
+            "who",
+            "can ",
+            "do ",
+            "did ",
+            "is ",
+            "are ",
+        )
+    )
+
+
+def _is_group_channel(message: Message) -> bool:
+    channel = str(getattr(message, "source_chat_id", "") or "").strip().lower()
+    if channel.endswith("@g.us"):
+        return True
+    return (
+        str(getattr(message, "source_service", "") or "").strip().lower() == "xmpp"
+        and "conference." in channel
+    )
+
+
+async def learn_from_message(message: Message) -> None:
+    if message is None:
+        return
+    text = str(message.text or "").strip()
+    if not text:
+        return
+    if dict(message.message_meta or {}).get("origin_tag"):
+        return
+
+    # Build memory by linking obvious reply answers to prior questions.
+    if message.reply_to_id and message.reply_to:
+        q_text = str(message.reply_to.text or "").strip()
+        if _is_question(q_text):
+            fp = _fingerprint(q_text)
+            if fp:
+                await sync_to_async(AnswerMemory.objects.create)(
+                    user=message.user,
+                    service=message.source_service or "web",
+                    channel_identifier=message.source_chat_id or "",
+                    question_fingerprint=fp,
+                    question_text=q_text,
+                    answer_message=message,
+                    answer_text=text,
+                    confidence_meta={"source": "reply_pair"},
+                )
+
+
+async def find_repeat_answer(user, message: Message) -> RepeatAnswerSuggestion | None:
+    if message is None:
+        return None
+    if not _is_group_channel(message):
+        return None
+    if dict(message.message_meta or {}).get("origin_tag"):
+        return None
+    text = str(message.text or "").strip()
+    if not _is_question(text):
+        return None
+
+    fp = _fingerprint(text)
+    if not fp:
+        return None
+
+    # channel cooldown for repeated suggestions in short windows
+    cooldown_cutoff = timezone.now() - timezone.timedelta(minutes=3)
+    cooldown_exists = await sync_to_async(
+        lambda: AnswerSuggestionEvent.objects.filter(
+            user=user,
+            message__source_service=message.source_service,
+            message__source_chat_id=message.source_chat_id,
+            status="suggested",
+            created_at__gte=cooldown_cutoff,
+        ).exists()
+    )()
+    if cooldown_exists:
+        return None
+
+    memory = await sync_to_async(
+        lambda: AnswerMemory.objects.filter(
+            user=user,
+            service=message.source_service or "web",
+            channel_identifier=message.source_chat_id or "",
+            question_fingerprint=fp,
+        )
+        .order_by("-created_at")
+        .first()
+    )()
+    if not memory:
+        return None
+
+    answer = str(memory.answer_text or "").strip()
+    if not answer:
+        return None
+
+    score = 0.99
+    await sync_to_async(AnswerSuggestionEvent.objects.create)(
+        user=user,
+        message=message,
+        status="suggested",
+        candidate_answer=memory,
+        score=score,
+    )
+    return RepeatAnswerSuggestion(
+        answer_memory_id=str(memory.id),
+        answer_text=answer,
+        score=score,
+    )
@@ -12,8 +12,7 @@ class ClientBase(ABC):
         self.log.info(f"{self.service.capitalize()} client initialising...")
 
     @abstractmethod
-    def start(self):
-        ...
+    def start(self): ...
 
     # @abstractmethod
     # async def send_message(self, recipient, message):
File diff suppressed because it is too large
@@ -1,5 +1,7 @@
 import asyncio
 import base64
+import logging
+import re
 
 import aiohttp
 import orjson
@@ -7,6 +9,43 @@ import requests
 from django.conf import settings
 from rest_framework import status
 
+log = logging.getLogger(__name__)
+SIGNAL_UUID_PATTERN = re.compile(
+    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$",
+    re.IGNORECASE,
+)
+
+
+def _safe_parse_send_response(payload_value) -> int | bool:
+    payload = payload_value
+    if isinstance(payload_value, str):
+        try:
+            payload = orjson.loads(payload_value)
+        except orjson.JSONDecodeError:
+            return False
+    if not isinstance(payload, dict):
+        return False
+    try:
+        ts = payload.get("timestamp")
+        return int(ts) if ts else False
+    except (TypeError, ValueError):
+        return False
+
+
+def normalize_signal_recipient(recipient: str) -> str:
+    raw = str(recipient or "").strip()
+    if not raw:
+        return ""
+    if SIGNAL_UUID_PATTERN.fullmatch(raw):
+        return raw
+    if raw.startswith("+"):
+        digits = re.sub(r"[^0-9]", "", raw)
+        return f"+{digits}" if digits else raw
+    digits_only = re.sub(r"[^0-9]", "", raw)
+    if digits_only and raw.isdigit():
+        return f"+{digits_only}"
+    return raw
+
+
 async def start_typing(uuid):
     base = getattr(settings, "SIGNAL_HTTP_URL", "http://signal:8080").rstrip("/")
@@ -70,7 +109,9 @@ async def download_and_encode_base64(file_url, filename, content_type, session=N
     return None
 
 
-async def send_message_raw(recipient_uuid, text=None, attachments=None):
+async def send_message_raw(
+    recipient_uuid, text=None, attachments=None, metadata=None, detailed=False
+):
     """
     Sends a message using the Signal REST API, ensuring attachment links are not included in the text body.
 
@@ -85,11 +126,13 @@ async def send_message_raw(recipient_uuid, text=None, attachments=None):
     base = getattr(settings, "SIGNAL_HTTP_URL", "http://signal:8080").rstrip("/")
     url = f"{base}/v2/send"
 
+    normalized_recipient = normalize_signal_recipient(recipient_uuid)
     data = {
-        "recipients": [recipient_uuid],
+        "recipients": [normalized_recipient],
         "number": settings.SIGNAL_NUMBER,
        "base64_attachments": [],
    }
+    meta = dict(metadata or {})
 
     async def _attachment_to_base64(attachment, session):
         row = dict(attachment or {})
@@ -132,15 +175,76 @@ async def send_message_raw(recipient_uuid, text=None, attachments=None):
     if text:
         data["message"] = text
 
-    async with aiohttp.ClientSession() as session:
-        async with session.post(url, json=data) as response:
-            response_text = await response.text()
-            response_status = response.status
+    quote_timestamp = int(meta.get("quote_timestamp") or 0)
+    quote_author = str(meta.get("quote_author") or "").strip()
+    quote_text = str(meta.get("quote_text") or "").strip()
+    has_quote = quote_timestamp > 0 and bool(quote_author)
 
-            if response_status == status.HTTP_201_CREATED:
-                ts = orjson.loads(response_text).get("timestamp", None)
-                return ts if ts else False
-            return False
+    payloads = [dict(data)]
+    if has_quote:
+        flat_quote_payload = dict(data)
+        flat_quote_payload["quote_timestamp"] = int(quote_timestamp)
+        flat_quote_payload["quote_author"] = quote_author
+        if quote_text:
+            flat_quote_payload["quote_message"] = quote_text
+
+        nested_quote_payload = dict(data)
+        nested_quote_payload["quote"] = {
+            "id": int(quote_timestamp),
+            "author": quote_author,
+        }
+        if quote_text:
+            nested_quote_payload["quote"]["text"] = quote_text
+
+        payloads = [flat_quote_payload, nested_quote_payload, dict(data)]
+
+    async with aiohttp.ClientSession() as session:
+        for index, payload in enumerate(payloads):
+            async with session.post(url, json=payload) as response:
+                response_text = await response.text()
+                response_status = response.status
+                if response_status == status.HTTP_201_CREATED:
+                    ts = orjson.loads(response_text).get("timestamp", None)
+                    return ts if ts else False
+                if index == len(payloads) - 1:
+                    log.warning(
+                        "Signal send failed status=%s recipient=%s body=%s",
+                        response_status,
+                        normalized_recipient,
+                        response_text[:300],
+                    )
+                    if detailed:
+                        return {
+                            "ok": False,
+                            "status": int(response_status),
+                            "error": str(response_text or "").strip()[:500],
+                            "recipient": normalized_recipient,
+                        }
+                    return False
+                if response_status not in {
+                    status.HTTP_400_BAD_REQUEST,
+                    status.HTTP_422_UNPROCESSABLE_ENTITY,
+                }:
+                    log.warning(
+                        "Signal send failed early status=%s recipient=%s body=%s",
+                        response_status,
+                        normalized_recipient,
+                        response_text[:300],
+                    )
+                    if detailed:
+                        return {
+                            "ok": False,
+                            "status": int(response_status),
+                            "error": str(response_text or "").strip()[:500],
+                            "recipient": normalized_recipient,
+                        }
+                    return False
+                log.warning(
+                    "signal send quote payload rejected (%s), trying fallback shape: %s",
+                    response_status,
+                    response_text[:200],
+                )
+    return False
@@ -152,13 +256,17 @@ async def send_reaction(
 ):
     base = getattr(settings, "SIGNAL_HTTP_URL", "http://signal:8080").rstrip("/")
     sender_number = settings.SIGNAL_NUMBER
-    if not recipient_uuid or not target_timestamp:
+    normalized_recipient = normalize_signal_recipient(recipient_uuid)
+    normalized_target_author = normalize_signal_recipient(
+        str(target_author or normalized_recipient)
+    )
+    if not normalized_recipient or not target_timestamp:
         return False
 
     payload = {
-        "recipient": recipient_uuid,
+        "recipient": normalized_recipient,
         "reaction": str(emoji or ""),
-        "target_author": str(target_author or recipient_uuid),
+        "target_author": normalized_target_author,
         "timestamp": int(target_timestamp),
         "remove": bool(remove),
     }
@@ -269,7 +377,7 @@ def send_message_raw_sync(recipient_uuid, text=None, attachments=None):
     base = getattr(settings, "SIGNAL_HTTP_URL", "http://signal:8080").rstrip("/")
     url = f"{base}/v2/send"
     data = {
-        "recipients": [recipient_uuid],
+        "recipients": [normalize_signal_recipient(recipient_uuid)],
         "number": settings.SIGNAL_NUMBER,
         "base64_attachments": [],
     }
@@ -303,8 +411,8 @@ def send_message_raw_sync(recipient_uuid, text=None, attachments=None):
         response.status_code == status.HTTP_201_CREATED
     ):  # Signal server returns 201 on success
         try:
-            ts = orjson.loads(response.text).get("timestamp", None)
-            return ts if ts else False
-        except orjson.JSONDecodeError:
-            return False
+            payload = response.json()
+        except ValueError:
+            payload = {}
+        return _safe_parse_send_response(payload)
     return False  # If response status is not 201
@@ -1,8 +1,11 @@
|
||||
import asyncio
|
||||
import base64
|
||||
import io
|
||||
import os
|
||||
import secrets
|
||||
import shutil
|
||||
import time
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
from urllib.parse import quote_plus
|
||||
|
||||
@@ -15,6 +18,11 @@ from django.core.cache import cache
|
||||
|
||||
from core.clients import signalapi
|
||||
from core.messaging import media_bridge
|
||||
from core.security.attachments import (
|
||||
validate_attachment_metadata,
|
||||
validate_attachment_url,
|
||||
)
|
||||
from core.transports.capabilities import supports, unsupported_reason
|
||||
from core.util import logs
|
||||
|
||||
log = logs.get_logger("transport")
|
||||
@@ -30,6 +38,10 @@ def _service_key(service: str) -> str:
|
||||
return str(service or "").strip().lower()
|
||||
|
||||
|
||||
def _capability_checks_enabled() -> bool:
|
||||
return bool(getattr(settings, "CAPABILITY_ENFORCEMENT_ENABLED", True))
|
||||
|
||||
|
||||
def _runtime_key(service: str) -> str:
|
||||
return f"gia:service:runtime:{_service_key(service)}"
|
||||
|
||||
@@ -433,23 +445,6 @@ def list_accounts(service: str):
|
||||
Return account identifiers for service UI list.
|
||||
"""
|
||||
service_key = _service_key(service)
|
||||
if service_key == "signal":
|
||||
import requests
|
||||
|
||||
base = str(getattr(settings, "SIGNAL_HTTP_URL", "http://signal:8080")).rstrip(
|
||||
"/"
|
||||
)
|
||||
try:
|
||||
response = requests.get(f"{base}/v1/accounts", timeout=20)
|
||||
if not response.ok:
|
||||
return []
|
||||
payload = orjson.loads(response.text or "[]")
|
||||
if isinstance(payload, list):
|
||||
return payload
|
||||
except Exception:
|
||||
return []
|
||||
return []
|
||||
|
||||
state = get_runtime_state(service_key)
|
||||
accounts = state.get("accounts") or []
|
||||
if service_key == "whatsapp" and not accounts:
|
||||
@@ -479,6 +474,57 @@ def _account_key(value: str) -> str:
|
||||
return raw
|
||||
|
||||
|
||||
def _wipe_signal_cli_local_state() -> bool:
|
||||
"""
|
||||
Best-effort local signal-cli state reset for json-rpc deployments where
|
||||
REST account delete endpoints are unavailable.
|
||||
"""
|
||||
config_roots = []
|
||||
base_dir = getattr(settings, "BASE_DIR", None)
|
||||
if base_dir:
|
||||
config_roots.append(str(Path(base_dir) / "signal-cli-config"))
|
||||
config_roots.extend(
|
||||
[
|
||||
"/code/signal-cli-config",
|
||||
"/signal-cli-config",
|
||||
"/home/.local/share/signal-cli",
|
||||
]
|
||||
)
|
||||
removed_any = False
|
||||
seen_roots = set()
|
||||
for root in config_roots:
|
||||
root = str(root or "").strip()
|
||||
if not root or root in seen_roots:
|
||||
continue
|
||||
seen_roots.add(root)
|
||||
if not os.path.isdir(root):
|
||||
continue
|
||||
try:
|
||||
entries = os.listdir(root)
|
||||
except Exception:
|
||||
continue
|
||||
for entry in entries:
|
||||
if not entry:
|
||||
continue
|
||||
# Keep runtime configuration scaffold; wipe account/pairing state.
|
||||
if entry in {"jsonrpc2.yml", "jsonrpc.yml"}:
|
||||
continue
|
||||
path = os.path.join(root, entry)
|
||||
if os.path.isdir(path):
|
||||
try:
|
||||
shutil.rmtree(path)
|
||||
removed_any = True
|
||||
except Exception:
|
||||
continue
|
||||
else:
|
||||
try:
|
||||
os.remove(path)
|
||||
removed_any = True
|
||||
except Exception:
|
||||
continue
|
||||
return removed_any
|
||||
|
||||
|
||||
def unlink_account(service: str, account: str) -> bool:
|
||||
service_key = _service_key(service)
|
||||
account_value = str(account or "").strip()
|
||||
@@ -492,14 +538,37 @@ def unlink_account(service: str, account: str) -> bool:
|
||||
"/"
|
||||
)
|
||||
target = quote_plus(account_value)
|
||||
unlinked = False
|
||||
for path in (f"/v1/accounts/{target}", f"/v1/account/{target}"):
|
||||
try:
|
||||
response = requests.delete(f"{base}{path}", timeout=20)
|
||||
if response.ok:
|
||||
return True
|
||||
unlinked = True
|
||||
break
|
||||
except Exception:
|
||||
continue
|
||||
return False
|
||||
if unlinked:
|
||||
return True
|
||||
wiped = _wipe_signal_cli_local_state()
|
||||
if not wiped:
|
||||
return False
|
||||
# Best-effort verification: if the REST API still reports the same account,
|
||||
# the runtime likely still holds active linked state and the UI should not
|
||||
# claim relink is ready yet.
|
||||
remaining_accounts = list_accounts("signal")
|
||||
for row in remaining_accounts:
|
||||
if isinstance(row, dict):
|
||||
candidate = (
|
||||
row.get("number")
|
||||
or row.get("id")
|
||||
or row.get("jid")
|
||||
or row.get("account")
|
||||
)
|
||||
else:
|
||||
candidate = row
|
||||
if _account_key(str(candidate or "")) == _account_key(account_value):
|
||||
return False
|
||||
return True
|
||||
|
||||
if service_key in {"whatsapp", "instagram"}:
|
||||
state = get_runtime_state(service_key)
|
||||
@@ -614,17 +683,21 @@ async def _normalize_gateway_attachment(service: str, row: dict, session):
|
||||
if isinstance(content, memoryview):
|
||||
content = content.tobytes()
|
||||
if isinstance(content, bytes):
|
||||
filename, content_type = validate_attachment_metadata(
|
||||
filename=normalized.get("filename") or "attachment.bin",
|
||||
content_type=normalized.get("content_type") or "application/octet-stream",
|
||||
size=normalized.get("size") or len(content),
|
||||
)
|
||||
blob_key = media_bridge.put_blob(
|
||||
service=service,
|
||||
content=content,
|
||||
filename=normalized.get("filename") or "attachment.bin",
|
||||
content_type=normalized.get("content_type") or "application/octet-stream",
|
||||
filename=filename,
|
||||
content_type=content_type,
|
||||
)
|
||||
return {
|
||||
"blob_key": blob_key,
|
||||
"filename": normalized.get("filename") or "attachment.bin",
|
||||
"content_type": normalized.get("content_type")
|
||||
or "application/octet-stream",
|
||||
"filename": filename,
|
||||
"content_type": content_type,
|
||||
"size": normalized.get("size") or len(content),
|
||||
}
|
||||
|
||||
@@ -634,33 +707,39 @@ async def _normalize_gateway_attachment(service: str, row: dict, session):
|
||||
source_url = normalized.get("url")
|
||||
if source_url:
|
||||
try:
|
||||
async with session.get(source_url) as response:
|
||||
safe_url = validate_attachment_url(source_url)
|
||||
async with session.get(safe_url) as response:
|
||||
if response.status == 200:
|
||||
payload = await response.read()
|
||||
blob_key = media_bridge.put_blob(
|
||||
service=service,
|
||||
content=payload,
|
||||
filename, content_type = validate_attachment_metadata(
|
||||
filename=normalized.get("filename")
|
||||
or source_url.rstrip("/").split("/")[-1]
|
||||
or safe_url.rstrip("/").split("/")[-1]
|
||||
or "attachment.bin",
|
||||
content_type=normalized.get("content_type")
|
||||
or response.headers.get(
|
||||
"Content-Type", "application/octet-stream"
|
||||
),
|
||||
size=normalized.get("size") or len(payload),
|
||||
)
|
||||
blob_key = media_bridge.put_blob(
|
||||
service=service,
|
||||
content=payload,
|
||||
filename=filename,
|
||||
content_type=content_type,
|
||||
)
|
||||
return {
|
||||
"blob_key": blob_key,
|
||||
"filename": normalized.get("filename")
|
||||
or source_url.rstrip("/").split("/")[-1]
|
||||
or "attachment.bin",
|
||||
"content_type": normalized.get("content_type")
|
||||
or response.headers.get(
|
||||
"Content-Type", "application/octet-stream"
|
||||
),
|
||||
"filename": filename,
|
||||
"content_type": content_type,
|
||||
"size": normalized.get("size") or len(payload),
|
||||
}
|
||||
except Exception:
|
||||
log.warning("%s attachment fetch failed for %s", service, source_url)
|
||||
except Exception as exc:
|
||||
log.warning(
|
||||
"%s attachment fetch failed for %s: %s",
|
||||
service,
|
||||
source_url,
|
||||
exc,
|
||||
)
|
||||
return normalized
|
||||
|
||||
|
||||
@@ -711,12 +790,25 @@ async def send_message_raw(
|
||||
Unified outbound send path used by models/views/UR.
|
||||
"""
|
||||
service_key = _service_key(service)
|
||||
if _capability_checks_enabled() and not supports(service_key, "send"):
|
||||
reason = unsupported_reason(service_key, "send")
|
||||
log.warning(
|
||||
"capability-check failed service=%s feature=send: %s",
|
||||
service_key,
|
||||
reason,
|
||||
)
|
||||
return False
|
||||
if service_key == "signal":
|
||||
prepared_attachments = await prepare_outbound_attachments(
|
||||
service_key, attachments or []
|
||||
)
|
||||
result = await signalapi.send_message_raw(recipient, text, prepared_attachments)
|
||||
meta = dict(metadata or {})
|
||||
result = await signalapi.send_message_raw(
|
||||
recipient,
|
||||
text,
|
||||
prepared_attachments,
|
||||
metadata=meta,
|
||||
)
|
||||
xmpp_source_id = str(meta.get("xmpp_source_id") or "").strip()
|
||||
if xmpp_source_id and result:
|
||||
from core.models import PersonIdentifier
|
||||
@@ -764,9 +856,7 @@ async def send_message_raw(
|
||||
runtime_result = await runtime_client.send_message_raw(
|
||||
recipient,
|
||||
text=text,
|
||||
attachments=await prepare_outbound_attachments(
|
||||
service_key, attachments or []
|
||||
),
|
||||
attachments=attachments or [],
|
||||
metadata=dict(metadata or {}),
|
||||
)
|
||||
if runtime_result is not False and runtime_result is not None:
|
||||
@@ -775,11 +865,8 @@ async def send_message_raw(
|
||||
log.warning("%s runtime send failed: %s", service_key, exc)
|
||||
# Web/UI process cannot access UR in-process runtime client directly.
|
||||
# Hand off send to UR via shared cache command queue.
|
||||
prepared_attachments = await prepare_outbound_attachments(
|
||||
service_key, attachments or []
|
||||
)
|
||||
command_attachments = []
|
||||
for att in prepared_attachments:
|
||||
for att in (attachments or []):
|
||||
row = dict(att or {})
|
||||
# Keep payload cache-friendly and avoid embedding raw bytes.
|
||||
for key in ("content",):
|
||||
@@ -847,6 +934,14 @@ async def send_reaction(
|
||||
remove: bool = False,
|
||||
):
|
||||
     service_key = _service_key(service)
+    if _capability_checks_enabled() and not supports(service_key, "reactions"):
+        reason = unsupported_reason(service_key, "reactions")
+        log.warning(
+            "capability-check failed service=%s feature=reactions: %s",
+            service_key,
+            reason,
+        )
+        return False
     if not str(emoji or "").strip() and not remove:
         return False

@@ -917,6 +1012,13 @@ async def send_reaction(


 async def start_typing(service: str, recipient: str):
     service_key = _service_key(service)
+    if _capability_checks_enabled() and not supports(service_key, "typing"):
+        log.warning(
+            "capability-check failed service=%s feature=typing: %s",
+            service_key,
+            unsupported_reason(service_key, "typing"),
+        )
+        return False
     if service_key == "signal":
         await signalapi.start_typing(recipient)
         return True

@@ -947,6 +1049,13 @@ async def start_typing(service: str, recipient: str):


 async def stop_typing(service: str, recipient: str):
     service_key = _service_key(service)
+    if _capability_checks_enabled() and not supports(service_key, "typing"):
+        log.warning(
+            "capability-check failed service=%s feature=typing: %s",
+            service_key,
+            unsupported_reason(service_key, "typing"),
+        )
+        return False
     if service_key == "signal":
         await signalapi.stop_typing(recipient)
         return True

@@ -982,9 +1091,8 @@ async def fetch_attachment(service: str, attachment_ref: dict):
     service_key = _service_key(service)
     if service_key == "signal":
         attachment_id = attachment_ref.get("id") or attachment_ref.get("attachment_id")
-        if not attachment_id:
-            return None
-        return await signalapi.fetch_signal_attachment(attachment_id)
+        if attachment_id:
+            return await signalapi.fetch_signal_attachment(attachment_id)

     runtime_client = get_runtime_client(service_key)
     if runtime_client and hasattr(runtime_client, "fetch_attachment"):
@@ -1000,21 +1108,27 @@ async def fetch_attachment(service: str, attachment_ref: dict):
         if blob_key:
             return media_bridge.get_blob(blob_key)
         if direct_url:
+            safe_url = validate_attachment_url(direct_url)
             timeout = aiohttp.ClientTimeout(total=20)
             async with aiohttp.ClientSession(timeout=timeout) as session:
-                async with session.get(direct_url) as response:
+                async with session.get(safe_url) as response:
                     if response.status != 200:
                         return None
                     content = await response.read()
-                    return {
-                        "content": content,
-                        "content_type": response.headers.get(
-                            "Content-Type",
-                            attachment_ref.get("content_type", "application/octet-stream"),
-                        ),
-                        "filename": attachment_ref.get("filename")
-                        or direct_url.rstrip("/").split("/")[-1]
-                        or "attachment.bin",
-                    }
+                    filename, content_type = validate_attachment_metadata(
+                        filename=attachment_ref.get("filename")
+                        or safe_url.rstrip("/").split("/")[-1]
+                        or "attachment.bin",
+                        content_type=response.headers.get(
+                            "Content-Type",
+                            attachment_ref.get("content_type", "application/octet-stream"),
+                        ),
+                        size=len(content),
+                    )
+                    return {
+                        "content": content,
+                        "content_type": content_type,
+                        "filename": filename,
+                        "size": len(content),
+                    }
         return None

@@ -1054,7 +1168,7 @@ def get_link_qr(service: str, device_name: str):
     response = requests.get(
         f"{base}/v1/qrcodelink",
         params={"device_name": device},
-        timeout=20,
+        timeout=5,
     )
     response.raise_for_status()
     return response.content
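Reviewer note: the hunks above gate each transport feature behind `supports()` / `unsupported_reason()`. Those names come from the diff, but their implementation is not shown here; the sketch below is purely an illustrative assumption of what such a capability table could look like, not the project's actual registry.

```python
# Hypothetical capability table -- NOT from this PR; only the function
# names supports() and unsupported_reason() are taken from the diff.
CAPABILITIES: dict[str, set[str]] = {
    "signal": {"reactions", "typing"},
    "whatsapp": {"reactions"},
    "xmpp": set(),
}


def supports(service_key: str, feature: str) -> bool:
    # A service supports a feature iff it is listed in its capability set.
    return feature in CAPABILITIES.get(service_key, set())


def unsupported_reason(service_key: str, feature: str) -> str:
    # Human-readable reason used in the "capability-check failed" log line.
    if service_key not in CAPABILITIES:
        return f"unknown service: {service_key}"
    return f"service {service_key} does not support {feature}"


print(supports("signal", "typing"))          # → True
print(unsupported_reason("xmpp", "typing"))  # → service xmpp does not support typing
```

The early `return False` on a failed check keeps unsupported calls from ever reaching the per-service client code.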
 1674  core/clients/xmpp.py  (file diff suppressed because it is too large; Load Diff)
    0  core/commands/__init__.py  (new file)
   29  core/commands/base.py  (new file)
@@ -0,0 +1,29 @@
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Any


@dataclass(slots=True)
class CommandContext:
    service: str
    channel_identifier: str
    message_id: str
    user_id: int
    message_text: str
    payload: dict[str, Any] = field(default_factory=dict)


@dataclass(slots=True)
class CommandResult:
    ok: bool
    status: str = "ok"
    error: str = ""
    payload: dict[str, Any] = field(default_factory=dict)


class CommandHandler:
    slug = ""

    async def execute(self, ctx: CommandContext) -> CommandResult:
        raise NotImplementedError
  129  core/commands/delivery.py  (new file)
@@ -0,0 +1,129 @@
from __future__ import annotations

import time

from asgiref.sync import sync_to_async

from core.clients import transport
from core.models import ChatSession, Message

STATUS_VISIBLE_SOURCE_SERVICES = {"web", "xmpp", "signal", "whatsapp"}


def chunk_for_transport(text: str, limit: int = 3000) -> list[str]:
    body = str(text or "").strip()
    if not body:
        return []
    if len(body) <= limit:
        return [body]
    parts = []
    remaining = body
    while len(remaining) > limit:
        cut = remaining.rfind("\n\n", 0, limit)
        if cut < int(limit * 0.45):
            cut = remaining.rfind("\n", 0, limit)
        if cut < int(limit * 0.35):
            cut = limit
        parts.append(remaining[:cut].rstrip())
        remaining = remaining[cut:].lstrip()
    if remaining:
        parts.append(remaining)
    return [part for part in parts if part]
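Reviewer note: `chunk_for_transport` prefers a paragraph break, falls back to a line break, and only hard-cuts when neither lands past the 45%/35% thresholds. A standalone copy of the function (verbatim from the new file above) shows the behavior:

```python
def chunk_for_transport(text: str, limit: int = 3000) -> list[str]:
    # Copied verbatim from the new core/commands/delivery.py above.
    body = str(text or "").strip()
    if not body:
        return []
    if len(body) <= limit:
        return [body]
    parts = []
    remaining = body
    while len(remaining) > limit:
        # Prefer a paragraph break, then a single line break, then a hard cut.
        cut = remaining.rfind("\n\n", 0, limit)
        if cut < int(limit * 0.45):
            cut = remaining.rfind("\n", 0, limit)
        if cut < int(limit * 0.35):
            cut = limit
        parts.append(remaining[:cut].rstrip())
        remaining = remaining[cut:].lstrip()
    if remaining:
        parts.append(remaining)
    return [part for part in parts if part]


print(chunk_for_transport("para one\n\npara two", limit=10))
# → ['para one', 'para two']
```

Text with no break points at all still terminates: it degrades to fixed-size slices of `limit` characters.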


async def post_status_in_source(
    trigger_message: Message, text: str, origin_tag: str
) -> bool:
    service = str(trigger_message.source_service or "").strip().lower()
    if service not in STATUS_VISIBLE_SOURCE_SERVICES:
        return False
    if service == "web":
        await sync_to_async(Message.objects.create)(
            user=trigger_message.user,
            session=trigger_message.session,
            sender_uuid="",
            text=text,
            ts=int(time.time() * 1000),
            custom_author="BOT",
            source_service="web",
            source_chat_id=trigger_message.source_chat_id or "",
            message_meta={"origin_tag": origin_tag},
        )
        return True
    # For non-web, route through transport raw API.
    if not str(trigger_message.source_chat_id or "").strip():
        return False
    try:
        await transport.send_message_raw(
            service,
            str(trigger_message.source_chat_id or "").strip(),
            text=text,
            attachments=[],
            metadata={"origin_tag": origin_tag},
        )
        return True
    except Exception:
        return False


async def post_to_channel_binding(
    trigger_message: Message,
    binding_service: str,
    binding_channel_identifier: str,
    text: str,
    origin_tag: str,
    command_slug: str,
) -> bool:
    service = str(binding_service or "").strip().lower()
    channel_identifier = str(binding_channel_identifier or "").strip()
    if service == "web":
        session = None
        if (
            channel_identifier
            and channel_identifier == str(trigger_message.source_chat_id or "").strip()
        ):
            session = trigger_message.session
        if session is None and channel_identifier:
            session = await sync_to_async(
                lambda: ChatSession.objects.filter(
                    user=trigger_message.user,
                    identifier__identifier=channel_identifier,
                )
                .order_by("-last_interaction")
                .first()
            )()
        if session is None:
            session = trigger_message.session
        await sync_to_async(Message.objects.create)(
            user=trigger_message.user,
            session=session,
            sender_uuid="",
            text=text,
            ts=int(time.time() * 1000),
            custom_author="BOT",
            source_service="web",
            source_chat_id=channel_identifier
            or str(trigger_message.source_chat_id or ""),
            message_meta={"origin_tag": origin_tag},
        )
        return True
    try:
        chunks = chunk_for_transport(text, limit=3000)
        if not chunks:
            return False
        for chunk in chunks:
            ts = await transport.send_message_raw(
                service,
                channel_identifier,
                text=chunk,
                attachments=[],
                metadata={
                    "origin_tag": origin_tag,
                    "command_slug": command_slug,
                },
            )
            if not ts:
                return False
        return True
    except Exception:
        return False
  501  core/commands/engine.py  (new file)
@@ -0,0 +1,501 @@
from __future__ import annotations

from asgiref.sync import sync_to_async

from core.commands.base import CommandContext, CommandResult
from core.commands.handlers.bp import (
    BPCommandHandler,
    bp_reply_is_optional_for_trigger,
    bp_subcommands_enabled,
    bp_trigger_matches,
)
from core.commands.handlers.claude import ClaudeCommandHandler, claude_trigger_matches
from core.commands.handlers.codex import CodexCommandHandler, codex_trigger_matches
from core.commands.policies import ensure_variant_policies_for_profile
from core.commands.registry import get as get_handler
from core.commands.registry import register
from core.messaging.reply_sync import is_mirrored_origin
from core.models import (
    CommandAction,
    CommandChannelBinding,
    CommandProfile,
    Message,
    PersonIdentifier,
)
from core.security.command_policy import CommandSecurityContext, evaluate_command_policy
from core.tasks.chat_defaults import ensure_default_source_for_chat
from core.util import logs

log = logs.get_logger("command_engine")

_REGISTERED = False


def _channel_variants(service: str, channel_identifier: str) -> list[str]:
    value = str(channel_identifier or "").strip()
    if not value:
        return []
    variants = [value]
    svc = str(service or "").strip().lower()
    if svc == "whatsapp":
        bare = value.split("@", 1)[0].strip()
        if bare and bare not in variants:
            variants.append(bare)
        group = f"{bare}@g.us" if bare else ""
        if group and group not in variants:
            variants.append(group)
    return variants


def _canonical_channel_identifier(service: str, channel_identifier: str) -> str:
    value = str(channel_identifier or "").strip()
    if not value:
        return ""
    if str(service or "").strip().lower() == "whatsapp":
        return value.split("@", 1)[0].strip()
    return value


def _signal_identifier_rank(identifier_value: str) -> int:
    identifier_text = str(identifier_value or "").strip()
    if not identifier_text:
        return 99
    if identifier_text.startswith("group."):
        return 0
    if identifier_text.startswith("+"):
        return 1
    return 2


def _expand_service_channel_variants(
    user_id: int,
    service: str,
    identifiers: list[str],
) -> list[str]:
    variants: list[str] = []
    for identifier in identifiers:
        for value in _channel_variants(service, identifier):
            if value and value not in variants:
                variants.append(value)
    if str(service or "").strip().lower() != "signal" or not variants:
        return variants
    person_ids = list(
        PersonIdentifier.objects.filter(
            user_id=user_id,
            service="signal",
            identifier__in=variants,
        )
        .values_list("person_id", flat=True)
        .distinct()
    )
    if not person_ids:
        return variants
    alias_rows = list(
        PersonIdentifier.objects.filter(
            user_id=user_id,
            service="signal",
            person_id__in=person_ids,
        ).values_list("identifier", flat=True)
    )
    for value in alias_rows:
        cleaned = str(value or "").strip()
        if cleaned and cleaned not in variants:
            variants.append(cleaned)
    variants.sort(key=lambda value: (_signal_identifier_rank(value), value))
    return variants


def _preferred_channel_identifier(service: str, identifiers: list[str]) -> str:
    cleaned = [
        str(value or "").strip() for value in identifiers if str(value or "").strip()
    ]
    if not cleaned:
        return ""
    if str(service or "").strip().lower() == "signal":
        cleaned.sort(key=lambda value: (_signal_identifier_rank(value), value))
        return cleaned[0]
    return cleaned[0]


def _effective_bootstrap_scope(
    ctx: CommandContext,
    trigger_message: Message,
) -> tuple[str, str]:
    service = str(ctx.service or "").strip().lower()
    identifier = str(ctx.channel_identifier or "").strip()
    if service != "web":
        return service, identifier
    session_identifier = getattr(
        getattr(trigger_message, "session", None), "identifier", None
    )
    fallback_service = (
        str(getattr(session_identifier, "service", "") or "").strip().lower()
    )
    fallback_identifier = str(
        getattr(session_identifier, "identifier", "") or ""
    ).strip()
    if fallback_service and fallback_identifier and fallback_service != "web":
        return fallback_service, fallback_identifier
    return service, identifier


def _ensure_bp_profile(user_id: int) -> CommandProfile:
    profile, _ = CommandProfile.objects.get_or_create(
        user_id=user_id,
        slug="bp",
        defaults={
            "name": "Business Plan",
            "enabled": True,
            "trigger_token": ".bp",
            "reply_required": True,
            "exact_match_only": True,
            "window_scope": "conversation",
            "visibility_mode": "status_in_source",
        },
    )
    updated = False
    if not profile.enabled:
        profile.enabled = True
        updated = True
    if updated:
        profile.save(update_fields=["enabled", "updated_at"])
    if str(profile.trigger_token or "").strip() != ".bp":
        profile.trigger_token = ".bp"
        profile.save(update_fields=["trigger_token", "updated_at"])
    for action_type, position in (
        ("extract_bp", 0),
        ("save_document", 1),
        ("post_result", 2),
    ):
        action, created = CommandAction.objects.get_or_create(
            profile=profile,
            action_type=action_type,
            defaults={"enabled": True, "position": position},
        )
        if (not created) and (not action.enabled):
            action.enabled = True
            action.save(update_fields=["enabled", "updated_at"])
    ensure_variant_policies_for_profile(profile)
    return profile


def _ensure_codex_profile(user_id: int) -> CommandProfile:
    profile, _ = CommandProfile.objects.get_or_create(
        user_id=user_id,
        slug="codex",
        defaults={
            "name": "Codex",
            "enabled": True,
            "trigger_token": ".codex",
            "reply_required": False,
            "exact_match_only": False,
            "window_scope": "conversation",
            "visibility_mode": "status_in_source",
        },
    )
    if not profile.enabled:
        profile.enabled = True
        profile.save(update_fields=["enabled", "updated_at"])
    if str(profile.trigger_token or "").strip() != ".codex":
        profile.trigger_token = ".codex"
        profile.save(update_fields=["trigger_token", "updated_at"])
    return profile


def _ensure_claude_profile(user_id: int) -> CommandProfile:
    profile, _ = CommandProfile.objects.get_or_create(
        user_id=user_id,
        slug="claude",
        defaults={
            "name": "Claude",
            "enabled": True,
            "trigger_token": ".claude",
            "reply_required": False,
            "exact_match_only": False,
            "window_scope": "conversation",
            "visibility_mode": "status_in_source",
        },
    )
    if not profile.enabled:
        profile.enabled = True
        profile.save(update_fields=["enabled", "updated_at"])
    if str(profile.trigger_token or "").strip() != ".claude":
        profile.trigger_token = ".claude"
        profile.save(update_fields=["trigger_token", "updated_at"])
    return profile


def _ensure_profile_for_slug(user_id: int, slug: str) -> CommandProfile | None:
    if slug == "bp":
        return _ensure_bp_profile(user_id)
    if slug == "codex":
        return _ensure_codex_profile(user_id)
    if slug == "claude":
        return _ensure_claude_profile(user_id)
    return None


def _detected_bootstrap_slugs(message_text: str) -> list[str]:
    slugs: list[str] = []
    if bp_trigger_matches(message_text, ".bp", False):
        slugs.append("bp")
    if codex_trigger_matches(message_text, ".codex", False):
        slugs.append("codex")
    if claude_trigger_matches(message_text, ".claude", False):
        slugs.append("claude")
    return slugs


def _auto_setup_profile_bindings_for_first_command(
    ctx: CommandContext,
    trigger_message: Message,
) -> None:
    author = str(getattr(trigger_message, "custom_author", "") or "").strip().upper()
    if author != "USER":
        return
    slugs = _detected_bootstrap_slugs(ctx.message_text)
    if not slugs:
        return
    service, identifier = _effective_bootstrap_scope(ctx, trigger_message)
    service = str(service or "").strip().lower()
    canonical = _canonical_channel_identifier(service, identifier)
    variants = _expand_service_channel_variants(ctx.user_id, service, [canonical])
    canonical = _preferred_channel_identifier(service, variants) or canonical
    if not service or not variants:
        return
    for slug in slugs:
        profile = _ensure_profile_for_slug(ctx.user_id, slug)
        if profile is None:
            continue
        already_enabled = CommandChannelBinding.objects.filter(
            profile=profile,
            enabled=True,
            direction="ingress",
            service=service,
            channel_identifier__in=variants,
        ).exists()
        if already_enabled:
            continue
        for direction in ("ingress", "egress"):
            binding, _ = CommandChannelBinding.objects.get_or_create(
                profile=profile,
                direction=direction,
                service=service,
                channel_identifier=canonical,
                defaults={"enabled": True},
            )
            if not binding.enabled:
                binding.enabled = True
                binding.save(update_fields=["enabled", "updated_at"])
            alternate_variants = [value for value in variants if value != canonical]
            if alternate_variants:
                CommandChannelBinding.objects.filter(
                    profile=profile,
                    direction=direction,
                    service=service,
                    channel_identifier__in=alternate_variants,
                ).update(enabled=False)
        ensure_default_source_for_chat(
            user=trigger_message.user,
            service=service,
            channel_identifier=canonical,
            message=trigger_message,
        )


def ensure_handlers_registered():
    global _REGISTERED
    if _REGISTERED:
        return
    register(BPCommandHandler())
    register(CodexCommandHandler())
    register(ClaudeCommandHandler())
    _REGISTERED = True


async def _eligible_profiles(ctx: CommandContext) -> list[CommandProfile]:
    def _load():
        trigger = (
            Message.objects.select_related("session", "session__identifier")
            .filter(id=ctx.message_id, user_id=ctx.user_id)
            .first()
        )
        direct_variants = _expand_service_channel_variants(
            ctx.user_id,
            ctx.service,
            [ctx.channel_identifier],
        )
        source_channel = str(getattr(trigger, "source_chat_id", "") or "").strip()
        for expanded in _expand_service_channel_variants(
            ctx.user_id,
            ctx.service,
            [source_channel],
        ):
            if expanded and expanded not in direct_variants:
                direct_variants.append(expanded)
        if not direct_variants:
            return []
        direct = list(
            CommandProfile.objects.filter(
                user_id=ctx.user_id,
                enabled=True,
                channel_bindings__enabled=True,
                channel_bindings__direction="ingress",
                channel_bindings__service=ctx.service,
                channel_bindings__channel_identifier__in=direct_variants,
            ).distinct()
        )
        if direct:
            return direct
        # Compose-originated messages use `web` service even when the
        # underlying conversation is mapped to a platform identifier.
        if str(ctx.service or "").strip().lower() != "web":
            return []
        identifier = getattr(getattr(trigger, "session", None), "identifier", None)
        fallback_service = str(getattr(identifier, "service", "") or "").strip().lower()
        fallback_identifier = str(getattr(identifier, "identifier", "") or "").strip()
        fallback_variants = _expand_service_channel_variants(
            ctx.user_id,
            fallback_service,
            [fallback_identifier],
        )
        for expanded in _expand_service_channel_variants(
            ctx.user_id,
            fallback_service,
            [source_channel],
        ):
            if expanded and expanded not in fallback_variants:
                fallback_variants.append(expanded)
        if not fallback_service or not fallback_variants:
            return []
        return list(
            CommandProfile.objects.filter(
                user_id=ctx.user_id,
                enabled=True,
                channel_bindings__enabled=True,
                channel_bindings__direction="ingress",
                channel_bindings__service=fallback_service,
                channel_bindings__channel_identifier__in=fallback_variants,
            ).distinct()
        )

    return await sync_to_async(_load)()


def _matches_trigger(profile: CommandProfile, text: str) -> bool:
    if profile.slug == "bp" and bp_subcommands_enabled():
        return bp_trigger_matches(
            message_text=text,
            trigger_token=profile.trigger_token,
            exact_match_only=profile.exact_match_only,
        )
    if profile.slug == "codex":
        return codex_trigger_matches(
            message_text=text,
            trigger_token=profile.trigger_token,
            exact_match_only=profile.exact_match_only,
        )
    if profile.slug == "claude":
        return claude_trigger_matches(
            message_text=text,
            trigger_token=profile.trigger_token,
            exact_match_only=profile.exact_match_only,
        )
    body = str(text or "").strip()
    trigger = str(profile.trigger_token or "").strip()
    if not trigger:
        return False
    if profile.exact_match_only:
        return body == trigger
    return trigger in body


async def process_inbound_message(ctx: CommandContext) -> list[CommandResult]:
    ensure_handlers_registered()
    trigger_message = await sync_to_async(
        lambda: Message.objects.select_related("user", "session", "session__identifier")
        .filter(id=ctx.message_id)
        .first()
    )()
    if trigger_message is None:
        return []
    if is_mirrored_origin(trigger_message.message_meta):
        return []
    effective_service, effective_channel = _effective_bootstrap_scope(
        ctx, trigger_message
    )
    security_context = CommandSecurityContext(
        service=effective_service,
        channel_identifier=effective_channel,
        message_meta=dict(getattr(trigger_message, "message_meta", {}) or {}),
        payload=dict(ctx.payload or {}),
    )
    await sync_to_async(_auto_setup_profile_bindings_for_first_command)(
        ctx,
        trigger_message,
    )

    profiles = await _eligible_profiles(ctx)
    results: list[CommandResult] = []
    for profile in profiles:
        if not _matches_trigger(profile, ctx.message_text):
            continue
        decision = await sync_to_async(evaluate_command_policy)(
            user=trigger_message.user,
            scope_key=f"command.{profile.slug}",
            context=security_context,
        )
        if not decision.allowed:
            results.append(
                CommandResult(
                    ok=False,
                    status="skipped",
                    error=f"policy_denied:{decision.code}",
                    payload={
                        "profile": profile.slug,
                        "scope": f"command.{profile.slug}",
                        "reason": decision.reason,
                    },
                )
            )
            continue
        if profile.reply_required and trigger_message.reply_to_id is None:
            if (
                profile.slug == "bp"
                and bp_subcommands_enabled()
                and bp_reply_is_optional_for_trigger(ctx.message_text)
            ):
                pass
            else:
                results.append(
                    CommandResult(
                        ok=False,
                        status="skipped",
                        error="reply_required",
                        payload={"profile": profile.slug},
                    )
                )
                continue
        handler = get_handler(profile.slug)
        if handler is None:
            results.append(
                CommandResult(
                    ok=False,
                    status="failed",
                    error=f"missing_handler:{profile.slug}",
                )
            )
            continue
        try:
            result = await handler.execute(ctx)
            results.append(result)
        except Exception as exc:
            log.exception(
                "command execution failed for profile=%s: %s", profile.slug, exc
            )
            results.append(
                CommandResult(
                    ok=False,
                    status="failed",
                    error=f"handler_exception:{exc}",
                )
            )
    return results
    0  core/commands/handlers/__init__.py  (new file)
  715  core/commands/handlers/bp.py  (new file)
@@ -0,0 +1,715 @@
from __future__ import annotations

import re
import time

from asgiref.sync import sync_to_async
from django.conf import settings

from core.commands.base import CommandContext, CommandHandler, CommandResult
from core.commands.delivery import post_status_in_source, post_to_channel_binding
from core.commands.policies import BP_VARIANT_META, load_variant_policy
from core.messaging import ai as ai_runner
from core.messaging.text_export import plain_text_blob
from core.messaging.utils import messages_to_string
from core.models import (
    AI,
    BusinessPlanDocument,
    BusinessPlanRevision,
    CommandAction,
    CommandChannelBinding,
    CommandRun,
    CommandVariantPolicy,
    Message,
)

_BP_ROOT_RE = re.compile(r"^\s*(?:\.bp\b|#bp#?)\s*$", re.IGNORECASE)
_BP_SET_RE = re.compile(
    r"^\s*(?:\.bp\s+set\b|#bp\s+set#?)(?P<rest>.*)$",
    re.IGNORECASE | re.DOTALL,
)
_BP_SET_RANGE_RE = re.compile(
    r"^\s*(?:\.bp\s+set\s+range\b|#bp\s+set\s+range#?)(?:.*)$",
    re.IGNORECASE | re.DOTALL,
)


class BPParsedCommand(dict):
    @property
    def command(self) -> str | None:
        value = self.get("command")
        return str(value) if value else None

    @property
    def remainder_text(self) -> str:
        return str(self.get("remainder_text") or "")


def parse_bp_subcommand(text: str) -> BPParsedCommand:
    body = str(text or "")
    if _BP_SET_RANGE_RE.match(body):
        return BPParsedCommand(command="set_range", remainder_text="")
    match = _BP_SET_RE.match(body)
    if match:
        return BPParsedCommand(
            command="set", remainder_text=str(match.group("rest") or "").strip()
        )
    return BPParsedCommand(command=None, remainder_text="")


def bp_subcommands_enabled() -> bool:
    raw = getattr(settings, "BP_SUBCOMMANDS_V1", True)
    if raw is None:
        return True
    return bool(raw)


def bp_trigger_matches(
    message_text: str, trigger_token: str, exact_match_only: bool
) -> bool:
    body = str(message_text or "").strip()
    trigger = str(trigger_token or "").strip()
    parsed = parse_bp_subcommand(body)
    if parsed.command and bp_subcommands_enabled():
        return True
    if _BP_ROOT_RE.match(body):
        return True
    if not trigger:
        return False
    if exact_match_only:
        return body.lower() == trigger.lower()
    return trigger.lower() in body.lower()


def bp_reply_is_optional_for_trigger(message_text: str) -> bool:
    parsed = parse_bp_subcommand(message_text)
    return parsed.command == "set"


def _bp_system_prompt():
    return (
        "Create a structured business plan using the given template. "
        "Follow the template section order exactly. "
        "If data is missing, write concise assumptions and risks. "
        "Return markdown only."
    )


def _clamp_transcript(transcript: str, max_chars: int) -> str:
    text = str(transcript or "")
    if max_chars <= 0 or len(text) <= max_chars:
        return text
    head_size = min(2000, max_chars // 3)
    tail_size = max(0, max_chars - head_size - 140)
    omitted = len(text) - head_size - tail_size
    return (
        text[:head_size].rstrip()
        + f"\n\n[... truncated {max(0, omitted)} chars ...]\n\n"
        + text[-tail_size:].lstrip()
    )


class BPCommandHandler(CommandHandler):
    slug = "bp"

    def _variant_key_for_text(self, text: str) -> str:
        parsed = parse_bp_subcommand(text)
        if parsed.command == "set":
            return "bp_set"
        if parsed.command == "set_range":
            return "bp_set_range"
        return "bp"

    def _variant_display_name(self, variant_key: str) -> str:
        meta = BP_VARIANT_META.get(str(variant_key or "").strip(), {})
        return str(meta.get("name") or variant_key or "bp")

    async def _effective_policy(
        self,
        *,
        profile,
        variant_key: str,
        action_types: set[str],
    ) -> dict:
        policy = await sync_to_async(load_variant_policy)(profile, variant_key)
        if isinstance(policy, CommandVariantPolicy):
            return {
                "enabled": bool(policy.enabled),
                "generation_mode": str(policy.generation_mode or "verbatim"),
                "send_plan_to_egress": bool(policy.send_plan_to_egress)
                and ("post_result" in action_types),
                "send_status_to_source": bool(policy.send_status_to_source)
                or str(profile.visibility_mode or "") == "status_in_source",
                "send_status_to_egress": bool(policy.send_status_to_egress),
                "store_document": bool(getattr(policy, "store_document", True)),
            }
        return {
            "enabled": True,
            "generation_mode": "ai" if variant_key == "bp" else "verbatim",
            "send_plan_to_egress": "post_result" in action_types,
            "send_status_to_source": str(profile.visibility_mode or "")
            == "status_in_source",
            "send_status_to_egress": False,
            "store_document": True,
        }

    async def _fanout(self, run: CommandRun, text: str) -> dict:
        profile = run.profile
        trigger = await sync_to_async(
            lambda: Message.objects.select_related("session", "user")
            .filter(id=run.trigger_message_id)
            .first()
        )()
        if trigger is None:
            return {"sent_bindings": 0, "failed_bindings": 0}
        bindings = await sync_to_async(list)(
            CommandChannelBinding.objects.filter(
                profile=profile,
                enabled=True,
                direction="egress",
            )
        )
        sent_bindings = 0
        failed_bindings = 0
        for binding in bindings:
            ok = await post_to_channel_binding(
                trigger_message=trigger,
                binding_service=binding.service,
                binding_channel_identifier=binding.channel_identifier,
                text=text,
                origin_tag=f"bp:{run.id}",
                command_slug=self.slug,
            )
            if ok:
                sent_bindings += 1
            else:
                failed_bindings += 1
        return {"sent_bindings": sent_bindings, "failed_bindings": failed_bindings}

    async def _fanout_status(self, run: CommandRun, text: str) -> dict:
        profile = run.profile
        trigger = await sync_to_async(
            lambda: Message.objects.select_related("session", "user")
            .filter(id=run.trigger_message_id)
            .first()
        )()
        if trigger is None:
            return {"sent_bindings": 0, "failed_bindings": 0}
        bindings = await sync_to_async(list)(
            CommandChannelBinding.objects.filter(
                profile=profile,
                enabled=True,
                direction="egress",
            )
        )
        sent_bindings = 0
        failed_bindings = 0
        for binding in bindings:
            ok = await post_to_channel_binding(
                trigger_message=trigger,
                binding_service=binding.service,
                binding_channel_identifier=binding.channel_identifier,
                text=text,
                origin_tag=f"bp-status-egress:{run.id}",
                command_slug=self.slug,
            )
            if ok:
                sent_bindings += 1
            else:
                failed_bindings += 1
        return {"sent_bindings": sent_bindings, "failed_bindings": failed_bindings}

    async def _load_window(self, trigger: Message, anchor: Message) -> list[Message]:
        return await sync_to_async(list)(
            Message.objects.filter(
                user=trigger.user,
                session=trigger.session,
                ts__gte=int(anchor.ts or 0),
                ts__lte=int(trigger.ts or 0),
            )
            .order_by("ts")
            .select_related(
                "session", "session__identifier", "session__identifier__person"
            )
        )

    def _annotation(
        self, mode: str, message_count: int, has_addendum: bool = False
    ) -> str:
        if mode == "set" and has_addendum:
            return "Generated from 1 message + 1 addendum."
        if message_count == 1:
            return "Generated from 1 message."
        return f"Generated from {int(message_count)} messages."

    async def _persist_document(
        self,
        *,
        run: CommandRun,
        trigger: Message,
        profile,
        anchor: Message | None,
        content: str,
        mode: str,
        source_message_ids: list[str],
        annotation: str,
    ) -> BusinessPlanDocument:
        payload = {
            "mode": mode,
            "source_message_ids": list(source_message_ids),
            "annotation": annotation,
        }
        document = await sync_to_async(BusinessPlanDocument.objects.create)(
            user=trigger.user,
            command_profile=profile,
            source_service=trigger.source_service or "web",
            source_channel_identifier=trigger.source_chat_id or "",
            trigger_message=trigger,
            anchor_message=anchor,
            title=f"Business Plan {time.strftime('%Y-%m-%d %H:%M:%S')}",
            status="draft",
            content_markdown=content,
            structured_payload=payload,
        )
        await sync_to_async(BusinessPlanRevision.objects.create)(
            document=document,
            editor_user=trigger.user,
            content_markdown=content,
            structured_payload=payload,
        )
        run.result_ref = document
        await sync_to_async(run.save)(update_fields=["result_ref", "updated_at"])
        return document

    async def _execute_set_or_range(
        self,
        *,
        trigger: Message,
        run: CommandRun,
        profile,
        policy: dict,
        variant_key: str,
        parsed: BPParsedCommand,
    ) -> CommandResult:
        mode = str(parsed.command or "")
        remainder = parsed.remainder_text
        anchor = trigger.reply_to

        if mode == "set_range":
|
||||
if anchor is None:
|
||||
run.status = "failed"
|
||||
run.error = "bp_set_range_requires_reply_target"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
rows = await self._load_window(trigger, anchor)
|
||||
deterministic_content = plain_text_blob(rows)
|
||||
if not deterministic_content.strip():
|
||||
run.status = "failed"
|
||||
run.error = "bp_set_range_empty_content"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
if str(policy.get("generation_mode") or "verbatim") == "ai":
|
||||
ai_obj = await sync_to_async(
|
||||
lambda: AI.objects.filter(user=trigger.user).first()
|
||||
)()
|
||||
if ai_obj is None:
|
||||
run.status = "failed"
|
||||
run.error = "ai_not_configured"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
prompt = [
|
||||
{
|
||||
"role": "system",
|
||||
"content": (
|
||||
"Transform source chat text into a structured business plan in markdown. "
|
||||
"Do not reference any user template."
|
||||
),
|
||||
},
|
||||
{"role": "user", "content": deterministic_content},
|
||||
]
|
||||
try:
|
||||
content = str(
|
||||
await ai_runner.run_prompt(
|
||||
prompt,
|
||||
ai_obj,
|
||||
operation="command_bp_set_range_extract",
|
||||
)
|
||||
or ""
|
||||
).strip()
|
||||
except Exception as exc:
|
||||
run.status = "failed"
|
||||
run.error = f"bp_ai_failed:{exc}"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
if not content:
|
||||
run.status = "failed"
|
||||
run.error = "empty_ai_response"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
else:
|
||||
content = deterministic_content
|
||||
annotation = self._annotation("set_range", len(rows))
|
||||
doc = None
|
||||
if bool(policy.get("store_document", True)):
|
||||
doc = await self._persist_document(
|
||||
run=run,
|
||||
trigger=trigger,
|
||||
profile=profile,
|
||||
anchor=anchor,
|
||||
content=content,
|
||||
mode="set_range",
|
||||
source_message_ids=[str(row.id) for row in rows],
|
||||
annotation=annotation,
|
||||
)
|
||||
elif mode == "set":
|
||||
source_ids: list[str] = []
|
||||
if anchor is not None and not remainder:
|
||||
content = str(anchor.text or "").strip() or "(no text)"
|
||||
source_ids.append(str(anchor.id))
|
||||
has_addendum = False
|
||||
elif anchor is not None and remainder:
|
||||
base = str(anchor.text or "").strip() or "(no text)"
|
||||
content = (
|
||||
f"{base}\n" "--- Addendum (newer message text) ---\n" f"{remainder}"
|
||||
)
|
||||
source_ids.extend([str(anchor.id), str(trigger.id)])
|
||||
has_addendum = True
|
||||
elif remainder:
|
||||
content = remainder
|
||||
source_ids.append(str(trigger.id))
|
||||
has_addendum = False
|
||||
else:
|
||||
run.status = "failed"
|
||||
run.error = "bp_set_empty_content"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
|
||||
if str(policy.get("generation_mode") or "verbatim") == "ai":
|
||||
ai_obj = await sync_to_async(
|
||||
lambda: AI.objects.filter(user=trigger.user).first()
|
||||
)()
|
||||
if ai_obj is None:
|
||||
run.status = "failed"
|
||||
run.error = "ai_not_configured"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
prompt = [
|
||||
{
|
||||
"role": "system",
|
||||
"content": (
|
||||
"Transform source chat text into a structured business plan in markdown. "
|
||||
"Do not reference any user template."
|
||||
),
|
||||
},
|
||||
{"role": "user", "content": content},
|
||||
]
|
||||
try:
|
||||
ai_content = str(
|
||||
await ai_runner.run_prompt(
|
||||
prompt,
|
||||
ai_obj,
|
||||
operation="command_bp_set_extract",
|
||||
)
|
||||
or ""
|
||||
).strip()
|
||||
except Exception as exc:
|
||||
run.status = "failed"
|
||||
run.error = f"bp_ai_failed:{exc}"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
if not ai_content:
|
||||
run.status = "failed"
|
||||
run.error = "empty_ai_response"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
content = ai_content
|
||||
|
||||
annotation = self._annotation(
|
||||
"set", 1 if not has_addendum else 2, has_addendum
|
||||
)
|
||||
doc = None
|
||||
if bool(policy.get("store_document", True)):
|
||||
doc = await self._persist_document(
|
||||
run=run,
|
||||
trigger=trigger,
|
||||
profile=profile,
|
||||
anchor=anchor,
|
||||
content=content,
|
||||
mode="set",
|
||||
source_message_ids=source_ids,
|
||||
annotation=annotation,
|
||||
)
|
||||
else:
|
||||
run.status = "failed"
|
||||
run.error = "bp_unknown_subcommand"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
|
||||
fanout_stats = {"sent_bindings": 0, "failed_bindings": 0}
|
||||
if bool(policy.get("send_plan_to_egress")):
|
||||
fanout_body = f"{content}\n\n{annotation}".strip()
|
||||
fanout_stats = await self._fanout(run, fanout_body)
|
||||
|
||||
sent_count = int(fanout_stats.get("sent_bindings") or 0)
|
||||
failed_count = int(fanout_stats.get("failed_bindings") or 0)
|
||||
status_text = (
|
||||
f"[bp:{self._variant_display_name(variant_key)}:{policy.get('generation_mode')}] "
|
||||
f"{annotation.strip()} "
|
||||
f"{'Saved as ' + doc.title + ' · ' if doc else 'Not saved (store_document disabled) · '}"
|
||||
f"fanout sent:{sent_count}"
|
||||
).strip()
|
||||
if failed_count:
|
||||
status_text += f" failed:{failed_count}"
|
||||
|
||||
if bool(policy.get("send_status_to_source")):
|
||||
await post_status_in_source(
|
||||
trigger_message=trigger,
|
||||
text=status_text,
|
||||
origin_tag=f"bp-status:{trigger.id}",
|
||||
)
|
||||
if bool(policy.get("send_status_to_egress")):
|
||||
await self._fanout_status(run, status_text)
|
||||
|
||||
run.status = "ok"
|
||||
run.error = ""
|
||||
await sync_to_async(run.save)(update_fields=["status", "error", "updated_at"])
|
||||
return CommandResult(
|
||||
ok=True,
|
||||
status="ok",
|
||||
payload={"document_id": str(doc.id) if doc else ""},
|
||||
)
|
||||
|
||||
async def _execute_legacy_ai(
|
||||
self,
|
||||
*,
|
||||
trigger: Message,
|
||||
run: CommandRun,
|
||||
profile,
|
||||
policy: dict,
|
||||
variant_key: str,
|
||||
) -> CommandResult:
|
||||
if trigger.reply_to_id is None:
|
||||
run.status = "failed"
|
||||
run.error = "bp_requires_reply_target"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
|
||||
anchor = trigger.reply_to
|
||||
rows = await self._load_window(trigger, anchor)
|
||||
transcript = messages_to_string(
|
||||
rows,
|
||||
author_rewrites={"USER": "Operator", "BOT": "Assistant"},
|
||||
)
|
||||
max_transcript_chars = int(
|
||||
getattr(settings, "BP_MAX_TRANSCRIPT_CHARS", 12000) or 12000
|
||||
)
|
||||
transcript = _clamp_transcript(transcript, max_transcript_chars)
|
||||
default_template = (
|
||||
"Business Plan:\n"
|
||||
"- Objective\n"
|
||||
"- Audience\n"
|
||||
"- Offer\n"
|
||||
"- GTM\n"
|
||||
"- Risks"
|
||||
)
|
||||
template_text = profile.template_text or default_template
|
||||
max_template_chars = int(
|
||||
getattr(settings, "BP_MAX_TEMPLATE_CHARS", 5000) or 5000
|
||||
)
|
||||
template_text = str(template_text or "")[:max_template_chars]
|
||||
generation_mode = str(policy.get("generation_mode") or "ai")
|
||||
if generation_mode == "verbatim":
|
||||
summary = plain_text_blob(rows)
|
||||
if not summary.strip():
|
||||
run.status = "failed"
|
||||
run.error = "bp_verbatim_empty_content"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
else:
|
||||
ai_obj = await sync_to_async(
|
||||
lambda: AI.objects.filter(user=trigger.user).first()
|
||||
)()
|
||||
if ai_obj is None:
|
||||
run.status = "failed"
|
||||
run.error = "ai_not_configured"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
|
||||
prompt = [
|
||||
{"role": "system", "content": _bp_system_prompt()},
|
||||
{
|
||||
"role": "user",
|
||||
"content": (
|
||||
"Template:\n"
|
||||
f"{template_text}\n\n"
|
||||
"Messages:\n"
|
||||
f"{transcript}"
|
||||
),
|
||||
},
|
||||
]
|
||||
try:
|
||||
summary = str(
|
||||
await ai_runner.run_prompt(
|
||||
prompt, ai_obj, operation="command_bp_extract"
|
||||
)
|
||||
or ""
|
||||
).strip()
|
||||
if not summary:
|
||||
raise RuntimeError("empty_ai_response")
|
||||
except Exception as exc:
|
||||
run.status = "failed"
|
||||
run.error = f"bp_ai_failed:{exc}"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="failed", error=run.error)
|
||||
|
||||
annotation = self._annotation("legacy", len(rows))
|
||||
document = None
|
||||
if bool(policy.get("store_document", True)):
|
||||
document = await self._persist_document(
|
||||
run=run,
|
||||
trigger=trigger,
|
||||
profile=profile,
|
||||
anchor=anchor,
|
||||
content=summary,
|
||||
mode="legacy_ai",
|
||||
source_message_ids=[str(row.id) for row in rows],
|
||||
annotation=annotation,
|
||||
)
|
||||
|
||||
fanout_stats = {"sent_bindings": 0, "failed_bindings": 0}
|
||||
if bool(policy.get("send_plan_to_egress")):
|
||||
fanout_stats = await self._fanout(run, summary)
|
||||
|
||||
sent_count = int(fanout_stats.get("sent_bindings") or 0)
|
||||
failed_count = int(fanout_stats.get("failed_bindings") or 0)
|
||||
status_text = (
|
||||
f"[bp:{self._variant_display_name(variant_key)}:{generation_mode}] "
|
||||
f"Generated business plan: "
|
||||
f"{document.title if document else 'not saved (store_document disabled)'} "
|
||||
f"· fanout sent:{sent_count}"
|
||||
)
|
||||
if failed_count:
|
||||
status_text += f" failed:{failed_count}"
|
||||
|
||||
if bool(policy.get("send_status_to_source")):
|
||||
await post_status_in_source(
|
||||
trigger_message=trigger,
|
||||
text=status_text,
|
||||
origin_tag=f"bp-status:{trigger.id}",
|
||||
)
|
||||
if bool(policy.get("send_status_to_egress")):
|
||||
await self._fanout_status(run, status_text)
|
||||
|
||||
run.status = "ok"
|
||||
run.error = ""
|
||||
await sync_to_async(run.save)(update_fields=["status", "error", "updated_at"])
|
||||
return CommandResult(
|
||||
ok=True,
|
||||
status="ok",
|
||||
payload={"document_id": str(document.id) if document else ""},
|
||||
)
|
||||
|
||||
async def execute(self, ctx: CommandContext) -> CommandResult:
|
||||
trigger = await sync_to_async(
|
||||
lambda: Message.objects.select_related("user", "session")
|
||||
.filter(id=ctx.message_id)
|
||||
.first()
|
||||
)()
|
||||
if trigger is None:
|
||||
return CommandResult(ok=False, status="failed", error="trigger_not_found")
|
||||
|
||||
profile = await sync_to_async(
|
||||
lambda: trigger.user.commandprofile_set.filter(
|
||||
slug=self.slug, enabled=True
|
||||
).first()
|
||||
)()
|
||||
if profile is None:
|
||||
return CommandResult(ok=False, status="skipped", error="profile_missing")
|
||||
|
||||
actions = await sync_to_async(list)(
|
||||
CommandAction.objects.filter(profile=profile, enabled=True).order_by(
|
||||
"position", "id"
|
||||
)
|
||||
)
|
||||
action_types = {row.action_type for row in actions}
|
||||
if "extract_bp" not in action_types:
|
||||
return CommandResult(
|
||||
ok=False, status="skipped", error="extract_bp_disabled"
|
||||
)
|
||||
|
||||
run, created = await sync_to_async(CommandRun.objects.get_or_create)(
|
||||
profile=profile,
|
||||
trigger_message=trigger,
|
||||
defaults={"user": trigger.user, "status": "running"},
|
||||
)
|
||||
if not created and run.status in {"ok", "running"}:
|
||||
return CommandResult(
|
||||
ok=True,
|
||||
status="ok",
|
||||
payload={"document_id": str(run.result_ref_id or "")},
|
||||
)
|
||||
|
||||
run.status = "running"
|
||||
run.error = ""
|
||||
await sync_to_async(run.save)(update_fields=["status", "error", "updated_at"])
|
||||
|
||||
variant_key = self._variant_key_for_text(ctx.message_text)
|
||||
policy = await self._effective_policy(
|
||||
profile=profile,
|
||||
variant_key=variant_key,
|
||||
action_types=action_types,
|
||||
)
|
||||
if not bool(policy.get("enabled")):
|
||||
run.status = "skipped"
|
||||
run.error = f"variant_disabled:{variant_key}"
|
||||
await sync_to_async(run.save)(
|
||||
update_fields=["status", "error", "updated_at"]
|
||||
)
|
||||
return CommandResult(ok=False, status="skipped", error=run.error)
|
||||
|
||||
parsed = parse_bp_subcommand(ctx.message_text)
|
||||
if parsed.command and bp_subcommands_enabled():
|
||||
return await self._execute_set_or_range(
|
||||
trigger=trigger,
|
||||
run=run,
|
||||
profile=profile,
|
||||
policy=policy,
|
||||
variant_key=variant_key,
|
||||
parsed=parsed,
|
||||
)
|
||||
|
||||
return await self._execute_legacy_ai(
|
||||
trigger=trigger,
|
||||
run=run,
|
||||
profile=profile,
|
||||
policy=policy,
|
||||
variant_key=variant_key,
|
||||
)
|
||||
629 core/commands/handlers/claude.py Normal file
@@ -0,0 +1,629 @@
from __future__ import annotations

import hashlib
import re

from asgiref.sync import sync_to_async
from django.utils import timezone

from core.commands.base import CommandContext, CommandHandler, CommandResult
from core.commands.delivery import post_status_in_source
from core.messaging.text_export import plain_text_blob
from core.models import (
    ChatTaskSource,
    CodexPermissionRequest,
    CodexRun,
    CommandProfile,
    DerivedTask,
    ExternalSyncEvent,
    Message,
    TaskProject,
    TaskProviderConfig,
)
from core.tasks.codex_approval import queue_codex_event_with_pre_approval
from core.tasks.codex_support import channel_variants, resolve_external_chat_id

_CLAUDE_DEFAULT_RE = re.compile(
    r"^\s*(?:\.claude\b|#claude#?)(?P<body>.*)$",
    re.IGNORECASE | re.DOTALL,
)
_CLAUDE_PLAN_RE = re.compile(
    r"^\s*(?:\.claude\s+plan\b|#claude\s+plan#?)(?P<body>.*)$",
    re.IGNORECASE | re.DOTALL,
)
_CLAUDE_STATUS_RE = re.compile(
    r"^\s*(?:\.claude\s+status\b|#claude\s+status#?)\s*$", re.IGNORECASE
)
_CLAUDE_APPROVE_DENY_RE = re.compile(
    r"^\s*(?:\.claude|#claude)\s+(?P<action>approve|deny)\s+(?P<approval_key>[A-Za-z0-9._:-]+)#?\s*$",
    re.IGNORECASE,
)
_PROJECT_TOKEN_RE = re.compile(r"\[\s*project\s*:\s*([^\]]+)\]", re.IGNORECASE)
_REFERENCE_RE = re.compile(r"(?<!\w)#([A-Za-z0-9_-]+)\b")


class ClaudeParsedCommand(dict):
    @property
    def command(self) -> str | None:
        value = self.get("command")
        return str(value) if value else None

    @property
    def body_text(self) -> str:
        return str(self.get("body_text") or "")

    @property
    def approval_key(self) -> str:
        return str(self.get("approval_key") or "")


def parse_claude_command(text: str) -> ClaudeParsedCommand:
    body = str(text or "")
    m = _CLAUDE_APPROVE_DENY_RE.match(body)
    if m:
        return ClaudeParsedCommand(
            command=str(m.group("action") or "").strip().lower(),
            body_text="",
            approval_key=str(m.group("approval_key") or "").strip(),
        )
    if _CLAUDE_STATUS_RE.match(body):
        return ClaudeParsedCommand(command="status", body_text="", approval_key="")
    m = _CLAUDE_PLAN_RE.match(body)
    if m:
        return ClaudeParsedCommand(
            command="plan",
            body_text=str(m.group("body") or "").strip(),
            approval_key="",
        )
    m = _CLAUDE_DEFAULT_RE.match(body)
    if m:
        return ClaudeParsedCommand(
            command="default",
            body_text=str(m.group("body") or "").strip(),
            approval_key="",
        )
    return ClaudeParsedCommand(command=None, body_text="", approval_key="")


def claude_trigger_matches(
    message_text: str, trigger_token: str, exact_match_only: bool
) -> bool:
    body = str(message_text or "").strip()
    parsed = parse_claude_command(body)
    if parsed.command:
        return True
    trigger = str(trigger_token or "").strip()
    if not trigger:
        return False
    if exact_match_only:
        return body.lower() == trigger.lower()
    return trigger.lower() in body.lower()


class ClaudeCommandHandler(CommandHandler):
    slug = "claude"
    _provider_name = "claude_cli"
    _approval_prefix = "claude_approval"

    async def _load_trigger(self, message_id: str) -> Message | None:
        return await sync_to_async(
            lambda: Message.objects.select_related(
                "user", "session", "session__identifier", "reply_to"
            )
            .filter(id=message_id)
            .first()
        )()

    def _effective_scope(self, trigger: Message) -> tuple[str, str]:
        service = str(getattr(trigger, "source_service", "") or "").strip().lower()
        channel = str(getattr(trigger, "source_chat_id", "") or "").strip()
        identifier = getattr(getattr(trigger, "session", None), "identifier", None)
        fallback_service = str(getattr(identifier, "service", "") or "").strip().lower()
        fallback_identifier = str(getattr(identifier, "identifier", "") or "").strip()
        if (
            service == "web"
            and fallback_service
            and fallback_identifier
            and fallback_service != "web"
        ):
            return fallback_service, fallback_identifier
        return service or "web", channel

    async def _mapped_sources(
        self, user, service: str, channel: str
    ) -> list[ChatTaskSource]:
        variants = channel_variants(service, channel)
        if not variants:
            return []
        return await sync_to_async(list)(
            ChatTaskSource.objects.filter(
                user=user,
                enabled=True,
                service=service,
                channel_identifier__in=variants,
            ).select_related("project", "epic")
        )

    async def _linked_task_from_reply(
        self, user, reply_to: Message | None
    ) -> DerivedTask | None:
        if reply_to is None:
            return None
        by_origin = await sync_to_async(
            lambda: DerivedTask.objects.filter(user=user, origin_message=reply_to)
            .select_related("project", "epic")
            .order_by("-created_at")
            .first()
        )()
        if by_origin is not None:
            return by_origin
        return await sync_to_async(
            lambda: DerivedTask.objects.filter(
                user=user, events__source_message=reply_to
            )
            .select_related("project", "epic")
            .order_by("-created_at")
            .first()
        )()

    def _extract_project_token(self, body_text: str) -> tuple[str, str]:
        text = str(body_text or "")
        m = _PROJECT_TOKEN_RE.search(text)
        if not m:
            return "", text
        token = str(m.group(1) or "").strip()
        cleaned = _PROJECT_TOKEN_RE.sub("", text).strip()
        return token, cleaned

    def _extract_reference(self, body_text: str) -> str:
        m = _REFERENCE_RE.search(str(body_text or ""))
        if not m:
            return ""
        return str(m.group(1) or "").strip()

    async def _resolve_task(
        self, user, reference_code: str, reply_task: DerivedTask | None
    ) -> DerivedTask | None:
        if reference_code:
            return await sync_to_async(
                lambda: DerivedTask.objects.filter(
                    user=user, reference_code=reference_code
                )
                .select_related("project", "epic")
                .order_by("-created_at")
                .first()
            )()
        return reply_task

    async def _resolve_project(
        self,
        *,
        user,
        service: str,
        channel: str,
        task: DerivedTask | None,
        reply_task: DerivedTask | None,
        project_token: str,
    ) -> tuple[TaskProject | None, str]:
        if task is not None:
            return task.project, ""
        if reply_task is not None:
            return reply_task.project, ""
        if project_token:
            project = await sync_to_async(
                lambda: TaskProject.objects.filter(
                    user=user, name__iexact=project_token
                ).first()
            )()
            if project is not None:
                return project, ""
            return None, f"project_not_found:{project_token}"

        mapped = await self._mapped_sources(user, service, channel)
        project_ids = sorted({str(row.project_id) for row in mapped if row.project_id})
        if len(project_ids) == 1:
            project = next(
                (
                    row.project
                    for row in mapped
                    if str(row.project_id) == project_ids[0]
                ),
                None,
            )
            return project, ""
        if len(project_ids) > 1:
            return None, "project_required:[project:Name]"
        return None, "project_unresolved"

    async def _post_source_status(
        self, trigger: Message, text: str, suffix: str
    ) -> None:
        await post_status_in_source(
            trigger_message=trigger,
            text=text,
            origin_tag=f"claude-status:{suffix}",
        )

    async def _run_status(
        self, trigger: Message, service: str, channel: str, project: TaskProject | None
    ) -> CommandResult:
        def _load_runs():
            qs = CodexRun.objects.filter(user=trigger.user)
            if service:
                qs = qs.filter(source_service=service)
            if channel:
                qs = qs.filter(source_channel=channel)
            if project is not None:
                qs = qs.filter(project=project)
            return list(qs.order_by("-created_at")[:10])

        runs = await sync_to_async(_load_runs)()
        if not runs:
            await self._post_source_status(
                trigger, "[claude] no recent runs for this scope.", "empty"
            )
            return CommandResult(ok=True, status="ok", payload={"count": 0})
        lines = ["[claude] recent runs:"]
        for row in runs:
            ref = str(getattr(getattr(row, "task", None), "reference_code", "") or "-")
            summary = str((row.result_payload or {}).get("summary") or "").strip()
            summary_part = f" · {summary}" if summary else ""
            lines.append(f"- {row.status} run={row.id} task=#{ref}{summary_part}")
        await self._post_source_status(trigger, "\n".join(lines), "runs")
        return CommandResult(ok=True, status="ok", payload={"count": len(runs)})

    async def _run_approval_action(
        self,
        trigger: Message,
        parsed: ClaudeParsedCommand,
        current_service: str,
        current_channel: str,
    ) -> CommandResult:
        cfg = await sync_to_async(
            lambda: TaskProviderConfig.objects.filter(
                user=trigger.user, provider=self._provider_name
            ).first()
        )()
        settings_payload = dict(getattr(cfg, "settings", {}) or {})
        approver_service = (
            str(settings_payload.get("approver_service") or "").strip().lower()
        )
        approver_identifier = str(
            settings_payload.get("approver_identifier") or ""
        ).strip()
        if not approver_service or not approver_identifier:
            return CommandResult(
                ok=False, status="failed", error="approver_channel_not_configured"
            )

        if str(current_service or "").strip().lower() != approver_service or str(
            current_channel or ""
        ).strip() not in set(channel_variants(approver_service, approver_identifier)):
            return CommandResult(
                ok=False,
                status="failed",
                error="approval_command_not_allowed_in_this_channel",
            )

        approval_key = parsed.approval_key
        request = await sync_to_async(
            lambda: CodexPermissionRequest.objects.select_related(
                "codex_run", "external_sync_event"
            )
            .filter(user=trigger.user, approval_key=approval_key)
            .first()
        )()
        if request is None:
            return CommandResult(
                ok=False, status="failed", error="approval_key_not_found"
            )

        now = timezone.now()
        if parsed.command == "approve":
            request.status = "approved"
            request.resolved_at = now
            request.resolved_by_identifier = current_channel
            request.resolution_note = "approved via claude command"
            await sync_to_async(request.save)(
                update_fields=[
                    "status",
                    "resolved_at",
                    "resolved_by_identifier",
                    "resolution_note",
                ]
            )
            if request.external_sync_event_id:
                await sync_to_async(
                    ExternalSyncEvent.objects.filter(
                        id=request.external_sync_event_id
                    ).update
                )(
                    status="ok",
                    error="",
                )
            run = request.codex_run
            run.status = "approved_waiting_resume"
            run.error = ""
            await sync_to_async(run.save)(
                update_fields=["status", "error", "updated_at"]
            )
            source_service = str(run.source_service or "")
            source_channel = str(run.source_channel or "")
            resume_payload = dict(request.resume_payload or {})
            resume_action = str(resume_payload.get("action") or "").strip().lower()
            resume_provider_payload = dict(resume_payload.get("provider_payload") or {})
            if resume_action and resume_provider_payload:
                provider_payload = dict(resume_provider_payload)
                provider_payload["codex_run_id"] = str(run.id)
                provider_payload["source_service"] = source_service
                provider_payload["source_channel"] = source_channel
                event_action = resume_action
                resume_idempotency_key = str(
                    resume_payload.get("idempotency_key") or ""
                ).strip()
                resume_event_key = (
                    resume_idempotency_key
                    if resume_idempotency_key
                    else f"{self._approval_prefix}:{approval_key}:approved"
                )
            else:
                provider_payload = dict(
                    run.request_payload.get("provider_payload") or {}
                )
                provider_payload.update(
                    {
                        "mode": "approval_response",
                        "approval_key": approval_key,
                        "resume_payload": dict(request.resume_payload or {}),
                        "codex_run_id": str(run.id),
                        "source_service": source_service,
                        "source_channel": source_channel,
                    }
                )
                event_action = "append_update"
                resume_event_key = f"{self._approval_prefix}:{approval_key}:approved"
            await sync_to_async(ExternalSyncEvent.objects.update_or_create)(
                idempotency_key=resume_event_key,
                defaults={
                    "user": trigger.user,
                    "task_id": run.task_id,
                    "task_event_id": run.derived_task_event_id,
                    "provider": self._provider_name,
                    "status": "pending",
                    "payload": {
                        "action": event_action,
                        "provider_payload": provider_payload,
                    },
                    "error": "",
                },
            )
            return CommandResult(
                ok=True,
                status="ok",
                payload={"approval_key": approval_key, "resolution": "approved"},
            )

        request.status = "denied"
        request.resolved_at = now
        request.resolved_by_identifier = current_channel
        request.resolution_note = "denied via claude command"
        await sync_to_async(request.save)(
            update_fields=[
                "status",
                "resolved_at",
                "resolved_by_identifier",
                "resolution_note",
            ]
        )
        if request.external_sync_event_id:
            await sync_to_async(
                ExternalSyncEvent.objects.filter(
                    id=request.external_sync_event_id
                ).update
            )(
                status="failed",
                error="approval_denied",
            )
        run = request.codex_run
        run.status = "denied"
        run.error = "approval_denied"
        await sync_to_async(run.save)(update_fields=["status", "error", "updated_at"])
        await sync_to_async(ExternalSyncEvent.objects.update_or_create)(
            idempotency_key=f"{self._approval_prefix}:{approval_key}:denied",
            defaults={
                "user": trigger.user,
                "task_id": run.task_id,
                "task_event_id": run.derived_task_event_id,
                "provider": self._provider_name,
                "status": "failed",
                "payload": {
                    "action": "append_update",
                    "provider_payload": {
                        "mode": "approval_response",
                        "approval_key": approval_key,
                        "codex_run_id": str(run.id),
                    },
                },
                "error": "approval_denied",
            },
        )
        return CommandResult(
            ok=True,
            status="ok",
            payload={"approval_key": approval_key, "resolution": "denied"},
        )

    async def _create_submission(
        self,
        *,
        trigger: Message,
        mode: str,
        body_text: str,
        task: DerivedTask,
        project: TaskProject,
    ) -> CommandResult:
        cfg = await sync_to_async(
            lambda: TaskProviderConfig.objects.filter(
                user=trigger.user, provider=self._provider_name, enabled=True
            ).first()
        )()
        if cfg is None:
            return CommandResult(
                ok=False, status="failed", error="provider_disabled_or_missing"
            )

        service, channel = self._effective_scope(trigger)
        external_chat_id = await sync_to_async(resolve_external_chat_id)(
            user=trigger.user,
            provider=self._provider_name,
            service=service,
            channel=channel,
        )
        payload = {
            "task_id": str(task.id),
            "reference_code": str(task.reference_code or ""),
            "title": str(task.title or ""),
            "external_key": str(task.external_key or ""),
            "project_name": str(getattr(project, "name", "") or ""),
            "epic_name": str(getattr(getattr(task, "epic", None), "name", "") or ""),
            "source_service": service,
            "source_channel": channel,
            "external_chat_id": external_chat_id,
            "origin_message_id": str(getattr(task, "origin_message_id", "") or ""),
            "trigger_message_id": str(trigger.id),
            "mode": mode,
            "command_text": str(body_text or ""),
        }
        if mode == "plan":
            anchor = trigger.reply_to
            if anchor is None:
                return CommandResult(
                    ok=False, status="failed", error="reply_required_for_claude_plan"
                )
            rows = await sync_to_async(list)(
                Message.objects.filter(
                    user=trigger.user,
                    session=trigger.session,
                    ts__gte=int(anchor.ts or 0),
                    ts__lte=int(trigger.ts or 0),
                )
                .order_by("ts")
                .select_related(
                    "session", "session__identifier", "session__identifier__person"
                )
            )
            payload["reply_context"] = {
                "anchor_message_id": str(anchor.id),
                "trigger_message_id": str(trigger.id),
                "message_ids": [str(row.id) for row in rows],
                "content": plain_text_blob(rows),
            }

        run = await sync_to_async(CodexRun.objects.create)(
            user=trigger.user,
|
||||
task=task,
|
||||
source_message=trigger,
|
||||
project=project,
|
||||
epic=getattr(task, "epic", None),
|
||||
source_service=service,
|
||||
source_channel=channel,
|
||||
external_chat_id=external_chat_id,
|
||||
status="waiting_approval",
|
||||
request_payload={
|
||||
"action": "append_update",
|
||||
"provider_payload": dict(payload),
|
||||
},
|
||||
result_payload={},
|
||||
error="",
|
||||
)
|
||||
payload["codex_run_id"] = str(run.id)
|
||||
run.request_payload = {
|
||||
"action": "append_update",
|
||||
"provider_payload": dict(payload),
|
||||
}
|
||||
await sync_to_async(run.save)(update_fields=["request_payload", "updated_at"])
|
||||
|
||||
idempotency_key = f"claude_cmd:{trigger.id}:{mode}:{task.id}:{hashlib.sha1(str(body_text or '').encode('utf-8')).hexdigest()[:12]}"
|
||||
await sync_to_async(queue_codex_event_with_pre_approval)(
|
||||
user=trigger.user,
|
||||
run=run,
|
||||
task=task,
|
||||
task_event=None,
|
||||
action="append_update",
|
||||
provider_payload=dict(payload),
|
||||
idempotency_key=idempotency_key,
|
||||
)
|
||||
return CommandResult(
|
||||
ok=True,
|
||||
status="ok",
|
||||
payload={"codex_run_id": str(run.id), "approval_required": True},
|
||||
)
|
||||
|
||||
async def execute(self, ctx: CommandContext) -> CommandResult:
|
||||
trigger = await self._load_trigger(ctx.message_id)
|
||||
if trigger is None:
|
||||
return CommandResult(ok=False, status="failed", error="trigger_not_found")
|
||||
|
||||
profile = await sync_to_async(
|
||||
lambda: CommandProfile.objects.filter(
|
||||
user=trigger.user, slug=self.slug, enabled=True
|
||||
).first()
|
||||
)()
|
||||
if profile is None:
|
||||
return CommandResult(ok=False, status="skipped", error="profile_missing")
|
||||
|
||||
parsed = parse_claude_command(ctx.message_text)
|
||||
if not parsed.command:
|
||||
return CommandResult(
|
||||
ok=False, status="skipped", error="claude_command_not_matched"
|
||||
)
|
||||
|
||||
service, channel = self._effective_scope(trigger)
|
||||
|
||||
if parsed.command == "status":
|
||||
project = None
|
||||
reply_task = await self._linked_task_from_reply(
|
||||
trigger.user, trigger.reply_to
|
||||
)
|
||||
if reply_task is not None:
|
||||
project = reply_task.project
|
||||
return await self._run_status(trigger, service, channel, project)
|
||||
|
||||
if parsed.command in {"approve", "deny"}:
|
||||
return await self._run_approval_action(
|
||||
trigger,
|
||||
parsed,
|
||||
current_service=str(ctx.service or ""),
|
||||
current_channel=str(ctx.channel_identifier or ""),
|
||||
)
|
||||
|
||||
project_token, cleaned_body = self._extract_project_token(parsed.body_text)
|
||||
reference_code = self._extract_reference(cleaned_body)
|
||||
reply_task = await self._linked_task_from_reply(trigger.user, trigger.reply_to)
|
||||
task = await self._resolve_task(trigger.user, reference_code, reply_task)
|
||||
if task is None:
|
||||
return CommandResult(
|
||||
ok=False, status="failed", error="task_target_required"
|
||||
)
|
||||
|
||||
project, project_error = await self._resolve_project(
|
||||
user=trigger.user,
|
||||
service=service,
|
||||
channel=channel,
|
||||
task=task,
|
||||
reply_task=reply_task,
|
||||
project_token=project_token,
|
||||
)
|
||||
if project is None:
|
||||
return CommandResult(
|
||||
ok=False, status="failed", error=project_error or "project_unresolved"
|
||||
)
|
||||
|
||||
mode = "plan" if parsed.command == "plan" else "default"
|
||||
return await self._create_submission(
|
||||
trigger=trigger,
|
||||
mode=mode,
|
||||
body_text=cleaned_body,
|
||||
task=task,
|
||||
project=project,
|
||||
)
|
||||
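The `_create_submission` path above builds a deduplication key from the trigger message, mode, task, and a 12-character SHA-1 prefix of the command body. A minimal standalone sketch of that derivation (the helper name and the IDs passed in are illustrative, not from the handler):

```python
import hashlib


def submission_idempotency_key(
    prefix: str, trigger_id: str, mode: str, task_id: str, body_text: str
) -> str:
    # Same shape as the handler's key: prefix:trigger:mode:task:sha1(body)[:12]
    digest = hashlib.sha1(str(body_text or "").encode("utf-8")).hexdigest()[:12]
    return f"{prefix}:{trigger_id}:{mode}:{task_id}:{digest}"


# Identical inputs always yield the identical key, so a re-delivered trigger
# collapses onto one ExternalSyncEvent row via update_or_create.
key = submission_idempotency_key("claude_cmd", "msg-1", "plan", "task-9", "draft a plan")
```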
627	core/commands/handlers/codex.py	Normal file
@@ -0,0 +1,627 @@
from __future__ import annotations

import hashlib
import re

from asgiref.sync import sync_to_async
from django.utils import timezone

from core.commands.base import CommandContext, CommandHandler, CommandResult
from core.commands.delivery import post_status_in_source
from core.messaging.text_export import plain_text_blob
from core.models import (
    ChatTaskSource,
    CodexPermissionRequest,
    CodexRun,
    CommandProfile,
    DerivedTask,
    ExternalSyncEvent,
    Message,
    TaskProject,
    TaskProviderConfig,
)
from core.tasks.codex_approval import queue_codex_event_with_pre_approval
from core.tasks.codex_support import channel_variants, resolve_external_chat_id

_CODEX_DEFAULT_RE = re.compile(
    r"^\s*(?:\.codex\b|#codex#?)(?P<body>.*)$",
    re.IGNORECASE | re.DOTALL,
)
_CODEX_PLAN_RE = re.compile(
    r"^\s*(?:\.codex\s+plan\b|#codex\s+plan#?)(?P<body>.*)$",
    re.IGNORECASE | re.DOTALL,
)
_CODEX_STATUS_RE = re.compile(
    r"^\s*(?:\.codex\s+status\b|#codex\s+status#?)\s*$", re.IGNORECASE
)
_CODEX_APPROVE_DENY_RE = re.compile(
    r"^\s*(?:\.codex|#codex)\s+(?P<action>approve|deny)\s+(?P<approval_key>[A-Za-z0-9._:-]+)#?\s*$",
    re.IGNORECASE,
)
_PROJECT_TOKEN_RE = re.compile(r"\[\s*project\s*:\s*([^\]]+)\]", re.IGNORECASE)
_REFERENCE_RE = re.compile(r"(?<!\w)#([A-Za-z0-9_-]+)\b")


class CodexParsedCommand(dict):
    @property
    def command(self) -> str | None:
        value = self.get("command")
        return str(value) if value else None

    @property
    def body_text(self) -> str:
        return str(self.get("body_text") or "")

    @property
    def approval_key(self) -> str:
        return str(self.get("approval_key") or "")


def parse_codex_command(text: str) -> CodexParsedCommand:
    body = str(text or "")
    m = _CODEX_APPROVE_DENY_RE.match(body)
    if m:
        return CodexParsedCommand(
            command=str(m.group("action") or "").strip().lower(),
            body_text="",
            approval_key=str(m.group("approval_key") or "").strip(),
        )
    if _CODEX_STATUS_RE.match(body):
        return CodexParsedCommand(command="status", body_text="", approval_key="")
    m = _CODEX_PLAN_RE.match(body)
    if m:
        return CodexParsedCommand(
            command="plan",
            body_text=str(m.group("body") or "").strip(),
            approval_key="",
        )
    m = _CODEX_DEFAULT_RE.match(body)
    if m:
        return CodexParsedCommand(
            command="default",
            body_text=str(m.group("body") or "").strip(),
            approval_key="",
        )
    return CodexParsedCommand(command=None, body_text="", approval_key="")


def codex_trigger_matches(
    message_text: str, trigger_token: str, exact_match_only: bool
) -> bool:
    body = str(message_text or "").strip()
    parsed = parse_codex_command(body)
    if parsed.command:
        return True
    trigger = str(trigger_token or "").strip()
    if not trigger:
        return False
    if exact_match_only:
        return body.lower() == trigger.lower()
    return trigger.lower() in body.lower()


class CodexCommandHandler(CommandHandler):
    slug = "codex"

    async def _load_trigger(self, message_id: str) -> Message | None:
        return await sync_to_async(
            lambda: Message.objects.select_related(
                "user", "session", "session__identifier", "reply_to"
            )
            .filter(id=message_id)
            .first()
        )()

    def _effective_scope(self, trigger: Message) -> tuple[str, str]:
        service = str(getattr(trigger, "source_service", "") or "").strip().lower()
        channel = str(getattr(trigger, "source_chat_id", "") or "").strip()
        identifier = getattr(getattr(trigger, "session", None), "identifier", None)
        fallback_service = str(getattr(identifier, "service", "") or "").strip().lower()
        fallback_identifier = str(getattr(identifier, "identifier", "") or "").strip()
        if (
            service == "web"
            and fallback_service
            and fallback_identifier
            and fallback_service != "web"
        ):
            return fallback_service, fallback_identifier
        return service or "web", channel

    async def _mapped_sources(
        self, user, service: str, channel: str
    ) -> list[ChatTaskSource]:
        variants = channel_variants(service, channel)
        if not variants:
            return []
        return await sync_to_async(list)(
            ChatTaskSource.objects.filter(
                user=user,
                enabled=True,
                service=service,
                channel_identifier__in=variants,
            ).select_related("project", "epic")
        )

    async def _linked_task_from_reply(
        self, user, reply_to: Message | None
    ) -> DerivedTask | None:
        if reply_to is None:
            return None
        by_origin = await sync_to_async(
            lambda: DerivedTask.objects.filter(user=user, origin_message=reply_to)
            .select_related("project", "epic")
            .order_by("-created_at")
            .first()
        )()
        if by_origin is not None:
            return by_origin
        return await sync_to_async(
            lambda: DerivedTask.objects.filter(
                user=user, events__source_message=reply_to
            )
            .select_related("project", "epic")
            .order_by("-created_at")
            .first()
        )()

    def _extract_project_token(self, body_text: str) -> tuple[str, str]:
        text = str(body_text or "")
        m = _PROJECT_TOKEN_RE.search(text)
        if not m:
            return "", text
        token = str(m.group(1) or "").strip()
        cleaned = _PROJECT_TOKEN_RE.sub("", text).strip()
        return token, cleaned

    def _extract_reference(self, body_text: str) -> str:
        m = _REFERENCE_RE.search(str(body_text or ""))
        if not m:
            return ""
        return str(m.group(1) or "").strip()

    async def _resolve_task(
        self, user, reference_code: str, reply_task: DerivedTask | None
    ) -> DerivedTask | None:
        if reference_code:
            return await sync_to_async(
                lambda: DerivedTask.objects.filter(
                    user=user, reference_code=reference_code
                )
                .select_related("project", "epic")
                .order_by("-created_at")
                .first()
            )()
        return reply_task

    async def _resolve_project(
        self,
        *,
        user,
        service: str,
        channel: str,
        task: DerivedTask | None,
        reply_task: DerivedTask | None,
        project_token: str,
    ) -> tuple[TaskProject | None, str]:
        if task is not None:
            return task.project, ""
        if reply_task is not None:
            return reply_task.project, ""
        if project_token:
            project = await sync_to_async(
                lambda: TaskProject.objects.filter(
                    user=user, name__iexact=project_token
                ).first()
            )()
            if project is not None:
                return project, ""
            return None, f"project_not_found:{project_token}"

        mapped = await self._mapped_sources(user, service, channel)
        project_ids = sorted({str(row.project_id) for row in mapped if row.project_id})
        if len(project_ids) == 1:
            project = next(
                (
                    row.project
                    for row in mapped
                    if str(row.project_id) == project_ids[0]
                ),
                None,
            )
            return project, ""
        if len(project_ids) > 1:
            return None, "project_required:[project:Name]"
        return None, "project_unresolved"

    async def _post_source_status(
        self, trigger: Message, text: str, suffix: str
    ) -> None:
        await post_status_in_source(
            trigger_message=trigger,
            text=text,
            origin_tag=f"codex-status:{suffix}",
        )

    async def _run_status(
        self, trigger: Message, service: str, channel: str, project: TaskProject | None
    ) -> CommandResult:
        def _load_runs():
            qs = CodexRun.objects.filter(user=trigger.user)
            if service:
                qs = qs.filter(source_service=service)
            if channel:
                qs = qs.filter(source_channel=channel)
            if project is not None:
                qs = qs.filter(project=project)
            return list(qs.order_by("-created_at")[:10])

        runs = await sync_to_async(_load_runs)()
        if not runs:
            await self._post_source_status(
                trigger, "[codex] no recent runs for this scope.", "empty"
            )
            return CommandResult(ok=True, status="ok", payload={"count": 0})
        lines = ["[codex] recent runs:"]
        for row in runs:
            ref = str(getattr(getattr(row, "task", None), "reference_code", "") or "-")
            summary = str((row.result_payload or {}).get("summary") or "").strip()
            summary_part = f" · {summary}" if summary else ""
            lines.append(f"- {row.status} run={row.id} task=#{ref}{summary_part}")
        await self._post_source_status(trigger, "\n".join(lines), "runs")
        return CommandResult(ok=True, status="ok", payload={"count": len(runs)})

    async def _run_approval_action(
        self,
        trigger: Message,
        parsed: CodexParsedCommand,
        current_service: str,
        current_channel: str,
    ) -> CommandResult:
        cfg = await sync_to_async(
            lambda: TaskProviderConfig.objects.filter(
                user=trigger.user, provider="codex_cli"
            ).first()
        )()
        settings_payload = dict(getattr(cfg, "settings", {}) or {})
        approver_service = (
            str(settings_payload.get("approver_service") or "").strip().lower()
        )
        approver_identifier = str(
            settings_payload.get("approver_identifier") or ""
        ).strip()
        if not approver_service or not approver_identifier:
            return CommandResult(
                ok=False, status="failed", error="approver_channel_not_configured"
            )

        if str(current_service or "").strip().lower() != approver_service or str(
            current_channel or ""
        ).strip() not in set(channel_variants(approver_service, approver_identifier)):
            return CommandResult(
                ok=False,
                status="failed",
                error="approval_command_not_allowed_in_this_channel",
            )

        approval_key = parsed.approval_key
        request = await sync_to_async(
            lambda: CodexPermissionRequest.objects.select_related(
                "codex_run", "external_sync_event"
            )
            .filter(user=trigger.user, approval_key=approval_key)
            .first()
        )()
        if request is None:
            return CommandResult(
                ok=False, status="failed", error="approval_key_not_found"
            )

        now = timezone.now()
        if parsed.command == "approve":
            request.status = "approved"
            request.resolved_at = now
            request.resolved_by_identifier = current_channel
            request.resolution_note = "approved via command"
            await sync_to_async(request.save)(
                update_fields=[
                    "status",
                    "resolved_at",
                    "resolved_by_identifier",
                    "resolution_note",
                ]
            )
            if request.external_sync_event_id:
                await sync_to_async(
                    ExternalSyncEvent.objects.filter(
                        id=request.external_sync_event_id
                    ).update
                )(
                    status="ok",
                    error="",
                )
            run = request.codex_run
            run.status = "approved_waiting_resume"
            run.error = ""
            await sync_to_async(run.save)(
                update_fields=["status", "error", "updated_at"]
            )
            source_service = str(run.source_service or "")
            source_channel = str(run.source_channel or "")
            resume_payload = dict(request.resume_payload or {})
            resume_action = str(resume_payload.get("action") or "").strip().lower()
            resume_provider_payload = dict(resume_payload.get("provider_payload") or {})
            if resume_action and resume_provider_payload:
                provider_payload = dict(resume_provider_payload)
                provider_payload["codex_run_id"] = str(run.id)
                provider_payload["source_service"] = source_service
                provider_payload["source_channel"] = source_channel
                event_action = resume_action
                resume_idempotency_key = str(
                    resume_payload.get("idempotency_key") or ""
                ).strip()
                resume_event_key = (
                    resume_idempotency_key
                    if resume_idempotency_key
                    else f"codex_approval:{approval_key}:approved"
                )
            else:
                provider_payload = dict(
                    run.request_payload.get("provider_payload") or {}
                )
                provider_payload.update(
                    {
                        "mode": "approval_response",
                        "approval_key": approval_key,
                        "resume_payload": dict(request.resume_payload or {}),
                        "codex_run_id": str(run.id),
                        "source_service": source_service,
                        "source_channel": source_channel,
                    }
                )
                event_action = "append_update"
                resume_event_key = f"codex_approval:{approval_key}:approved"
            await sync_to_async(ExternalSyncEvent.objects.update_or_create)(
                idempotency_key=resume_event_key,
                defaults={
                    "user": trigger.user,
                    "task_id": run.task_id,
                    "task_event_id": run.derived_task_event_id,
                    "provider": "codex_cli",
                    "status": "pending",
                    "payload": {
                        "action": event_action,
                        "provider_payload": provider_payload,
                    },
                    "error": "",
                },
            )
            return CommandResult(
                ok=True,
                status="ok",
                payload={"approval_key": approval_key, "resolution": "approved"},
            )

        request.status = "denied"
        request.resolved_at = now
        request.resolved_by_identifier = current_channel
        request.resolution_note = "denied via command"
        await sync_to_async(request.save)(
            update_fields=[
                "status",
                "resolved_at",
                "resolved_by_identifier",
                "resolution_note",
            ]
        )
        if request.external_sync_event_id:
            await sync_to_async(
                ExternalSyncEvent.objects.filter(
                    id=request.external_sync_event_id
                ).update
            )(
                status="failed",
                error="approval_denied",
            )
        run = request.codex_run
        run.status = "denied"
        run.error = "approval_denied"
        await sync_to_async(run.save)(update_fields=["status", "error", "updated_at"])
        await sync_to_async(ExternalSyncEvent.objects.update_or_create)(
            idempotency_key=f"codex_approval:{approval_key}:denied",
            defaults={
                "user": trigger.user,
                "task_id": run.task_id,
                "task_event_id": run.derived_task_event_id,
                "provider": "codex_cli",
                "status": "failed",
                "payload": {
                    "action": "append_update",
                    "provider_payload": {
                        "mode": "approval_response",
                        "approval_key": approval_key,
                        "codex_run_id": str(run.id),
                    },
                },
                "error": "approval_denied",
            },
        )
        return CommandResult(
            ok=True,
            status="ok",
            payload={"approval_key": approval_key, "resolution": "denied"},
        )

    async def _create_submission(
        self,
        *,
        trigger: Message,
        mode: str,
        body_text: str,
        task: DerivedTask,
        project: TaskProject,
    ) -> CommandResult:
        cfg = await sync_to_async(
            lambda: TaskProviderConfig.objects.filter(
                user=trigger.user, provider="codex_cli", enabled=True
            ).first()
        )()
        if cfg is None:
            return CommandResult(
                ok=False, status="failed", error="provider_disabled_or_missing"
            )

        service, channel = self._effective_scope(trigger)
        external_chat_id = await sync_to_async(resolve_external_chat_id)(
            user=trigger.user,
            provider="codex_cli",
            service=service,
            channel=channel,
        )
        payload = {
            "task_id": str(task.id),
            "reference_code": str(task.reference_code or ""),
            "title": str(task.title or ""),
            "external_key": str(task.external_key or ""),
            "project_name": str(getattr(project, "name", "") or ""),
            "epic_name": str(getattr(getattr(task, "epic", None), "name", "") or ""),
            "source_service": service,
            "source_channel": channel,
            "external_chat_id": external_chat_id,
            "origin_message_id": str(getattr(task, "origin_message_id", "") or ""),
            "trigger_message_id": str(trigger.id),
            "mode": mode,
            "command_text": str(body_text or ""),
        }
        if mode == "plan":
            anchor = trigger.reply_to
            if anchor is None:
                return CommandResult(
                    ok=False, status="failed", error="reply_required_for_codex_plan"
                )
            rows = await sync_to_async(list)(
                Message.objects.filter(
                    user=trigger.user,
                    session=trigger.session,
                    ts__gte=int(anchor.ts or 0),
                    ts__lte=int(trigger.ts or 0),
                )
                .order_by("ts")
                .select_related(
                    "session", "session__identifier", "session__identifier__person"
                )
            )
            payload["reply_context"] = {
                "anchor_message_id": str(anchor.id),
                "trigger_message_id": str(trigger.id),
                "message_ids": [str(row.id) for row in rows],
                "content": plain_text_blob(rows),
            }

        run = await sync_to_async(CodexRun.objects.create)(
            user=trigger.user,
            task=task,
            source_message=trigger,
            project=project,
            epic=getattr(task, "epic", None),
            source_service=service,
            source_channel=channel,
            external_chat_id=external_chat_id,
            status="waiting_approval",
            request_payload={
                "action": "append_update",
                "provider_payload": dict(payload),
            },
            result_payload={},
            error="",
        )
        payload["codex_run_id"] = str(run.id)
        run.request_payload = {
            "action": "append_update",
            "provider_payload": dict(payload),
        }
        await sync_to_async(run.save)(update_fields=["request_payload", "updated_at"])

        idempotency_key = f"codex_cmd:{trigger.id}:{mode}:{task.id}:{hashlib.sha1(str(body_text or '').encode('utf-8')).hexdigest()[:12]}"
        await sync_to_async(queue_codex_event_with_pre_approval)(
            user=trigger.user,
            run=run,
            task=task,
            task_event=None,
            action="append_update",
            provider_payload=dict(payload),
            idempotency_key=idempotency_key,
        )
        return CommandResult(
            ok=True,
            status="ok",
            payload={"codex_run_id": str(run.id), "approval_required": True},
        )

    async def execute(self, ctx: CommandContext) -> CommandResult:
        trigger = await self._load_trigger(ctx.message_id)
        if trigger is None:
            return CommandResult(ok=False, status="failed", error="trigger_not_found")

        profile = await sync_to_async(
            lambda: CommandProfile.objects.filter(
                user=trigger.user, slug=self.slug, enabled=True
            ).first()
        )()
        if profile is None:
            return CommandResult(ok=False, status="skipped", error="profile_missing")

        parsed = parse_codex_command(ctx.message_text)
        if not parsed.command:
            return CommandResult(
                ok=False, status="skipped", error="codex_command_not_matched"
            )

        service, channel = self._effective_scope(trigger)

        if parsed.command == "status":
            project = None
            reply_task = await self._linked_task_from_reply(
                trigger.user, trigger.reply_to
            )
            if reply_task is not None:
                project = reply_task.project
            return await self._run_status(trigger, service, channel, project)

        if parsed.command in {"approve", "deny"}:
            return await self._run_approval_action(
                trigger,
                parsed,
                current_service=str(ctx.service or ""),
                current_channel=str(ctx.channel_identifier or ""),
            )

        project_token, cleaned_body = self._extract_project_token(parsed.body_text)
        reference_code = self._extract_reference(cleaned_body)
        reply_task = await self._linked_task_from_reply(trigger.user, trigger.reply_to)
        task = await self._resolve_task(trigger.user, reference_code, reply_task)
        if task is None:
            return CommandResult(
                ok=False, status="failed", error="task_target_required"
            )

        project, project_error = await self._resolve_project(
            user=trigger.user,
            service=service,
            channel=channel,
            task=task,
            reply_task=reply_task,
            project_token=project_token,
        )
        if project is None:
            return CommandResult(
                ok=False, status="failed", error=project_error or "project_unresolved"
            )

        mode = "plan" if parsed.command == "plan" else "default"
        return await self._create_submission(
            trigger=trigger,
            mode=mode,
            body_text=cleaned_body,
            task=task,
            project=project,
        )
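The command grammar in `codex.py` can be exercised in isolation. The patterns below are copied verbatim from the file, with a trimmed classifier that mirrors `parse_codex_command`'s precedence (approve/deny first, then status, plan, and the default form):

```python
from __future__ import annotations

import re

_APPROVE_DENY = re.compile(
    r"^\s*(?:\.codex|#codex)\s+(?P<action>approve|deny)\s+(?P<approval_key>[A-Za-z0-9._:-]+)#?\s*$",
    re.IGNORECASE,
)
_STATUS = re.compile(r"^\s*(?:\.codex\s+status\b|#codex\s+status#?)\s*$", re.IGNORECASE)
_PLAN = re.compile(
    r"^\s*(?:\.codex\s+plan\b|#codex\s+plan#?)(?P<body>.*)$", re.IGNORECASE | re.DOTALL
)
_DEFAULT = re.compile(
    r"^\s*(?:\.codex\b|#codex#?)(?P<body>.*)$", re.IGNORECASE | re.DOTALL
)


def classify(text: str) -> tuple[str | None, str]:
    # Same precedence order as parse_codex_command; the second tuple slot is
    # the approval key for approve/deny, the stripped body otherwise.
    m = _APPROVE_DENY.match(text)
    if m:
        return m.group("action").lower(), m.group("approval_key")
    if _STATUS.match(text):
        return "status", ""
    m = _PLAN.match(text)
    if m:
        return "plan", m.group("body").strip()
    m = _DEFAULT.match(text)
    if m:
        return "default", m.group("body").strip()
    return None, ""
```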
111	core/commands/policies.py	Normal file
@@ -0,0 +1,111 @@
from __future__ import annotations

from typing import Iterable

from core.models import CommandAction, CommandProfile, CommandVariantPolicy

BP_VARIANT_KEYS = ("bp", "bp_set", "bp_set_range")
BP_VARIANT_META = {
    "bp": {
        "name": "bp",
        "trigger_token": ".bp",
        "template_supported": True,
        "position": 0,
    },
    "bp_set": {
        "name": "bp set",
        "trigger_token": ".bp set",
        "template_supported": False,
        "position": 1,
    },
    "bp_set_range": {
        "name": "bp set range",
        "trigger_token": ".bp set range",
        "template_supported": False,
        "position": 2,
    },
}


def _legacy_defaults(profile: CommandProfile, post_result_enabled: bool) -> dict:
    return {
        "enabled": True,
        "generation_mode": "ai",
        "send_plan_to_egress": bool(post_result_enabled),
        "send_status_to_source": str(profile.visibility_mode or "")
        == "status_in_source",
        "send_status_to_egress": False,
        "store_document": True,
    }


def _bp_defaults(
    profile: CommandProfile,
    variant_key: str,
    post_result_enabled: bool,
) -> dict:
    defaults = _legacy_defaults(profile, post_result_enabled)
    if variant_key in {"bp_set", "bp_set_range"}:
        defaults["generation_mode"] = "verbatim"
    else:
        defaults["generation_mode"] = "ai"
    return defaults


def ensure_variant_policies_for_profile(
    profile: CommandProfile,
    *,
    action_rows: Iterable[CommandAction] | None = None,
) -> dict[str, CommandVariantPolicy]:
    actions = (
        list(action_rows) if action_rows is not None else list(profile.actions.all())
    )
    post_result_enabled = any(
        row.action_type == "post_result" and bool(row.enabled) for row in actions
    )
    result: dict[str, CommandVariantPolicy] = {}

    if str(profile.slug or "").strip() == "bp":
        for key in BP_VARIANT_KEYS:
            meta = BP_VARIANT_META.get(key, {})
            defaults = _bp_defaults(profile, key, post_result_enabled)
            policy, _ = CommandVariantPolicy.objects.get_or_create(
                profile=profile,
                variant_key=key,
                defaults={
                    **defaults,
                    "position": int(meta.get("position") or 0),
                },
            )
            result[key] = policy
    else:
        defaults = _legacy_defaults(profile, post_result_enabled)
        policy, _ = CommandVariantPolicy.objects.get_or_create(
            profile=profile,
            variant_key="default",
            defaults={
                **defaults,
                "generation_mode": "verbatim",
                "position": 0,
            },
        )
        result["default"] = policy

    return result


def load_variant_policy(
    profile: CommandProfile, variant_key: str
) -> CommandVariantPolicy | None:
    key = str(variant_key or "").strip()
    if not key:
        return None
    policy = (
        profile.variant_policies.filter(variant_key=key)
        .order_by("position", "id")
        .first()
    )
    if policy is not None:
        return policy
    ensured = ensure_variant_policies_for_profile(profile)
    return ensured.get(key)
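The mode selection inside `policies.py` reduces to a small pure function. This standalone sketch (no Django models, names local to the sketch) mirrors how `_bp_defaults` picks the generation mode per `bp` variant:

```python
BP_VARIANT_KEYS = ("bp", "bp_set", "bp_set_range")


def variant_generation_mode(variant_key: str) -> str:
    # Mirrors _bp_defaults: the "set" variants record the user's text verbatim,
    # while the base "bp" variant routes through AI generation.
    return "verbatim" if variant_key in {"bp_set", "bp_set_range"} else "ai"


modes = {key: variant_generation_mode(key) for key in BP_VARIANT_KEYS}
```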
16	core/commands/registry.py	Normal file
@@ -0,0 +1,16 @@
from __future__ import annotations

from core.commands.base import CommandHandler

_HANDLERS: dict[str, CommandHandler] = {}


def register(handler: CommandHandler):
    slug = str(getattr(handler, "slug", "") or "").strip().lower()
    if not slug:
        raise ValueError("handler slug is required")
    _HANDLERS[slug] = handler


def get(slug: str) -> CommandHandler | None:
    return _HANDLERS.get(str(slug or "").strip().lower())
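The registry module is self-contained apart from the `CommandHandler` type. A standalone sketch with a dummy handler (the class is illustrative) shows the case-insensitive slug lookup the module normalizes to:

```python
_HANDLERS: dict[str, object] = {}


def register(handler) -> None:
    # Slugs are normalized to lowercase at registration, so lookups are
    # case-insensitive regardless of how the handler spells its slug.
    slug = str(getattr(handler, "slug", "") or "").strip().lower()
    if not slug:
        raise ValueError("handler slug is required")
    _HANDLERS[slug] = handler


def get(slug: str):
    return _HANDLERS.get(str(slug or "").strip().lower())


class DummyHandler:
    slug = "Codex"  # mixed case on purpose; it is stored under "codex"


register(DummyHandler())
```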
165	core/context_processors.py	Normal file
@@ -0,0 +1,165 @@
|
||||
from django.urls import reverse


def _tab(label: str, href: str, active: bool) -> dict:
    return {
        "label": label,
        "href": href,
        "active": bool(active),
    }


def settings_hierarchy_nav(request):
    match = getattr(request, "resolver_match", None)
    if match is None:
        return {}

    url_name = str(getattr(match, "url_name", "") or "")
    namespace = str(getattr(match, "namespace", "") or "")
    path = str(getattr(request, "path", "") or "")

    notifications_href = reverse("notifications_settings")
    system_href = reverse("system_settings")
    accessibility_href = reverse("accessibility_settings")
    encryption_href = reverse("encryption_settings")
    permissions_href = reverse("permission_settings")
    security_2fa_href = reverse("security_2fa")
    ai_models_href = reverse("ai_models")
    ai_traces_href = reverse("ai_execution_log")
    commands_href = reverse("command_routing")
    business_plans_href = reverse("business_plan_inbox")
    tasks_href = reverse("tasks_settings")
    translation_href = reverse("translation_settings")
    behavioral_href = reverse("behavioral_signals_settings")

    categories = {
        "general": {
            "routes": {
                "notifications_settings",
                "notifications_update",
                "system_settings",
                "accessibility_settings",
            },
            "title": "General",
            "tabs": [
                (
                    "Notifications",
                    notifications_href,
                    lambda: path == notifications_href,
                ),
                ("System", system_href, lambda: path == system_href),
                (
                    "Accessibility",
                    accessibility_href,
                    lambda: path == accessibility_href,
                ),
            ],
        },
        "security": {
            "routes": {
                "security_settings",
                "encryption_settings",
                "permission_settings",
                "security_2fa",
            },
            "title": "Security",
            "tabs": [
                ("Encryption", encryption_href, lambda: path == encryption_href),
                ("Permissions", permissions_href, lambda: path == permissions_href),
                (
                    "2FA",
                    security_2fa_href,
                    lambda: path == security_2fa_href or namespace == "two_factor",
                ),
            ],
        },
        "ai": {
            "routes": {
                "ai_settings",
                "ai_models",
                "ais",
                "ai_create",
                "ai_update",
                "ai_delete",
                "ai_execution_log",
            },
            "title": "AI",
            "tabs": [
                ("Models", ai_models_href, lambda: path == ai_models_href),
                ("Traces", ai_traces_href, lambda: path == ai_traces_href),
            ],
        },
        "modules": {
            "routes": {
                "modules_settings",
                "command_routing",
                "business_plan_inbox",
                "business_plan_editor",
                "tasks_settings",
                "translation_settings",
                "translation_preview",
                "availability_settings",
                "behavioral_signals_settings",
                "codex_settings",
                "codex_approval",
            },
            "title": "Modules",
            "tabs": [
                ("Commands", commands_href, lambda: path == commands_href),
                (
                    "Business Plans",
                    business_plans_href,
                    lambda: url_name in {"business_plan_inbox", "business_plan_editor"},
                ),
                ("Task Automation", tasks_href, lambda: path == tasks_href),
                (
                    "Translation",
                    translation_href,
                    lambda: url_name in {"translation_settings", "translation_preview"},
                ),
                (
                    "Behavioral Signals",
                    behavioral_href,
                    lambda: url_name
                    in {"availability_settings", "behavioral_signals_settings"},
                ),
            ],
        },
    }

    two_factor_security_routes = {
        "profile",
        "setup",
        "backup_tokens",
        "disable",
        "phone_create",
        "phone_delete",
    }

    if url_name in categories["general"]["routes"]:
        category = categories["general"]
    elif url_name in categories["security"]["routes"] or (
        namespace == "two_factor" and url_name in two_factor_security_routes
    ):
        category = categories["security"]
    elif url_name in categories["ai"]["routes"]:
        category = categories["ai"]
    elif url_name in categories["modules"]["routes"]:
        category = categories["modules"]
    else:
        category = None

    if category is None:
        settings_nav = None
    else:
        settings_nav = {
            "title": str(category.get("title") or "Settings"),
            "tabs": [
                _tab(label, href, bool(is_active()))
                for label, href, is_active in category.get("tabs", [])
            ],
        }

    if not settings_nav:
        return {}
    return {"settings_nav": settings_nav}
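The context processor declares each tab as a `(label, href, is_active-callable)` tuple so the active check is deferred until the request path is known. A minimal sketch of that assembly (the paths here are hypothetical, not the project's real routes):

```python
# Sketch of how settings_hierarchy_nav turns (label, href, callable)
# tuples into the settings_nav dict consumed by templates.
def _tab(label: str, href: str, active: bool) -> dict:
    return {"label": label, "href": href, "active": bool(active)}


path = "/settings/system/"  # hypothetical request path
tabs = [
    ("Notifications", "/settings/notifications/", lambda: path == "/settings/notifications/"),
    ("System", "/settings/system/", lambda: path == "/settings/system/"),
]
settings_nav = {
    "title": "General",
    # Each lambda is evaluated exactly once, at render-data build time.
    "tabs": [_tab(label, href, bool(is_active())) for label, href, is_active in tabs],
}
active_labels = [t["label"] for t in settings_nav["tabs"] if t["active"]]
```

Only the tab whose href matches the request path ends up with `active: True`, so templates can highlight it without re-deriving any routing logic.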
@@ -24,7 +24,6 @@ async def init_mysql_pool():

async def close_mysql_pool():
    """Close the MySQL connection pool properly."""
    global mysql_pool
    if mysql_pool:
        mysql_pool.close()
        await mysql_pool.wait_closed()
16
core/events/__init__.py
Normal file
@@ -0,0 +1,16 @@
from core.events.ledger import (
    append_event,
    append_event_sync,
    event_ledger_enabled,
    event_ledger_status,
)
from core.events.projection import project_session_from_events, shadow_compare_session

__all__ = [
    "append_event",
    "append_event_sync",
    "event_ledger_enabled",
    "event_ledger_status",
    "project_session_from_events",
    "shadow_compare_session",
]
213
core/events/behavior.py
Normal file
@@ -0,0 +1,213 @@
from __future__ import annotations

import json
import statistics
from dataclasses import dataclass
from typing import Any


def safe_int(value: Any, default: int = 0) -> int:
    try:
        return int(value)
    except Exception:
        return int(default)


def parse_payload(value: Any) -> dict:
    if isinstance(value, dict):
        return dict(value)
    if isinstance(value, str):
        text = value.strip()
        if not text:
            return {}
        try:
            loaded = json.loads(text)
        except Exception:
            return {}
        if isinstance(loaded, dict):
            return dict(loaded)
    return {}


def median_ms(values: list[int]) -> int:
    clean = [int(v) for v in values if safe_int(v, 0) > 0]
    if not clean:
        return 0
    return int(statistics.median(clean))


def z_score(value: int, baseline_samples: list[int]) -> float:
    clean = [int(v) for v in baseline_samples if safe_int(v, 0) > 0]
    if len(clean) < 2:
        return 0.0
    baseline = statistics.median(clean)
    stdev = statistics.pstdev(clean)
    if stdev <= 0:
        return 0.0
    return float((float(value) - float(baseline)) / float(stdev))


@dataclass
class CompositionState:
    started_ts: int
    last_started_ts: int
    stopped_ts: int = 0
    revision: int = 1


class ComposingTracker:
    def __init__(self, window_ms: int = 300000):
        self.window_ms = max(1000, int(window_ms or 300000))
        self._state: dict[str, CompositionState] = {}

    def observe_started(self, session_id: str, ts: int) -> CompositionState:
        key = str(session_id or "").strip()
        if not key:
            raise ValueError("session_id is required")
        safe_ts_value = max(0, safe_int(ts, 0))
        state = self._state.get(key)
        if state is None:
            state = CompositionState(
                started_ts=safe_ts_value,
                last_started_ts=safe_ts_value,
                revision=1,
            )
            self._state[key] = state
            return state
        if state.stopped_ts > 0:
            state.revision += 1
        state.last_started_ts = safe_ts_value
        state.stopped_ts = 0
        return state

    def observe_stopped(self, session_id: str, ts: int) -> dict | None:
        key = str(session_id or "").strip()
        state = self._state.get(key)
        if state is None:
            return None
        safe_ts_value = max(0, safe_int(ts, 0))
        duration_ms = max(0, safe_ts_value - int(state.started_ts or 0))
        if duration_ms >= self.window_ms:
            self._state.pop(key, None)
            return {
                "started_ts": int(state.started_ts or 0),
                "stopped_ts": safe_ts_value,
                "duration_ms": duration_ms,
                "revision": int(state.revision or 1),
                "abandoned": True,
            }
        state.stopped_ts = safe_ts_value
        return None

    def observe_message(self, session_id: str) -> CompositionState | None:
        key = str(session_id or "").strip()
        if not key:
            return None
        return self._state.pop(key, None)


def extract_metric_samples(rows: list[dict]) -> dict[str, list[int]]:
    delivered_by_message: dict[str, int] = {}
    read_by_message: dict[str, int] = {}
    delay_c_samples: list[int] = []
    delay_f_samples: list[int] = []
    revision_samples: list[int] = []
    abandoned_started = 0
    abandoned_total = 0
    composition_by_session: dict[str, dict[str, int]] = {}
    presence_by_session: dict[str, int] = {}

    for row in sorted(
        list(rows or []),
        key=lambda item: (
            safe_int(item.get("ts"), 0),
            str(item.get("kind") or ""),
            str(item.get("session_id") or ""),
        ),
    ):
        kind = str(row.get("kind") or "").strip().lower()
        session_id = str(row.get("session_id") or "").strip()
        ts = safe_int(row.get("ts"), 0)
        payload = parse_payload(row.get("payload"))
        message_id = str(
            payload.get("message_id")
            or payload.get("origin_message_id")
            or row.get("origin_message_id")
            or ""
        ).strip()

        if kind == "message_delivered" and message_id:
            delivered_by_message[message_id] = ts
            continue
        if kind == "message_read" and message_id:
            read_by_message[message_id] = ts
            continue
        if kind == "presence_available" and session_id:
            presence_by_session[session_id] = ts
            continue
        if kind == "composing_started" and session_id:
            abandoned_started += 1
            state = composition_by_session.get(session_id)
            if state is None:
                state = {"started_ts": ts, "revision": 1}
                composition_by_session[session_id] = state
            else:
                state["revision"] = int(state.get("revision", 1)) + 1
            if presence_by_session.get(session_id):
                delta = ts - int(presence_by_session.get(session_id) or 0)
                if delta >= 0:
                    delay_f_samples.append(delta)
            continue
        if kind == "composing_abandoned":
            abandoned_total += 1
            if session_id:
                composition_by_session.pop(session_id, None)
            continue
        if kind == "message_sent" and session_id:
            state = composition_by_session.pop(session_id, None)
            if state is None:
                continue
            delta = ts - int(state.get("started_ts") or 0)
            if delta >= 0:
                delay_c_samples.append(delta)
            revision_samples.append(max(1, int(state.get("revision") or 1)))

    delay_b_samples = []
    for message_id, delivered_ts in delivered_by_message.items():
        read_ts = safe_int(read_by_message.get(message_id), 0)
        if read_ts > 0 and read_ts >= delivered_ts:
            delay_b_samples.append(read_ts - delivered_ts)

    abandoned_rate_samples = []
    if abandoned_started > 0:
        abandoned_rate_samples.append(
            int(round((float(abandoned_total) / float(abandoned_started)) * 1000))
        )

    return {
        "delay_b": delay_b_samples,
        "delay_c": delay_c_samples,
        "delay_f": delay_f_samples,
        "revision": revision_samples,
        "abandoned_rate": abandoned_rate_samples,
    }


def summarize_metrics(
    window_rows: list[dict], baseline_rows: list[dict]
) -> dict[str, dict]:
    window_samples = extract_metric_samples(window_rows)
    baseline_samples = extract_metric_samples(baseline_rows)
    metrics: dict[str, dict] = {}
    for metric in ("delay_b", "delay_c", "delay_f", "revision", "abandoned_rate"):
        samples = list(window_samples.get(metric) or [])
        if not samples:
            continue
        baseline = list(baseline_samples.get(metric) or [])
        value = median_ms(samples)
        baseline_value = median_ms(baseline)
        metrics[metric] = {
            "value_ms": int(value),
            "baseline_ms": int(baseline_value),
            "z_score": float(round(z_score(value, baseline), 6)),
            "sample_n": len(samples),
        }
    return metrics
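The `z_score` helper in `core/events/behavior.py` compares a window value against the baseline median, scaled by the baseline's population standard deviation. A worked example of that arithmetic on a concrete baseline (the sample values here are made up for illustration):

```python
import statistics

# Baseline delays in ms: median is 100, and the *population* stdev of
# [80, 100, 120] is sqrt((400 + 0 + 400) / 3) ~= 16.33.
baseline_samples = [80, 100, 120]
baseline = statistics.median(baseline_samples)   # 100
stdev = statistics.pstdev(baseline_samples)      # ~16.33

# A window value of 150 ms sits roughly 3 baseline-stdevs above median,
# i.e. a clear anomaly relative to this (tiny) baseline.
value = 150
z = (value - baseline) / stdev
```

Note the module uses `pstdev` (population), not `stdev` (sample), and returns `0.0` whenever the baseline has fewer than two positive samples or zero spread, so small baselines never produce spurious anomalies.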
157
core/events/ledger.py
Normal file
@@ -0,0 +1:157 @@
from __future__ import annotations

import time

from asgiref.sync import sync_to_async
from django.conf import settings

from core.events.manticore import get_event_ledger_backend
from core.models import ConversationEvent
from core.observability.tracing import ensure_trace_id
from core.util import logs

log = logs.get_logger("event-ledger")


def event_ledger_enabled() -> bool:
    return bool(
        getattr(settings, "EVENT_LEDGER_DUAL_WRITE", False)
        or getattr(settings, "EVENT_PRIMARY_WRITE_PATH", False)
    )


def event_ledger_status() -> dict:
    return {
        "event_ledger_dual_write": bool(
            getattr(settings, "EVENT_LEDGER_DUAL_WRITE", False)
        ),
        "event_primary_write_path": bool(
            getattr(settings, "EVENT_PRIMARY_WRITE_PATH", False)
        ),
    }


def _normalize_direction(value: str) -> str:
    direction = str(value or "system").strip().lower()
    if direction not in {"in", "out", "system"}:
        return "system"
    return direction


def _safe_ts(value: int | None) -> int:
    if value is None:
        return int(time.time() * 1000)
    try:
        parsed = int(value)
    except Exception:
        return int(time.time() * 1000)
    if parsed <= 0:
        return int(time.time() * 1000)
    return parsed


def append_event_sync(
    *,
    user,
    session,
    event_type: str,
    direction: str,
    actor_identifier: str = "",
    origin_transport: str = "",
    origin_message_id: str = "",
    origin_chat_id: str = "",
    payload: dict | None = None,
    raw_payload: dict | None = None,
    trace_id: str = "",
    ts: int | None = None,
):
    if not event_ledger_enabled():
        return None

    normalized_type = str(event_type or "").strip().lower()
    if not normalized_type:
        raise ValueError("event_type is required")

    candidates = {str(choice[0]) for choice in ConversationEvent.EVENT_TYPE_CHOICES}
    if normalized_type not in candidates:
        raise ValueError(f"unsupported event_type: {normalized_type}")

    normalized_direction = _normalize_direction(direction)
    normalized_trace = ensure_trace_id(trace_id, payload or {})

    safe_ts = _safe_ts(ts)
    transport = str(origin_transport or "").strip().lower()
    message_id = str(origin_message_id or "").strip()
    actor_identifier = str(actor_identifier or "").strip()
    origin_chat_id = str(origin_chat_id or "").strip()
    payload = dict(payload or {})
    raw_payload = dict(raw_payload or {})

    dual_write = bool(getattr(settings, "EVENT_LEDGER_DUAL_WRITE", False))
    primary_write = bool(getattr(settings, "EVENT_PRIMARY_WRITE_PATH", False))
    write_django = dual_write and not primary_write

    row = None
    if write_django:
        dedup_row = None
        if transport and message_id:
            dedup_row = (
                ConversationEvent.objects.filter(
                    user=user,
                    session=session,
                    event_type=normalized_type,
                    origin_transport=transport,
                    origin_message_id=message_id,
                )
                .order_by("-created_at")
                .first()
            )
        if dedup_row is not None:
            row = dedup_row
        else:
            row = ConversationEvent.objects.create(
                user=user,
                session=session,
                ts=safe_ts,
                event_type=normalized_type,
                direction=normalized_direction,
                actor_identifier=actor_identifier,
                origin_transport=transport,
                origin_message_id=message_id,
                origin_chat_id=origin_chat_id,
                payload=payload,
                raw_payload=raw_payload,
                trace_id=normalized_trace,
            )

    try:
        get_event_ledger_backend().upsert_event(
            user_id=int(user.id),
            person_id=str(session.identifier.person_id),
            session_id=str(session.id),
            event_type=normalized_type,
            direction=normalized_direction,
            ts=safe_ts,
            actor_identifier=actor_identifier,
            origin_transport=transport,
            origin_message_id=message_id,
            origin_chat_id=origin_chat_id,
            payload=payload,
            raw_payload=raw_payload,
            trace_id=normalized_trace,
        )
    except Exception as exc:
        if primary_write:
            raise
        log.warning(
            "Event ledger manticore dual-write failed session=%s event_type=%s err=%s",
            getattr(session, "id", "-"),
            normalized_type,
            exc,
        )

    return row


async def append_event(**kwargs):
    return await sync_to_async(append_event_sync, thread_sensitive=True)(**kwargs)
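The Manticore backend in the next file (`core/events/manticore.py`) derives a stable document id by hashing a pipe-joined logical key with an 8-byte `blake2b` digest, so re-inserting the same logical event overwrites rather than duplicates. A standalone sketch of that derivation (the key string below is a made-up example):

```python
import hashlib


def event_doc_id(logical_key: str) -> int:
    # 8-byte blake2b digest -> unsigned 64-bit int, clamped to >= 1 so the
    # result is always a valid positive document id for REPLACE INTO.
    digest = hashlib.blake2b(logical_key.encode("utf-8"), digest_size=8).digest()
    return max(1, int.from_bytes(digest, byteorder="big", signed=False))


# Hypothetical logical key: user|session|type|direction|transport|msg|chat|actor|ts|trace
key = "42|session-1|message_created|out|xmpp|msg-9||alice@example.org|1700000000000|"
doc_id = event_doc_id(key)
```

Because the id is a pure function of the key, the same event always maps to the same row, which is what makes the backend's `REPLACE INTO` writes idempotent.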
588
core/events/manticore.py
Normal file
@@ -0,0 +1,588 @@
from __future__ import annotations

import hashlib
import json
import time
from typing import Any
from urllib.parse import urlparse, urlunparse

import requests
from django.conf import settings

from core.events.behavior import parse_payload
from core.models import ConversationEvent
from core.util import logs

log = logs.get_logger("event-manticore")


class ManticoreEventLedgerBackend:
    _table_ready_cache: dict[str, float] = {}
    _table_ready_ttl_seconds = 30.0

    def __init__(self):
        self.base_url = str(
            getattr(settings, "MANTICORE_HTTP_URL", "http://localhost:9308")
        ).rstrip("/")
        self.table = (
            str(getattr(settings, "MANTICORE_EVENT_TABLE", "gia_events")).strip()
            or "gia_events"
        )
        self.metrics_table = (
            str(getattr(settings, "MANTICORE_METRIC_TABLE", "gia_metrics")).strip()
            or "gia_metrics"
        )
        self.timeout_seconds = int(getattr(settings, "MANTICORE_HTTP_TIMEOUT", 5) or 5)
        self._table_cache_key = f"{self.base_url}|{self.table}"
        self._metrics_cache_key = f"{self.base_url}|{self.metrics_table}"

    def _candidate_base_urls(self) -> list[str]:
        parsed = urlparse(self.base_url)
        hostname = str(parsed.hostname or "").strip().lower()
        candidates = [self.base_url]
        if hostname in {"localhost", "127.0.0.1"}:
            replacement = parsed._replace(
                netloc=f"host.containers.internal:{parsed.port or 9308}"
            )
            candidates.append(urlunparse(replacement))
        output = []
        seen = set()
        for value in candidates:
            key = str(value or "").strip()
            if not key or key in seen:
                continue
            seen.add(key)
            output.append(key)
        return output

    def _sql(self, query: str) -> dict[str, Any]:
        last_exc = None
        for base_url in self._candidate_base_urls():
            try:
                response = requests.post(
                    f"{base_url}/sql",
                    data={"mode": "raw", "query": query},
                    timeout=self.timeout_seconds,
                )
                response.raise_for_status()
                payload = response.json()
                if base_url != self.base_url:
                    self.base_url = base_url.rstrip("/")
                    self._table_cache_key = f"{self.base_url}|{self.table}"
                    self._metrics_cache_key = f"{self.base_url}|{self.metrics_table}"
                if isinstance(payload, list):
                    return payload[0] if payload else {}
                return dict(payload or {})
            except Exception as exc:
                last_exc = exc
        if last_exc is not None:
            raise last_exc
        return {}

    def ensure_table(self) -> None:
        last_ready = float(
            self._table_ready_cache.get(self._table_cache_key, 0.0) or 0.0
        )
        if (time.time() - last_ready) <= float(self._table_ready_ttl_seconds):
            return
        self._sql(
            (
                f"CREATE TABLE IF NOT EXISTS {self.table} ("
                "id BIGINT,"
                "user_id BIGINT,"
                "person_id STRING,"
                "session_id STRING,"
                "transport STRING,"
                "kind STRING,"
                "direction STRING,"
                "ts BIGINT,"
                "ts_ref BIGINT,"
                "actor STRING,"
                "duration_ms BIGINT,"
                "abandoned INTEGER,"
                "revision INTEGER,"
                "payload JSON"
                ") engine='columnar' min_infix_len='2'"
            )
        )
        self._table_ready_cache[self._table_cache_key] = time.time()

    def ensure_metrics_table(self) -> None:
        last_ready = float(
            self._table_ready_cache.get(self._metrics_cache_key, 0.0) or 0.0
        )
        if (time.time() - last_ready) <= float(self._table_ready_ttl_seconds):
            return
        self._sql(
            (
                f"CREATE TABLE IF NOT EXISTS {self.metrics_table} ("
                "id BIGINT,"
                "user_id BIGINT,"
                "person_id STRING,"
                "window_days INTEGER,"
                "metric STRING,"
                "value_ms BIGINT,"
                "baseline_ms BIGINT,"
                "z_score FLOAT,"
                "sample_n INTEGER,"
                "computed_at BIGINT"
                ") engine='columnar'"
            )
        )
        self._table_ready_cache[self._metrics_cache_key] = time.time()

    def _escape(self, value: Any) -> str:
        text = str(value or "")
        return text.replace("\\", "\\\\").replace("'", "\\'")

    def _event_id(self, *, logical_key: str) -> int:
        digest = hashlib.blake2b(
            str(logical_key or "").encode("utf-8"),
            digest_size=8,
        ).digest()
        value = int.from_bytes(digest, byteorder="big", signed=False)
        return max(1, int(value))

    def _event_kind(self, event_type: str) -> str:
        normalized = str(event_type or "").strip().lower()
        return {
            "message_created": "message_sent",
            "delivery_receipt": "message_delivered",
            "read_receipt": "message_read",
            "typing_started": "composing_started",
            "typing_stopped": "composing_stopped",
            "composing_abandoned": "composing_abandoned",
            "presence_available": "presence_available",
            "presence_unavailable": "presence_unavailable",
        }.get(normalized, normalized)

    def _rows_from_sql_payload(self, payload: dict[str, Any]) -> list[dict]:
        data = payload.get("data") or payload.get("hits") or []
        if isinstance(data, dict):
            data = [data]
        rows = []
        for row in list(data or []):
            if isinstance(row, dict):
                rows.append(dict(row))
        return rows

    def _build_values(
        self,
        *,
        user_id: int,
        person_id: str,
        session_id: str,
        event_type: str,
        direction: str,
        ts: int,
        actor_identifier: str,
        origin_transport: str,
        origin_message_id: str,
        origin_chat_id: str,
        payload: dict | None,
        raw_payload: dict | None,
        trace_id: str,
    ) -> str:
        data = dict(payload or {})
        if raw_payload:
            data["raw_payload"] = dict(raw_payload)
        if trace_id:
            data["trace_id"] = str(trace_id)
        if origin_message_id:
            data["origin_message_id"] = str(origin_message_id)
        if origin_chat_id:
            data["origin_chat_id"] = str(origin_chat_id)
        data["legacy_event_type"] = str(event_type or "").strip().lower()

        ts_ref = 0
        try:
            ts_ref = int(data.get("message_ts") or data.get("source_ts") or 0)
        except Exception:
            ts_ref = 0
        try:
            duration_ms = int(data.get("duration_ms") or 0)
        except Exception:
            duration_ms = 0
        try:
            abandoned = 1 if bool(data.get("abandoned")) else 0
        except Exception:
            abandoned = 0
        try:
            revision = int(data.get("revision") or 0)
        except Exception:
            revision = 0

        logical_key = "|".join(
            [
                str(user_id),
                str(session_id),
                str(event_type or "").strip().lower(),
                str(direction or "").strip().lower(),
                str(origin_transport or "").strip().lower(),
                str(origin_message_id or "").strip(),
                str(origin_chat_id or "").strip(),
                str(actor_identifier or "").strip(),
                str(int(ts or 0)),
                str(trace_id or "").strip(),
            ]
        )
        doc_id = self._event_id(logical_key=logical_key)
        payload_json = json.dumps(data, separators=(",", ":"), sort_keys=True)
        return (
            f"({doc_id},{int(user_id)},'{self._escape(person_id)}',"
            f"'{self._escape(session_id)}','{self._escape(origin_transport)}',"
            f"'{self._escape(self._event_kind(event_type))}','{self._escape(direction)}',"
            f"{int(ts)},{ts_ref},'{self._escape(actor_identifier)}',{duration_ms},"
            f"{abandoned},{revision},'{self._escape(payload_json)}')"
        )

    def upsert_event(
        self,
        *,
        user_id: int,
        person_id: str,
        session_id: str,
        event_type: str,
        direction: str,
        ts: int,
        actor_identifier: str = "",
        origin_transport: str = "",
        origin_message_id: str = "",
        origin_chat_id: str = "",
        payload: dict | None = None,
        raw_payload: dict | None = None,
        trace_id: str = "",
    ) -> None:
        self.ensure_table()
        values = self._build_values(
            user_id=user_id,
            person_id=person_id,
            session_id=session_id,
            event_type=event_type,
            direction=direction,
            ts=ts,
            actor_identifier=actor_identifier,
            origin_transport=origin_transport,
            origin_message_id=origin_message_id,
            origin_chat_id=origin_chat_id,
            payload=payload,
            raw_payload=raw_payload,
            trace_id=trace_id,
        )
        self._sql(
            f"REPLACE INTO {self.table} "
            "(id,user_id,person_id,session_id,transport,kind,direction,ts,ts_ref,actor,duration_ms,abandoned,revision,payload) "
            f"VALUES {values}"
        )

    def query_rows(self, query: str) -> list[dict]:
        return self._rows_from_sql_payload(self._sql(query))

    def list_event_targets(self, *, user_id: int | None = None) -> list[dict]:
        filters = []
        if user_id is not None:
            filters.append(f"user_id={int(user_id)}")
        where_clause = f" WHERE {' AND '.join(filters)}" if filters else ""
        return self.query_rows(
            f"SELECT user_id, person_id FROM {self.table}{where_clause} "
            "GROUP BY user_id, person_id"
        )

    def fetch_events(
        self,
        *,
        user_id: int,
        person_id: str,
        since_ts: int,
    ) -> list[dict]:
        return self.query_rows(
            f"SELECT user_id, person_id, session_id, transport, kind, direction, ts, ts_ref, actor, duration_ms, abandoned, revision, payload "
            f"FROM {self.table} "
            f"WHERE user_id={int(user_id)} "
            f"AND person_id='{self._escape(person_id)}' "
            f"AND ts>={int(since_ts)} "
            "ORDER BY ts ASC"
        )

    def _metric_doc_id(
        self,
        *,
        user_id: int,
        person_id: str,
        window_days: int,
        metric: str,
    ) -> int:
        digest = hashlib.blake2b(
            f"{int(user_id)}|{person_id}|{int(window_days)}|{metric}".encode("utf-8"),
            digest_size=8,
        ).digest()
        return max(1, int.from_bytes(digest, byteorder="big", signed=False))

    def upsert_metric(
        self,
        *,
        user_id: int,
        person_id: str,
        window_days: int,
        metric: str,
        value_ms: int,
        baseline_ms: int,
        z_score: float,
        sample_n: int,
        computed_at: int,
    ) -> None:
        self.ensure_metrics_table()
        doc_id = self._metric_doc_id(
            user_id=user_id,
            person_id=person_id,
            window_days=window_days,
            metric=metric,
        )
        self._sql(
            f"REPLACE INTO {self.metrics_table} "
            "(id,user_id,person_id,window_days,metric,value_ms,baseline_ms,z_score,sample_n,computed_at) "
            f"VALUES ({doc_id},{int(user_id)},'{self._escape(person_id)}',{int(window_days)},"
            f"'{self._escape(metric)}',{int(value_ms)},{int(baseline_ms)},"
            f"{float(z_score)},{int(sample_n)},{int(computed_at)})"
        )


def get_event_ledger_backend() -> ManticoreEventLedgerBackend:
    return ManticoreEventLedgerBackend()


def upsert_conversation_event(event: ConversationEvent) -> None:
    session = event.session
    identifier = session.identifier
    get_event_ledger_backend().upsert_event(
        user_id=int(event.user_id),
        person_id=str(identifier.person_id),
        session_id=str(session.id),
        event_type=str(event.event_type or ""),
        direction=str(event.direction or "system"),
        ts=int(event.ts or 0),
        actor_identifier=str(event.actor_identifier or ""),
        origin_transport=str(event.origin_transport or ""),
        origin_message_id=str(event.origin_message_id or ""),
        origin_chat_id=str(event.origin_chat_id or ""),
        payload=dict(event.payload or {}),
        raw_payload=dict(event.raw_payload or {}),
        trace_id=str(event.trace_id or ""),
    )


def get_behavioral_availability_stats(*, user_id: int) -> list[dict]:
    backend = get_event_ledger_backend()
    return backend.query_rows(
        f"SELECT person_id, transport, "
        "COUNT(*) AS total_events, "
        "SUM(IF(kind IN ('presence_available','presence_unavailable'),1,0)) AS presence_events, "
        "SUM(IF(kind='message_read',1,0)) AS read_events, "
        "SUM(IF(kind IN ('composing_started','composing_stopped'),1,0)) AS typing_events, "
        "SUM(IF(kind='message_sent',1,0)) AS message_events, "
        "SUM(IF(kind='composing_abandoned',1,0)) AS abandoned_events, "
        "MAX(ts) AS last_event_ts "
        f"FROM {backend.table} "
        f"WHERE user_id={int(user_id)} "
        "GROUP BY person_id, transport "
        "ORDER BY total_events DESC, person_id ASC, transport ASC"
    )


def get_behavioral_latest_states(
    *,
    user_id: int,
    person_ids: list[str],
    transport: str = "",
) -> list[dict]:
    backend = get_event_ledger_backend()
    cleaned_ids = [
        str(value or "").strip()
        for value in list(person_ids or [])
        if str(value or "").strip()
    ]
    if not cleaned_ids:
        return []
    id_clause = ",".join(f"'{backend._escape(value)}'" for value in cleaned_ids)
    transport_clause = ""
    if str(transport or "").strip():
        transport_clause = (
            f" AND transport='{backend._escape(str(transport or '').strip().lower())}'"
        )
    return backend.query_rows(
        f"SELECT person_id, transport, kind, ts "
        f"FROM {backend.table} "
        f"WHERE user_id={int(user_id)} "
        f"AND person_id IN ({id_clause})"
        f"{transport_clause} "
        "ORDER BY person_id ASC, ts DESC"
    )


def get_behavioral_events_for_range(
    *,
    user_id: int,
    person_id: str,
    start_ts: int,
    end_ts: int,
    transport: str = "",
) -> list[dict]:
    backend = get_event_ledger_backend()
    transport_clause = ""
    if str(transport or "").strip():
        transport_clause = (
            f" AND transport='{backend._escape(str(transport or '').strip().lower())}'"
        )
    return backend.query_rows(
        f"SELECT person_id, session_id, transport, kind, direction, ts, payload "
        f"FROM {backend.table} "
        f"WHERE user_id={int(user_id)} "
        f"AND person_id='{backend._escape(str(person_id or '').strip())}' "
        f"AND ts>={int(start_ts)} AND ts<={int(end_ts)}"
        f"{transport_clause} "
        "ORDER BY ts ASC"
    )


def get_recent_event_rows(
    *,
    minutes: int = 120,
    service: str = "",
    user_id: str = "",
    limit: int = 200,
) -> list[dict]:
    backend = get_event_ledger_backend()
    cutoff_ts = int(time.time() * 1000) - (max(1, int(minutes)) * 60 * 1000)
    where = [f"ts>={cutoff_ts}"]
    if service:
        where.append(f"transport='{backend._escape(str(service).strip().lower())}'")
    if user_id:
        where.append(f"user_id={int(user_id)}")
    rows = backend.query_rows(
        f"SELECT user_id, session_id, ts, kind, direction, transport, payload "
        f"FROM {backend.table} "
        f"WHERE {' AND '.join(where)} "
        f"ORDER BY ts DESC "
        f"LIMIT {max(1, min(int(limit), 500))}"
    )
    output = []
    for row in list(rows or []):
        payload = parse_payload(row.get("payload"))
        legacy_event_type = str(payload.get("legacy_event_type") or "").strip().lower()
        output.append(
            {
                "id": "",
                "user_id": int(row.get("user_id") or 0),
                "session_id": str(row.get("session_id") or ""),
                "ts": int(row.get("ts") or 0),
                "event_type": legacy_event_type or str(row.get("kind") or ""),
                "kind": str(row.get("kind") or ""),
                "direction": str(row.get("direction") or ""),
                "origin_transport": str(row.get("transport") or ""),
                "trace_id": str(payload.get("trace_id") or ""),
            }
        )
    return output


def count_behavioral_events(*, user_id: int) -> int:
    backend = get_event_ledger_backend()
    rows = backend.query_rows(
        f"SELECT COUNT(*) AS total_events "
        f"FROM {backend.table} "
        f"WHERE user_id={int(user_id)}"
    )
    if not rows:
        return 0
    try:
        return int((rows[0] or {}).get("total_events") or 0)
    except Exception:
        return 0


def get_trace_ids(*, user_id: int, limit: int = 120) -> list[str]:
    backend = get_event_ledger_backend()
    rows = backend.query_rows(
|
||||
f"SELECT payload "
|
||||
f"FROM {backend.table} "
|
||||
f"WHERE user_id={int(user_id)} "
|
||||
"ORDER BY ts DESC "
|
||||
f"LIMIT {max(1, min(int(limit) * 6, 1000))}"
|
||||
)
|
||||
seen = set()
|
||||
output = []
|
||||
for row in list(rows or []):
|
||||
payload = parse_payload(row.get("payload"))
|
||||
trace_id = str(payload.get("trace_id") or "").strip()
|
||||
if not trace_id or trace_id in seen:
|
||||
continue
|
||||
seen.add(trace_id)
|
||||
output.append(trace_id)
|
||||
if len(output) >= max(1, min(int(limit), 500)):
|
||||
break
|
||||
return output
|
||||
|
||||
|
||||
def get_trace_event_rows(*, user_id: int, trace_id: str, limit: int = 500) -> list[dict]:
|
||||
backend = get_event_ledger_backend()
|
||||
rows = backend.query_rows(
|
||||
f"SELECT user_id, session_id, ts, kind, direction, transport, payload "
|
||||
f"FROM {backend.table} "
|
||||
f"WHERE user_id={int(user_id)} "
|
||||
"ORDER BY ts ASC "
|
||||
f"LIMIT {max(1, min(int(limit) * 8, 5000))}"
|
||||
)
|
||||
output = []
|
||||
target = str(trace_id or "").strip()
|
||||
for row in list(rows or []):
|
||||
payload = parse_payload(row.get("payload"))
|
||||
if str(payload.get("trace_id") or "").strip() != target:
|
||||
continue
|
||||
output.append(
|
||||
{
|
||||
"id": "",
|
||||
"ts": int(row.get("ts") or 0),
|
||||
"event_type": str(
|
||||
payload.get("legacy_event_type") or row.get("kind") or ""
|
||||
).strip(),
|
||||
"kind": str(row.get("kind") or "").strip(),
|
||||
"direction": str(row.get("direction") or "").strip(),
|
||||
"session_id": str(row.get("session_id") or "").strip(),
|
||||
"origin_transport": str(row.get("transport") or "").strip(),
|
||||
"origin_message_id": str(payload.get("origin_message_id") or "").strip(),
|
||||
"payload": payload,
|
||||
"trace_id": target,
|
||||
}
|
||||
)
|
||||
if len(output) >= max(1, min(int(limit), 500)):
|
||||
break
|
||||
return output
|
||||
|
||||
|
||||
def get_session_event_rows(*, user_id: int, session_id: str, limit: int = 2000) -> list[dict]:
|
||||
backend = get_event_ledger_backend()
|
||||
rows = backend.query_rows(
|
||||
f"SELECT user_id, session_id, ts, kind, direction, transport, actor, payload "
|
||||
f"FROM {backend.table} "
|
||||
f"WHERE user_id={int(user_id)} "
|
||||
f"AND session_id='{backend._escape(str(session_id or '').strip())}' "
|
||||
"ORDER BY ts ASC "
|
||||
f"LIMIT {max(1, min(int(limit), 5000))}"
|
||||
)
|
||||
output = []
|
||||
for row in list(rows or []):
|
||||
payload = parse_payload(row.get("payload"))
|
||||
output.append(
|
||||
{
|
||||
"ts": int(row.get("ts") or 0),
|
||||
"event_type": str(
|
||||
payload.get("legacy_event_type") or row.get("kind") or ""
|
||||
).strip(),
|
||||
"kind": str(row.get("kind") or "").strip(),
|
||||
"direction": str(row.get("direction") or "").strip(),
|
||||
"session_id": str(row.get("session_id") or "").strip(),
|
||||
"origin_transport": str(row.get("transport") or "").strip(),
|
||||
"actor_identifier": str(row.get("actor") or "").strip(),
|
||||
"origin_message_id": str(payload.get("origin_message_id") or "").strip(),
|
||||
"payload": payload,
|
||||
}
|
||||
)
|
||||
return output
|
||||
core/events/projection.py (new file, 362 lines)
@@ -0,0 +1,362 @@
from __future__ import annotations

from dataclasses import dataclass

from core.events.manticore import get_session_event_rows
from core.models import ChatSession, ConversationEvent, Message


@dataclass
class _ProjectedMessage:
    message_id: str
    ts: int = 0
    text: str = ""
    delivered_ts: int | None = None
    read_ts: int | None = None
    reactions: dict[tuple[str, str, str], dict] | None = None

    def __post_init__(self):
        if self.reactions is None:
            self.reactions = {}


def _safe_int(value, default=0) -> int:
    try:
        return int(value)
    except Exception:
        return int(default)


def _reaction_key(row: dict) -> tuple[str, str, str]:
    item = dict(row or {})
    return (
        str(item.get("source_service") or "").strip().lower(),
        str(item.get("actor") or "").strip(),
        str(item.get("emoji") or "").strip(),
    )


def _normalize_reactions(rows: list[dict] | None) -> list[dict]:
    merged = {}
    for row in list(rows or []):
        item = dict(row or {})
        key = _reaction_key(item)
        if not any(key):
            continue
        merged[key] = {
            "source_service": key[0],
            "actor": key[1],
            "emoji": key[2],
            "removed": bool(item.get("removed")),
        }
    return sorted(
        merged.values(),
        key=lambda entry: (
            str(entry.get("source_service") or ""),
            str(entry.get("actor") or ""),
            str(entry.get("emoji") or ""),
            bool(entry.get("removed")),
        ),
    )


def _event_rows_for_session(session: ChatSession):
    try:
        rows = get_session_event_rows(
            user_id=int(session.user_id),
            session_id=str(session.id),
            limit=2000,
        )
    except Exception:
        rows = []
    if rows:
        return rows, "manticore"
    return (
        list(
            ConversationEvent.objects.filter(
                user=session.user,
                session=session,
            ).order_by("ts", "created_at")
        ),
        "django",
    )


def project_session_from_events(session: ChatSession) -> list[dict]:
    rows, _source = _event_rows_for_session(session)

    projected: dict[str, _ProjectedMessage] = {}
    order: list[str] = []

    for event in rows:
        is_dict = isinstance(event, dict)
        payload = dict(
            (event.get("payload") if is_dict else getattr(event, "payload", {})) or {}
        )
        event_type = str(
            (event.get("event_type") if is_dict else getattr(event, "event_type", ""))
            or ""
        ).strip().lower()
        message_id = str(
            payload.get("message_id") or payload.get("target_message_id") or ""
        ).strip()

        if event_type == "message_created":
            message_id = str(
                payload.get("message_id")
                or (
                    event.get("origin_message_id")
                    if is_dict
                    else getattr(event, "origin_message_id", "")
                )
                or ""
            ).strip()
            if not message_id:
                continue
            state = projected.get(message_id)
            if state is None:
                state = _ProjectedMessage(message_id=message_id)
                projected[message_id] = state
                order.append(message_id)
            state.ts = _safe_int(
                payload.get("message_ts"),
                _safe_int(event.get("ts") if is_dict else getattr(event, "ts", 0)),
            )
            state.text = str(payload.get("text") or state.text or "")
            delivered_default = _safe_int(
                payload.get("delivered_ts"),
                _safe_int(event.get("ts") if is_dict else getattr(event, "ts", 0)),
            )
            if state.delivered_ts is None:
                state.delivered_ts = delivered_default or None
            continue

        if not message_id or message_id not in projected:
            continue
        state = projected[message_id]

        if event_type == "read_receipt":
            read_ts = _safe_int(
                payload.get("read_ts"),
                _safe_int(event.get("ts") if is_dict else getattr(event, "ts", 0)),
            )
            if read_ts > 0:
                if state.read_ts is None:
                    state.read_ts = read_ts
                else:
                    state.read_ts = max(int(state.read_ts or 0), read_ts)
            if state.delivered_ts is None and read_ts > 0:
                state.delivered_ts = read_ts
            continue

        if event_type in {"reaction_added", "reaction_removed"}:
            source_service = (
                str(
                    payload.get("source_service")
                    or (
                        event.get("origin_transport")
                        if is_dict
                        else getattr(event, "origin_transport", "")
                    )
                    or ""
                )
                .strip()
                .lower()
            )
            actor = str(
                payload.get("actor")
                or (
                    event.get("actor_identifier")
                    if is_dict
                    else getattr(event, "actor_identifier", "")
                )
                or ""
            ).strip()
            emoji = str(payload.get("emoji") or "").strip()
            if not source_service and not actor and not emoji:
                continue
            key = (source_service, actor, emoji)
            state.reactions[key] = {
                "source_service": source_service,
                "actor": actor,
                "emoji": emoji,
                "removed": bool(
                    event_type == "reaction_removed" or payload.get("remove")
                ),
            }

    output = []
    for message_id in order:
        state = projected.get(message_id)
        if state is None:
            continue
        output.append(
            {
                "message_id": str(state.message_id),
                "ts": int(state.ts or 0),
                "text": str(state.text or ""),
                "delivered_ts": (
                    int(state.delivered_ts) if state.delivered_ts is not None else None
                ),
                "read_ts": int(state.read_ts) if state.read_ts is not None else None,
                "reactions": _normalize_reactions(
                    list((state.reactions or {}).values())
                ),
            }
        )
    return output


def shadow_compare_session(session: ChatSession, detail_limit: int = 50) -> dict:
    projected_rows = project_session_from_events(session)
    projected_by_id = {str(row.get("message_id") or ""): row for row in projected_rows}

    db_rows = list(
        Message.objects.filter(user=session.user, session=session)
        .order_by("ts", "id")
        .values(
            "id",
            "ts",
            "text",
            "delivered_ts",
            "read_ts",
            "receipt_payload",
        )
    )
    db_by_id = {str(row.get("id")): dict(row) for row in db_rows}

    counters = {
        "missing_in_projection": 0,
        "missing_in_db": 0,
        "text_mismatch": 0,
        "ts_mismatch": 0,
        "delivered_ts_mismatch": 0,
        "read_ts_mismatch": 0,
        "reactions_mismatch": 0,
    }
    details = []
    cause_counts = {
        "missing_event_write": 0,
        "ambiguous_reaction_target": 0,
        "payload_normalization_gap": 0,
    }
    cause_samples = {key: [] for key in cause_counts.keys()}
    cause_sample_limit = min(5, max(0, int(detail_limit)))

    def _record_detail(
        message_id: str, issue: str, cause: str, extra: dict | None = None
    ):
        if cause in cause_counts:
            cause_counts[cause] += 1
        row = {"message_id": message_id, "issue": issue, "cause": cause}
        if extra:
            row.update(dict(extra))
        if len(details) < max(0, int(detail_limit)):
            details.append(row)
        if cause in cause_samples and len(cause_samples[cause]) < cause_sample_limit:
            cause_samples[cause].append(row)

    for message_id, db_row in db_by_id.items():
        projected = projected_by_id.get(message_id)
        if projected is None:
            counters["missing_in_projection"] += 1
            _record_detail(message_id, "missing_in_projection", "missing_event_write")
            continue

        db_text = str(db_row.get("text") or "")
        projected_text = str(projected.get("text") or "")
        if db_text != projected_text:
            counters["text_mismatch"] += 1
            _record_detail(
                message_id,
                "text_mismatch",
                "payload_normalization_gap",
                {"db": db_text, "projected": projected_text},
            )

        db_ts = _safe_int(db_row.get("ts"), 0)
        projected_ts = _safe_int(projected.get("ts"), 0)
        if db_ts != projected_ts:
            counters["ts_mismatch"] += 1
            _record_detail(
                message_id,
                "ts_mismatch",
                "payload_normalization_gap",
                {"db": db_ts, "projected": projected_ts},
            )

        db_delivered_ts = db_row.get("delivered_ts")
        projected_delivered_ts = projected.get("delivered_ts")
        if (db_delivered_ts is None) != (projected_delivered_ts is None) or (
            db_delivered_ts is not None
            and projected_delivered_ts is not None
            and int(db_delivered_ts) != int(projected_delivered_ts)
        ):
            counters["delivered_ts_mismatch"] += 1
            _record_detail(
                message_id,
                "delivered_ts_mismatch",
                "payload_normalization_gap",
                {
                    "db": db_delivered_ts,
                    "projected": projected_delivered_ts,
                },
            )

        db_read_ts = db_row.get("read_ts")
        projected_read_ts = projected.get("read_ts")
        if (db_read_ts is None) != (projected_read_ts is None) or (
            db_read_ts is not None
            and projected_read_ts is not None
            and int(db_read_ts) != int(projected_read_ts)
        ):
            counters["read_ts_mismatch"] += 1
            _record_detail(
                message_id,
                "read_ts_mismatch",
                "payload_normalization_gap",
                {"db": db_read_ts, "projected": projected_read_ts},
            )

        db_reactions = _normalize_reactions(
            list((db_row.get("receipt_payload") or {}).get("reactions") or [])
        )
        projected_reactions = _normalize_reactions(
            list(projected.get("reactions") or [])
        )
        if db_reactions != projected_reactions:
            counters["reactions_mismatch"] += 1
            cause = "payload_normalization_gap"
            strategy = str(
                (
                    (db_row.get("receipt_payload") or {}).get(
                        "reaction_last_match_strategy"
                    )
                    or ""
                )
            ).strip()
            if strategy == "nearest_ts_window":
                cause = "ambiguous_reaction_target"
            _record_detail(
                message_id,
                "reactions_mismatch",
                cause,
                {"db": db_reactions, "projected": projected_reactions},
            )

    for message_id in projected_by_id.keys():
        if message_id not in db_by_id:
            counters["missing_in_db"] += 1
            _record_detail(message_id, "missing_in_db", "payload_normalization_gap")

    mismatch_total = int(sum(int(value or 0) for value in counters.values()))
    return {
        "session_id": str(session.id),
        "db_message_count": len(db_rows),
        "projected_message_count": len(projected_rows),
        "mismatch_total": mismatch_total,
        "counters": counters,
        "cause_counts": cause_counts,
        "cause_samples": cause_samples,
        "details": details,
    }
core/events/shadow.py (new file, 148 lines)
@@ -0,0 +1,148 @@
from __future__ import annotations

from django.db.models import Count, Max, Q

from core.models import ConversationEvent, Person, User


def _kind_from_event_type(event_type: str) -> str:
    normalized = str(event_type or "").strip().lower()
    return {
        "message_created": "message_sent",
        "delivery_receipt": "message_delivered",
        "read_receipt": "message_read",
        "typing_started": "composing_started",
        "typing_stopped": "composing_stopped",
        "composing_abandoned": "composing_abandoned",
        "presence_available": "presence_available",
        "presence_unavailable": "presence_unavailable",
    }.get(normalized, normalized)


def get_shadow_behavioral_availability_stats(*, user: User) -> list[dict]:
    person_map = {
        str(row["id"]): str(row["name"] or "")
        for row in Person.objects.filter(user=user).values("id", "name")
    }
    rows = (
        ConversationEvent.objects.filter(
            user=user,
            session__identifier__person__isnull=False,
        )
        .values("session__identifier__person_id", "origin_transport")
        .annotate(
            total_events=Count("id"),
            presence_events=Count(
                "id",
                filter=Q(event_type__in=["presence_available", "presence_unavailable"]),
            ),
            read_events=Count("id", filter=Q(event_type="read_receipt")),
            typing_events=Count(
                "id",
                filter=Q(
                    event_type__in=["typing_started", "typing_stopped"]
                ),
            ),
            message_events=Count("id", filter=Q(event_type="message_created")),
            abandoned_events=Count("id", filter=Q(event_type="composing_abandoned")),
            last_event_ts=Max("ts"),
        )
        .order_by("-total_events", "session__identifier__person_id", "origin_transport")
    )
    output = []
    for row in rows:
        person_id = str(row.get("session__identifier__person_id") or "").strip()
        output.append(
            {
                "person_id": person_id,
                "person_name": person_map.get(person_id, person_id or "-"),
                "service": str(row.get("origin_transport") or "").strip().lower(),
                "total_events": int(row.get("total_events") or 0),
                "presence_events": int(row.get("presence_events") or 0),
                "read_events": int(row.get("read_events") or 0),
                "typing_events": int(row.get("typing_events") or 0),
                "message_events": int(row.get("message_events") or 0),
                "abandoned_events": int(row.get("abandoned_events") or 0),
                "last_event_ts": int(row.get("last_event_ts") or 0),
            }
        )
    return output


def get_shadow_behavioral_latest_states(
    *, user: User, person_ids: list[str], transport: str = ""
) -> list[dict]:
    queryset = ConversationEvent.objects.filter(
        user=user,
        session__identifier__person_id__in=[str(value) for value in person_ids],
        event_type__in=[
            "message_created",
            "delivery_receipt",
            "read_receipt",
            "typing_started",
            "typing_stopped",
            "composing_abandoned",
            "presence_available",
            "presence_unavailable",
        ],
    ).select_related("session__identifier")
    if transport:
        queryset = queryset.filter(origin_transport=str(transport).strip().lower())
    rows = []
    seen = set()
    for row in queryset.order_by(
        "session__identifier__person_id", "-ts", "-created_at"
    )[:500]:
        person_id = str(getattr(row.session.identifier, "person_id", "") or "").strip()
        if not person_id or person_id in seen:
            continue
        seen.add(person_id)
        rows.append(
            {
                "person_id": person_id,
                "transport": str(row.origin_transport or "").strip().lower(),
                "kind": _kind_from_event_type(row.event_type),
                "ts": int(row.ts or 0),
            }
        )
    return rows


def get_shadow_behavioral_events_for_range(
    *,
    user: User,
    person_id: str,
    start_ts: int,
    end_ts: int,
    transport: str = "",
) -> list[dict]:
    queryset = ConversationEvent.objects.filter(
        user=user,
        session__identifier__person_id=str(person_id or "").strip(),
        ts__gte=int(start_ts),
        ts__lte=int(end_ts),
        event_type__in=[
            "message_created",
            "delivery_receipt",
            "read_receipt",
            "typing_started",
            "typing_stopped",
            "composing_abandoned",
            "presence_available",
            "presence_unavailable",
        ],
    ).order_by("ts", "created_at")
    if transport:
        queryset = queryset.filter(origin_transport=str(transport).strip().lower())
    return [
        {
            "person_id": str(person_id or "").strip(),
            "session_id": str(row.session_id or ""),
            "transport": str(row.origin_transport or "").strip().lower(),
            "kind": _kind_from_event_type(row.event_type),
            "direction": str(row.direction or "").strip().lower(),
            "ts": int(row.ts or 0),
            "payload": dict(row.payload or {}),
        }
        for row in queryset[:1000]
    ]
@@ -1,6 +1,7 @@
from django import forms
from django.contrib.auth.forms import UserCreationForm
from django.forms import ModelForm

from mixins.restrictions import RestrictedFormMixin

from .models import (
core/gateway/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
"""Gateway command routing utilities."""
core/gateway/builtin.py (new file, 417 lines)
@@ -0,0 +1,417 @@
from __future__ import annotations

import re

from asgiref.sync import sync_to_async

from core.gateway.commands import (
    GatewayCommandContext,
    GatewayCommandRoute,
    dispatch_gateway_command,
)
from core.models import (
    CodexPermissionRequest,
    CodexRun,
    DerivedTask,
    ExternalSyncEvent,
    Person,
    TaskProject,
    User,
)
from core.tasks.engine import create_task_record_and_sync, mark_task_completed_and_sync

APPROVAL_PROVIDER_COMMANDS = {
    ".claude": "claude",
    ".codex": "codex_cli",
}
APPROVAL_EVENT_PREFIX = "codex_approval"
ACTION_TO_STATUS = {"approve": "approved", "reject": "denied"}
TASK_COMMAND_MATCH_RE = re.compile(r"^\s*(?:\.tasks\b|\.l\b|\.list\b)", re.IGNORECASE)


def gateway_help_lines() -> list[str]:
    return [
        "Gateway commands:",
        " .contacts — list contacts",
        " .whoami — show current user",
        " .help — show this help",
        "Approval commands:",
        " .approval list-pending [all] — list pending approval requests",
        " .approval approve <key> — approve a request",
        " .approval reject <key> — reject a request",
        " .approval status <key> — check request status",
        "Task commands:",
        " .l — shortcut for open task list",
        " .tasks list [status] [limit] — list tasks",
        " .tasks add <project> :: <title> — create task in project",
        " .tasks show #<ref> — show task details",
        " .tasks complete #<ref> — mark task complete",
        " .tasks undo #<ref> — remove task",
    ]


def _resolve_request_provider(request):
    event = getattr(request, "external_sync_event", None)
    if event is None:
        return ""
    return str(getattr(event, "provider", "") or "").strip()


async def _apply_approval_decision(request, decision):
    status = ACTION_TO_STATUS.get(decision, decision)
    request.status = status
    await sync_to_async(request.save)(update_fields=["status"])
    run = None
    if request.codex_run_id:
        run = await sync_to_async(CodexRun.objects.get)(pk=request.codex_run_id)
        run.status = "approved_waiting_resume" if status == "approved" else status
        await sync_to_async(run.save)(update_fields=["status"])
    if request.external_sync_event_id:
        evt = await sync_to_async(ExternalSyncEvent.objects.get)(
            pk=request.external_sync_event_id
        )
        evt.status = "ok"
        await sync_to_async(evt.save)(update_fields=["status"])
    user = await sync_to_async(User.objects.get)(pk=request.user_id)
    task = None
    if run is not None and run.task_id:
        task = await sync_to_async(DerivedTask.objects.get)(pk=run.task_id)
    ikey = f"{APPROVAL_EVENT_PREFIX}:{request.approval_key}:{status}"
    await sync_to_async(ExternalSyncEvent.objects.get_or_create)(
        idempotency_key=ikey,
        defaults={
            "user": user,
            "task": task,
            "provider": "codex_cli",
            "status": "pending",
            "payload": {},
            "error": "",
        },
    )


async def _approval_list_pending(user, scope, emit):
    _ = scope
    requests = await sync_to_async(list)(
        CodexPermissionRequest.objects.filter(user=user, status="pending").order_by(
            "-requested_at"
        )[:20]
    )
    emit(f"pending={len(requests)}")
    for req in requests:
        emit(f" {req.approval_key}: {req.summary}")


async def _approval_status(user, approval_key, emit):
    try:
        req = await sync_to_async(CodexPermissionRequest.objects.get)(
            user=user, approval_key=approval_key
        )
        emit(f"status={req.status} key={req.approval_key}")
    except CodexPermissionRequest.DoesNotExist:
        emit(f"approval_key_not_found:{approval_key}")


async def handle_approval_command(user, body, emit):
    command = str(body or "").strip()
    for prefix, expected_provider in APPROVAL_PROVIDER_COMMANDS.items():
        if command.startswith(prefix + " ") or command == prefix:
            sub = command[len(prefix) :].strip()
            parts = sub.split()
            if len(parts) >= 2 and parts[0] in ("approve", "reject"):
                action, approval_key = parts[0], parts[1]
                try:
                    req = await sync_to_async(
                        CodexPermissionRequest.objects.select_related(
                            "external_sync_event"
                        ).get
                    )(user=user, approval_key=approval_key)
                except CodexPermissionRequest.DoesNotExist:
                    emit(f"approval_key_not_found:{approval_key}")
                    return True
                provider = _resolve_request_provider(req)
                if not provider.startswith(expected_provider):
                    emit(
                        f"approval_key_not_for_provider:{approval_key} provider={provider}"
                    )
                    return True
                await _apply_approval_decision(req, action)
                # Use the status map so "reject" reads "denied" rather than "rejectd".
                emit(f"{ACTION_TO_STATUS.get(action, action)}: {approval_key}")
                return True
            emit(f"usage: {prefix} approve|reject <key>")
            return True

    if not command.startswith(".approval"):
        return False

    rest = command[len(".approval") :].strip()

    if rest.split() and rest.split()[0] in ("approve", "reject"):
        parts = rest.split()
        action = parts[0]
        approval_key = parts[1] if len(parts) > 1 else ""
        if not approval_key:
            emit("usage: .approval approve|reject <key>")
            return True
        try:
            req = await sync_to_async(
                CodexPermissionRequest.objects.select_related("external_sync_event").get
            )(user=user, approval_key=approval_key)
        except CodexPermissionRequest.DoesNotExist:
            emit(f"approval_key_not_found:{approval_key}")
            return True
        await _apply_approval_decision(req, action)
        # Same fix as above: avoid the malformed "rejectd" past tense.
        emit(f"{ACTION_TO_STATUS.get(action, action)}: {approval_key}")
        return True

    if rest.startswith("list-pending"):
        scope = rest[len("list-pending") :].strip() or "mine"
        await _approval_list_pending(user, scope, emit)
        return True

    if rest.startswith("status "):
        approval_key = rest[len("status ") :].strip()
        await _approval_status(user, approval_key, emit)
        return True

    emit(
        "approval: .approval approve|reject <key> | "
        ".approval list-pending [all] | "
        ".approval status <key>"
    )
    return True


def _parse_task_create(rest: str) -> tuple[str, str]:
    text = str(rest or "").strip()
    if not text.lower().startswith("add "):
        return "", ""
    payload = text[4:].strip()
    if "::" in payload:
        project_name, title = payload.split("::", 1)
        return str(project_name or "").strip(), str(title or "").strip()
    return "", ""


async def handle_tasks_command(
    user,
    body,
    emit,
    *,
    service: str = "",
    channel_identifier: str = "",
    sender_identifier: str = "",
):
    command = str(body or "").strip()
    lower_command = command.lower()
    if not TASK_COMMAND_MATCH_RE.match(command):
        return False
    if lower_command.startswith(".tasks"):
        rest = command[len(".tasks") :].strip()
    elif lower_command.startswith(".list") or lower_command.startswith(".l"):
        rest = "list"
    else:
        rest = "list " + command[2:].strip() if len(command) > 2 else "list"

    if rest.startswith("list"):
        parts = rest.split()
        status_filter = parts[1] if len(parts) > 1 else "open"
        limit = int(parts[2]) if len(parts) > 2 and parts[2].isdigit() else 10
        tasks = await sync_to_async(list)(
            DerivedTask.objects.filter(
                user=user, status_snapshot=status_filter
            ).order_by("-id")[:limit]
        )
        if not tasks:
            emit(f"no {status_filter} tasks")
        else:
            for task in tasks:
                emit(f"#{task.reference_code} [{task.status_snapshot}] {task.title}")
        return True

    project_name, title = _parse_task_create(rest)
    if project_name or rest.startswith("add "):
        if not project_name or not title:
            emit("usage: .tasks add <project> :: <title>")
            return True
        project = await sync_to_async(
            lambda: TaskProject.objects.filter(user=user, name__iexact=project_name)
            .order_by("name")
            .first()
        )()
        if project is None:
            emit(f"project_not_found:{project_name}")
            return True
        task, _event = await create_task_record_and_sync(
            user=user,
            project=project,
            title=title,
            source_service=str(service or "web").strip().lower() or "web",
            source_channel=str(channel_identifier or "").strip(),
            actor_identifier=str(sender_identifier or "").strip(),
            immutable_payload={
                "origin": "gateway.tasks.add",
                "channel_service": str(service or "").strip().lower(),
                "channel_identifier": str(channel_identifier or "").strip(),
            },
            event_payload={
                "command": ".tasks add",
                "via": "gateway_builtin",
            },
        )
        emit(f"created #{task.reference_code} [{project.name}] {task.title}")
        return True

    if rest.startswith("show "):
        ref = rest[len("show ") :].strip().lstrip("#")
        try:
            task = await sync_to_async(DerivedTask.objects.get)(
                user=user, reference_code=ref
            )
            emit(f"#{task.reference_code} {task.title}")
            emit(f"status: {task.status_snapshot}")
        except DerivedTask.DoesNotExist:
            emit(f"task_not_found:#{ref}")
        return True

    if rest.startswith("complete "):
        ref = rest[len("complete ") :].strip().lstrip("#")
        try:
            task = await sync_to_async(
                DerivedTask.objects.select_related("project").get
            )(user=user, reference_code=ref)
            await mark_task_completed_and_sync(
                task=task,
                actor_identifier=str(sender_identifier or "").strip(),
                payload={
                    "marker": ref,
                    "command": ".tasks complete",
                    "via": "gateway_builtin",
                },
            )
            emit(f"completed #{ref}")
        except DerivedTask.DoesNotExist:
            emit(f"task_not_found:#{ref}")
        return True

    if rest.startswith("undo "):
        ref = rest[len("undo ") :].strip().lstrip("#")
        try:
            task = await sync_to_async(DerivedTask.objects.get)(
                user=user, reference_code=ref
            )
            await sync_to_async(task.delete)()
            emit(f"removed #{ref}")
        except DerivedTask.DoesNotExist:
            emit(f"task_not_found:#{ref}")
        return True

    emit(
        "tasks: .l | .tasks list [status] [limit] | "
        ".tasks add <project> :: <title> | "
        ".tasks show #<ref> | "
        ".tasks complete #<ref> | "
        ".tasks undo #<ref>"
    )
    return True


async def dispatch_builtin_gateway_command(
    *,
    user,
    command_text: str,
    service: str,
    channel_identifier: str,
    sender_identifier: str,
    source_message,
    message_meta: dict,
    payload: dict,
    emit,
) -> bool:
    text = str(command_text or "").strip()

    async def _contacts_handler(_ctx, out):
        persons = await sync_to_async(list)(
            Person.objects.filter(user=user).order_by("name")
        )
        if not persons:
            out("No contacts found.")
            return True
        out("Contacts: " + ", ".join([p.name for p in persons]))
        return True

    async def _help_handler(_ctx, out):
        for line in gateway_help_lines():
            out(line)
        return True

    async def _whoami_handler(_ctx, out):
        out(str(user.__dict__))
|
||||
return True
|
||||
|
||||
async def _approval_handler(_ctx, out):
|
||||
return await handle_approval_command(user, text, out)
|
||||
|
||||
async def _tasks_handler(_ctx, out):
|
||||
return await handle_tasks_command(
|
||||
user,
|
||||
text,
|
||||
out,
|
||||
service=service,
|
||||
channel_identifier=channel_identifier,
|
||||
sender_identifier=sender_identifier,
|
||||
)
|
||||
|
||||
routes = [
|
||||
GatewayCommandRoute(
|
||||
name="contacts",
|
||||
scope_key="gateway.contacts",
|
||||
matcher=lambda value: str(value or "").strip().lower() == ".contacts",
|
||||
handler=_contacts_handler,
|
||||
),
|
||||
GatewayCommandRoute(
|
||||
name="help",
|
||||
scope_key="gateway.help",
|
||||
matcher=lambda value: str(value or "").strip().lower() == ".help",
|
||||
handler=_help_handler,
|
||||
),
|
||||
GatewayCommandRoute(
|
||||
name="whoami",
|
||||
scope_key="gateway.whoami",
|
||||
matcher=lambda value: str(value or "").strip().lower() == ".whoami",
|
||||
handler=_whoami_handler,
|
||||
),
|
||||
GatewayCommandRoute(
|
||||
name="approval",
|
||||
scope_key="gateway.approval",
|
||||
matcher=lambda value: str(value or "").strip().lower().startswith(".approval")
|
||||
or any(
|
||||
str(value or "").strip().lower().startswith(prefix + " ")
|
||||
or str(value or "").strip().lower() == prefix
|
||||
for prefix in APPROVAL_PROVIDER_COMMANDS
|
||||
),
|
||||
handler=_approval_handler,
|
||||
),
|
||||
GatewayCommandRoute(
|
||||
name="tasks",
|
||||
scope_key="gateway.tasks",
|
||||
matcher=lambda value: bool(TASK_COMMAND_MATCH_RE.match(str(value or ""))),
|
||||
handler=_tasks_handler,
|
||||
),
|
||||
]
|
||||
|
||||
handled = await dispatch_gateway_command(
|
||||
context=GatewayCommandContext(
|
||||
user=user,
|
||||
source_message=source_message,
|
||||
service=str(service or "xmpp"),
|
||||
channel_identifier=str(channel_identifier or ""),
|
||||
sender_identifier=str(sender_identifier or ""),
|
||||
message_text=text,
|
||||
message_meta=dict(message_meta or {}),
|
||||
payload=dict(payload or {}),
|
||||
),
|
||||
routes=routes,
|
||||
emit=emit,
|
||||
)
|
||||
if not handled and text.startswith("."):
|
||||
emit("No such command")
|
||||
return handled
|
||||
137 core/gateway/commands.py Normal file
@@ -0,0 +1,137 @@
from __future__ import annotations

from dataclasses import dataclass
from typing import Awaitable, Callable

from asgiref.sync import sync_to_async

from core.models import GatewayCommandEvent
from core.security.command_policy import CommandSecurityContext, evaluate_command_policy

GatewayEmit = Callable[[str], None]
GatewayHandler = Callable[["GatewayCommandContext", GatewayEmit], Awaitable[bool]]
GatewayMatcher = Callable[[str], bool]


@dataclass(slots=True)
class GatewayCommandContext:
    user: object
    source_message: object
    service: str
    channel_identifier: str
    sender_identifier: str
    message_text: str
    message_meta: dict
    payload: dict


@dataclass(slots=True)
class GatewayCommandRoute:
    name: str
    scope_key: str
    matcher: GatewayMatcher
    handler: GatewayHandler


def _first_token(text: str) -> str:
    body = str(text or "").strip()
    if not body:
        return ""
    return str(body.split()[0] or "").strip().lower()


def _derive_unknown_scope(text: str) -> str:
    token = _first_token(text).lstrip(".")
    if not token:
        token = "message"
    return f"gateway.{token}"


async def dispatch_gateway_command(
    *,
    context: GatewayCommandContext,
    routes: list[GatewayCommandRoute],
    emit: GatewayEmit,
) -> bool:
    text = str(context.message_text or "").strip()
    if not text:
        return False

    route = next((row for row in routes if row.matcher(text)), None)
    scope_key = route.scope_key if route is not None else _derive_unknown_scope(text)
    command_name = route.name if route is not None else _first_token(text).lstrip(".")

    event = await sync_to_async(GatewayCommandEvent.objects.create)(
        user=context.user,
        source_message=context.source_message,
        service=str(context.service or "").strip().lower() or "xmpp",
        channel_identifier=str(context.channel_identifier or "").strip(),
        sender_identifier=str(context.sender_identifier or "").strip(),
        scope_key=scope_key,
        command_name=command_name,
        command_text=text,
        status="pending",
        request_meta={
            "payload": dict(context.payload or {}),
            "message_meta": dict(context.message_meta or {}),
        },
    )

    if route is None:
        event.status = "ignored"
        event.error = "unmatched_gateway_command"
        await sync_to_async(event.save)(update_fields=["status", "error", "updated_at"])
        return False

    decision = await sync_to_async(evaluate_command_policy)(
        user=context.user,
        scope_key=scope_key,
        context=CommandSecurityContext(
            service=context.service,
            channel_identifier=context.channel_identifier,
            message_meta=dict(context.message_meta or {}),
            payload=dict(context.payload or {}),
        ),
    )
    if not decision.allowed:
        message = (
            f"blocked by policy: {decision.code}"
            if not decision.reason
            else f"blocked by policy: {decision.reason}"
        )
        emit(message)
        event.status = "blocked"
        event.error = f"{decision.code}:{decision.reason}"
        event.response_meta = {
            "policy_code": decision.code,
            "policy_reason": decision.reason,
        }
        await sync_to_async(event.save)(
            update_fields=["status", "error", "response_meta", "updated_at"]
        )
        return True

    responses: list[str] = []

    def _captured_emit(value: str) -> None:
        row = str(value or "")
        responses.append(row)
        emit(row)

    try:
        handled = await route.handler(context, _captured_emit)
    except Exception as exc:
        event.status = "failed"
        event.error = f"handler_exception:{exc}"
        event.response_meta = {"responses": responses}
        await sync_to_async(event.save)(
            update_fields=["status", "error", "response_meta", "updated_at"]
        )
        return True

    event.status = "ok" if handled else "ignored"
    event.response_meta = {"responses": responses}
    await sync_to_async(event.save)(
        update_fields=["status", "response_meta", "updated_at"]
    )
    return bool(handled)
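For input that matches no route, the dispatcher still records an audit scope derived from the first token. The derivation mirrors `_derive_unknown_scope` above and can be sketched without any Django dependencies:

```python
def first_token(text: str) -> str:
    # Lowercased first whitespace-separated token, or "" for blank input.
    body = str(text or "").strip()
    return body.split()[0].lower() if body else ""


def derive_unknown_scope(text: str) -> str:
    # ".frobnicate now" -> "gateway.frobnicate"; blank input -> "gateway.message"
    token = first_token(text).lstrip(".")
    return f"gateway.{token or 'message'}"
```

This keeps unmatched commands attributable in `GatewayCommandEvent` rows even though no handler ever runs.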
128 core/management/commands/backfill_contact_availability.py Normal file
@@ -0,0 +1,128 @@
from __future__ import annotations

from typing import Iterable

from django.core.management.base import BaseCommand

from core.events.ledger import append_event_sync
from core.models import Message
from core.presence.inference import now_ms


class Command(BaseCommand):
    help = (
        "Backfill behavioral event ledger rows from historical message and "
        "read-receipt activity."
    )

    def add_arguments(self, parser):
        parser.add_argument("--days", type=int, default=30)
        parser.add_argument("--limit", type=int, default=5000)
        parser.add_argument("--service", default="")
        parser.add_argument("--user-id", default="")
        parser.add_argument("--dry-run", action="store_true", default=False)

    def _iter_messages(
        self, *, days: int, limit: int, service: str, user_id: str
    ) -> Iterable[Message]:
        cutoff_ts = now_ms() - (max(1, int(days)) * 24 * 60 * 60 * 1000)
        qs = Message.objects.filter(ts__gte=cutoff_ts).select_related(
            "user", "session", "session__identifier", "session__identifier__person"
        )
        if service:
            qs = qs.filter(source_service=str(service).strip().lower())
        if user_id:
            qs = qs.filter(user_id=str(user_id).strip())
        return qs.order_by("ts")[: max(1, int(limit))]

    def handle(self, *args, **options):
        days = max(1, int(options.get("days") or 30))
        limit = max(1, int(options.get("limit") or 5000))
        service_filter = str(options.get("service") or "").strip().lower()
        user_filter = str(options.get("user_id") or "").strip()
        dry_run = bool(options.get("dry_run"))

        indexed = 0
        scanned = 0

        for msg in self._iter_messages(
            days=days, limit=limit, service=service_filter, user_id=user_filter
        ):
            scanned += 1
            session = getattr(msg, "session", None)
            identifier = getattr(session, "identifier", None)
            person = getattr(identifier, "person", None)
            user = getattr(msg, "user", None)
            if not session or not identifier or not person or not user:
                continue

            service = (
                str(getattr(msg, "source_service", "") or identifier.service or "")
                .strip()
                .lower()
            )
            if not service:
                continue

            author = str(getattr(msg, "custom_author", "") or "").strip().upper()
            outgoing = author in {"USER", "BOT"}
            message_id = str(
                getattr(msg, "source_message_id", "") or f"django-message-{msg.id}"
            ).strip()

            if not dry_run:
                append_event_sync(
                    user=user,
                    session=session,
                    ts=int(getattr(msg, "ts", 0) or 0),
                    event_type="message_created",
                    direction="out" if outgoing else "in",
                    actor_identifier=str(
                        getattr(msg, "sender_uuid", "") or identifier.identifier or ""
                    ),
                    origin_transport=service,
                    origin_message_id=message_id,
                    origin_chat_id=str(getattr(msg, "source_chat_id", "") or ""),
                    payload={
                        "origin": "backfill_contact_availability",
                        "message_id": str(msg.id),
                        "text": str(getattr(msg, "text", "") or ""),
                        "outgoing": outgoing,
                    },
                )
            indexed += 1

            read_ts = int(getattr(msg, "read_ts", 0) or 0)
            if read_ts <= 0:
                continue
            if not dry_run:
                append_event_sync(
                    user=user,
                    session=session,
                    ts=read_ts,
                    event_type="read_receipt",
                    direction="system",
                    actor_identifier=str(
                        getattr(msg, "read_by_identifier", "") or identifier.identifier
                    ),
                    origin_transport=service,
                    origin_message_id=message_id,
                    origin_chat_id=str(getattr(msg, "source_chat_id", "") or ""),
                    payload={
                        "origin": "backfill_contact_availability",
                        "message_id": str(msg.id),
                        "message_ts": int(getattr(msg, "ts", 0) or 0),
                        "read_by": str(
                            getattr(msg, "read_by_identifier", "") or ""
                        ).strip(),
                    },
                )
            indexed += 1

        self.stdout.write(
            self.style.SUCCESS(
                "backfill_contact_availability complete "
                f"scanned={scanned} indexed={indexed} dry_run={dry_run} "
                f"days={days} limit={limit}"
            )
        )
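The backfill window is computed in epoch milliseconds with the day count clamped to at least one. The cutoff arithmetic, extracted as a pure function for clarity:

```python
def cutoff_ms(now_ms: int, days: int) -> int:
    # Clamp to at least one day, then subtract the window in milliseconds.
    return now_ms - (max(1, int(days)) * 24 * 60 * 60 * 1000)
```

Any `Message` row with `ts >= cutoff_ms(...)` falls inside the backfill window; `--days 0` behaves the same as `--days 1` because of the clamp.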
328 core/management/commands/codex_worker.py Normal file
@@ -0,0 +1,328 @@
from __future__ import annotations

import time
import uuid

from asgiref.sync import async_to_sync
from django.core.management.base import BaseCommand

from core.clients.transport import send_message_raw
from core.models import (
    CodexPermissionRequest,
    CodexRun,
    ExternalSyncEvent,
    TaskProviderConfig,
)
from core.tasks.providers import get_provider
from core.util import logs

log = logs.get_logger("codex_worker")


class Command(BaseCommand):
    help = (
        "Process queued external sync events for worker-backed providers (codex_cli)."
    )

    def add_arguments(self, parser):
        parser.add_argument("--once", action="store_true", default=False)
        parser.add_argument("--sleep-seconds", type=float, default=2.0)
        parser.add_argument("--batch-size", type=int, default=20)
        parser.add_argument("--provider", default="codex_cli")

    def _claim_batch(self, provider: str, batch_size: int) -> list[str]:
        ids: list[str] = []
        rows = list(
            ExternalSyncEvent.objects.filter(
                provider=provider,
                status__in=["pending", "retrying"],
            )
            .order_by("updated_at")[: max(1, batch_size)]
            .values_list("id", flat=True)
        )
        for row_id in rows:
            updated = ExternalSyncEvent.objects.filter(
                id=row_id,
                provider=provider,
                status__in=["pending", "retrying"],
            ).update(status="retrying")
            if updated:
                ids.append(str(row_id))
        return ids

    def _run_event(self, event: ExternalSyncEvent) -> None:
        provider = get_provider(event.provider)
        if not bool(getattr(provider, "run_in_worker", False)):
            return

        cfg = (
            TaskProviderConfig.objects.filter(
                user=event.user,
                provider=event.provider,
                enabled=True,
            )
            .order_by("-updated_at")
            .first()
        )
        if cfg is None:
            event.status = "failed"
            event.error = "provider_disabled_or_missing"
            event.save(update_fields=["status", "error", "updated_at"])
            provider_payload = dict((event.payload or {}).get("provider_payload") or {})
            run_id = str(provider_payload.get("codex_run_id") or "").strip()
            if run_id:
                CodexRun.objects.filter(id=run_id, user=event.user).update(
                    status="failed",
                    error="provider_disabled_or_missing",
                )
            return

        payload = dict(event.payload or {})
        action = str(payload.get("action") or "append_update").strip().lower()
        provider_payload = dict(payload.get("provider_payload") or payload)
        run_id = str(
            provider_payload.get("codex_run_id") or payload.get("codex_run_id") or ""
        ).strip()
        codex_run = None
        if run_id:
            codex_run = CodexRun.objects.filter(id=run_id, user=event.user).first()
        if codex_run is None and event.task_id:
            codex_run = (
                CodexRun.objects.filter(
                    user=event.user,
                    task_id=event.task_id,
                    status__in=["queued", "running", "approved_waiting_resume"],
                )
                .order_by("-updated_at")
                .first()
            )
        if codex_run is not None:
            codex_run.status = "running"
            codex_run.error = ""
            codex_run.save(update_fields=["status", "error", "updated_at"])

        if action == "create":
            result = provider.create_task(dict(cfg.settings or {}), provider_payload)
        elif action == "complete":
            result = provider.mark_complete(dict(cfg.settings or {}), provider_payload)
        elif action == "link_task":
            result = provider.link_task(dict(cfg.settings or {}), provider_payload)
        else:
            result = provider.append_update(dict(cfg.settings or {}), provider_payload)

        result_payload = dict(result.payload or {})
        requires_approval = bool(result_payload.get("requires_approval"))
        if requires_approval:
            approval_key = str(
                result_payload.get("approval_key") or uuid.uuid4().hex[:12]
            ).strip()
            permission_request = dict(result_payload.get("permission_request") or {})
            summary = str(
                result_payload.get("summary") or permission_request.get("summary") or ""
            ).strip()
            requested_permissions = permission_request.get("requested_permissions")
            if not isinstance(requested_permissions, (list, dict)):
                requested_permissions = permission_request or {}
            resume_payload = result_payload.get("resume_payload")
            if not isinstance(resume_payload, dict):
                resume_payload = {}
            event.status = "waiting_approval"
            event.error = ""
            event.payload = dict(payload, worker_processed=True, result=result_payload)
            event.save(update_fields=["status", "error", "payload", "updated_at"])
            if codex_run is not None:
                codex_run.status = "waiting_approval"
                codex_run.result_payload = dict(result_payload)
                codex_run.error = ""
                codex_run.save(
                    update_fields=["status", "result_payload", "error", "updated_at"]
                )
            CodexPermissionRequest.objects.update_or_create(
                approval_key=approval_key,
                defaults={
                    "user": event.user,
                    "codex_run": (
                        codex_run
                        if codex_run is not None
                        else CodexRun.objects.create(
                            user=event.user,
                            task=event.task,
                            derived_task_event=event.task_event,
                            source_service=str(
                                provider_payload.get("source_service") or ""
                            ),
                            source_channel=str(
                                provider_payload.get("source_channel") or ""
                            ),
                            external_chat_id=str(
                                provider_payload.get("external_chat_id") or ""
                            ),
                            status="waiting_approval",
                            request_payload=dict(payload or {}),
                            result_payload=dict(result_payload),
                            error="",
                        )
                    ),
                    "external_sync_event": event,
                    "summary": summary,
                    "requested_permissions": (
                        requested_permissions
                        if isinstance(requested_permissions, dict)
                        else {"items": list(requested_permissions or [])}
                    ),
                    "resume_payload": dict(resume_payload or {}),
                    "status": "pending",
                    "resolved_at": None,
                    "resolved_by_identifier": "",
                    "resolution_note": "",
                },
            )
            approver_service = (
                str((cfg.settings or {}).get("approver_service") or "").strip().lower()
            )
            approver_identifier = str(
                (cfg.settings or {}).get("approver_identifier") or ""
            ).strip()
            requested_text = (
                result_payload.get("permission_request")
                or result_payload.get("requested_permissions")
                or {}
            )
            if approver_service and approver_identifier:
                try:
                    async_to_sync(send_message_raw)(
                        approver_service,
                        approver_identifier,
                        text=(
                            f"[codex approval] key={approval_key}\n"
                            f"summary={summary or 'Codex run requires approval'}\n"
                            f"requested={requested_text}\n"
                            f"use: .codex approve {approval_key} or .codex deny {approval_key}"
                        ),
                        attachments=[],
                        metadata={"origin_tag": f"codex-approval:{approval_key}"},
                    )
                except Exception:
                    log.exception(
                        "failed to notify approver channel for approval_key=%s",
                        approval_key,
                    )
            else:
                source_service = (
                    str(provider_payload.get("source_service") or "").strip().lower()
                )
                source_channel = str(
                    provider_payload.get("source_channel") or ""
                ).strip()
                if source_service and source_channel:
                    try:
                        async_to_sync(send_message_raw)(
                            source_service,
                            source_channel,
                            text=(
                                "[codex approval] approval is pending but no approver channel is configured. "
                                "Set approver_service and approver_identifier in Codex settings."
                            ),
                            attachments=[],
                            metadata={"origin_tag": "codex-approval-missing-target"},
                        )
                    except Exception:
                        log.exception(
                            "failed to notify source channel for missing approver target"
                        )
            return

        event.status = "ok" if result.ok else "failed"
        event.error = str(result.error or "")
        event.payload = dict(
            payload,
            worker_processed=True,
            result=result_payload,
        )
        event.save(update_fields=["status", "error", "payload", "updated_at"])

        mode = str(provider_payload.get("mode") or "").strip().lower()
        approval_key = str(provider_payload.get("approval_key") or "").strip()
        if mode == "approval_response" and approval_key:
            req = (
                CodexPermissionRequest.objects.select_related(
                    "external_sync_event", "codex_run"
                )
                .filter(user=event.user, approval_key=approval_key)
                .first()
            )
            if req and req.external_sync_event_id:
                if result.ok:
                    ExternalSyncEvent.objects.filter(
                        id=req.external_sync_event_id
                    ).update(
                        status="ok",
                        error="",
                    )
                elif str(event.error or "").strip() == "approval_denied":
                    ExternalSyncEvent.objects.filter(
                        id=req.external_sync_event_id
                    ).update(
                        status="failed",
                        error="approval_denied",
                    )
        if codex_run is not None:
            codex_run.status = "ok" if result.ok else "failed"
            codex_run.error = str(result.error or "")
            codex_run.result_payload = result_payload
            codex_run.save(
                update_fields=["status", "error", "result_payload", "updated_at"]
            )

        if (
            result.ok
            and result.external_key
            and event.task_id
            and not str(event.task.external_key or "").strip()
        ):
            event.task.external_key = str(result.external_key)
            event.task.save(update_fields=["external_key"])

    def handle(self, *args, **options):
        once = bool(options.get("once"))
        sleep_seconds = max(0.2, float(options.get("sleep_seconds") or 2.0))
        batch_size = max(1, int(options.get("batch_size") or 20))
        provider_name = str(options.get("provider") or "codex_cli").strip().lower()

        log.info(
            "codex_worker started provider=%s once=%s sleep=%s batch_size=%s",
            provider_name,
            once,
            sleep_seconds,
            batch_size,
        )

        while True:
            claimed_ids = self._claim_batch(provider_name, batch_size)
            if not claimed_ids:
                if once:
                    log.info("codex_worker exiting: no pending events")
                    return
                time.sleep(sleep_seconds)
                continue

            for row_id in claimed_ids:
                event = (
                    ExternalSyncEvent.objects.filter(id=row_id)
                    .select_related("task", "user")
                    .first()
                )
                if event is None:
                    continue
                try:
                    self._run_event(event)
                except Exception as exc:
                    log.exception("codex_worker failed processing id=%s", row_id)
                    ExternalSyncEvent.objects.filter(id=row_id).update(
                        status="failed",
                        error=f"worker_exception:{exc}",
                    )

            if once:
                log.info("codex_worker processed %s event(s)", len(claimed_ids))
                return
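`_claim_batch` claims each row with a conditional status update, so two concurrent workers never process the same event: only the worker whose UPDATE matched the claimable statuses takes the row. The idea, sketched against a plain in-memory dict standing in for the table (illustrative only, not the project's API):

```python
def claim_batch(store: dict[str, str], batch_size: int) -> list[str]:
    # Claim up to batch_size ids whose status is still claimable; flipping the
    # status to "retrying" marks the row as taken, mirroring the conditional
    # UPDATE ... WHERE status IN ('pending', 'retrying') in the real worker.
    claimed: list[str] = []
    for row_id, status in list(store.items()):
        if len(claimed) >= batch_size:
            break
        if status in {"pending", "retrying"}:
            store[row_id] = "retrying"
            claimed.append(row_id)
    return claimed
```

In the database version the check-and-flip is a single atomic UPDATE, which is what makes the claim safe across processes; the dict sketch only shows the control flow.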
117 core/management/commands/event_ledger_smoke.py Normal file
@@ -0,0 +1,117 @@
from __future__ import annotations

import json
import time

from django.core.management.base import BaseCommand, CommandError

from core.events.manticore import get_recent_event_rows
from core.models import ConversationEvent


class Command(BaseCommand):
    help = "Quick non-mutating sanity check for recent canonical event writes."

    def _recent_rows(self, *, minutes: int, service: str, user_id: str, limit: int):
        cutoff_ts = int(time.time() * 1000) - (minutes * 60 * 1000)
        queryset = ConversationEvent.objects.filter(ts__gte=cutoff_ts).order_by("-ts")
        if service:
            queryset = queryset.filter(origin_transport=service)
        if user_id:
            queryset = queryset.filter(user_id=user_id)

        rows = list(
            queryset.values(
                "id",
                "user_id",
                "session_id",
                "ts",
                "event_type",
                "direction",
                "origin_transport",
                "trace_id",
            )[:limit]
        )
        if rows:
            return rows, "django"
        try:
            manticore_rows = get_recent_event_rows(
                minutes=minutes,
                service=service,
                user_id=user_id,
                limit=limit,
            )
        except Exception:
            manticore_rows = []
        return manticore_rows, "manticore" if manticore_rows else "django"

    def add_arguments(self, parser):
        parser.add_argument("--minutes", type=int, default=120)
        parser.add_argument("--service", default="")
        parser.add_argument("--user-id", default="")
        parser.add_argument("--limit", type=int, default=200)
        parser.add_argument("--require-types", default="")
        parser.add_argument("--fail-if-empty", action="store_true", default=False)
        parser.add_argument("--json", action="store_true", default=False)

    def handle(self, *args, **options):
        minutes = max(1, int(options.get("minutes") or 120))
        service = str(options.get("service") or "").strip().lower()
        user_id = str(options.get("user_id") or "").strip()
        limit = max(1, int(options.get("limit") or 200))
        require_types_raw = str(options.get("require_types") or "").strip()
        fail_if_empty = bool(options.get("fail_if_empty"))
        as_json = bool(options.get("json"))
        required_types = [
            item.strip().lower()
            for item in require_types_raw.split(",")
            if item.strip()
        ]

        rows, data_source = self._recent_rows(
            minutes=minutes,
            service=service,
            user_id=user_id,
            limit=limit,
        )
        event_type_counts = {}
        for row in rows:
            key = str(row.get("event_type") or "")
            event_type_counts[key] = int(event_type_counts.get(key) or 0) + 1
        missing_required_types = [
            event_type
            for event_type in required_types
            if int(event_type_counts.get(event_type) or 0) <= 0
        ]

        payload = {
            "minutes": minutes,
            "service": service,
            "user_id": user_id,
            "data_source": data_source,
            "count": len(rows),
            "event_type_counts": event_type_counts,
            "required_types": required_types,
            "missing_required_types": missing_required_types,
            "sample": rows[:25],
        }

        if as_json:
            self.stdout.write(json.dumps(payload, indent=2, sort_keys=True))
            return

        self.stdout.write(
            f"event-ledger-smoke minutes={minutes} service={service or '-'} "
            f"user={user_id or '-'} source={data_source} count={len(rows)}"
        )
        self.stdout.write(f"event_type_counts={event_type_counts}")
        if required_types:
            self.stdout.write(
                f"required_types={required_types} "
                f"missing_required_types={missing_required_types}"
            )

        if fail_if_empty and len(rows) == 0:
            raise CommandError("No recent canonical event rows found.")
        if missing_required_types:
            raise CommandError(
                "Missing required event types: " + ", ".join(missing_required_types)
            )
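The smoke check tallies event types and fails when a required type has zero hits. The core of that check, extracted as a standalone function (names here are illustrative):

```python
def missing_types(rows: list[dict], required: list[str]) -> list[str]:
    # Count occurrences of each event_type across the sampled rows.
    counts: dict[str, int] = {}
    for row in rows:
        key = str(row.get("event_type") or "")
        counts[key] = counts.get(key, 0) + 1
    # Any required type that never appeared is reported as missing.
    return [t for t in required if counts.get(t, 0) <= 0]
```

A non-empty return value is what drives the `CommandError` at the end of `handle`, which makes the command usable as a CI gate via its exit code.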
136 core/management/commands/event_projection_shadow.py Normal file
@@ -0,0 +1,136 @@
from __future__ import annotations

import json
import time

from django.core.management.base import BaseCommand, CommandError

from core.events.projection import shadow_compare_session
from core.models import ChatSession, Message


class Command(BaseCommand):
    help = (
        "Run event->message shadow projection comparison and emit mismatch counters "
        "per chat session."
    )

    def add_arguments(self, parser):
        parser.add_argument("--user-id", default="")
        parser.add_argument("--session-id", default="")
        parser.add_argument("--service", default="")
        parser.add_argument("--recent-only", action="store_true", default=False)
        parser.add_argument("--recent-minutes", type=int, default=0)
        parser.add_argument("--limit-sessions", type=int, default=50)
        parser.add_argument("--detail-limit", type=int, default=25)
        parser.add_argument("--fail-on-mismatch", action="store_true", default=False)
        parser.add_argument("--json", action="store_true", default=False)

    def handle(self, *args, **options):
        user_id = str(options.get("user_id") or "").strip()
        session_id = str(options.get("session_id") or "").strip()
        service = str(options.get("service") or "").strip().lower()
        recent_only = bool(options.get("recent_only"))
        recent_minutes = max(0, int(options.get("recent_minutes") or 0))
        if recent_only and recent_minutes <= 0:
            recent_minutes = 120
        limit_sessions = max(1, int(options.get("limit_sessions") or 50))
        detail_limit = max(0, int(options.get("detail_limit") or 25))
        as_json = bool(options.get("json"))
        fail_on_mismatch = bool(options.get("fail_on_mismatch"))

        sessions = ChatSession.objects.all().order_by("-last_interaction", "id")
        if user_id:
            sessions = sessions.filter(user_id=user_id)
        if session_id:
            sessions = sessions.filter(id=session_id)
        if service:
            sessions = sessions.filter(identifier__service=service)
        if recent_minutes > 0:
            cutoff_ts = int(time.time() * 1000) - (recent_minutes * 60 * 1000)
            recent_session_ids = (
                Message.objects.filter(ts__gte=cutoff_ts)
                .values_list("session_id", flat=True)
                .distinct()
            )
            sessions = sessions.filter(id__in=recent_session_ids)
        sessions = list(sessions.select_related("user", "identifier")[:limit_sessions])

        if not sessions:
            raise CommandError("No chat sessions matched.")

        aggregate = {
            "sessions_scanned": 0,
            "db_message_count": 0,
            "projected_message_count": 0,
            "mismatch_total": 0,
            "counters": {
                "missing_in_projection": 0,
                "missing_in_db": 0,
                "text_mismatch": 0,
                "ts_mismatch": 0,
                "delivered_ts_mismatch": 0,
                "read_ts_mismatch": 0,
                "reactions_mismatch": 0,
            },
            "cause_counts": {
                "missing_event_write": 0,
                "ambiguous_reaction_target": 0,
                "payload_normalization_gap": 0,
            },
        }
        results = []

        for session in sessions:
            compared = shadow_compare_session(session, detail_limit=detail_limit)
            aggregate["sessions_scanned"] += 1
            aggregate["db_message_count"] += int(compared.get("db_message_count") or 0)
            aggregate["projected_message_count"] += int(
                compared.get("projected_message_count") or 0
            )
            aggregate["mismatch_total"] += int(compared.get("mismatch_total") or 0)
            for key in aggregate["counters"].keys():
                aggregate["counters"][key] += int(
                    (compared.get("counters") or {}).get(key) or 0
                )
            for key in aggregate["cause_counts"].keys():
                aggregate["cause_counts"][key] += int(
                    (compared.get("cause_counts") or {}).get(key) or 0
                )
            results.append(compared)

        payload = {
            "filters": {
                "user_id": user_id,
                "session_id": session_id,
                "service": service,
                "recent_only": recent_only,
                "recent_minutes": recent_minutes,
                "limit_sessions": limit_sessions,
                "detail_limit": detail_limit,
            },
            "aggregate": aggregate,
            "sessions": results,
        }
        if as_json:
            self.stdout.write(json.dumps(payload, indent=2, sort_keys=True))
        else:
            self.stdout.write(
                "shadow compare: "
                f"sessions={aggregate['sessions_scanned']} "
                f"db={aggregate['db_message_count']} "
                f"projected={aggregate['projected_message_count']} "
                f"mismatches={aggregate['mismatch_total']}"
            )
            self.stdout.write(f"counters={aggregate['counters']}")
            self.stdout.write(f"cause_counts={aggregate['cause_counts']}")
            for row in results:
|
||||
self.stdout.write(
|
||||
f"session={row.get('session_id')} mismatch_total={row.get('mismatch_total')} "
|
||||
f"db={row.get('db_message_count')} projected={row.get('projected_message_count')}"
|
||||
)
|
||||
|
||||
if fail_on_mismatch and int(aggregate["mismatch_total"] or 0) > 0:
|
||||
raise CommandError(
|
||||
f"Shadow projection mismatch detected: {aggregate['mismatch_total']}"
|
||||
)
|
||||
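The nested counter merge used when accumulating per-session results can be sketched in isolation. This is an illustrative helper with hypothetical data, not part of the codebase; it mirrors the `(compared.get("counters") or {}).get(key) or 0` tolerance for missing keys and `None` values:

```python
def merge_counters(aggregate: dict, compared: dict) -> dict:
    # Add each per-session counter into the running aggregate,
    # tolerating a missing/None "counters" dict and None values.
    for key in aggregate:
        aggregate[key] += int((compared.get("counters") or {}).get(key) or 0)
    return aggregate


aggregate = {"text_mismatch": 0, "ts_mismatch": 0}
merge_counters(aggregate, {"counters": {"text_mismatch": 2}})
merge_counters(aggregate, {"counters": None})
print(aggregate)  # {'text_mismatch': 2, 'ts_mismatch': 0}
```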
96
core/management/commands/gia_analysis.py
Normal file
@@ -0,0 +1,96 @@
from __future__ import annotations

import time

from django.core.management.base import BaseCommand

from core.events.behavior import summarize_metrics
from core.events.manticore import get_event_ledger_backend
from core.util import logs

log = logs.get_logger("gia_analysis")


class Command(BaseCommand):
    help = "Compute behavioral metrics from Manticore event rows into gia_metrics."

    def add_arguments(self, parser):
        parser.add_argument("--once", action="store_true", default=False)
        parser.add_argument("--user-id", type=int)
        parser.add_argument("--person-id")
        parser.add_argument("--sleep-seconds", type=float, default=60.0)
        parser.add_argument("--window-days", nargs="*", type=int, default=[1, 7, 30, 90])

    def _run_cycle(
        self,
        *,
        user_id: int | None = None,
        person_id: str = "",
        window_days: list[int] | None = None,
    ) -> int:
        backend = get_event_ledger_backend()
        now_ms = int(time.time() * 1000)
        baseline_since = now_ms - (90 * 86400000)
        windows = sorted({max(1, int(value)) for value in list(window_days or [1, 7, 30, 90])})

        targets = backend.list_event_targets(user_id=user_id)
        if person_id:
            targets = [
                row
                for row in targets
                if str(row.get("person_id") or "").strip() == str(person_id).strip()
            ]

        written = 0
        for target in targets:
            target_user_id = int(target.get("user_id") or 0)
            target_person_id = str(target.get("person_id") or "").strip()
            if target_user_id <= 0 or not target_person_id:
                continue
            baseline_rows = backend.fetch_events(
                user_id=target_user_id,
                person_id=target_person_id,
                since_ts=baseline_since,
            )
            if not baseline_rows:
                continue
            for window in windows:
                since_ts = now_ms - (int(window) * 86400000)
                window_rows = [
                    row
                    for row in baseline_rows
                    if int(row.get("ts") or 0) >= since_ts
                ]
                metrics = summarize_metrics(window_rows, baseline_rows)
                for metric, values in metrics.items():
                    backend.upsert_metric(
                        user_id=target_user_id,
                        person_id=target_person_id,
                        window_days=int(window),
                        metric=metric,
                        value_ms=int(values.get("value_ms") or 0),
                        baseline_ms=int(values.get("baseline_ms") or 0),
                        z_score=float(values.get("z_score") or 0.0),
                        sample_n=int(values.get("sample_n") or 0),
                        computed_at=now_ms,
                    )
                    written += 1
        return written

    def handle(self, *args, **options):
        once = bool(options.get("once"))
        sleep_seconds = max(1.0, float(options.get("sleep_seconds") or 60.0))
        user_id = options.get("user_id")
        person_id = str(options.get("person_id") or "").strip()
        window_days = list(options.get("window_days") or [1, 7, 30, 90])

        while True:
            written = self._run_cycle(
                user_id=user_id,
                person_id=person_id,
                window_days=window_days,
            )
            self.stdout.write(f"gia-analysis wrote={written}")
            if once:
                return
            time.sleep(sleep_seconds)
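`_run_cycle` fetches the 90-day baseline once and then re-slices it per window rather than re-querying the backend. A minimal sketch of that slicing, with hypothetical timestamps (`DAY_MS` and `window_rows` are illustrative names, not codebase symbols):

```python
# Rows fetched once for the 90-day baseline are re-sliced per window,
# mirroring the `ts >= since_ts` filter in _run_cycle.
DAY_MS = 86400000
now_ms = 100 * DAY_MS
baseline_rows = [{"ts": now_ms - d * DAY_MS} for d in (0, 3, 10, 40)]


def window_rows(rows, window_days, now):
    since_ts = now - window_days * DAY_MS
    return [row for row in rows if int(row.get("ts") or 0) >= since_ts]


print(len(window_rows(baseline_rows, 7, now_ms)))   # 2
print(len(window_rows(baseline_rows, 30, now_ms)))  # 3
```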
46
core/management/commands/manticore_backfill.py
Normal file
@@ -0,0 +1,46 @@
from __future__ import annotations

from django.core.management.base import BaseCommand, CommandError

from core.events.manticore import upsert_conversation_event
from core.models import ConversationEvent


class Command(BaseCommand):
    help = "Backfill behavioral events into Manticore from ConversationEvent rows."

    def add_arguments(self, parser):
        parser.add_argument(
            "--from-conversation-events",
            action="store_true",
            help="Replay ConversationEvent rows into the Manticore event table.",
        )
        parser.add_argument("--user-id", type=int, default=None)
        parser.add_argument("--limit", type=int, default=5000)

    def handle(self, *args, **options):
        if not bool(options.get("from_conversation_events")):
            raise CommandError("Pass --from-conversation-events to run this backfill.")

        queryset = (
            ConversationEvent.objects.select_related("session__identifier")
            .order_by("ts", "created_at")
        )
        user_id = options.get("user_id")
        if user_id is not None:
            queryset = queryset.filter(user_id=int(user_id))

        scanned = 0
        indexed = 0
        limit = max(1, int(options.get("limit") or 5000))
        for event in queryset[:limit]:
            scanned += 1
            upsert_conversation_event(event)
            indexed += 1

        self.stdout.write(
            self.style.SUCCESS(
                "manticore-backfill scanned=%s indexed=%s user=%s"
                % (scanned, indexed, user_id if user_id is not None else "-")
            )
        )
11
core/management/commands/mcp_manticore_server.py
Normal file
@@ -0,0 +1,11 @@
from django.core.management.base import BaseCommand

from core.mcp.server import run_stdio_server


class Command(BaseCommand):
    help = "Run GIA MCP stdio server with manticore/task/documentation tools."

    def handle(self, *args, **options):
        _ = args, options
        run_stdio_server()
40
core/management/commands/memory_hygiene.py
Normal file
@@ -0,0 +1,40 @@
from __future__ import annotations

import json

from django.core.management.base import BaseCommand

from core.memory.pipeline import run_memory_hygiene


class Command(BaseCommand):
    help = "Run memory hygiene checks (stale decay + contradiction queueing)."

    def add_arguments(self, parser):
        parser.add_argument("--user-id", default="")
        parser.add_argument("--dry-run", action="store_true", default=False)
        parser.add_argument("--json", action="store_true", default=False)

    def handle(self, *args, **options):
        user_id_raw = str(options.get("user_id") or "").strip()
        dry_run = bool(options.get("dry_run"))
        as_json = bool(options.get("json"))
        user_id = int(user_id_raw) if user_id_raw else None

        result = run_memory_hygiene(user_id=user_id, dry_run=dry_run)
        payload = {
            "user_id": user_id,
            "dry_run": dry_run,
            "result": result,
        }
        if as_json:
            self.stdout.write(json.dumps(payload, indent=2, sort_keys=True))
            return
        self.stdout.write(
            "memory-hygiene "
            f"user={user_id if user_id is not None else '-'} "
            f"dry_run={'yes' if dry_run else 'no'} "
            f"expired={int(result.get('expired') or 0)} "
            f"contradictions={int(result.get('contradictions') or 0)} "
            f"queued={int(result.get('queued_requests') or 0)}"
        )
72
core/management/commands/memory_search_query.py
Normal file
@@ -0,0 +1,72 @@
from __future__ import annotations

import json

from django.core.management.base import BaseCommand, CommandError

from core.memory.search_backend import get_memory_search_backend


class Command(BaseCommand):
    help = "Run a query against configured memory search backend."

    def add_arguments(self, parser):
        parser.add_argument("--user-id", required=True)
        parser.add_argument("--query", required=True)
        parser.add_argument("--conversation-id", default="")
        parser.add_argument("--statuses", default="active")
        parser.add_argument("--limit", type=int, default=20)
        parser.add_argument("--json", action="store_true", default=False)

    def handle(self, *args, **options):
        user_id_raw = str(options.get("user_id") or "").strip()
        query = str(options.get("query") or "").strip()
        conversation_id = str(options.get("conversation_id") or "").strip()
        statuses = tuple(
            item.strip().lower()
            for item in str(options.get("statuses") or "active").split(",")
            if item.strip()
        )
        limit = max(1, int(options.get("limit") or 20))
        as_json = bool(options.get("json"))

        if not user_id_raw:
            raise CommandError("--user-id is required")
        if not query:
            raise CommandError("--query is required")

        backend = get_memory_search_backend()
        hits = backend.search(
            user_id=int(user_id_raw),
            query=query,
            conversation_id=conversation_id,
            limit=limit,
            include_statuses=statuses,
        )
        payload = {
            "backend": getattr(backend, "name", "unknown"),
            "query": query,
            "user_id": int(user_id_raw),
            "conversation_id": conversation_id,
            "statuses": statuses,
            "count": len(hits),
            "hits": [
                {
                    "memory_id": item.memory_id,
                    "score": item.score,
                    "summary": item.summary,
                    "payload": item.payload,
                }
                for item in hits
            ],
        }
        if as_json:
            self.stdout.write(json.dumps(payload, indent=2, sort_keys=True))
            return
        self.stdout.write(
            f"memory-search-query backend={payload['backend']} count={payload['count']} query={query!r}"
        )
        for row in payload["hits"]:
            self.stdout.write(
                f"- id={row['memory_id']} score={row['score']:.2f} summary={row['summary'][:120]}"
            )
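Both this command and `memory_search_reindex` normalize `--statuses` the same way: comma split, trimmed, lowercased, empties dropped, falling back to `"active"`. A minimal sketch (`parse_statuses` is an illustrative helper name):

```python
def parse_statuses(raw: str) -> tuple:
    # Same normalization as the --statuses option: comma split,
    # trimmed, lowercased, empties dropped, default "active".
    return tuple(
        item.strip().lower()
        for item in str(raw or "active").split(",")
        if item.strip()
    )


print(parse_statuses("Active, ARCHIVED ,"))  # ('active', 'archived')
print(parse_statuses(""))                    # ('active',)
```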
49
core/management/commands/memory_search_reindex.py
Normal file
@@ -0,0 +1,49 @@
from __future__ import annotations

import json

from django.core.management.base import BaseCommand

from core.memory.search_backend import get_memory_search_backend


class Command(BaseCommand):
    help = "Reindex MemoryItem rows into the configured memory search backend."

    def add_arguments(self, parser):
        parser.add_argument("--user-id", default="")
        parser.add_argument("--statuses", default="active")
        parser.add_argument("--limit", type=int, default=2000)
        parser.add_argument("--json", action="store_true", default=False)

    def handle(self, *args, **options):
        user_id_raw = str(options.get("user_id") or "").strip()
        statuses = tuple(
            item.strip().lower()
            for item in str(options.get("statuses") or "active").split(",")
            if item.strip()
        )
        limit = max(1, int(options.get("limit") or 2000))
        as_json = bool(options.get("json"))

        backend = get_memory_search_backend()
        result = backend.reindex(
            user_id=int(user_id_raw) if user_id_raw else None,
            include_statuses=statuses,
            limit=limit,
        )
        payload = {
            "backend": getattr(backend, "name", "unknown"),
            "user_id": user_id_raw,
            "statuses": statuses,
            "limit": limit,
            "result": result,
        }
        if as_json:
            self.stdout.write(json.dumps(payload, indent=2, sort_keys=True))
            return
        self.stdout.write(
            f"memory-search-reindex backend={payload['backend']} "
            f"user={user_id_raw or '-'} statuses={','.join(statuses) or '-'} "
            f"scanned={int(result.get('scanned') or 0)} indexed={int(result.get('indexed') or 0)}"
        )
46
core/management/commands/memory_suggest_from_messages.py
Normal file
@@ -0,0 +1,46 @@
from __future__ import annotations

import json

from django.core.management.base import BaseCommand, CommandError

from core.memory.pipeline import suggest_memories_from_recent_messages


class Command(BaseCommand):
    help = "Suggest proposed MemoryItem rows from recent inbound message text."

    def add_arguments(self, parser):
        parser.add_argument("--user-id", required=True)
        parser.add_argument("--limit-messages", type=int, default=300)
        parser.add_argument("--max-items", type=int, default=30)
        parser.add_argument("--json", action="store_true", default=False)

    def handle(self, *args, **options):
        user_id_raw = str(options.get("user_id") or "").strip()
        if not user_id_raw:
            raise CommandError("--user-id is required")
        limit_messages = max(1, int(options.get("limit_messages") or 300))
        max_items = max(1, int(options.get("max_items") or 30))
        as_json = bool(options.get("json"))

        result = suggest_memories_from_recent_messages(
            user_id=int(user_id_raw),
            limit_messages=limit_messages,
            max_items=max_items,
        )
        payload = {
            "user_id": int(user_id_raw),
            "limit_messages": limit_messages,
            "max_items": max_items,
            "result": result,
        }
        if as_json:
            self.stdout.write(json.dumps(payload, indent=2, sort_keys=True))
            return
        self.stdout.write(
            "memory-suggest-from-messages "
            f"user={payload['user_id']} "
            f"scanned={int(result.get('scanned') or 0)} "
            f"queued={int(result.get('queued') or 0)}"
        )
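A pattern repeated across these commands is `max(floor, int(raw or default))`: fall back to the default when the option is unset or zero, then clamp to a sane floor. A standalone sketch (`clamp_option` is an illustrative name):

```python
def clamp_option(raw, default, floor=1):
    # Option coercion used across these commands: fall back to the
    # default when unset/zero, then clamp to a sane floor.
    return max(floor, int(raw or default))


print(clamp_option(None, 300))  # 300
print(clamp_option(0, 300))     # 300
print(clamp_option(5, 300))     # 5
```

Note the side effect: an explicit `0` is indistinguishable from "unset" and also falls back to the default.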
62
core/management/commands/prune_behavioral_orm_data.py
Normal file
@@ -0,0 +1,62 @@
from __future__ import annotations

import time

from django.conf import settings
from django.core.management.base import BaseCommand

from core.models import ConversationEvent


class Command(BaseCommand):
    help = (
        "Prune high-growth behavioral ORM shadow tables after data has been "
        "persisted to Manticore."
    )

    def add_arguments(self, parser):
        parser.add_argument("--user-id", default="")
        parser.add_argument("--dry-run", action="store_true", default=False)
        parser.add_argument("--conversation-days", type=int)
        parser.add_argument(
            "--tables",
            default="conversation_events",
            help="Comma separated subset of: conversation_events",
        )

    def _cutoff_ms(self, days: int) -> int:
        return int(time.time() * 1000) - (max(1, int(days)) * 24 * 60 * 60 * 1000)

    def handle(self, *args, **options):
        user_id = str(options.get("user_id") or "").strip()
        dry_run = bool(options.get("dry_run"))
        conversation_days = int(
            options.get("conversation_days")
            or getattr(settings, "CONVERSATION_EVENT_RETENTION_DAYS", 90)
            or 90
        )
        selected_tables = {
            str(item or "").strip().lower()
            for item in str(options.get("tables") or "").split(",")
            if str(item or "").strip()
        }

        deleted = {
            "conversation_events": 0,
        }

        if "conversation_events" in selected_tables:
            qs = ConversationEvent.objects.filter(
                ts__lt=self._cutoff_ms(conversation_days)
            )
            if user_id:
                qs = qs.filter(user_id=user_id)
            deleted["conversation_events"] = int(qs.count() if dry_run else qs.delete()[0])

        self.stdout.write(
            "prune-behavioral-orm-data "
            f"dry_run={dry_run} "
            f"user_id={user_id or '-'} "
            f"conversation_days={conversation_days} "
            f"deleted={deleted}"
        )
161
core/management/commands/recalculate_contact_availability.py
Normal file
@@ -0,0 +1,161 @@
from __future__ import annotations

from django.core.management.base import BaseCommand

from core.events.ledger import append_event_sync
from core.models import Message
from core.presence.inference import now_ms


class Command(BaseCommand):
    help = (
        "Replay behavioral event ledger rows from persisted message, receipt, "
        "and reaction history."
    )

    def add_arguments(self, parser):
        parser.add_argument("--days", type=int, default=90)
        parser.add_argument("--limit", type=int, default=20000)
        parser.add_argument("--service", default="")
        parser.add_argument("--user-id", default="")
        parser.add_argument("--dry-run", action="store_true", default=False)
        parser.add_argument("--no-reset", action="store_true", default=False)

    def _iter_messages(self, *, days: int, limit: int, service: str, user_id: str):
        cutoff_ts = now_ms() - (max(1, int(days)) * 24 * 60 * 60 * 1000)
        qs = Message.objects.filter(ts__gte=cutoff_ts).select_related(
            "user", "session", "session__identifier", "session__identifier__person"
        )
        if service:
            qs = qs.filter(source_service=str(service).strip().lower())
        if user_id:
            qs = qs.filter(user_id=str(user_id).strip())
        return qs.order_by("ts")[: max(1, int(limit))]

    def handle(self, *args, **options):
        days = max(1, int(options.get("days") or 90))
        limit = max(1, int(options.get("limit") or 20000))
        service_filter = str(options.get("service") or "").strip().lower()
        user_filter = str(options.get("user_id") or "").strip()
        dry_run = bool(options.get("dry_run"))

        messages = list(
            self._iter_messages(
                days=days,
                limit=limit,
                service=service_filter,
                user_id=user_filter,
            )
        )
        indexed = 0

        for msg in messages:
            session = getattr(msg, "session", None)
            identifier = getattr(session, "identifier", None)
            person = getattr(identifier, "person", None)
            user = getattr(msg, "user", None)
            if not session or not identifier or not person or not user:
                continue

            service = (
                str(getattr(msg, "source_service", "") or identifier.service or "")
                .strip()
                .lower()
            )
            if not service:
                continue

            author = str(getattr(msg, "custom_author", "") or "").strip().upper()
            outgoing = author in {"USER", "BOT"}
            message_id = str(
                getattr(msg, "source_message_id", "") or f"django-message-{msg.id}"
            ).strip()

            if not dry_run:
                append_event_sync(
                    user=user,
                    session=session,
                    ts=int(getattr(msg, "ts", 0) or 0),
                    event_type="message_created",
                    direction="out" if outgoing else "in",
                    actor_identifier=str(
                        getattr(msg, "sender_uuid", "") or identifier.identifier or ""
                    ),
                    origin_transport=service,
                    origin_message_id=message_id,
                    origin_chat_id=str(getattr(msg, "source_chat_id", "") or ""),
                    payload={
                        "origin": "recalculate_contact_availability",
                        "message_id": str(msg.id),
                        "text": str(getattr(msg, "text", "") or ""),
                        "outgoing": outgoing,
                    },
                )
            indexed += 1

            read_ts = int(getattr(msg, "read_ts", 0) or 0)
            if read_ts > 0:
                if not dry_run:
                    append_event_sync(
                        user=user,
                        session=session,
                        ts=read_ts,
                        event_type="read_receipt",
                        direction="system",
                        actor_identifier=str(
                            getattr(msg, "read_by_identifier", "")
                            or identifier.identifier
                        ),
                        origin_transport=service,
                        origin_message_id=message_id,
                        origin_chat_id=str(getattr(msg, "source_chat_id", "") or ""),
                        payload={
                            "origin": "recalculate_contact_availability",
                            "message_id": str(msg.id),
                            "message_ts": int(getattr(msg, "ts", 0) or 0),
                            "read_by": str(
                                getattr(msg, "read_by_identifier", "") or ""
                            ).strip(),
                        },
                    )
                indexed += 1

            reactions = list(
                (getattr(msg, "receipt_payload", {}) or {}).get("reactions") or []
            )
            for reaction in reactions:
                item = dict(reaction or {})
                if bool(item.get("removed")):
                    continue
                reaction_ts = int(item.get("updated_at") or 0)
                if reaction_ts <= 0:
                    continue
                if not dry_run:
                    append_event_sync(
                        user=user,
                        session=session,
                        ts=reaction_ts,
                        event_type="presence_available",
                        direction="system",
                        actor_identifier=str(item.get("actor") or ""),
                        origin_transport=service,
                        origin_message_id=message_id,
                        origin_chat_id=str(getattr(msg, "source_chat_id", "") or ""),
                        payload={
                            "origin": "recalculate_contact_availability",
                            "message_id": str(msg.id),
                            "inferred_from": "reaction",
                            "emoji": str(item.get("emoji") or ""),
                            "source_service": str(item.get("source_service") or service),
                        },
                    )
                indexed += 1

        self.stdout.write(
            self.style.SUCCESS(
                "recalculate_contact_availability complete "
                f"messages_scanned={len(messages)} indexed={indexed} "
                f"dry_run={dry_run} no_reset={bool(options.get('no_reset'))} "
                f"days={days} limit={limit}"
            )
        )
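The replay classifies a message as outgoing when its `custom_author` is `USER` or `BOT`, and inbound otherwise, which drives the `direction="out"/"in"` field on the ledger event. A minimal sketch of that rule (`classify_direction` is an illustrative name):

```python
def classify_direction(custom_author: str) -> str:
    # Mirrors the replay logic: USER/BOT authors are outgoing,
    # anything else (including empty) is treated as inbound.
    author = str(custom_author or "").strip().upper()
    return "out" if author in {"USER", "BOT"} else "in"


print(classify_direction("user"))   # out
print(classify_direction("OTHER"))  # in
print(classify_direction(""))       # in
```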
468
core/management/commands/reconcile_workspace_metric_history.py
Normal file
@@ -0,0 +1,468 @@
from __future__ import annotations

import statistics
from datetime import datetime, timezone

from django.core.management.base import BaseCommand
from django.utils import timezone as dj_timezone

from core.models import (
    Message,
    Person,
    PersonIdentifier,
    WorkspaceConversation,
    WorkspaceMetricSnapshot,
)
from core.views.workspace import _conversation_for_person
from core.workspace import compact_snapshot_rows


def _score_from_lag(lag_ms, target_hours=4):
    if lag_ms is None:
        return 50.0
    target_ms = max(1, int(target_hours)) * 60 * 60 * 1000
    return max(0.0, min(100.0, 100.0 / (1.0 + (float(lag_ms) / target_ms))))
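`_score_from_lag` maps a response lag onto a 0-100 scale: 100 at zero lag, 50 at the target lag (4 hours by default), decaying toward 0 as the lag grows, and 50 when no lag is known. A standalone copy of the curve with worked values:

```python
def score_from_lag(lag_ms, target_hours=4):
    # Same curve as _score_from_lag: 100 at zero lag, 50 at the
    # target lag, decaying toward 0 as the lag grows; unknown -> 50.
    if lag_ms is None:
        return 50.0
    target_ms = max(1, int(target_hours)) * 60 * 60 * 1000
    return max(0.0, min(100.0, 100.0 / (1.0 + (float(lag_ms) / target_ms))))


HOUR_MS = 60 * 60 * 1000
print(score_from_lag(0))            # 100.0
print(score_from_lag(4 * HOUR_MS))  # 50.0
print(score_from_lag(None))         # 50.0
```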
||||
|
||||
|
||||
def _median_or_none(values):
|
||||
if not values:
|
||||
return None
|
||||
return float(statistics.median(values))
|
||||
|
||||
|
||||
def _calibrating_payload(last_ts=None):
|
||||
return {
|
||||
"source_event_ts": int(last_ts) if last_ts else None,
|
||||
"stability_state": WorkspaceConversation.StabilityState.CALIBRATING,
|
||||
"stability_score": None,
|
||||
"stability_confidence": 0.0,
|
||||
"stability_sample_messages": 0,
|
||||
"stability_sample_days": 0,
|
||||
"commitment_inbound_score": None,
|
||||
"commitment_outbound_score": None,
|
||||
"commitment_confidence": 0.0,
|
||||
"inbound_messages": 0,
|
||||
"outbound_messages": 0,
|
||||
"reciprocity_score": None,
|
||||
"continuity_score": None,
|
||||
"response_score": None,
|
||||
"volatility_score": None,
|
||||
"inbound_response_score": None,
|
||||
"outbound_response_score": None,
|
||||
"balance_inbound_score": None,
|
||||
"balance_outbound_score": None,
|
||||
}
|
||||
|
||||
|
||||
def _compute_payload(rows, identifier_values):
|
||||
if not rows:
|
||||
return _calibrating_payload(None)
|
||||
|
||||
inbound_count = 0
|
||||
outbound_count = 0
|
||||
daily_counts = {}
|
||||
inbound_response_lags = []
|
||||
outbound_response_lags = []
|
||||
pending_in_ts = None
|
||||
pending_out_ts = None
|
||||
first_ts = int(rows[0]["ts"] or 0)
|
||||
last_ts = int(rows[-1]["ts"] or 0)
|
||||
latest_service = (
|
||||
str(rows[-1].get("session__identifier__service") or "").strip().lower()
|
||||
)
|
||||
|
||||
for row in rows:
|
||||
ts = int(row.get("ts") or 0)
|
||||
sender = str(row.get("sender_uuid") or "").strip()
|
||||
author = str(row.get("custom_author") or "").strip().upper()
|
||||
if author in {"USER", "BOT"}:
|
||||
is_inbound = False
|
||||
elif author == "OTHER":
|
||||
is_inbound = True
|
||||
else:
|
||||
is_inbound = sender in identifier_values
|
||||
direction = "in" if is_inbound else "out"
|
||||
day_key = datetime.fromtimestamp(ts / 1000, tz=timezone.utc).date().isoformat()
|
||||
daily_counts[day_key] = daily_counts.get(day_key, 0) + 1
|
||||
|
||||
if direction == "in":
|
||||
inbound_count += 1
|
||||
if pending_out_ts is not None and ts >= pending_out_ts:
|
||||
inbound_response_lags.append(ts - pending_out_ts)
|
||||
pending_out_ts = None
|
||||
pending_in_ts = ts
|
||||
else:
|
||||
outbound_count += 1
|
||||
if pending_in_ts is not None and ts >= pending_in_ts:
|
||||
outbound_response_lags.append(ts - pending_in_ts)
|
||||
pending_in_ts = None
|
||||
pending_out_ts = ts
|
||||
|
||||
message_count = len(rows)
|
||||
span_days = max(1, int(((last_ts - first_ts) / (24 * 60 * 60 * 1000)) + 1))
|
||||
sample_days = len(daily_counts)
|
||||
|
||||
total_messages = max(1, inbound_count + outbound_count)
|
||||
reciprocity_score = 100.0 * (
|
||||
1.0 - abs(inbound_count - outbound_count) / total_messages
|
||||
)
|
||||
continuity_score = 100.0 * min(1.0, sample_days / max(1, span_days))
|
||||
out_resp_score = _score_from_lag(_median_or_none(outbound_response_lags))
|
||||
in_resp_score = _score_from_lag(_median_or_none(inbound_response_lags))
|
||||
response_score = (out_resp_score + in_resp_score) / 2.0
|
||||
|
||||
daily_values = list(daily_counts.values())
|
||||
if len(daily_values) > 1:
|
||||
mean_daily = statistics.mean(daily_values)
|
||||
stdev_daily = statistics.pstdev(daily_values)
|
||||
cv = (stdev_daily / mean_daily) if mean_daily else 1.0
|
||||
volatility_score = max(0.0, 100.0 * (1.0 - min(cv, 1.5) / 1.5))
|
||||
else:
|
||||
volatility_score = 60.0
|
||||
|
||||
stability_score = (
|
||||
(0.35 * reciprocity_score)
|
||||
+ (0.25 * continuity_score)
|
||||
+ (0.20 * response_score)
|
||||
+ (0.20 * volatility_score)
|
||||
)
|
||||
|
||||
balance_out = 100.0 * min(1.0, outbound_count / max(1, inbound_count))
|
||||
balance_in = 100.0 * min(1.0, inbound_count / max(1, outbound_count))
|
||||
commitment_out = (0.60 * out_resp_score) + (0.40 * balance_out)
|
||||
commitment_in = (0.60 * in_resp_score) + (0.40 * balance_in)
|
||||
|
||||
msg_conf = min(1.0, message_count / 200.0)
|
||||
day_conf = min(1.0, sample_days / 30.0)
|
||||
pair_conf = min(
|
||||
1.0, (len(inbound_response_lags) + len(outbound_response_lags)) / 40.0
|
||||
)
|
||||
confidence = (0.50 * msg_conf) + (0.30 * day_conf) + (0.20 * pair_conf)
|
||||
|
||||
if message_count < 20 or sample_days < 3 or confidence < 0.25:
|
||||
stability_state = WorkspaceConversation.StabilityState.CALIBRATING
|
||||
stability_score_value = None
|
||||
commitment_in_value = None
|
||||
commitment_out_value = None
|
||||
else:
|
||||
stability_score_value = round(stability_score, 2)
|
||||
commitment_in_value = round(commitment_in, 2)
|
||||
commitment_out_value = round(commitment_out, 2)
|
||||
if stability_score_value >= 70:
|
||||
stability_state = WorkspaceConversation.StabilityState.STABLE
|
||||
elif stability_score_value >= 50:
|
||||
stability_state = WorkspaceConversation.StabilityState.WATCH
|
||||
else:
|
||||
        stability_state = WorkspaceConversation.StabilityState.FRAGILE

    feedback_state = "balanced"
    if outbound_count > (inbound_count * 1.5):
        feedback_state = "withdrawing"
    elif inbound_count > (outbound_count * 1.5):
        feedback_state = "overextending"

    payload = {
        "source_event_ts": last_ts,
        "stability_state": stability_state,
        "stability_score": (
            float(stability_score_value) if stability_score_value is not None else None
        ),
        "stability_confidence": round(confidence, 3),
        "stability_sample_messages": message_count,
        "stability_sample_days": sample_days,
        "commitment_inbound_score": (
            float(commitment_in_value) if commitment_in_value is not None else None
        ),
        "commitment_outbound_score": (
            float(commitment_out_value) if commitment_out_value is not None else None
        ),
        "commitment_confidence": round(confidence, 3),
        "inbound_messages": inbound_count,
        "outbound_messages": outbound_count,
        "reciprocity_score": round(reciprocity_score, 3),
        "continuity_score": round(continuity_score, 3),
        "response_score": round(response_score, 3),
        "volatility_score": round(volatility_score, 3),
        "inbound_response_score": round(in_resp_score, 3),
        "outbound_response_score": round(out_resp_score, 3),
        "balance_inbound_score": round(balance_in, 3),
        "balance_outbound_score": round(balance_out, 3),
    }
    return payload, latest_service, feedback_state


def _payload_signature(payload: dict) -> tuple:
    return (
        int(payload.get("source_event_ts") or 0),
        str(payload.get("stability_state") or ""),
        payload.get("stability_score"),
        float(payload.get("stability_confidence") or 0.0),
        int(payload.get("stability_sample_messages") or 0),
        int(payload.get("stability_sample_days") or 0),
        payload.get("commitment_inbound_score"),
        payload.get("commitment_outbound_score"),
        float(payload.get("commitment_confidence") or 0.0),
        int(payload.get("inbound_messages") or 0),
        int(payload.get("outbound_messages") or 0),
    )

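The signature tuple is what makes rebuilds idempotent: only the fields it covers participate in dedup, so snapshots that differ only in derived or cosmetic fields collapse to one checkpoint. A minimal standalone sketch (a trimmed copy of the helper above, with made-up payload values) shows the pattern:

```python
def payload_signature(payload: dict) -> tuple:
    # Trimmed copy of _payload_signature for illustration: coerce the
    # fields that matter for dedup into a hashable, comparable tuple.
    return (
        int(payload.get("source_event_ts") or 0),
        str(payload.get("stability_state") or ""),
        payload.get("stability_score"),
        float(payload.get("stability_confidence") or 0.0),
        int(payload.get("inbound_messages") or 0),
        int(payload.get("outbound_messages") or 0),
    )


a = {
    "source_event_ts": 1700000000000,
    "stability_state": "fragile",
    "stability_score": 0.41,
    "stability_confidence": 0.9,
    "inbound_messages": 12,
    "outbound_messages": 9,
}
b = dict(a, extra_key="ignored")  # keys outside the signature do not affect dedup

seen = {payload_signature(a)}
assert payload_signature(b) in seen  # a duplicate checkpoint would be skipped
```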
class Command(BaseCommand):
    help = (
        "Reconcile AI Workspace metric history by deterministically rebuilding "
        "WorkspaceMetricSnapshot points from message history."
    )

    def add_arguments(self, parser):
        parser.add_argument("--days", type=int, default=365)
        parser.add_argument("--service", default="")
        parser.add_argument("--user-id", default="")
        parser.add_argument("--person-id", default="")
        parser.add_argument("--step-messages", type=int, default=2)
        parser.add_argument("--limit", type=int, default=200000)
        parser.add_argument("--dry-run", action="store_true", default=False)
        parser.add_argument("--no-reset", action="store_true", default=False)
        parser.add_argument("--skip-compact", action="store_true", default=False)

    def handle(self, *args, **options):
        days = max(1, int(options.get("days") or 365))
        service = str(options.get("service") or "").strip().lower()
        user_id = str(options.get("user_id") or "").strip()
        person_id = str(options.get("person_id") or "").strip()
        step_messages = max(1, int(options.get("step_messages") or 2))
        limit = max(1, int(options.get("limit") or 200000))
        dry_run = bool(options.get("dry_run"))
        reset = not bool(options.get("no_reset"))
        compact_enabled = not bool(options.get("skip_compact"))
        today_start = (
            dj_timezone.now()
            .astimezone(timezone.utc)
            .replace(
                hour=0,
                minute=0,
                second=0,
                microsecond=0,
            )
        )
        cutoff_ts = int((today_start.timestamp() * 1000) - (days * 24 * 60 * 60 * 1000))

        people_qs = Person.objects.all()
        if user_id:
            people_qs = people_qs.filter(user_id=user_id)
        if person_id:
            people_qs = people_qs.filter(id=person_id)
        people = list(people_qs.order_by("user_id", "name", "id"))

        conversations_scanned = 0
        deleted = 0
        snapshots_created = 0
        checkpoints_total = 0
        compacted_deleted = 0

        for person in people:
            identifiers_qs = PersonIdentifier.objects.filter(
                user=person.user, person=person
            )
            if service:
                identifiers_qs = identifiers_qs.filter(service=service)
            identifiers = list(identifiers_qs)
            if not identifiers:
                continue
            identifier_values = {
                str(row.identifier or "").strip()
                for row in identifiers
                if row.identifier
            }
            if not identifier_values:
                continue

            rows = list(
                Message.objects.filter(
                    user=person.user,
                    session__identifier__in=identifiers,
                    ts__gte=cutoff_ts,
                )
                .order_by("ts", "id")
                .values(
                    "id",
                    "ts",
                    "sender_uuid",
                    "custom_author",
                    "session__identifier__service",
                )[:limit]
            )
            if not rows:
                continue

            conversation = _conversation_for_person(person.user, person)
            conversations_scanned += 1

            if reset and not dry_run:
                deleted += WorkspaceMetricSnapshot.objects.filter(
                    conversation=conversation
                ).delete()[0]

            existing_signatures = set()
            if not reset:
                existing_signatures = set(
                    _payload_signature(
                        {
                            "source_event_ts": row.source_event_ts,
                            "stability_state": row.stability_state,
                            "stability_score": row.stability_score,
                            "stability_confidence": row.stability_confidence,
                            "stability_sample_messages": row.stability_sample_messages,
                            "stability_sample_days": row.stability_sample_days,
                            "commitment_inbound_score": row.commitment_inbound_score,
                            "commitment_outbound_score": row.commitment_outbound_score,
                            "commitment_confidence": row.commitment_confidence,
                            "inbound_messages": row.inbound_messages,
                            "outbound_messages": row.outbound_messages,
                        }
                    )
                    for row in WorkspaceMetricSnapshot.objects.filter(
                        conversation=conversation
                    ).only(
                        "source_event_ts",
                        "stability_state",
                        "stability_score",
                        "stability_confidence",
                        "stability_sample_messages",
                        "stability_sample_days",
                        "commitment_inbound_score",
                        "commitment_outbound_score",
                        "commitment_confidence",
                        "inbound_messages",
                        "outbound_messages",
                    )
                )

            checkpoints = list(range(step_messages, len(rows) + 1, step_messages))
            if not checkpoints or checkpoints[-1] != len(rows):
                checkpoints.append(len(rows))
            checkpoints_total += len(checkpoints)

            latest_payload = None
            latest_service = ""
            latest_feedback_state = "balanced"

            for stop in checkpoints:
                computed = _compute_payload(rows[:stop], identifier_values)
                payload = computed[0]
                latest_payload = payload
                latest_service = computed[1]
                latest_feedback_state = computed[2]
                signature = _payload_signature(payload)
                if not reset and signature in existing_signatures:
                    continue
                snapshots_created += 1
                if dry_run:
                    continue
                WorkspaceMetricSnapshot.objects.create(
                    conversation=conversation, **payload
                )
                existing_signatures.add(signature)

            if not latest_payload:
                continue

            feedback = dict(conversation.participant_feedback or {})
            feedback[str(person.id)] = {
                "state": latest_feedback_state,
                "inbound_messages": int(latest_payload.get("inbound_messages") or 0),
                "outbound_messages": int(latest_payload.get("outbound_messages") or 0),
                "sample_messages": int(
                    latest_payload.get("stability_sample_messages") or 0
                ),
                "sample_days": int(latest_payload.get("stability_sample_days") or 0),
                "updated_at": dj_timezone.now().isoformat(),
            }
            if not dry_run:
                conversation.platform_type = (
                    latest_service or conversation.platform_type
                )
                conversation.last_event_ts = latest_payload.get("source_event_ts")
                conversation.stability_state = str(
                    latest_payload.get("stability_state")
                    or WorkspaceConversation.StabilityState.CALIBRATING
                )
                conversation.stability_score = latest_payload.get("stability_score")
                conversation.stability_confidence = float(
                    latest_payload.get("stability_confidence") or 0.0
                )
                conversation.stability_sample_messages = int(
                    latest_payload.get("stability_sample_messages") or 0
                )
                conversation.stability_sample_days = int(
                    latest_payload.get("stability_sample_days") or 0
                )
                conversation.commitment_inbound_score = latest_payload.get(
                    "commitment_inbound_score"
                )
                conversation.commitment_outbound_score = latest_payload.get(
                    "commitment_outbound_score"
                )
                conversation.commitment_confidence = float(
                    latest_payload.get("commitment_confidence") or 0.0
                )
                now_ts = dj_timezone.now()
                conversation.stability_last_computed_at = now_ts
                conversation.commitment_last_computed_at = now_ts
                conversation.participant_feedback = feedback
                conversation.save(
                    update_fields=[
                        "platform_type",
                        "last_event_ts",
                        "stability_state",
                        "stability_score",
                        "stability_confidence",
                        "stability_sample_messages",
                        "stability_sample_days",
                        "stability_last_computed_at",
                        "commitment_inbound_score",
                        "commitment_outbound_score",
                        "commitment_confidence",
                        "commitment_last_computed_at",
                        "participant_feedback",
                    ]
                )
                if compact_enabled:
                    snapshot_rows = list(
                        WorkspaceMetricSnapshot.objects.filter(
                            conversation=conversation
                        )
                        .order_by("computed_at", "id")
                        .values("id", "computed_at", "source_event_ts")
                    )
                    now_ts_ms = int(dj_timezone.now().timestamp() * 1000)
                    keep_ids = compact_snapshot_rows(
                        snapshot_rows=snapshot_rows,
                        now_ts_ms=now_ts_ms,
                        cutoff_ts_ms=cutoff_ts,
                    )
                    if keep_ids:
                        compacted_deleted += (
                            WorkspaceMetricSnapshot.objects.filter(
                                conversation=conversation
                            )
                            .exclude(id__in=list(keep_ids))
                            .delete()[0]
                        )
                    else:
                        compacted_deleted += WorkspaceMetricSnapshot.objects.filter(
                            conversation=conversation
                        ).delete()[0]

        self.stdout.write(
            self.style.SUCCESS(
                "reconcile_workspace_metric_history complete "
                f"conversations_scanned={conversations_scanned} "
                f"checkpoints={checkpoints_total} "
                f"created={snapshots_created} "
                f"deleted={deleted} "
                f"compacted_deleted={compacted_deleted} "
                f"compact_enabled={compact_enabled} "
                f"reset={reset} dry_run={dry_run} "
                f"days={days} step_messages={step_messages} limit={limit}"
            )
        )
9
core/management/commands/task_sync_worker.py
Normal file
@@ -0,0 +1,9 @@
from __future__ import annotations

from core.management.commands.codex_worker import Command as LegacyCodexWorkerCommand


class Command(LegacyCodexWorkerCommand):
    help = (
        "Process queued task-sync events for worker-backed providers (Codex + Claude)."
    )
3
core/mcp/__init__.py
Normal file
@@ -0,0 +1,3 @@
from .server import run_stdio_server

__all__ = ["run_stdio_server"]
151
core/mcp/server.py
Normal file
@@ -0,0 +1,151 @@
from __future__ import annotations

import json
import os
import sys
from typing import Any

import django

from core.mcp.tools import execute_tool, format_tool_content, tool_specs
from core.util import logs

log = logs.get_logger("mcp-server")

_compat_newline_mode = False


def _setup_django() -> None:
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings")
    django.setup()


def _response(msg_id: Any, result: dict[str, Any]) -> dict[str, Any]:
    return {"jsonrpc": "2.0", "id": msg_id, "result": result}


def _error(msg_id: Any, code: int, message: str) -> dict[str, Any]:
    return {"jsonrpc": "2.0", "id": msg_id, "error": {"code": code, "message": message}}


def _read_message() -> dict[str, Any] | None:
    global _compat_newline_mode
    headers: dict[str, str] = {}
    pending_body = b""
    while True:
        line = sys.stdin.buffer.readline()
        if not line:
            return None
        if not headers and line.lstrip().startswith((b"{", b"[")):
            _compat_newline_mode = True
            return json.loads(line.decode("utf-8").strip())

        sep = None
        if b"\r\n\r\n" in line:
            sep = b"\r\n\r\n"
        elif b"\n\n" in line:
            sep = b"\n\n"
        if sep is not None:
            header_line, tail = line.split(sep, 1)
            pending_body = tail
        else:
            header_line = line

        if header_line in (b"\r\n", b"\n"):
            break

        decoded = header_line.decode("utf-8").strip()
        if ":" in decoded:
            key, value = decoded.split(":", 1)
            headers[key.strip().lower()] = value.strip()
        if sep is not None:
            break

    length_raw = headers.get("content-length")
    if not length_raw:
        if not pending_body:
            pending_body = sys.stdin.buffer.readline()
        if not pending_body:
            return None
        _compat_newline_mode = True
        return json.loads(pending_body.decode("utf-8").strip())

    length = int(length_raw)
    body = pending_body
    if len(body) < length:
        body += sys.stdin.buffer.read(length - len(body))
    body = body[:length]
    if not body:
        return None
    return json.loads(body.decode("utf-8"))


def _write_message(payload: dict[str, Any]) -> None:
    raw_json = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    if _compat_newline_mode:
        sys.stdout.buffer.write((raw_json + "\n").encode("utf-8"))
    else:
        raw = raw_json.encode("utf-8")
        sys.stdout.buffer.write(f"Content-Length: {len(raw)}\r\n\r\n".encode("utf-8"))
        sys.stdout.buffer.write(raw)
    sys.stdout.buffer.flush()


def _handle_message(message: dict[str, Any]) -> dict[str, Any] | None:
    msg_id = message.get("id")
    method = str(message.get("method") or "")
    params = message.get("params") or {}

    if method == "notifications/initialized":
        return None
    if method == "initialize":
        return _response(
            msg_id,
            {
                "protocolVersion": "2025-06-18",
                "serverInfo": {"name": "gia-manticore-mcp", "version": "0.1.0"},
                "capabilities": {"tools": {}},
            },
        )
    if method == "ping":
        return _response(msg_id, {})
    if method == "tools/list":
        return _response(msg_id, {"tools": tool_specs()})
    if method == "tools/call":
        name = str(params.get("name") or "").strip()
        arguments = params.get("arguments") or {}
        try:
            payload = execute_tool(name, arguments)
            return _response(msg_id, format_tool_content(payload))
        except Exception as exc:
            log.warning("tool call failed name=%s err=%s", name, exc)
            return _response(
                msg_id,
                {
                    "isError": True,
                    "content": [
                        {"type": "text", "text": json.dumps({"error": str(exc)})}
                    ],
                },
            )

    return _error(msg_id, -32601, f"Method not found: {method}")


def run_stdio_server() -> None:
    _setup_django()
    while True:
        message = _read_message()
        if message is None:
            return
        try:
            response = _handle_message(message)
            if response is not None:
                _write_message(response)
        except Exception as exc:
            msg_id = message.get("id")
            _write_message(_error(msg_id, -32000, str(exc)))


if __name__ == "__main__":
    run_stdio_server()
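The server's header mode uses the same `Content-Length` framing as LSP-style JSON-RPC transports. A minimal stdlib-only sketch of both sides of that framing (illustrative helper names, not part of the diff) shows how a client would package a request and how the server-side read loop recovers it:

```python
import io
import json


def frame(payload: dict) -> bytes:
    # Frame a JSON-RPC message the way _write_message does in header mode.
    raw = json.dumps(payload, separators=(",", ":")).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(raw) + raw


def read_frame(stream) -> dict:
    # Minimal counterpart to _read_message: parse headers, then read the body.
    headers = {}
    while True:
        line = stream.readline()
        if line in (b"\r\n", b"\n", b""):
            break
        key, _, value = line.decode("utf-8").partition(":")
        headers[key.strip().lower()] = value.strip()
    body = stream.read(int(headers["content-length"]))
    return json.loads(body.decode("utf-8"))


msg = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
stream = io.BufferedReader(io.BytesIO(frame(msg)))
assert read_frame(stream) == msg  # round-trips through the wire format
```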
1335
core/mcp/tools.py
Normal file
File diff suppressed because it is too large
4
core/memory/__init__.py
Normal file
@@ -0,0 +1,4 @@
from .retrieval import retrieve_memories_for_prompt
from .search_backend import get_memory_search_backend

__all__ = ["get_memory_search_backend", "retrieve_memories_for_prompt"]
425
core/memory/pipeline.py
Normal file
@@ -0,0 +1,425 @@
from __future__ import annotations

import re
from datetime import timezone as dt_timezone
from typing import Any

from django.db import transaction
from django.utils import timezone
from django.utils.dateparse import parse_datetime

from core.models import (
    MemoryChangeRequest,
    MemoryItem,
    MemorySourceReference,
    MessageEvent,
    WorkspaceConversation,
)
from core.util import logs

log = logs.get_logger("memory-pipeline")

_LIKE_RE = re.compile(
    r"\b(?:i (?:like|love|prefer)|my favorite)\s+(?P<value>[^.!?]{2,120})",
    re.IGNORECASE,
)
_DISLIKE_RE = re.compile(
    r"\b(?:i (?:dislike|hate|avoid)|i don't like)\s+(?P<value>[^.!?]{2,120})",
    re.IGNORECASE,
)
_STYLE_RE = re.compile(
    r"\b(?:please|pls)\s+(?P<value>[^.!?]{3,120})",
    re.IGNORECASE,
)


def _clean_value(value: str) -> str:
    return " ".join(str(value or "").strip().split())


def extract_memory_candidates(text: str) -> list[dict[str, Any]]:
    source = str(text or "").strip()
    if not source:
        return []

    candidates: list[dict[str, Any]] = []
    for regex, field, kind, confidence in (
        (_LIKE_RE, "likes", "fact", 0.68),
        (_DISLIKE_RE, "dislikes", "fact", 0.68),
        (_STYLE_RE, "communication_style", "state", 0.52),
    ):
        for match in regex.finditer(source):
            value = _clean_value(match.group("value"))
            if len(value) < 3:
                continue
            candidates.append(
                {
                    "memory_kind": kind,
                    "field": field,
                    "text": value,
                    "confidence_score": confidence,
                }
            )
    return candidates
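The three patterns capture everything after a cue phrase up to the next sentence-ending punctuation. A quick standalone check of two of the regexes (copied verbatim from the file) on a sample inbound message illustrates what the extractor yields:

```python
import re

LIKE_RE = re.compile(
    r"\b(?:i (?:like|love|prefer)|my favorite)\s+(?P<value>[^.!?]{2,120})",
    re.IGNORECASE,
)
STYLE_RE = re.compile(
    r"\b(?:please|pls)\s+(?P<value>[^.!?]{3,120})",
    re.IGNORECASE,
)

text = "I like strong coffee. Please keep replies short."
likes = [m.group("value").strip() for m in LIKE_RE.finditer(text)]
styles = [m.group("value").strip() for m in STYLE_RE.finditer(text)]
assert likes == ["strong coffee"]        # -> {"field": "likes", "memory_kind": "fact"}
assert styles == ["keep replies short"]  # -> {"field": "communication_style", ...}
```

The `[^.!?]` character class is what stops each capture at the sentence boundary, so one message can produce several independent candidates.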
def _existing_fingerprints(user_id: int) -> set[tuple[str, str, str, str]]:
    items = MemoryItem.objects.filter(user_id=int(user_id)).only(
        "memory_kind",
        "conversation_id",
        "person_id",
        "content",
    )
    fingerprints = set()
    for item in items:
        content = item.content or {}
        field = str(content.get("field") or "").strip().lower()
        text = _clean_value(str(content.get("text") or "")).lower()
        fingerprints.add(
            (
                str(item.memory_kind or "").strip().lower(),
                str(item.conversation_id or "").strip(),
                str(item.person_id or "").strip(),
                f"{field}:{text}",
            )
        )
    return fingerprints


def _infer_single_person_id(conversation: WorkspaceConversation) -> str:
    participant_ids = list(conversation.participants.values_list("id", flat=True)[:2])
    if len(participant_ids) != 1:
        return ""
    return str(participant_ids[0] or "")


@transaction.atomic
def suggest_memories_from_recent_messages(
    *,
    user_id: int,
    limit_messages: int = 300,
    max_items: int = 30,
) -> dict[str, int]:
    safe_limit_messages = max(1, min(2000, int(limit_messages or 300)))
    safe_max_items = max(1, min(500, int(max_items or 30)))
    existing = _existing_fingerprints(int(user_id))

    scanned = 0
    queued = 0
    rows = (
        MessageEvent.objects.filter(user_id=int(user_id), direction="in")
        .select_related("conversation")
        .order_by("-ts")[:safe_limit_messages]
    )
    for event in rows:
        scanned += 1
        person_id = _infer_single_person_id(event.conversation)
        for candidate in extract_memory_candidates(event.text or ""):
            field = str(candidate.get("field") or "").strip().lower()
            text = _clean_value(str(candidate.get("text") or ""))
            if not text:
                continue
            fingerprint = (
                str(candidate.get("memory_kind") or "fact").strip().lower(),
                str(event.conversation_id or "").strip(),
                person_id,
                f"{field}:{text.lower()}",
            )
            if fingerprint in existing:
                continue

            item = MemoryItem.objects.create(
                user_id=int(user_id),
                conversation=event.conversation,
                person_id=person_id or None,
                memory_kind=str(candidate.get("memory_kind") or "fact"),
                status="proposed",
                content={"field": field, "text": text},
                provenance={
                    "pipeline": "message_regex",
                    "message_event_id": str(event.id),
                },
                confidence_score=float(candidate.get("confidence_score") or 0.5),
            )
            MemorySourceReference.objects.create(
                memory=item,
                message_event=event,
                source_label="message_event",
            )
            MemoryChangeRequest.objects.create(
                user_id=int(user_id),
                memory=item,
                conversation=event.conversation,
                person_id=person_id or None,
                action="create",
                status="pending",
                proposed_memory_kind=item.memory_kind,
                proposed_content=item.content,
                proposed_confidence_score=item.confidence_score,
                reason="Auto-suggested from recent inbound messages.",
                requested_by_identifier="memory-pipeline",
            )
            existing.add(fingerprint)
            queued += 1
            if queued >= safe_max_items:
                return {"scanned": scanned, "queued": queued}
    return {"scanned": scanned, "queued": queued}


def _coerce_expires_at(value: Any):
    raw = str(value or "").strip()
    if not raw:
        return None
    parsed = parse_datetime(raw)
    if parsed is None:
        raise ValueError("expires_at must be an ISO datetime")
    if parsed.tzinfo is None:
        return timezone.make_aware(parsed, dt_timezone.utc)
    return parsed
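The coercion helper has three outcomes: empty input becomes `None`, an unparseable string raises, and a naive datetime is pinned to UTC. A stdlib-only sketch of the same logic (using `datetime.fromisoformat` in place of Django's `parse_datetime` / `make_aware`, so the behavior differs at the edges) makes the contract concrete:

```python
from datetime import datetime, timezone


def coerce_expires_at(value):
    # Stdlib-only sketch of _coerce_expires_at: empty -> None, naive -> UTC.
    raw = str(value or "").strip()
    if not raw:
        return None
    parsed = datetime.fromisoformat(raw)  # raises ValueError on bad input
    if parsed.tzinfo is None:
        return parsed.replace(tzinfo=timezone.utc)
    return parsed


assert coerce_expires_at("") is None
assert coerce_expires_at(None) is None
aware = coerce_expires_at("2031-01-01T00:00:00")
assert aware.tzinfo is timezone.utc  # naive timestamps are treated as UTC
```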
@transaction.atomic
def create_memory_change_request(
    *,
    user_id: int,
    action: str,
    conversation_id: str = "",
    person_id: str = "",
    memory_id: str = "",
    memory_kind: str = "",
    content: dict[str, Any] | None = None,
    confidence_score: float | None = None,
    expires_at: str = "",
    reason: str = "",
    requested_by_identifier: str = "",
) -> MemoryChangeRequest:
    normalized_action = str(action or "").strip().lower()
    if normalized_action not in {"create", "update", "delete"}:
        raise ValueError("action must be create/update/delete")

    memory = None
    if memory_id:
        memory = MemoryItem.objects.filter(user_id=int(user_id), id=memory_id).first()
        if memory is None:
            raise ValueError("memory_id not found")

    conversation = None
    if conversation_id:
        conversation = WorkspaceConversation.objects.filter(
            user_id=int(user_id),
            id=conversation_id,
        ).first()
        if conversation is None:
            raise ValueError("conversation_id not found")

    if normalized_action == "create" and conversation is None:
        raise ValueError("conversation_id is required for create")
    if normalized_action in {"update", "delete"} and memory is None:
        raise ValueError("memory_id is required for update/delete")

    return MemoryChangeRequest.objects.create(
        user_id=int(user_id),
        memory=memory,
        conversation=conversation or (memory.conversation if memory else None),
        person_id=person_id or (str(memory.person_id or "") if memory else "") or None,
        action=normalized_action,
        status="pending",
        proposed_memory_kind=str(
            memory_kind or (memory.memory_kind if memory else "")
        ).strip(),
        proposed_content=dict(content or {}),
        proposed_confidence_score=(
            float(confidence_score)
            if confidence_score is not None
            else (float(memory.confidence_score) if memory else None)
        ),
        proposed_expires_at=_coerce_expires_at(expires_at),
        reason=str(reason or "").strip(),
        requested_by_identifier=str(requested_by_identifier or "").strip(),
    )


@transaction.atomic
def review_memory_change_request(
    *,
    user_id: int,
    request_id: str,
    decision: str,
    reviewer_identifier: str = "",
    note: str = "",
) -> MemoryChangeRequest:
    req = MemoryChangeRequest.objects.select_related("memory", "conversation").get(
        id=request_id,
        user_id=int(user_id),
    )
    if req.status != "pending":
        raise ValueError("request is not pending")

    now = timezone.now()
    normalized_decision = str(decision or "").strip().lower()
    if normalized_decision not in {"approve", "reject"}:
        raise ValueError("decision must be approve/reject")

    req.reviewed_by_identifier = str(reviewer_identifier or "").strip()
    req.reviewed_at = now
    if note:
        req.reason = f"{req.reason}\n\nReview note: {str(note).strip()}".strip()

    if normalized_decision == "reject":
        req.status = "rejected"
        req.save(
            update_fields=[
                "status",
                "reviewed_by_identifier",
                "reviewed_at",
                "reason",
                "updated_at",
            ]
        )
        return req

    req.status = "approved"
    req.save(
        update_fields=[
            "status",
            "reviewed_by_identifier",
            "reviewed_at",
            "reason",
            "updated_at",
        ]
    )

    memory = req.memory
    if req.action == "create":
        if memory is None:
            if req.conversation is None:
                raise ValueError("approved create request missing conversation")
            memory = MemoryItem.objects.create(
                user_id=int(user_id),
                conversation=req.conversation,
                person_id=req.person_id,
                memory_kind=req.proposed_memory_kind or "fact",
                status="active",
                content=req.proposed_content or {},
                confidence_score=float(req.proposed_confidence_score or 0.5),
                expires_at=req.proposed_expires_at,
                last_verified_at=now,
                provenance={"approved_request_id": str(req.id)},
            )
            req.memory = memory
        else:
            memory.status = "active"
            memory.last_verified_at = now
            memory.save(update_fields=["status", "last_verified_at", "updated_at"])
    elif req.action == "update":
        if memory is None:
            raise ValueError("approved update request missing memory")
        if req.proposed_memory_kind:
            memory.memory_kind = req.proposed_memory_kind
        if req.proposed_content:
            memory.content = req.proposed_content
        if req.proposed_confidence_score is not None:
            memory.confidence_score = float(req.proposed_confidence_score)
        memory.expires_at = req.proposed_expires_at
        memory.last_verified_at = now
        memory.status = "active"
        memory.save()
    else:
        if memory is None:
            raise ValueError("approved delete request missing memory")
        memory.status = "deprecated"
        memory.last_verified_at = now
        memory.save(update_fields=["status", "last_verified_at", "updated_at"])

    req.status = "applied"
    req.save(update_fields=["status", "memory", "updated_at"])
    return req


@transaction.atomic
def run_memory_hygiene(
    *, user_id: int | None = None, dry_run: bool = False
) -> dict[str, int]:
    now = timezone.now()
    queryset = MemoryItem.objects.filter(status="active")
    if user_id is not None:
        queryset = queryset.filter(user_id=int(user_id))

    expired_ids = list(
        queryset.filter(expires_at__isnull=False, expires_at__lte=now).values_list(
            "id",
            flat=True,
        )
    )
    expired = len(expired_ids)
    if expired and not dry_run:
        MemoryItem.objects.filter(id__in=expired_ids).update(status="deprecated")

    contradictions = 0
    queued = 0
    grouped: dict[tuple[int, str, str, str, str], dict[str, list[MemoryItem]]] = {}
    for item in queryset.select_related("conversation", "person"):
        content = item.content or {}
        field = str(content.get("field") or content.get("key") or "").strip().lower()
        text = _clean_value(
            str(content.get("text") or content.get("value") or "")
        ).lower()
        if not field or not text:
            continue
        scope = (
            int(item.user_id),
            str(item.person_id or ""),
            str(item.conversation_id or ""),
            str(item.memory_kind or ""),
            field,
        )
        grouped.setdefault(scope, {})
        grouped[scope].setdefault(text, [])
        grouped[scope][text].append(item)

    for values in grouped.values():
        if len(values.keys()) <= 1:
            continue
        flat = [item for subset in values.values() for item in subset]
        contradictions += len(flat)
        if dry_run:
            continue
        for item in flat:
            already_pending = MemoryChangeRequest.objects.filter(
                user_id=item.user_id,
                memory=item,
                action="update",
                status="pending",
                reason__icontains="contradiction",
            ).exists()
            if already_pending:
                continue
            MemoryChangeRequest.objects.create(
                user_id=item.user_id,
                memory=item,
                conversation=item.conversation,
                person=item.person,
                action="update",
                status="pending",
                proposed_memory_kind=item.memory_kind,
                proposed_content=item.content,
                proposed_confidence_score=item.confidence_score,
                proposed_expires_at=item.expires_at,
                reason="Contradiction detected by hygiene job.",
                requested_by_identifier="memory-hygiene",
            )
            queued += 1

    log.info(
        "memory hygiene user=%s dry_run=%s expired=%s contradictions=%s queued=%s",
        user_id if user_id is not None else "-",
        dry_run,
        expired,
        contradictions,
        queued,
    )
    return {
        "expired": expired,
        "contradictions": contradictions,
        "queued_requests": queued,
    }
133
core/memory/retrieval.py
Normal file
@@ -0,0 +1,133 @@
from __future__ import annotations

from typing import Any

from django.db.models import Q
from django.utils import timezone

from core.memory.search_backend import get_memory_search_backend
from core.models import MemoryItem


def _coerce_statuses(value: Any, default: tuple[str, ...]) -> tuple[str, ...]:
    if isinstance(value, (list, tuple, set)):
        items = [str(item or "").strip().lower() for item in value]
    else:
        items = [item.strip().lower() for item in str(value or "").split(",")]
    cleaned = tuple(item for item in items if item)
    return cleaned or default
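The status coercion accepts either a sequence or a comma-separated string, normalizes casing and whitespace, and falls back to the default when nothing survives. A standalone copy of the helper demonstrates all three paths:

```python
def coerce_statuses(value, default):
    # Standalone copy of _coerce_statuses for illustration.
    if isinstance(value, (list, tuple, set)):
        items = [str(item or "").strip().lower() for item in value]
    else:
        items = [item.strip().lower() for item in str(value or "").split(",")]
    cleaned = tuple(item for item in items if item)
    return cleaned or default


assert coerce_statuses("Active, Archived", ("active",)) == ("active", "archived")
assert coerce_statuses(["ACTIVE", ""], ("active",)) == ("active",)
assert coerce_statuses("", ("active",)) == ("active",)  # falls back to default
```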
def _base_queryset(
    *,
    user_id: int,
    person_id: str = "",
    conversation_id: str = "",
    statuses: tuple[str, ...] = ("active",),
):
    now = timezone.now()
    queryset = MemoryItem.objects.filter(user_id=int(user_id))
    if statuses:
        queryset = queryset.filter(status__in=list(statuses))
    queryset = queryset.filter(Q(expires_at__isnull=True) | Q(expires_at__gt=now))
    if person_id:
        queryset = queryset.filter(person_id=person_id)
    if conversation_id:
        queryset = queryset.filter(conversation_id=conversation_id)
    return queryset


def retrieve_memories_for_prompt(
    *,
    user_id: int,
    query: str = "",
    person_id: str = "",
    conversation_id: str = "",
    statuses: tuple[str, ...] = ("active",),
    limit: int = 20,
) -> list[dict[str, Any]]:
    statuses = _coerce_statuses(statuses, ("active",))
    safe_limit = max(1, min(200, int(limit or 20)))
    search_text = str(query or "").strip()

    if search_text:
        backend = get_memory_search_backend()
        hits = backend.search(
            user_id=int(user_id),
            query=search_text,
            conversation_id=conversation_id,
            limit=safe_limit,
            include_statuses=statuses,
        )
        ids = [
            str(hit.memory_id or "").strip()
            for hit in hits
            if str(hit.memory_id or "").strip()
        ]
        scoped = _base_queryset(
            user_id=int(user_id),
            person_id=person_id,
            conversation_id=conversation_id,
            statuses=statuses,
        ).filter(id__in=ids)
        by_id = {str(item.id): item for item in scoped}
        rows = []
        for hit in hits:
            item = by_id.get(str(hit.memory_id))
            if not item:
                continue
            rows.append(
                {
                    "id": str(item.id),
                    "memory_kind": str(item.memory_kind or ""),
                    "status": str(item.status or ""),
                    "person_id": str(item.person_id or ""),
                    "conversation_id": str(item.conversation_id or ""),
                    "content": item.content or {},
                    "provenance": item.provenance or {},
                    "confidence_score": float(item.confidence_score or 0.0),
                    "expires_at": (
                        item.expires_at.isoformat() if item.expires_at else ""
                    ),
                    "last_verified_at": (
                        item.last_verified_at.isoformat()
                        if item.last_verified_at
                        else ""
                    ),
                    "updated_at": (
                        item.updated_at.isoformat() if item.updated_at else ""
                    ),
                    "search_score": float(hit.score or 0.0),
                    "search_summary": str(hit.summary or ""),
                }
            )
        return rows

    queryset = _base_queryset(
        user_id=int(user_id),
        person_id=person_id,
        conversation_id=conversation_id,
        statuses=statuses,
    ).order_by("-last_verified_at", "-updated_at")
    rows = []
    for item in queryset[:safe_limit]:
        rows.append(
            {
                "id": str(item.id),
                "memory_kind": str(item.memory_kind or ""),
                "status": str(item.status or ""),
                "person_id": str(item.person_id or ""),
                "conversation_id": str(item.conversation_id or ""),
                "content": item.content or {},
                "provenance": item.provenance or {},
                "confidence_score": float(item.confidence_score or 0.0),
                "expires_at": item.expires_at.isoformat() if item.expires_at else "",
                "last_verified_at": (
                    item.last_verified_at.isoformat() if item.last_verified_at else ""
                ),
                "updated_at": item.updated_at.isoformat() if item.updated_at else "",
                "search_score": 0.0,
                "search_summary": "",
            }
        )
    return rows
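The search branch above re-fetches the scoped rows by id and then walks the backend hits in order, so backend ranking is preserved while hits that fall outside the scope filters are silently dropped. A minimal standalone sketch of that merge step, using plain dicts as hypothetical stand-ins for backend hits and Django model rows:

```python
def merge_hits_with_rows(hits, rows_by_id):
    """Keep backend ranking; drop hits whose row was filtered out of scope."""
    merged = []
    for hit in hits:
        row = rows_by_id.get(str(hit["memory_id"]))
        if not row:
            # Hit survived full-text search but not the scope/status filters.
            continue
        merged.append({**row, "search_score": float(hit["score"])})
    return merged


hits = [{"memory_id": "b", "score": 2.0}, {"memory_id": "a", "score": 1.5}]
rows_by_id = {"a": {"id": "a"}, "b": {"id": "b"}}
print([r["id"] for r in merge_hits_with_rows(hits, rows_by_id)])  # ['b', 'a']
```

Note that ordering comes from `hits`, not from the database: the queryset is only used as a membership and hydration source.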
core/memory/search_backend.py (new file, 353 lines)
@@ -0,0 +1,353 @@
from __future__ import annotations

import hashlib
import time
from dataclasses import dataclass
from typing import Any

import requests
from django.conf import settings

from core.models import MemoryItem
from core.util import logs

log = logs.get_logger("memory-search")


@dataclass
class MemorySearchHit:
    memory_id: str
    score: float
    summary: str
    payload: dict[str, Any]


def _flatten_to_text(value: Any) -> str:
    if value is None:
        return ""
    if isinstance(value, dict):
        parts = []
        for key, item in value.items():
            parts.append(str(key))
            parts.append(_flatten_to_text(item))
        return " ".join(part for part in parts if part).strip()
    if isinstance(value, (list, tuple, set)):
        return " ".join(_flatten_to_text(item) for item in value if item).strip()
    return str(value).strip()
class BaseMemorySearchBackend:
    def upsert(self, item: MemoryItem) -> None:
        raise NotImplementedError

    def delete(self, memory_id: str) -> None:
        raise NotImplementedError

    def search(
        self,
        *,
        user_id: int,
        query: str,
        conversation_id: str = "",
        limit: int = 20,
        include_statuses: tuple[str, ...] = ("active",),
    ) -> list[MemorySearchHit]:
        raise NotImplementedError

    def reindex(
        self,
        *,
        user_id: int | None = None,
        include_statuses: tuple[str, ...] = ("active",),
        limit: int = 2000,
    ) -> dict[str, int]:
        queryset = MemoryItem.objects.all().order_by("-updated_at")
        if user_id is not None:
            queryset = queryset.filter(user_id=int(user_id))
        if include_statuses:
            queryset = queryset.filter(status__in=list(include_statuses))

        scanned = 0
        indexed = 0
        for item in queryset[: max(1, int(limit))]:
            scanned += 1
            try:
                self.upsert(item)
                indexed += 1
            except Exception as exc:
                log.warning("memory-search upsert failed id=%s err=%s", item.id, exc)
        return {"scanned": scanned, "indexed": indexed}
class DjangoMemorySearchBackend(BaseMemorySearchBackend):
    name = "django"

    def upsert(self, item: MemoryItem) -> None:
        # No-op because Django backend queries source-of-truth rows directly.
        _ = item

    def delete(self, memory_id: str) -> None:
        _ = memory_id

    def search(
        self,
        *,
        user_id: int,
        query: str,
        conversation_id: str = "",
        limit: int = 20,
        include_statuses: tuple[str, ...] = ("active",),
    ) -> list[MemorySearchHit]:
        needle = str(query or "").strip().lower()
        if not needle:
            return []

        queryset = MemoryItem.objects.filter(user_id=int(user_id))
        if conversation_id:
            queryset = queryset.filter(conversation_id=conversation_id)
        if include_statuses:
            queryset = queryset.filter(status__in=list(include_statuses))

        hits: list[MemorySearchHit] = []
        for item in queryset.order_by("-updated_at")[:500]:
            text_blob = _flatten_to_text(item.content).lower()
            if needle not in text_blob:
                continue
            raw_summary = _flatten_to_text(item.content)
            summary = raw_summary[:280]
            score = 1.0 + min(1.0, len(needle) / max(1.0, len(text_blob)))
            hits.append(
                MemorySearchHit(
                    memory_id=str(item.id),
                    score=float(score),
                    summary=summary,
                    payload={
                        "memory_kind": str(item.memory_kind or ""),
                        "status": str(item.status or ""),
                        "conversation_id": str(item.conversation_id or ""),
                        "updated_at": item.updated_at.isoformat(),
                    },
                )
            )
            if len(hits) >= max(1, int(limit)):
                break
        return hits
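The fallback backend's score is 1.0 plus a length-ratio bonus capped at 1.0, so a short document that contains the needle outranks a long one; all scores land in (1.0, 2.0]. The formula in isolation:

```python
def substring_score(needle: str, text_blob: str) -> float:
    # Same formula as DjangoMemorySearchBackend.search above.
    return 1.0 + min(1.0, len(needle) / max(1.0, len(text_blob)))


exact = substring_score("pizza", "pizza")          # needle is the whole blob
diluted = substring_score("pizza", "pizza " * 50)  # needle buried in a long blob
print(exact, diluted)
```

The `max(1.0, ...)` guard keeps the division safe for an empty blob, though an empty blob can never actually contain the needle.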
class ManticoreMemorySearchBackend(BaseMemorySearchBackend):
    name = "manticore"
    _table_ready_cache: dict[str, float] = {}
    _table_ready_ttl_seconds = 30.0

    def __init__(self):
        self.base_url = str(
            getattr(settings, "MANTICORE_HTTP_URL", "http://localhost:9308")
        ).rstrip("/")
        self.table = (
            str(getattr(settings, "MANTICORE_MEMORY_TABLE", "gia_memory_items")).strip()
            or "gia_memory_items"
        )
        self.timeout_seconds = int(getattr(settings, "MANTICORE_HTTP_TIMEOUT", 5) or 5)
        self._table_cache_key = f"{self.base_url}|{self.table}"

    def _sql(self, query: str) -> dict[str, Any]:
        response = requests.post(
            f"{self.base_url}/sql",
            data={"mode": "raw", "query": query},
            timeout=self.timeout_seconds,
        )
        response.raise_for_status()
        payload = response.json()
        if isinstance(payload, list):
            return payload[0] if payload else {}
        return dict(payload or {})

    def ensure_table(self) -> None:
        last_ready = float(
            self._table_ready_cache.get(self._table_cache_key, 0.0) or 0.0
        )
        if (time.time() - last_ready) <= float(self._table_ready_ttl_seconds):
            return
        self._sql(
            (
                f"CREATE TABLE IF NOT EXISTS {self.table} ("
                "id BIGINT,"
                "memory_uuid STRING,"
                "user_id BIGINT,"
                "conversation_id STRING,"
                "memory_kind STRING,"
                "status STRING,"
                "updated_ts BIGINT,"
                "summary TEXT,"
                "body TEXT"
                ")"
            )
        )
        self._table_ready_cache[self._table_cache_key] = time.time()

    def _doc_id(self, memory_id: str) -> int:
        digest = hashlib.blake2b(
            str(memory_id or "").encode("utf-8"),
            digest_size=8,
        ).digest()
        value = int.from_bytes(digest, byteorder="big", signed=False)
        return max(1, int(value))

    def _escape(self, value: Any) -> str:
        text = str(value or "")
        text = text.replace("\\", "\\\\").replace("'", "\\'")
        return text
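Manticore document ids are integers, so the UUID primary key is mapped to a stable positive 64-bit id through an 8-byte BLAKE2b digest; the same memory always hashes to the same doc id, which is what lets `REPLACE INTO` behave as an upsert and `DELETE ... WHERE id=` find the row again. A standalone check of that mapping (copied from `_doc_id` above):

```python
import hashlib


def doc_id(memory_id: str) -> int:
    # Same mapping as ManticoreMemorySearchBackend._doc_id above.
    digest = hashlib.blake2b(
        str(memory_id or "").encode("utf-8"),
        digest_size=8,
    ).digest()
    return max(1, int.from_bytes(digest, byteorder="big", signed=False))


a = doc_id("3f0c9a2e-1111-4f66-9d7e-000000000001")  # hypothetical UUID
b = doc_id("3f0c9a2e-1111-4f66-9d7e-000000000001")
print(a == b, 1 <= a < 2**64)  # deterministic, fits in 64 bits
```

The `max(1, ...)` guard only matters for the astronomically unlikely all-zero digest, since Manticore rejects id 0.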
    def upsert(self, item: MemoryItem) -> None:
        self.ensure_table()
        memory_id = str(item.id)
        doc_id = self._doc_id(memory_id)
        summary = _flatten_to_text(item.content)[:280]
        body = _flatten_to_text(item.content)
        updated_ts = int(item.updated_at.timestamp() * 1000)
        query = (
            f"REPLACE INTO {self.table} "
            "(id,memory_uuid,user_id,conversation_id,memory_kind,status,updated_ts,summary,body) "
            f"VALUES ({doc_id},'{self._escape(memory_id)}',{int(item.user_id)},"
            f"'{self._escape(item.conversation_id)}','{self._escape(item.memory_kind)}',"
            f"'{self._escape(item.status)}',{updated_ts},"
            f"'{self._escape(summary)}','{self._escape(body)}')"
        )
        self._sql(query)

    def _build_upsert_values_clause(self, item: MemoryItem) -> str:
        memory_id = str(item.id)
        doc_id = self._doc_id(memory_id)
        summary = _flatten_to_text(item.content)[:280]
        body = _flatten_to_text(item.content)
        updated_ts = int(item.updated_at.timestamp() * 1000)
        return (
            f"({doc_id},'{self._escape(memory_id)}',{int(item.user_id)},"
            f"'{self._escape(item.conversation_id)}','{self._escape(item.memory_kind)}',"
            f"'{self._escape(item.status)}',{updated_ts},"
            f"'{self._escape(summary)}','{self._escape(body)}')"
        )

    def delete(self, memory_id: str) -> None:
        self.ensure_table()
        doc_id = self._doc_id(memory_id)
        self._sql(f"DELETE FROM {self.table} WHERE id={doc_id}")

    def reindex(
        self,
        *,
        user_id: int | None = None,
        include_statuses: tuple[str, ...] = ("active",),
        limit: int = 2000,
    ) -> dict[str, int]:
        self.ensure_table()
        queryset = MemoryItem.objects.all().order_by("-updated_at")
        if user_id is not None:
            queryset = queryset.filter(user_id=int(user_id))
        if include_statuses:
            queryset = queryset.filter(status__in=list(include_statuses))

        scanned = 0
        indexed = 0
        batch_size = 100
        values: list[str] = []
        for item in queryset[: max(1, int(limit))]:
            scanned += 1
            try:
                values.append(self._build_upsert_values_clause(item))
            except Exception as exc:
                log.warning(
                    "memory-search upsert build failed id=%s err=%s", item.id, exc
                )
                continue
            if len(values) >= batch_size:
                self._sql(
                    f"REPLACE INTO {self.table} "
                    "(id,memory_uuid,user_id,conversation_id,memory_kind,status,updated_ts,summary,body) "
                    f"VALUES {','.join(values)}"
                )
                indexed += len(values)
                values = []
        if values:
            self._sql(
                f"REPLACE INTO {self.table} "
                "(id,memory_uuid,user_id,conversation_id,memory_kind,status,updated_ts,summary,body) "
                f"VALUES {','.join(values)}"
            )
            indexed += len(values)
        return {"scanned": scanned, "indexed": indexed}

    def search(
        self,
        *,
        user_id: int,
        query: str,
        conversation_id: str = "",
        limit: int = 20,
        include_statuses: tuple[str, ...] = ("active",),
    ) -> list[MemorySearchHit]:
        self.ensure_table()
        needle = str(query or "").strip()
        if not needle:
            return []

        where_parts = [f"user_id={int(user_id)}", f"MATCH('{self._escape(needle)}')"]
        if conversation_id:
            where_parts.append(f"conversation_id='{self._escape(conversation_id)}'")
        statuses = [
            str(item or "").strip()
            for item in include_statuses
            if str(item or "").strip()
        ]
        if statuses:
            in_clause = ",".join(f"'{self._escape(item)}'" for item in statuses)
            where_parts.append(f"status IN ({in_clause})")
        where_sql = " AND ".join(where_parts)
        query_sql = (
            f"SELECT memory_uuid,memory_kind,status,conversation_id,updated_ts,summary,WEIGHT() AS score "
            f"FROM {self.table} WHERE {where_sql} ORDER BY score DESC LIMIT {max(1, int(limit))}"
        )
        payload = self._sql(query_sql)
        rows = list(payload.get("data") or [])
        hits = []
        for row in rows:
            item = dict(row or {})
            hits.append(
                MemorySearchHit(
                    memory_id=str(item.get("memory_uuid") or ""),
                    score=float(item.get("score") or 0.0),
                    summary=str(item.get("summary") or ""),
                    payload={
                        "memory_kind": str(item.get("memory_kind") or ""),
                        "status": str(item.get("status") or ""),
                        "conversation_id": str(item.get("conversation_id") or ""),
                        "updated_ts": int(item.get("updated_ts") or 0),
                    },
                )
            )
        return hits
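`reindex` above accumulates per-row `VALUES` clauses and flushes every 100 rows, plus one trailing flush, so reindexing N rows issues roughly N/100 multi-row `REPLACE` statements instead of N single-row ones. The flush pattern in isolation, with a hypothetical `send` callable standing in for `_sql`:

```python
def flush_in_batches(clauses, send, batch_size=100):
    """Mirror of the reindex flush loop: send full batches, then the remainder."""
    indexed = 0
    batch = []
    for clause in clauses:
        batch.append(clause)
        if len(batch) >= batch_size:
            send(list(batch))
            indexed += len(batch)
            batch = []
    if batch:  # trailing partial batch
        send(list(batch))
        indexed += len(batch)
    return indexed


calls = []
total = flush_in_batches([f"(v{i})" for i in range(250)], calls.append)
print(total, [len(c) for c in calls])  # 250 [100, 100, 50]
```

Rows whose clause fails to build are logged and skipped, which is why `scanned` and `indexed` can legitimately differ.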
def get_memory_search_backend() -> BaseMemorySearchBackend:
    backend = str(getattr(settings, "MEMORY_SEARCH_BACKEND", "django")).strip().lower()
    if backend == "manticore":
        return ManticoreMemorySearchBackend()
    return DjangoMemorySearchBackend()


def backend_status() -> dict[str, Any]:
    backend = get_memory_search_backend()
    status = {
        "backend": getattr(backend, "name", "unknown"),
        "ok": True,
        "ts": int(time.time() * 1000),
    }
    if isinstance(backend, ManticoreMemorySearchBackend):
        try:
            backend.ensure_table()
            status["manticore_http_url"] = backend.base_url
            status["manticore_table"] = backend.table
        except Exception as exc:
            status["ok"] = False
            status["error"] = str(exc)
    return status
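Backend selection is a plain settings switch: anything other than "manticore" (including an unset value) falls back to the Django scan backend, so a misconfigured setting degrades to slower search rather than an error. A sketch of that dispatch with minimal stand-in classes:

```python
class DjangoBackend:
    name = "django"


class ManticoreBackend:
    name = "manticore"


def get_backend(setting_value=None):
    # Mirrors get_memory_search_backend: unknown/missing values fall back to django.
    backend = str(setting_value or "django").strip().lower()
    if backend == "manticore":
        return ManticoreBackend()
    return DjangoBackend()


print(get_backend("MANTICORE ").name)     # manticore (trimmed, case-insensitive)
print(get_backend("elasticsearch").name)  # django (unknown value falls back)
```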
@@ -1,20 +1,68 @@
+import time
+
+from asgiref.sync import sync_to_async
+from django.utils import timezone
 from openai import AsyncOpenAI
 
-from core.models import AI
+from core.models import AI, AIRunLog
+
+
+def _prompt_metrics(prompt: list[dict]) -> tuple[int, int]:
+    rows = list(prompt or [])
+    message_count = len(rows)
+    prompt_chars = 0
+    for row in rows:
+        content = ""
+        if isinstance(row, dict):
+            content = str(row.get("content") or "")
+        else:
+            content = str(row or "")
+        prompt_chars += len(content)
+    return message_count, prompt_chars
 
 
 async def run_prompt(
-    prompt: list[str],
+    prompt: list[dict],
     ai: AI,
+    operation: str = "",
 ):
+    started_monotonic = time.perf_counter()
+    message_count, prompt_chars = _prompt_metrics(prompt)
+    run_log = await sync_to_async(AIRunLog.objects.create)(
+        user=ai.user,
+        ai=ai,
+        operation=str(operation or "").strip(),
+        model=str(ai.model or ""),
+        base_url=str(ai.base_url or ""),
+        status="running",
+        message_count=message_count,
+        prompt_chars=prompt_chars,
+    )
     cast = {"api_key": ai.api_key}
     if ai.base_url is not None:
         cast["base_url"] = ai.base_url
     client = AsyncOpenAI(**cast)
-    response = await client.chat.completions.create(
-        model=ai.model,
-        messages=prompt,
-    )
-    content = response.choices[0].message.content
-
-    return content
+    try:
+        response = await client.chat.completions.create(
+            model=ai.model,
+            messages=prompt,
+        )
+        content = response.choices[0].message.content
+        duration_ms = int((time.perf_counter() - started_monotonic) * 1000)
+        await sync_to_async(AIRunLog.objects.filter(id=run_log.id).update)(
+            status="ok",
+            response_chars=len(str(content or "")),
+            finished_at=timezone.now(),
+            duration_ms=duration_ms,
+            error="",
+        )
+        return content
+    except Exception as exc:
+        duration_ms = int((time.perf_counter() - started_monotonic) * 1000)
+        await sync_to_async(AIRunLog.objects.filter(id=run_log.id).update)(
+            status="failed",
+            finished_at=timezone.now(),
+            duration_ms=duration_ms,
+            error=str(exc),
+        )
+        raise
@@ -1,6 +1,6 @@
 from django.utils import timezone
-from openai import AsyncOpenAI
 
+from core.messaging import ai as ai_runner
 from core.models import AI, Manipulation, Person
@@ -50,17 +50,11 @@ def generate_prompt(msg: dict, person: Person, manip: Manipulation, chat_history
 
 
 async def run_context_prompt(
-    prompt: list[str],
+    prompt: list[dict],
     ai: AI,
 ):
-    cast = {"api_key": ai.api_key}
-    if ai.base_url is not None:
-        cast["base_url"] = ai.base_url
-    client = AsyncOpenAI(**cast)
-    response = await client.chat.completions.create(
-        model=ai.model,
-        messages=prompt,
+    return await ai_runner.run_prompt(
+        prompt,
+        ai,
+        operation="analysis_context",
     )
-    content = response.choices[0].message.content
-
-    return content
@@ -1,8 +1,13 @@
+import time
+import uuid
+
 from asgiref.sync import sync_to_async
 from django.conf import settings
 
+from core.events.ledger import append_event
 from core.messaging.utils import messages_to_string
 from core.models import ChatSession, Message, QueuedMessage
+from core.observability.tracing import ensure_trace_id
 from core.util import logs
 
 log = logs.get_logger("history")
@@ -144,7 +149,22 @@ async def get_chat_session(user, identifier):
     return chat_session
 
 
-async def store_message(session, sender, text, ts, outgoing=False):
+async def store_message(
+    session,
+    sender,
+    text,
+    ts,
+    outgoing=False,
+    source_service="",
+    source_message_id="",
+    source_chat_id="",
+    reply_to=None,
+    reply_source_service="",
+    reply_source_message_id="",
+    message_meta=None,
+    trace_id="",
+    raw_payload=None,
+):
     log.debug("Storing message for session=%s outgoing=%s", session.id, outgoing)
     msg = await sync_to_async(Message.objects.create)(
         user=session.user,
@@ -154,12 +174,57 @@ async def store_message(session, sender, text, ts, outgoing=False):
         ts=ts,
         delivered_ts=ts,
         custom_author="USER" if outgoing else None,
+        source_service=(source_service or None),
+        source_message_id=str(source_message_id or "").strip() or None,
+        source_chat_id=str(source_chat_id or "").strip() or None,
+        reply_to=reply_to,
+        reply_source_service=str(reply_source_service or "").strip() or None,
+        reply_source_message_id=str(reply_source_message_id or "").strip() or None,
+        message_meta=dict(message_meta or {}),
     )
+    try:
+        await append_event(
+            user=session.user,
+            session=session,
+            ts=int(ts),
+            event_type="message_created",
+            direction="out" if bool(outgoing) else "in",
+            actor_identifier=str(sender or ""),
+            origin_transport=str(source_service or ""),
+            origin_message_id=str(source_message_id or ""),
+            origin_chat_id=str(source_chat_id or ""),
+            payload={
+                "message_id": str(msg.id),
+                "text": str(text or ""),
+                "reply_source_service": str(reply_source_service or ""),
+                "reply_source_message_id": str(reply_source_message_id or ""),
+                "outgoing": bool(outgoing),
+            },
+            raw_payload=dict(raw_payload or {}),
+            trace_id=ensure_trace_id(trace_id, message_meta or {}),
+        )
+    except Exception as exc:
+        log.warning("Event ledger append failed for message=%s: %s", msg.id, exc)
+
     return msg
 
 
-async def store_own_message(session, text, ts, manip=None, queue=False):
+async def store_own_message(
+    session,
+    text,
+    ts,
+    manip=None,
+    queue=False,
+    source_service="",
+    source_message_id="",
+    source_chat_id="",
+    reply_to=None,
+    reply_source_service="",
+    reply_source_message_id="",
+    message_meta=None,
+    trace_id="",
+    raw_payload=None,
+):
     log.debug("Storing own message for session=%s queue=%s", session.id, queue)
     cast = {
         "user": session.user,
@@ -168,6 +233,13 @@ async def store_own_message(session, text, ts, manip=None, queue=False):
         "text": text,
         "ts": ts,
         "delivered_ts": ts,
+        "source_service": (source_service or None),
+        "source_message_id": str(source_message_id or "").strip() or None,
+        "source_chat_id": str(source_chat_id or "").strip() or None,
+        "reply_to": reply_to,
+        "reply_source_service": str(reply_source_service or "").strip() or None,
+        "reply_source_message_id": str(reply_source_message_id or "").strip() or None,
+        "message_meta": dict(message_meta or {}),
     }
     if queue:
         msg_object = QueuedMessage
@@ -178,6 +250,32 @@ async def store_own_message(session, text, ts, manip=None, queue=False):
     msg = await sync_to_async(msg_object.objects.create)(
         **cast,
     )
+    if msg_object is Message:
+        try:
+            await append_event(
+                user=session.user,
+                session=session,
+                ts=int(ts),
+                event_type="message_created",
+                direction="out",
+                actor_identifier="BOT",
+                origin_transport=str(source_service or ""),
+                origin_message_id=str(source_message_id or ""),
+                origin_chat_id=str(source_chat_id or ""),
+                payload={
+                    "message_id": str(msg.id),
+                    "text": str(text or ""),
+                    "queued": bool(queue),
+                    "reply_source_service": str(reply_source_service or ""),
+                    "reply_source_message_id": str(reply_source_message_id or ""),
+                },
+                raw_payload=dict(raw_payload or {}),
+                trace_id=ensure_trace_id(trace_id, message_meta or {}),
+            )
+        except Exception as exc:
+            log.warning(
+                "Event ledger append failed for own message=%s: %s", msg.id, exc
+            )
+
    return msg
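Both store paths persist the `Message` first and only then attempt the ledger append inside try/except, so a ledger outage degrades to a warning instead of losing the message. The pattern in isolation, with hypothetical `store` and `append_event` callables:

```python
import logging

log = logging.getLogger("history-sketch")


def store_with_best_effort_event(store, append_event, payload):
    """Persist first; treat the event ledger as best-effort."""
    msg_id = store(payload)
    try:
        append_event({"message_id": msg_id, **payload})
    except Exception as exc:
        # Mirrors the warning-and-continue handling above.
        log.warning("Event ledger append failed for message=%s: %s", msg_id, exc)
    return msg_id


def broken_ledger(event):
    raise RuntimeError("ledger down")


print(store_with_best_effort_event(lambda p: "m1", broken_ledger, {"text": "hi"}))
# m1 (message still stored despite the ledger failure)
```

The trade-off is at-most-once ledger semantics: a crash between the two writes leaves a stored message with no event row.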
@@ -194,6 +292,8 @@ async def apply_read_receipts(
     source_service="signal",
     read_by_identifier="",
     payload=None,
+    trace_id="",
+    receipt_event_type="read_receipt",
 ):
     """
     Persist delivery/read metadata for one identifier's messages.
@@ -211,6 +311,9 @@ async def apply_read_receipts(
         read_at = int(read_ts) if read_ts else None
     except Exception:
         read_at = None
+    normalized_event_type = str(receipt_event_type or "read_receipt").strip().lower()
+    if normalized_event_type not in {"read_receipt", "delivery_receipt"}:
+        normalized_event_type = "read_receipt"
 
     rows = await sync_to_async(list)(
         Message.objects.filter(
@@ -225,13 +328,25 @@ async def apply_read_receipts(
         if message.delivered_ts is None:
             message.delivered_ts = read_at or message.ts
             dirty.append("delivered_ts")
-        if read_at and (message.read_ts is None or read_at > message.read_ts):
+        if (
+            normalized_event_type == "read_receipt"
+            and read_at
+            and (message.read_ts is None or read_at > message.read_ts)
+        ):
             message.read_ts = read_at
             dirty.append("read_ts")
-        if source_service and message.read_source_service != source_service:
+        if (
+            normalized_event_type == "read_receipt"
+            and source_service
+            and message.read_source_service != source_service
+        ):
             message.read_source_service = source_service
             dirty.append("read_source_service")
-        if read_by_identifier and message.read_by_identifier != read_by_identifier:
+        if (
+            normalized_event_type == "read_receipt"
+            and read_by_identifier
+            and message.read_by_identifier != read_by_identifier
+        ):
             message.read_by_identifier = read_by_identifier
             dirty.append("read_by_identifier")
         if payload:
@@ -242,6 +357,34 @@ async def apply_read_receipts(
         if dirty:
             await sync_to_async(message.save)(update_fields=dirty)
             updated += 1
+        try:
+            await append_event(
+                user=user,
+                session=message.session,
+                ts=int(read_at or message.ts or 0),
+                event_type=normalized_event_type,
+                direction="system",
+                actor_identifier=str(read_by_identifier or ""),
+                origin_transport=str(source_service or ""),
+                origin_message_id=str(message.source_message_id or message.id),
+                origin_chat_id=str(message.source_chat_id or ""),
+                payload={
+                    "message_id": str(message.id),
+                    "message_ts": int(message.ts or 0),
+                    "read_ts": int(read_at or 0),
+                    "receipt_event_type": normalized_event_type,
+                    "read_by_identifier": str(read_by_identifier or ""),
+                    "timestamps": [int(v) for v in ts_values],
+                },
+                raw_payload=dict(payload or {}),
+                trace_id=ensure_trace_id(trace_id, payload or {}),
+            )
+        except Exception as exc:
+            log.warning(
+                "Event ledger append failed for receipt message=%s: %s",
+                message.id,
+                exc,
+            )
     return updated
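Delivery receipts now flow through the same function but only bump `delivered_ts`; every `read_*` field is guarded by `normalized_event_type == "read_receipt"`, and unknown types coerce to `read_receipt`. The normalization step alone (same coercion as above):

```python
def normalize_receipt_event_type(value) -> str:
    # Same coercion as apply_read_receipts above.
    normalized = str(value or "read_receipt").strip().lower()
    if normalized not in {"read_receipt", "delivery_receipt"}:
        normalized = "read_receipt"
    return normalized


print(normalize_receipt_event_type(" Delivery_Receipt "))  # delivery_receipt
print(normalize_receipt_event_type("typing"))              # read_receipt
```

Coercing unknown values to `read_receipt` (rather than rejecting them) keeps the old call sites, which never passed a type, behaving exactly as before.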
@@ -256,6 +399,8 @@ async def apply_reaction(
     actor="",
     remove=False,
     payload=None,
+    trace_id="",
+    target_author="",
 ):
     log.debug(
         "reaction-bridge history-apply start user=%s person_identifier=%s target_message_id=%s target_ts=%s source=%s actor=%s remove=%s emoji=%s",
@@ -274,11 +419,29 @@ async def apply_reaction(
     ).select_related("session")
 
     target = None
+    match_strategy = "none"
+    target_author_value = str(target_author or "").strip()
     target_uuid = str(target_message_id or "").strip()
     if target_uuid:
-        target = await sync_to_async(
-            lambda: queryset.filter(id=target_uuid).order_by("-ts").first()
-        )()
+        is_uuid = True
+        try:
+            uuid.UUID(str(target_uuid))
+        except Exception:
+            is_uuid = False
+        if is_uuid:
+            target = await sync_to_async(
+                lambda: queryset.filter(id=target_uuid).order_by("-ts").first()
+            )()
+            if target is not None:
+                match_strategy = "local_message_id"
+        if target is None:
+            target = await sync_to_async(
+                lambda: queryset.filter(source_message_id=target_uuid)
+                .order_by("-ts")
+                .first()
+            )()
+            if target is not None:
+                match_strategy = "source_message_id"
 
     if target is None:
         try:
@@ -286,11 +449,64 @@ async def apply_reaction(
         except Exception:
             ts_value = 0
         if ts_value > 0:
+            # Signal reactions target source timestamp; prefer deterministic exact matches.
+            exact_candidates = await sync_to_async(list)(
+                queryset.filter(source_message_id=str(ts_value)).order_by("-ts")[:20]
+            )
+            if target_author_value and exact_candidates:
+                filtered = [
+                    row
+                    for row in exact_candidates
+                    if str(row.sender_uuid or "").strip() == target_author_value
+                ]
+                if filtered:
+                    exact_candidates = filtered
+            if exact_candidates:
+                target = exact_candidates[0]
+                match_strategy = "exact_source_message_id_ts"
+                log.debug(
+                    "reaction-bridge history-apply exact-source-ts target_ts=%s picked_message_id=%s candidates=%s",
+                    ts_value,
+                    str(target.id),
+                    len(exact_candidates),
+                )
+
+        if target is None and ts_value > 0:
+            strict_ts_rows = await sync_to_async(list)(
+                queryset.filter(ts=ts_value).order_by("-id")[:20]
+            )
+            if target_author_value and strict_ts_rows:
+                filtered = [
+                    row
+                    for row in strict_ts_rows
+                    if str(row.sender_uuid or "").strip() == target_author_value
+                ]
+                if filtered:
+                    strict_ts_rows = filtered
+            if strict_ts_rows:
+                target = strict_ts_rows[0]
+                match_strategy = "strict_ts_match"
+                log.debug(
+                    "reaction-bridge history-apply strict-ts target_ts=%s picked_message_id=%s candidates=%s",
+                    ts_value,
+                    str(target.id),
+                    len(strict_ts_rows),
+                )
+
+        if target is None and ts_value > 0:
+            lower = ts_value - 10_000
+            upper = ts_value + 10_000
             window_rows = await sync_to_async(list)(
                 queryset.filter(ts__gte=lower, ts__lte=upper).order_by("ts")[:200]
             )
+            if target_author_value and window_rows:
+                author_rows = [
+                    row
+                    for row in window_rows
+                    if str(row.sender_uuid or "").strip() == target_author_value
+                ]
+                if author_rows:
+                    window_rows = author_rows
             if window_rows:
                 target = min(
                     window_rows,
@@ -306,6 +522,7 @@ async def apply_reaction(
                     int(target.ts or 0),
                     len(window_rows),
                 )
+                match_strategy = "nearest_ts_window"
 
     if target is None:
         log.warning(
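When neither id-based strategy matches, the final fallback picks the message whose `ts` is closest to the reaction's target timestamp within a ±10s window, after optionally narrowing to rows from the reacting author. The distance selection in isolation, with plain dicts standing in for `Message` rows:

```python
def nearest_in_window(rows, ts_value, window_ms=10_000):
    """Pick the row with ts closest to ts_value, or None outside ±window_ms."""
    candidates = [r for r in rows if abs(r["ts"] - ts_value) <= window_ms]
    if not candidates:
        return None
    return min(candidates, key=lambda r: abs(r["ts"] - ts_value))


rows = [
    {"id": "a", "ts": 1_000},
    {"id": "b", "ts": 9_500},
    {"id": "c", "ts": 30_000},  # outside the +/-10s window for ts=10_000
]
picked = nearest_in_window(rows, ts_value=10_000)
print(picked["id"])  # b (500ms away beats 9s away)
```

Ordering the strategies from deterministic (exact id) to fuzzy (nearest timestamp) means the window match can only fire when nothing better exists, and `match_strategy` records which rung was used.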
@@ -318,10 +535,13 @@ async def apply_reaction(
         return None
 
     reactions = list((target.receipt_payload or {}).get("reactions") or [])
+    normalized_source = str(source_service or "").strip().lower()
+    normalized_actor = str(actor or "").strip()
+    normalized_emoji = str(emoji or "").strip()
     reaction_key = (
-        str(source_service or "").strip().lower(),
-        str(actor or "").strip(),
-        str(emoji or "").strip(),
+        normalized_source,
+        normalized_actor,
+        normalized_emoji,
     )
 
     merged = []
@@ -333,31 +553,94 @@ async def apply_reaction(
             str(row.get("actor") or "").strip(),
             str(row.get("emoji") or "").strip(),
         )
+        if not row_key[2] and bool(row.get("removed")):
+            # Keep malformed remove rows out of active reaction set.
+            continue
         if row_key == reaction_key:
             row["removed"] = bool(remove)
             row["updated_at"] = int(target_ts or target.ts or 0)
             row["payload"] = dict(payload or {})
+            row["match_strategy"] = match_strategy
             merged.append(row)
             replaced = True
             continue
         merged.append(row)
 
-    if not replaced:
+    if not replaced and (normalized_emoji or not bool(remove)):
         merged.append(
             {
-                "emoji": str(emoji or ""),
-                "source_service": str(source_service or ""),
-                "actor": str(actor or ""),
+                "emoji": normalized_emoji,
+                "source_service": normalized_source,
+                "actor": normalized_actor,
                 "removed": bool(remove),
                 "updated_at": int(target_ts or target.ts or 0),
                 "payload": dict(payload or {}),
+                "match_strategy": match_strategy,
             }
         )
+    elif not replaced and bool(remove):
+        receipt_payload = dict(target.receipt_payload or {})
+        reaction_events = list(receipt_payload.get("reaction_events") or [])
+        reaction_events.append(
+            {
+                "emoji": normalized_emoji,
+                "source_service": normalized_source,
+                "actor": normalized_actor,
+                "removed": True,
+                "updated_at": int(target_ts or target.ts or 0),
+                "payload": dict(payload or {}),
+                "match_strategy": match_strategy,
+                "skip_reason": "remove_without_emoji_or_match",
+            }
+        )
+        if len(reaction_events) > 200:
+            reaction_events = reaction_events[-200:]
+        receipt_payload["reaction_events"] = reaction_events
+        target.receipt_payload = receipt_payload
+        await sync_to_async(target.save)(update_fields=["receipt_payload"])
+        log.debug(
+            "reaction-bridge history-apply remove-without-match message_id=%s strategy=%s",
+            str(target.id),
+            match_strategy,
+        )
+        return target
 
     receipt_payload = dict(target.receipt_payload or {})
     receipt_payload["reactions"] = merged
+    if match_strategy:
+        receipt_payload["reaction_last_match_strategy"] = str(match_strategy)
     target.receipt_payload = receipt_payload
     await sync_to_async(target.save)(update_fields=["receipt_payload"])
+    try:
+        await append_event(
+            user=user,
+            session=target.session,
+            ts=int(target_ts or target.ts or 0),
+            event_type="reaction_removed" if bool(remove) else "reaction_added",
+            direction="system",
+            actor_identifier=str(actor or ""),
+            origin_transport=str(source_service or ""),
+            origin_message_id=str(target.source_message_id or target.id),
+            origin_chat_id=str(target.source_chat_id or ""),
+            payload={
+                "message_id": str(target.id),
+                "target_message_id": str(target_message_id or target.id),
+                "target_ts": int(target_ts or target.ts or 0),
+                "emoji": str(emoji or ""),
+                "remove": bool(remove),
+                "source_service": normalized_source,
+                "actor": normalized_actor,
+                "match_strategy": match_strategy,
+            },
+            raw_payload=dict(payload or {}),
+            trace_id=ensure_trace_id(trace_id, payload or {}),
+        )
+    except Exception as exc:
+        log.warning(
+            "Event ledger append failed for reaction on message=%s: %s",
+            target.id,
+            exc,
+        )
     log.debug(
         "reaction-bridge history-apply ok message_id=%s reactions=%s",
         str(target.id),
@@ -366,6 +649,277 @@ async def apply_reaction(
    return target


async def _resolve_message_target(
    user,
    identifier,
    *,
    target_message_id="",
    target_ts=0,
    target_author="",
):
    queryset = Message.objects.filter(
        user=user,
        session__identifier=identifier,
    ).select_related("session")

    target = None
    match_strategy = "none"
    target_author_value = str(target_author or "").strip()
    target_uuid = str(target_message_id or "").strip()
    if target_uuid:
        is_uuid = True
        try:
            uuid.UUID(str(target_uuid))
        except Exception:
            is_uuid = False
        if is_uuid:
            target = await sync_to_async(
                lambda: queryset.filter(id=target_uuid).order_by("-ts").first()
            )()
            if target is not None:
                match_strategy = "local_message_id"
        if target is None:
            target = await sync_to_async(
                lambda: queryset.filter(source_message_id=target_uuid)
                .order_by("-ts")
                .first()
            )()
            if target is not None:
                match_strategy = "source_message_id"

    if target is None:
        try:
            ts_value = int(target_ts or 0)
        except Exception:
            ts_value = 0
        if ts_value > 0:
            exact_candidates = await sync_to_async(list)(
                queryset.filter(source_message_id=str(ts_value)).order_by("-ts")[:20]
            )
            if target_author_value and exact_candidates:
                filtered = [
                    row
                    for row in exact_candidates
                    if str(row.sender_uuid or "").strip() == target_author_value
                ]
                if filtered:
                    exact_candidates = filtered
            if exact_candidates:
                target = exact_candidates[0]
                match_strategy = "exact_source_message_id_ts"

        if target is None and ts_value > 0:
            strict_ts_rows = await sync_to_async(list)(
                queryset.filter(ts=ts_value).order_by("-id")[:20]
            )
            if target_author_value and strict_ts_rows:
                filtered = [
                    row
                    for row in strict_ts_rows
                    if str(row.sender_uuid or "").strip() == target_author_value
                ]
                if filtered:
                    strict_ts_rows = filtered
            if strict_ts_rows:
                target = strict_ts_rows[0]
                match_strategy = "strict_ts_match"

        if target is None and ts_value > 0:
            lower = ts_value - 10_000
            upper = ts_value + 10_000
            window_rows = await sync_to_async(list)(
                queryset.filter(ts__gte=lower, ts__lte=upper).order_by("ts")[:200]
            )
            if target_author_value and window_rows:
                author_rows = [
                    row
                    for row in window_rows
                    if str(row.sender_uuid or "").strip() == target_author_value
                ]
                if author_rows:
                    window_rows = author_rows
            if window_rows:
                target = min(
                    window_rows,
                    key=lambda row: (
                        abs(int(row.ts or 0) - ts_value),
                        -int(row.ts or 0),
                    ),
                )
                match_strategy = "nearest_ts_window"

    return target, match_strategy
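The `nearest_ts_window` branch above picks the row whose `ts` is closest to the target timestamp, preferring the newer row on a tie. A standalone sketch of that selection with plain dicts instead of ORM rows (`pick_nearest` is an illustrative helper, not a function in this codebase; the ±10 s window and the `min` key mirror the code above):

```python
# Simplified sketch of the nearest_ts_window selection used by
# _resolve_message_target, over plain dicts instead of Message rows.
def pick_nearest(rows, ts_value, window_ms=10_000):
    # Keep only rows inside the +/- window around the target timestamp.
    candidates = [
        row
        for row in rows
        if ts_value - window_ms <= int(row.get("ts") or 0) <= ts_value + window_ms
    ]
    if not candidates:
        return None
    # Closest timestamp wins; on a tie the newer (larger ts) row wins,
    # because -ts sorts descending as the secondary key.
    return min(
        candidates,
        key=lambda row: (
            abs(int(row.get("ts") or 0) - ts_value),
            -int(row.get("ts") or 0),
        ),
    )


rows = [{"ts": 1000, "id": "a"}, {"ts": 5000, "id": "b"}, {"ts": 9000, "id": "c"}]
print(pick_nearest(rows, 6000)["id"])  # "b" (|5000 - 6000| is the smallest distance)
```

The secondary `-ts` key means that when two rows are equidistant from the target, the later one is chosen, which matches the "-ts"-first ordering used by the other strategies.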


async def apply_message_edit(
    user,
    identifier,
    *,
    target_message_id="",
    target_ts=0,
    new_text="",
    source_service="",
    actor="",
    payload=None,
    trace_id="",
    target_author="",
):
    target, match_strategy = await _resolve_message_target(
        user,
        identifier,
        target_message_id=target_message_id,
        target_ts=target_ts,
        target_author=target_author,
    )
    if target is None:
        log.warning(
            "edit-sync history-apply miss user=%s person_identifier=%s target_message_id=%s target_ts=%s",
            getattr(user, "id", "-"),
            getattr(identifier, "id", "-"),
            str(target_message_id or "") or "-",
            int(target_ts or 0),
        )
        return None

    old_text = str(target.text or "")
    updated_text = str(new_text or "")
    event_ts = int(target_ts or target.ts or int(time.time() * 1000))
    receipt_payload = dict(target.receipt_payload or {})
    edit_history = list(receipt_payload.get("edit_history") or [])
    edit_history.append(
        {
            "edited_ts": int(event_ts),
            "source_service": str(source_service or "").strip().lower(),
            "actor": str(actor or "").strip(),
            "previous_text": old_text,
            "new_text": updated_text,
            "match_strategy": str(match_strategy or ""),
            "payload": dict(payload or {}),
        }
    )
    if len(edit_history) > 200:
        edit_history = edit_history[-200:]
    receipt_payload["edit_history"] = edit_history
    receipt_payload["last_edited_ts"] = int(event_ts)
    receipt_payload["edit_count"] = len(edit_history)
    target.receipt_payload = receipt_payload

    update_fields = ["receipt_payload"]
    if old_text != updated_text:
        target.text = updated_text
        update_fields.append("text")
    await sync_to_async(target.save)(update_fields=update_fields)
    try:
        await append_event(
            user=user,
            session=target.session,
            ts=int(event_ts),
            event_type="message_edited",
            direction="system",
            actor_identifier=str(actor or ""),
            origin_transport=str(source_service or ""),
            origin_message_id=str(target.source_message_id or target.id),
            origin_chat_id=str(target.source_chat_id or ""),
            payload={
                "message_id": str(target.id),
                "target_message_id": str(target_message_id or target.id),
                "target_ts": int(target_ts or target.ts or 0),
                "old_text": old_text,
                "new_text": updated_text,
                "source_service": str(source_service or "").strip().lower(),
                "actor": str(actor or ""),
                "match_strategy": str(match_strategy or ""),
            },
            raw_payload=dict(payload or {}),
            trace_id=ensure_trace_id(trace_id, payload or {}),
        )
    except Exception as exc:
        log.warning(
            "Event ledger append failed for message edit message=%s: %s",
            target.id,
            exc,
        )
    return target
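`apply_message_edit` caps `edit_history` at the 200 most recent entries before saving, the same idiom used for `reaction_events` and `delete_events`. The capping pattern in isolation (`bounded_append` is an illustrative helper, not part of the codebase; a list slice is used rather than a deque so the stored value stays JSON-serializable):

```python
# The bounded-append idiom used for edit_history / reaction_events /
# delete_events: append, then keep only the most recent `cap` entries.
def bounded_append(history, entry, cap=200):
    history = list(history or [])
    history.append(entry)
    if len(history) > cap:
        history = history[-cap:]
    return history


log = []
for i in range(5):
    log = bounded_append(log, {"edited_ts": i}, cap=3)
print([row["edited_ts"] for row in log])  # [2, 3, 4]
```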


async def apply_message_delete(
    user,
    identifier,
    *,
    target_message_id="",
    target_ts=0,
    source_service="",
    actor="",
    payload=None,
    trace_id="",
    target_author="",
):
    target, match_strategy = await _resolve_message_target(
        user,
        identifier,
        target_message_id=target_message_id,
        target_ts=target_ts,
        target_author=target_author,
    )
    if target is None:
        log.warning(
            "delete-sync history-apply miss user=%s person_identifier=%s target_message_id=%s target_ts=%s",
            getattr(user, "id", "-"),
            getattr(identifier, "id", "-"),
            str(target_message_id or "") or "-",
            int(target_ts or 0),
        )
        return None

    event_ts = int(target_ts or target.ts or int(time.time() * 1000))
    deleted_row = {
        "deleted_ts": int(event_ts),
        "source_service": str(source_service or "").strip().lower(),
        "actor": str(actor or "").strip(),
        "match_strategy": str(match_strategy or ""),
        "payload": dict(payload or {}),
    }
    receipt_payload = dict(target.receipt_payload or {})
    delete_events = list(receipt_payload.get("delete_events") or [])
    delete_events.append(dict(deleted_row))
    if len(delete_events) > 200:
        delete_events = delete_events[-200:]
    receipt_payload["delete_events"] = delete_events
    receipt_payload["deleted"] = deleted_row
    receipt_payload["is_deleted"] = True
    target.receipt_payload = receipt_payload
    await sync_to_async(target.save)(update_fields=["receipt_payload"])
    try:
        await append_event(
            user=user,
            session=target.session,
            ts=int(event_ts),
            event_type="message_deleted",
            direction="system",
            actor_identifier=str(actor or ""),
            origin_transport=str(source_service or ""),
            origin_message_id=str(target.source_message_id or target.id),
            origin_chat_id=str(target.source_chat_id or ""),
            payload={
                "message_id": str(target.id),
                "target_message_id": str(target_message_id or target.id),
                "target_ts": int(target_ts or target.ts or 0),
                "source_service": str(source_service or "").strip().lower(),
                "actor": str(actor or ""),
                "match_strategy": str(match_strategy or ""),
            },
            raw_payload=dict(payload or {}),
            trace_id=ensure_trace_id(trace_id, payload or {}),
        )
    except Exception as exc:
        log.warning(
            "Event ledger append failed for message delete message=%s: %s",
            target.id,
            exc,
        )
    return target


def _iter_bridge_refs(receipt_payload, source_service):
    payload = dict(receipt_payload or {})
    refs = payload.get("bridge_refs") or {}
462
core/messaging/reply_sync.py
Normal file
@@ -0,0 +1,462 @@
from __future__ import annotations

import re
from typing import Any

from asgiref.sync import sync_to_async

from core.messaging import history
from core.models import Message


def _as_dict(value: Any) -> dict[str, Any]:
    return dict(value) if isinstance(value, dict) else {}


def _pluck(data: Any, *path: str):
    cur = data
    for key in path:
        if isinstance(cur, dict):
            cur = cur.get(key)
            continue
        if hasattr(cur, key):
            cur = getattr(cur, key)
            continue
        return None
    return cur


def _clean(value: Any) -> str:
    return str(value or "").strip()


def _find_origin_tag(value: Any, depth: int = 0) -> str:
    if depth > 4:
        return ""
    if isinstance(value, dict):
        direct = _clean(value.get("origin_tag"))
        if direct:
            return direct
        for key in ("metadata", "meta", "message_meta", "contextInfo", "context_info"):
            nested = _find_origin_tag(value.get(key), depth + 1)
            if nested:
                return nested
        for nested_value in value.values():
            nested = _find_origin_tag(nested_value, depth + 1)
            if nested:
                return nested
        return ""
    if isinstance(value, list):
        for item in value:
            nested = _find_origin_tag(item, depth + 1)
            if nested:
                return nested
    return ""
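`_find_origin_tag` walks nested dicts and lists looking for an `origin_tag` marker, checking the well-known metadata containers first and refusing to recurse more than four levels deep. A self-contained copy with a sample payload (the payload shape is illustrative, not taken from real traffic):

```python
def _clean(value):
    return str(value or "").strip()


# Depth-limited search for an "origin_tag" marker anywhere in a nested
# payload; known metadata containers are checked before other values.
def find_origin_tag(value, depth=0):
    if depth > 4:
        return ""
    if isinstance(value, dict):
        direct = _clean(value.get("origin_tag"))
        if direct:
            return direct
        for key in ("metadata", "meta", "message_meta", "contextInfo", "context_info"):
            nested = find_origin_tag(value.get(key), depth + 1)
            if nested:
                return nested
        for nested_value in value.values():
            nested = find_origin_tag(nested_value, depth + 1)
            if nested:
                return nested
        return ""
    if isinstance(value, list):
        for item in value:
            nested = find_origin_tag(item, depth + 1)
            if nested:
                return nested
    return ""


payload = {"envelope": {"dataMessage": {"metadata": {"origin_tag": "mirror:web"}}}}
print(find_origin_tag(payload))  # "mirror:web"
```

The depth cap keeps pathological or self-referential payloads from blowing the stack; anything buried deeper than four wrappers is treated as having no tag.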


def _extract_signal_reply(raw_payload: dict[str, Any]) -> dict[str, str]:
    envelope = _as_dict((raw_payload or {}).get("envelope"))
    sync_message = _as_dict(envelope.get("syncMessage"))
    sent_message = _as_dict(sync_message.get("sentMessage"))
    data_candidates = [
        _as_dict(envelope.get("dataMessage")),
        _as_dict(sent_message.get("message")),
        _as_dict(sent_message),
        _as_dict((raw_payload or {}).get("dataMessage")),
        _as_dict(raw_payload),
    ]
    quote_key_candidates = (
        "id",
        "targetSentTimestamp",
        "targetTimestamp",
        "quotedMessageId",
        "quoted_message_id",
        "quotedMessageID",
        "messageId",
        "message_id",
        "timestamp",
    )
    quote_author_candidates = (
        "author",
        "authorUuid",
        "authorAci",
        "authorNumber",
        "source",
        "sourceNumber",
        "sourceUuid",
    )
    quote_candidates: list[dict[str, Any]] = []
    for data_message in data_candidates:
        if not data_message:
            continue
        direct_quote = _as_dict(data_message.get("quote") or data_message.get("Quote"))
        if direct_quote:
            quote_candidates.append(direct_quote)

        stack = [data_message]
        while stack:
            current = stack.pop()
            if not isinstance(current, dict):
                continue
            for key, value in current.items():
                if isinstance(value, dict):
                    key_text = str(key or "").strip().lower()
                    if "quote" in key_text or "reply" in key_text:
                        quote_candidates.append(_as_dict(value))
                    stack.append(value)
                elif isinstance(value, list):
                    for item in value:
                        if isinstance(item, dict):
                            stack.append(item)

    for quote in quote_candidates:
        quote_id = ""
        for key in quote_key_candidates:
            quote_id = _clean(quote.get(key))
            if quote_id:
                break
        if not quote_id:
            nested = _as_dict(quote.get("id"))
            if nested:
                for key in quote_key_candidates:
                    quote_id = _clean(nested.get(key))
                    if quote_id:
                        break
        if quote_id:
            reply_chat_id = ""
            for key in quote_author_candidates:
                reply_chat_id = _clean(quote.get(key))
                if reply_chat_id:
                    break
            return {
                "reply_source_message_id": quote_id,
                "reply_source_service": "signal",
                "reply_source_chat_id": reply_chat_id,
            }
    return {}
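`_extract_signal_reply` collects every dict stored under a key containing `quote` or `reply`, then probes a list of id fields on each candidate. A reduced sketch of that stack-based scan (`find_quote_id` and the sample envelope are illustrative; only a few of the id keys from the code above are kept):

```python
# Reduced sketch of the quote-candidate scan in _extract_signal_reply:
# walk the payload iteratively, collect dicts stored under keys whose
# name contains "quote" or "reply", then read the first non-empty id.
ID_KEYS = ("id", "targetSentTimestamp", "quotedMessageId", "timestamp")


def find_quote_id(data_message):
    stack = [data_message]
    candidates = []
    while stack:
        current = stack.pop()
        if not isinstance(current, dict):
            continue
        for key, value in current.items():
            if isinstance(value, dict):
                if "quote" in str(key).lower() or "reply" in str(key).lower():
                    candidates.append(value)
                stack.append(value)
    for quote in candidates:
        for key in ID_KEYS:
            value = str(quote.get(key) or "").strip()
            if value:
                return value
    return ""


envelope = {"dataMessage": {"quote": {"id": "1719000000000", "author": "+155500"}}}
print(find_quote_id(envelope))  # "1719000000000"
```

An explicit stack instead of recursion keeps the traversal robust against deeply nested envelopes, at the cost of no fixed visit order.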


def _extract_whatsapp_reply(raw_payload: dict[str, Any]) -> dict[str, str]:
    # Handles common and nested contextInfo/messageContextInfo shapes for
    # WhatsApp payloads (extended text, media, ephemeral, view-once wrappers).
    candidate_paths = (
        ("contextInfo",),
        ("ContextInfo",),
        ("messageContextInfo",),
        ("MessageContextInfo",),
        ("extendedTextMessage", "contextInfo"),
        ("ExtendedTextMessage", "ContextInfo"),
        ("imageMessage", "contextInfo"),
        ("ImageMessage", "ContextInfo"),
        ("videoMessage", "contextInfo"),
        ("VideoMessage", "ContextInfo"),
        ("documentMessage", "contextInfo"),
        ("DocumentMessage", "ContextInfo"),
        ("ephemeralMessage", "message", "contextInfo"),
        ("ephemeralMessage", "message", "extendedTextMessage", "contextInfo"),
        ("viewOnceMessage", "message", "contextInfo"),
        ("viewOnceMessage", "message", "extendedTextMessage", "contextInfo"),
        ("viewOnceMessageV2", "message", "contextInfo"),
        ("viewOnceMessageV2", "message", "extendedTextMessage", "contextInfo"),
        ("viewOnceMessageV2Extension", "message", "contextInfo"),
        ("viewOnceMessageV2Extension", "message", "extendedTextMessage", "contextInfo"),
        # snake_case protobuf dict variants
        ("context_info",),
        ("message_context_info",),
        ("extended_text_message", "context_info"),
        ("image_message", "context_info"),
        ("video_message", "context_info"),
        ("document_message", "context_info"),
        ("ephemeral_message", "message", "context_info"),
        ("ephemeral_message", "message", "extended_text_message", "context_info"),
        ("view_once_message", "message", "context_info"),
        ("view_once_message", "message", "extended_text_message", "context_info"),
        ("view_once_message_v2", "message", "context_info"),
        ("view_once_message_v2", "message", "extended_text_message", "context_info"),
        ("view_once_message_v2_extension", "message", "context_info"),
        (
            "view_once_message_v2_extension",
            "message",
            "extended_text_message",
            "context_info",
        ),
    )
    contexts = []
    for path in candidate_paths:
        row = _as_dict(_pluck(raw_payload, *path))
        if row:
            contexts.append(row)
    # Recursive fallback for unknown wrapper shapes.
    stack = [_as_dict(raw_payload)]
    while stack:
        current = stack.pop()
        if not isinstance(current, dict):
            continue
        if isinstance(current.get("contextInfo"), dict):
            contexts.append(_as_dict(current.get("contextInfo")))
        if isinstance(current.get("ContextInfo"), dict):
            contexts.append(_as_dict(current.get("ContextInfo")))
        if isinstance(current.get("messageContextInfo"), dict):
            contexts.append(_as_dict(current.get("messageContextInfo")))
        if isinstance(current.get("MessageContextInfo"), dict):
            contexts.append(_as_dict(current.get("MessageContextInfo")))
        if isinstance(current.get("context_info"), dict):
            contexts.append(_as_dict(current.get("context_info")))
        if isinstance(current.get("message_context_info"), dict):
            contexts.append(_as_dict(current.get("message_context_info")))
        for value in current.values():
            if isinstance(value, dict):
                stack.append(value)
            elif isinstance(value, list):
                for item in value:
                    if isinstance(item, dict):
                        stack.append(item)

    for context in contexts:
        stanza_id = _clean(
            context.get("stanzaId")
            or context.get("stanzaID")
            or context.get("stanza_id")
            or context.get("StanzaId")
            or context.get("StanzaID")
            or context.get("quotedMessageID")
            or context.get("quotedMessageId")
            or context.get("QuotedMessageID")
            or context.get("QuotedMessageId")
            or _pluck(context, "quotedMessageKey", "id")
            or _pluck(context, "quoted_message_key", "id")
            or _pluck(context, "quotedMessage", "key", "id")
            or _pluck(context, "quoted_message", "key", "id")
        )
        if not stanza_id:
            continue
        participant = _clean(
            context.get("participant")
            or context.get("remoteJid")
            or context.get("chat")
            or context.get("Participant")
            or context.get("RemoteJid")
            or context.get("RemoteJID")
            or context.get("Chat")
        )
        return {
            "reply_source_message_id": stanza_id,
            "reply_source_service": "whatsapp",
            "reply_source_chat_id": participant,
        }
    return {}
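`_extract_whatsapp_reply` probes a long list of known wrapper paths before falling back to the recursive scan. A minimal sketch of the path probe with a few representative shapes (`whatsapp_reply_ref`, the trimmed `PATHS` tuple, and the sample payload are illustrative, not the full candidate list above):

```python
# Minimal sketch of the contextInfo path probe: try a few known wrapper
# paths, then read stanzaId (the quoted message's id) and participant.
def _pluck(data, *path):
    cur = data
    for key in path:
        if not isinstance(cur, dict):
            return None
        cur = cur.get(key)
    return cur


PATHS = (
    ("contextInfo",),
    ("extendedTextMessage", "contextInfo"),
    ("ephemeralMessage", "message", "extendedTextMessage", "contextInfo"),
)


def whatsapp_reply_ref(raw_payload):
    for path in PATHS:
        context = _pluck(raw_payload, *path)
        if not isinstance(context, dict):
            continue
        stanza_id = str(context.get("stanzaId") or "").strip()
        if stanza_id:
            return {
                "reply_source_message_id": stanza_id,
                "reply_source_service": "whatsapp",
                "reply_source_chat_id": str(context.get("participant") or "").strip(),
            }
    return {}


payload = {
    "extendedTextMessage": {
        "contextInfo": {"stanzaId": "3EB0", "participant": "123@s.whatsapp.net"}
    }
}
print(whatsapp_reply_ref(payload)["reply_source_message_id"])  # "3EB0"
```

Enumerating explicit paths first keeps the common cases cheap and deterministic; the recursive scan in the real function only matters for wrapper shapes the path list does not know about.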


def extract_whatsapp_reply_debug(raw_payload: dict[str, Any]) -> dict[str, Any]:
    payload = _as_dict(raw_payload)
    candidate_paths = (
        ("contextInfo",),
        ("ContextInfo",),
        ("messageContextInfo",),
        ("MessageContextInfo",),
        ("extendedTextMessage", "contextInfo"),
        ("ExtendedTextMessage", "ContextInfo"),
        ("imageMessage", "contextInfo"),
        ("ImageMessage", "ContextInfo"),
        ("videoMessage", "contextInfo"),
        ("VideoMessage", "ContextInfo"),
        ("documentMessage", "contextInfo"),
        ("DocumentMessage", "ContextInfo"),
        ("ephemeralMessage", "message", "contextInfo"),
        ("ephemeralMessage", "message", "extendedTextMessage", "contextInfo"),
        ("viewOnceMessage", "message", "contextInfo"),
        ("viewOnceMessage", "message", "extendedTextMessage", "contextInfo"),
        ("viewOnceMessageV2", "message", "contextInfo"),
        ("viewOnceMessageV2", "message", "extendedTextMessage", "contextInfo"),
        ("viewOnceMessageV2Extension", "message", "contextInfo"),
        ("viewOnceMessageV2Extension", "message", "extendedTextMessage", "contextInfo"),
        ("context_info",),
        ("message_context_info",),
        ("extended_text_message", "context_info"),
        ("image_message", "context_info"),
        ("video_message", "context_info"),
        ("document_message", "context_info"),
        ("ephemeral_message", "message", "context_info"),
        ("ephemeral_message", "message", "extended_text_message", "context_info"),
        ("view_once_message", "message", "context_info"),
        ("view_once_message", "message", "extended_text_message", "context_info"),
        ("view_once_message_v2", "message", "context_info"),
        ("view_once_message_v2", "message", "extended_text_message", "context_info"),
        ("view_once_message_v2_extension", "message", "context_info"),
        (
            "view_once_message_v2_extension",
            "message",
            "extended_text_message",
            "context_info",
        ),
    )
    rows = []
    for path in candidate_paths:
        context = _as_dict(_pluck(payload, *path))
        if not context:
            continue
        rows.append(
            {
                "path": ".".join(path),
                "keys": sorted([str(key) for key in context.keys()])[:40],
                "stanzaId": _clean(
                    context.get("stanzaId")
                    or context.get("stanzaID")
                    or context.get("stanza_id")
                    or context.get("StanzaId")
                    or context.get("StanzaID")
                    or context.get("quotedMessageID")
                    or context.get("quotedMessageId")
                    or context.get("QuotedMessageID")
                    or context.get("QuotedMessageId")
                    or _pluck(context, "quotedMessageKey", "id")
                    or _pluck(context, "quoted_message_key", "id")
                    or _pluck(context, "quotedMessage", "key", "id")
                    or _pluck(context, "quoted_message", "key", "id")
                ),
                "participant": _clean(
                    context.get("participant")
                    or context.get("remoteJid")
                    or context.get("chat")
                    or context.get("Participant")
                    or context.get("RemoteJid")
                    or context.get("RemoteJID")
                    or context.get("Chat")
                ),
            }
        )
    return {
        "candidate_count": len(rows),
        "candidates": rows[:20],
    }


def extract_reply_ref(service: str, raw_payload: dict[str, Any]) -> dict[str, str]:
    svc = _clean(service).lower()
    payload = _as_dict(raw_payload)
    if svc == "xmpp":
        reply_id = _clean(
            payload.get("reply_source_message_id") or payload.get("reply_id")
        )
        reply_chat = _clean(
            payload.get("reply_source_chat_id") or payload.get("reply_chat_id")
        )
        if reply_id:
            return {
                "reply_source_message_id": reply_id,
                "reply_source_service": "xmpp",
                "reply_source_chat_id": reply_chat,
            }
        return {}
    if svc == "signal":
        return _extract_signal_reply(payload)
    if svc == "whatsapp":
        return _extract_whatsapp_reply(payload)
    if svc == "web":
        reply_id = _clean(payload.get("reply_to_message_id"))
        if reply_id:
            return {
                "reply_source_message_id": reply_id,
                "reply_source_service": "web",
                "reply_source_chat_id": _clean(payload.get("reply_source_chat_id")),
            }
    return {}
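`extract_reply_ref` normalizes the service name and dispatches per transport; the XMPP branch simply accepts either the normalized key or its shorter alias. That branch in isolation (`xmpp_reply_ref` is an illustrative standalone copy, not a codebase function):

```python
# The xmpp branch of extract_reply_ref in isolation: accept either the
# normalized key or the shorter alias, return a normalized ref dict.
def xmpp_reply_ref(payload):
    payload = dict(payload or {})
    reply_id = str(
        payload.get("reply_source_message_id") or payload.get("reply_id") or ""
    ).strip()
    if not reply_id:
        return {}
    reply_chat = str(
        payload.get("reply_source_chat_id") or payload.get("reply_chat_id") or ""
    ).strip()
    return {
        "reply_source_message_id": reply_id,
        "reply_source_service": "xmpp",
        "reply_source_chat_id": reply_chat,
    }


print(xmpp_reply_ref({"reply_id": "msg-42", "reply_chat_id": "room@muc"}))
```

Every branch returns the same three-key shape (or an empty dict), so callers can treat the ref uniformly regardless of which transport produced it.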


def extract_origin_tag(raw_payload: dict[str, Any] | None) -> str:
    return _find_origin_tag(_as_dict(raw_payload))


async def resolve_reply_target(
    user, session, reply_ref: dict[str, str]
) -> Message | None:
    if not reply_ref or session is None:
        return None
    reply_source_message_id = _clean(reply_ref.get("reply_source_message_id"))
    if not reply_source_message_id:
        return None

    # Direct local UUID fallback (web compose references local Message IDs).
    if re.fullmatch(
        r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}",
        reply_source_message_id,
    ):
        direct = await sync_to_async(
            lambda: Message.objects.filter(
                user=user,
                session=session,
                id=reply_source_message_id,
            ).first()
        )()
        if direct is not None:
            return direct

    source_service = _clean(reply_ref.get("reply_source_service"))
    by_source = await sync_to_async(
        lambda: Message.objects.filter(
            user=user,
            session=session,
            source_service=source_service or None,
            source_message_id=reply_source_message_id,
        )
        .order_by("-ts")
        .first()
    )()
    if by_source is not None:
        return by_source

    # Bridge ref fallback: resolve replies against bridge mappings persisted in
    # message receipt payloads.
    identifier = getattr(session, "identifier", None)
    if identifier is not None:
        service_candidates = []
        if source_service:
            service_candidates.append(source_service)
        # XMPP replies can target bridged messages from any external service.
        if source_service == "xmpp":
            service_candidates.extend(["signal", "whatsapp", "instagram"])
        for candidate in service_candidates:
            bridge = await history.resolve_bridge_ref(
                user=user,
                identifier=identifier,
                source_service=candidate,
                xmpp_message_id=reply_source_message_id,
                upstream_message_id=reply_source_message_id,
            )
            local_message_id = _clean((bridge or {}).get("local_message_id"))
            if not local_message_id:
                continue
            bridged = await sync_to_async(
                lambda: Message.objects.filter(
                    user=user,
                    session=session,
                    id=local_message_id,
                ).first()
            )()
            if bridged is not None:
                return bridged

    fallback = await sync_to_async(
        lambda: Message.objects.filter(
            user=user,
            session=session,
            reply_source_message_id=reply_source_message_id,
        )
        .order_by("-ts")
        .first()
    )()
    return fallback


def apply_sync_origin(message_meta: dict | None, origin_tag: str) -> dict:
    payload = dict(message_meta or {})
    tag = _clean(origin_tag)
    if not tag:
        return payload
    payload["origin_tag"] = tag
    return payload


def is_mirrored_origin(message_meta: dict | None) -> bool:
    payload = dict(message_meta or {})
    return bool(_clean(payload.get("origin_tag")))
18
core/messaging/text_export.py
Normal file
@@ -0,0 +1,18 @@
from __future__ import annotations

from typing import Iterable

from core.models import Message


def normalize_message_text(message: Message) -> str:
    text = str(getattr(message, "text", "") or "").strip()
    return text or "(no text)"


def plain_text_lines(messages: Iterable[Message]) -> list[str]:
    return [normalize_message_text(message) for message in list(messages)]


def plain_text_blob(messages: Iterable[Message]) -> str:
    return "\n".join(plain_text_lines(messages))
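`text_export` is intentionally tiny: each message's text is stripped, empty texts become a `(no text)` placeholder, and the blob is newline-joined. The same behavior over plain objects (the `Msg` stand-in class is illustrative, since `Message` rows need Django):

```python
# Same behavior as text_export, but over plain objects instead of
# Django Message rows so the sketch runs standalone.
class Msg:
    def __init__(self, text):
        self.text = text


def normalize(message):
    text = str(getattr(message, "text", "") or "").strip()
    return text or "(no text)"


def blob(messages):
    return "\n".join(normalize(m) for m in messages)


print(blob([Msg("hello "), Msg(None), Msg("bye")]))  # hello\n(no text)\nbye
```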
24
core/migrations/0026_platformchatlink_is_group.py
Normal file
@@ -0,0 +1,24 @@
# Generated by Django 5.2.11 on 2026-02-19 14:47

import django.db.models.deletion
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('core', '0025_platformchatlink'),
    ]

    operations = [
        migrations.AddField(
            model_name='platformchatlink',
            name='is_group',
            field=models.BooleanField(default=False),
        ),
        migrations.AlterField(
            model_name='platformchatlink',
            name='person',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.person'),
        ),
    ]
@@ -0,0 +1,311 @@
# Generated by Django 5.2.7 on 2026-03-01 20:03

import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('core', '0026_platformchatlink_is_group'),
    ]

    operations = [
        migrations.CreateModel(
            name='BusinessPlanDocument',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('source_service', models.CharField(choices=[('signal', 'Signal'), ('whatsapp', 'WhatsApp'), ('xmpp', 'XMPP'), ('instagram', 'Instagram'), ('web', 'Web')], max_length=255)),
                ('source_channel_identifier', models.CharField(blank=True, default='', max_length=255)),
                ('title', models.CharField(default='Business Plan', max_length=255)),
                ('status', models.CharField(choices=[('draft', 'Draft'), ('final', 'Final')], default='draft', max_length=32)),
                ('content_markdown', models.TextField(blank=True, default='')),
                ('structured_payload', models.JSONField(blank=True, default=dict)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.CreateModel(
            name='BusinessPlanRevision',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('content_markdown', models.TextField(blank=True, default='')),
                ('structured_payload', models.JSONField(blank=True, default=dict)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
            ],
            options={
                'ordering': ['created_at'],
            },
        ),
        migrations.CreateModel(
            name='CommandAction',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('action_type', models.CharField(choices=[('extract_bp', 'Extract Business Plan'), ('post_result', 'Post Result'), ('save_document', 'Save Document')], max_length=64)),
                ('enabled', models.BooleanField(default=True)),
                ('config', models.JSONField(blank=True, default=dict)),
                ('position', models.PositiveIntegerField(default=0)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
            options={
                'ordering': ['position', 'id'],
            },
        ),
        migrations.CreateModel(
            name='CommandChannelBinding',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('direction', models.CharField(choices=[('ingress', 'Ingress'), ('egress', 'Egress'), ('scratchpad_mirror', 'Scratchpad Mirror')], max_length=64)),
                ('service', models.CharField(choices=[('signal', 'Signal'), ('whatsapp', 'WhatsApp'), ('xmpp', 'XMPP'), ('instagram', 'Instagram'), ('web', 'Web')], max_length=255)),
                ('channel_identifier', models.CharField(max_length=255)),
                ('enabled', models.BooleanField(default=True)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.CreateModel(
            name='CommandProfile',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('slug', models.CharField(default='bp', max_length=64)),
                ('name', models.CharField(default='Business Plan', max_length=255)),
                ('enabled', models.BooleanField(default=True)),
                ('trigger_token', models.CharField(default='#bp#', max_length=64)),
                ('reply_required', models.BooleanField(default=True)),
                ('exact_match_only', models.BooleanField(default=True)),
                ('window_scope', models.CharField(choices=[('conversation', 'Conversation')], default='conversation', max_length=64)),
                ('template_text', models.TextField(blank=True, default='')),
                ('visibility_mode', models.CharField(choices=[('status_in_source', 'Status In Source'), ('silent', 'Silent')], default='status_in_source', max_length=64)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.CreateModel(
            name='CommandRun',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('status', models.CharField(choices=[('pending', 'Pending'), ('running', 'Running'), ('ok', 'OK'), ('failed', 'Failed'), ('skipped', 'Skipped')], default='pending', max_length=32)),
                ('error', models.TextField(blank=True, default='')),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.CreateModel(
            name='TranslationBridge',
            fields=[
                ('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(default='Translation Bridge', max_length=255)),
                ('enabled', models.BooleanField(default=True)),
                ('a_service', models.CharField(choices=[('signal', 'Signal'), ('whatsapp', 'WhatsApp'), ('xmpp', 'XMPP'), ('instagram', 'Instagram'), ('web', 'Web')], max_length=255)),
                ('a_channel_identifier', models.CharField(max_length=255)),
                ('a_language', models.CharField(default='en', max_length=64)),
                ('b_service', models.CharField(choices=[('signal', 'Signal'), ('whatsapp', 'WhatsApp'), ('xmpp', 'XMPP'), ('instagram', 'Instagram'), ('web', 'Web')], max_length=255)),
                ('b_channel_identifier', models.CharField(max_length=255)),
                ('b_language', models.CharField(default='en', max_length=64)),
|
||||
('direction', models.CharField(choices=[('a_to_b', 'A To B'), ('b_to_a', 'B To A'), ('bidirectional', 'Bidirectional')], default='bidirectional', max_length=32)),
|
||||
('quick_mode_title', models.CharField(blank=True, default='', max_length=255)),
|
||||
('settings', models.JSONField(blank=True, default=dict)),
|
||||
('created_at', models.DateTimeField(auto_now_add=True)),
|
||||
('updated_at', models.DateTimeField(auto_now=True)),
|
||||
],
|
||||
),
|
||||
migrations.CreateModel(
|
||||
name='TranslationEventLog',
|
||||
fields=[
|
||||
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
|
||||
('target_service', models.CharField(choices=[('signal', 'Signal'), ('whatsapp', 'WhatsApp'), ('xmpp', 'XMPP'), ('instagram', 'Instagram'), ('web', 'Web')], max_length=255)),
|
||||
('target_channel', models.CharField(max_length=255)),
|
||||
('status', models.CharField(choices=[('pending', 'Pending'), ('ok', 'OK'), ('failed', 'Failed'), ('skipped', 'Skipped')], default='pending', max_length=32)),
|
||||
('error', models.TextField(blank=True, default='')),
|
||||
('origin_tag', models.CharField(blank=True, default='', max_length=255)),
|
||||
('content_hash', models.CharField(blank=True, default='', max_length=255)),
|
||||
('created_at', models.DateTimeField(auto_now_add=True)),
|
||||
('updated_at', models.DateTimeField(auto_now=True)),
|
||||
],
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='message',
|
||||
name='message_meta',
|
||||
field=models.JSONField(blank=True, default=dict, help_text='Normalized message metadata such as origin tags.'),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='message',
|
||||
name='reply_source_message_id',
|
||||
field=models.CharField(blank=True, help_text='Source message id for the replied target.', max_length=255, null=True),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='message',
|
||||
name='reply_source_service',
|
||||
field=models.CharField(blank=True, choices=[('signal', 'Signal'), ('whatsapp', 'WhatsApp'), ('xmpp', 'XMPP'), ('instagram', 'Instagram'), ('web', 'Web')], help_text='Source service for the replied target.', max_length=255, null=True),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='message',
|
||||
name='reply_to',
|
||||
field=models.ForeignKey(blank=True, help_text='Resolved local message this message replies to.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='reply_children', to='core.message'),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='message',
|
||||
name='source_chat_id',
|
||||
field=models.CharField(blank=True, help_text='Source service chat or thread identifier when available.', max_length=255, null=True),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='message',
|
||||
name='source_message_id',
|
||||
field=models.CharField(blank=True, help_text='Source service message id when available.', max_length=255, null=True),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='message',
|
||||
name='source_service',
|
||||
field=models.CharField(blank=True, choices=[('signal', 'Signal'), ('whatsapp', 'WhatsApp'), ('xmpp', 'XMPP'), ('instagram', 'Instagram'), ('web', 'Web')], help_text='Source service where this message originally appeared.', max_length=255, null=True),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='message',
|
||||
index=models.Index(fields=['user', 'source_service', 'source_message_id'], name='core_messag_user_id_252699_idx'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='message',
|
||||
index=models.Index(fields=['user', 'session', 'ts'], name='core_messag_user_id_ba0e73_idx'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='message',
|
||||
index=models.Index(fields=['user', 'reply_source_service', 'reply_source_message_id'], name='core_messag_user_id_70ca93_idx'),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='businessplandocument',
|
||||
name='anchor_message',
|
||||
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='business_plan_anchor_docs', to='core.message'),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='businessplandocument',
|
||||
name='trigger_message',
|
||||
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='business_plan_trigger_docs', to='core.message'),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='businessplandocument',
|
||||
name='user',
|
||||
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='businessplanrevision',
|
||||
name='document',
|
||||
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='revisions', to='core.businessplandocument'),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='businessplanrevision',
|
||||
name='editor_user',
|
||||
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='commandprofile',
|
||||
name='user',
|
||||
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='commandchannelbinding',
|
||||
name='profile',
|
||||
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='channel_bindings', to='core.commandprofile'),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='commandaction',
|
||||
name='profile',
|
||||
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='actions', to='core.commandprofile'),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='businessplandocument',
|
||||
name='command_profile',
|
||||
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='business_plan_documents', to='core.commandprofile'),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='commandrun',
|
||||
name='profile',
|
||||
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='runs', to='core.commandprofile'),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='commandrun',
|
||||
name='result_ref',
|
||||
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='command_runs', to='core.businessplandocument'),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='commandrun',
|
||||
name='trigger_message',
|
||||
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='command_runs', to='core.message'),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='commandrun',
|
||||
name='user',
|
||||
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='translationbridge',
|
||||
name='user',
|
||||
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='translationeventlog',
|
||||
name='bridge',
|
||||
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='events', to='core.translationbridge'),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='translationeventlog',
|
||||
name='source_message',
|
||||
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='translation_events', to='core.message'),
|
||||
),
|
||||
migrations.AddConstraint(
|
||||
model_name='commandprofile',
|
||||
constraint=models.UniqueConstraint(fields=('user', 'slug'), name='unique_command_profile_per_user'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='commandchannelbinding',
|
||||
index=models.Index(fields=['profile', 'direction', 'service'], name='core_comman_profile_6c16d5_idx'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='commandchannelbinding',
|
||||
index=models.Index(fields=['profile', 'service', 'channel_identifier'], name='core_comman_profile_2c801d_idx'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='commandaction',
|
||||
index=models.Index(fields=['profile', 'action_type', 'enabled'], name='core_comman_profile_f8e752_idx'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='businessplandocument',
|
||||
index=models.Index(fields=['user', 'status', 'updated_at'], name='core_busine_user_id_028f36_idx'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='businessplandocument',
|
||||
index=models.Index(fields=['user', 'source_service', 'source_channel_identifier'], name='core_busine_user_id_54ef14_idx'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='commandrun',
|
||||
index=models.Index(fields=['user', 'status', 'updated_at'], name='core_comman_user_id_aa2881_idx'),
|
||||
),
|
||||
migrations.AddConstraint(
|
||||
model_name='commandrun',
|
||||
constraint=models.UniqueConstraint(fields=('profile', 'trigger_message'), name='unique_command_run_profile_trigger_message'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='translationbridge',
|
||||
index=models.Index(fields=['user', 'enabled'], name='core_transl_user_id_ce99cd_idx'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='translationbridge',
|
||||
index=models.Index(fields=['user', 'a_service', 'a_channel_identifier'], name='core_transl_user_id_2f26ee_idx'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='translationbridge',
|
||||
index=models.Index(fields=['user', 'b_service', 'b_channel_identifier'], name='core_transl_user_id_1f910a_idx'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='translationeventlog',
|
||||
index=models.Index(fields=['bridge', 'created_at'], name='core_transl_bridge__509ffc_idx'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='translationeventlog',
|
||||
index=models.Index(fields=['bridge', 'status', 'updated_at'], name='core_transl_bridge__0a7676_idx'),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name='translationeventlog',
|
||||
index=models.Index(fields=['origin_tag'], name='core_transl_origin__a5c2f3_idx'),
|
||||
),
|
||||
]
|
||||
86 core/migrations/0028_airunlog.py Normal file
@@ -0,0 +1,86 @@
# Generated by Django 5.2.7 on 2026-03-02 00:00

import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ("core", "0027_businessplandocument_businessplanrevision_and_more"),
    ]

    operations = [
        migrations.CreateModel(
            name="AIRunLog",
            fields=[
                (
                    "id",
                    models.BigAutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                ("operation", models.CharField(blank=True, default="", max_length=64)),
                ("model", models.CharField(blank=True, default="", max_length=255)),
                ("base_url", models.CharField(blank=True, default="", max_length=255)),
                (
                    "status",
                    models.CharField(
                        choices=[("running", "Running"), ("ok", "OK"), ("failed", "Failed")],
                        default="running",
                        max_length=16,
                    ),
                ),
                ("message_count", models.PositiveIntegerField(default=0)),
                ("prompt_chars", models.PositiveIntegerField(default=0)),
                ("response_chars", models.PositiveIntegerField(default=0)),
                ("error", models.TextField(blank=True, default="")),
                ("started_at", models.DateTimeField(auto_now_add=True)),
                ("finished_at", models.DateTimeField(blank=True, null=True)),
                ("duration_ms", models.PositiveIntegerField(blank=True, null=True)),
                (
                    "ai",
                    models.ForeignKey(
                        blank=True,
                        null=True,
                        on_delete=django.db.models.deletion.SET_NULL,
                        related_name="run_logs",
                        to="core.ai",
                    ),
                ),
                (
                    "user",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="ai_run_logs",
                        to=settings.AUTH_USER_MODEL,
                    ),
                ),
            ],
        ),
        migrations.AddIndex(
            model_name="airunlog",
            index=models.Index(fields=["user", "started_at"], name="core_airunl_user_id_6f4700_idx"),
        ),
        migrations.AddIndex(
            model_name="airunlog",
            index=models.Index(
                fields=["user", "status", "started_at"],
                name="core_airunl_user_id_b4486e_idx",
            ),
        ),
        migrations.AddIndex(
            model_name="airunlog",
            index=models.Index(
                fields=["user", "operation", "started_at"],
                name="core_airunl_user_id_4f0f5e_idx",
            ),
        ),
        migrations.AddIndex(
            model_name="airunlog",
            index=models.Index(fields=["user", "model", "started_at"], name="core_airunl_user_id_953bff_idx"),
        ),
    ]
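The `AIRunLog` table above captures per-call telemetry: a status that moves from `running` to `ok` or `failed`, character counts, and a computed `duration_ms`. A minimal sketch of that bookkeeping in plain Python, without the Django ORM; `run_with_log` and its arguments are hypothetical names, not part of the repository:

```python
import time


def run_with_log(call, *, operation="chat", model="example-model"):
    """Time a callable and return a dict shaped like the AIRunLog columns.

    Field names mirror the migration above (status, error, response_chars,
    duration_ms); the helper itself is illustrative only.
    """
    record = {
        "operation": operation,
        "model": model,
        "status": "running",
        "error": "",
        "response_chars": 0,
        "duration_ms": None,
    }
    start = time.monotonic()
    try:
        response = call()  # the AI call being instrumented
        record["status"] = "ok"
        record["response_chars"] = len(response)
    except Exception as exc:
        # Mirror the failed/error columns instead of raising.
        record["status"] = "failed"
        record["error"] = str(exc)
    record["duration_ms"] = int((time.monotonic() - start) * 1000)
    return record
```

In the real code the dict would presumably be a model instance saved once with `status="running"` and updated on completion, which is what the separate `started_at`/`finished_at` columns suggest.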
@@ -0,0 +1,342 @@
# Generated by Django 5.2.11 on 2026-03-02 11:55

import uuid

import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('core', '0028_airunlog'),
    ]

    operations = [
        migrations.CreateModel(
            name='AnswerMemory',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('service', models.CharField(choices=[('signal', 'Signal'), ('whatsapp', 'WhatsApp'), ('xmpp', 'XMPP'), ('instagram', 'Instagram'), ('web', 'Web')], max_length=255)),
                ('channel_identifier', models.CharField(max_length=255)),
                ('question_fingerprint', models.CharField(max_length=128)),
                ('question_text', models.TextField(blank=True, default='')),
                ('answer_text', models.TextField(blank=True, default='')),
                ('confidence_meta', models.JSONField(blank=True, default=dict)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
            ],
        ),
        migrations.CreateModel(
            name='AnswerSuggestionEvent',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('status', models.CharField(choices=[('suggested', 'Suggested'), ('accepted', 'Accepted'), ('dismissed', 'Dismissed')], default='suggested', max_length=32)),
                ('score', models.FloatField(default=0.0)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.CreateModel(
            name='ChatTaskSource',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('service', models.CharField(choices=[('signal', 'Signal'), ('whatsapp', 'WhatsApp'), ('xmpp', 'XMPP'), ('instagram', 'Instagram'), ('web', 'Web')], max_length=255)),
                ('channel_identifier', models.CharField(max_length=255)),
                ('enabled', models.BooleanField(default=True)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.CreateModel(
            name='DerivedTask',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('title', models.CharField(max_length=255)),
                ('source_service', models.CharField(choices=[('signal', 'Signal'), ('whatsapp', 'WhatsApp'), ('xmpp', 'XMPP'), ('instagram', 'Instagram'), ('web', 'Web')], max_length=255)),
                ('source_channel', models.CharField(max_length=255)),
                ('reference_code', models.CharField(blank=True, default='', max_length=64)),
                ('external_key', models.CharField(blank=True, default='', max_length=255)),
                ('status_snapshot', models.CharField(blank=True, default='open', max_length=64)),
                ('immutable_payload', models.JSONField(blank=True, default=dict)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
            ],
        ),
        migrations.CreateModel(
            name='DerivedTaskEvent',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('event_type', models.CharField(choices=[('created', 'Created'), ('progress', 'Progress'), ('completion_marked', 'Completion Marked'), ('synced', 'Synced'), ('sync_failed', 'Sync Failed'), ('parse_warning', 'Parse Warning')], max_length=32)),
                ('actor_identifier', models.CharField(blank=True, default='', max_length=255)),
                ('payload', models.JSONField(blank=True, default=dict)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
            ],
            options={
                'ordering': ['created_at', 'id'],
            },
        ),
        migrations.CreateModel(
            name='ExternalSyncEvent',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('provider', models.CharField(default='mock', max_length=64)),
                ('idempotency_key', models.CharField(blank=True, default='', max_length=255)),
                ('status', models.CharField(choices=[('pending', 'Pending'), ('ok', 'OK'), ('failed', 'Failed'), ('retrying', 'Retrying')], default='pending', max_length=32)),
                ('payload', models.JSONField(blank=True, default=dict)),
                ('error', models.TextField(blank=True, default='')),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.CreateModel(
            name='TaskCompletionPattern',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('phrase', models.CharField(max_length=64)),
                ('enabled', models.BooleanField(default=True)),
                ('position', models.PositiveIntegerField(default=0)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.CreateModel(
            name='TaskEpic',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('name', models.CharField(max_length=255)),
                ('external_key', models.CharField(blank=True, default='', max_length=255)),
                ('active', models.BooleanField(default=True)),
                ('settings', models.JSONField(blank=True, default=dict)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.CreateModel(
            name='TaskProject',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('name', models.CharField(max_length=255)),
                ('external_key', models.CharField(blank=True, default='', max_length=255)),
                ('active', models.BooleanField(default=True)),
                ('settings', models.JSONField(blank=True, default=dict)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.CreateModel(
            name='TaskProviderConfig',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
                ('provider', models.CharField(default='mock', max_length=64)),
                ('enabled', models.BooleanField(default=False)),
                ('settings', models.JSONField(blank=True, default=dict)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
        ),
        migrations.RenameIndex(
            model_name='airunlog',
            new_name='core_airunl_user_id_13b24a_idx',
            old_name='core_airunl_user_id_6f4700_idx',
        ),
        migrations.RenameIndex(
            model_name='airunlog',
            new_name='core_airunl_user_id_678025_idx',
            old_name='core_airunl_user_id_b4486e_idx',
        ),
        migrations.RenameIndex(
            model_name='airunlog',
            new_name='core_airunl_user_id_55c2d4_idx',
            old_name='core_airunl_user_id_4f0f5e_idx',
        ),
        migrations.RenameIndex(
            model_name='airunlog',
            new_name='core_airunl_user_id_bef024_idx',
            old_name='core_airunl_user_id_953bff_idx',
        ),
        migrations.AddField(
            model_name='answermemory',
            name='answer_message',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='answer_memory_rows', to='core.message'),
        ),
        migrations.AddField(
            model_name='answermemory',
            name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='answer_memory', to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='answersuggestionevent',
            name='candidate_answer',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='suggestion_events', to='core.answermemory'),
        ),
        migrations.AddField(
            model_name='answersuggestionevent',
            name='message',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='answer_suggestion_events', to='core.message'),
        ),
        migrations.AddField(
            model_name='answersuggestionevent',
            name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='answer_suggestion_events', to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='chattasksource',
            name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='chat_task_sources', to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='derivedtask',
            name='origin_message',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='derived_task_origins', to='core.message'),
        ),
        migrations.AddField(
            model_name='derivedtask',
            name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='derived_tasks', to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='derivedtaskevent',
            name='source_message',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='derived_task_events', to='core.message'),
        ),
        migrations.AddField(
            model_name='derivedtaskevent',
            name='task',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='events', to='core.derivedtask'),
        ),
        migrations.AddField(
            model_name='externalsyncevent',
            name='task',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='external_sync_events', to='core.derivedtask'),
        ),
        migrations.AddField(
            model_name='externalsyncevent',
            name='task_event',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='external_sync_events', to='core.derivedtaskevent'),
        ),
        migrations.AddField(
            model_name='externalsyncevent',
            name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='external_sync_events', to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='taskcompletionpattern',
            name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='task_completion_patterns', to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='derivedtask',
            name='epic',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='derived_tasks', to='core.taskepic'),
        ),
        migrations.AddField(
            model_name='chattasksource',
            name='epic',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='chat_sources', to='core.taskepic'),
        ),
        migrations.AddField(
            model_name='taskproject',
            name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='task_projects', to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddField(
            model_name='taskepic',
            name='project',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='epics', to='core.taskproject'),
        ),
        migrations.AddField(
            model_name='derivedtask',
            name='project',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='derived_tasks', to='core.taskproject'),
        ),
        migrations.AddField(
            model_name='chattasksource',
            name='project',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='chat_sources', to='core.taskproject'),
        ),
        migrations.AddField(
            model_name='taskproviderconfig',
            name='user',
            field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='task_provider_configs', to=settings.AUTH_USER_MODEL),
        ),
        migrations.AddIndex(
            model_name='answermemory',
            index=models.Index(fields=['user', 'service', 'channel_identifier', 'created_at'], name='core_answer_user_id_b88ba6_idx'),
        ),
        migrations.AddIndex(
            model_name='answermemory',
            index=models.Index(fields=['user', 'question_fingerprint', 'created_at'], name='core_answer_user_id_9353c7_idx'),
        ),
        migrations.AddIndex(
            model_name='answersuggestionevent',
            index=models.Index(fields=['user', 'status', 'created_at'], name='core_answer_user_id_05d0f9_idx'),
        ),
        migrations.AddIndex(
            model_name='answersuggestionevent',
            index=models.Index(fields=['message', 'status'], name='core_answer_message_1cb119_idx'),
        ),
        migrations.AddIndex(
            model_name='derivedtaskevent',
            index=models.Index(fields=['task', 'created_at'], name='core_derive_task_id_897ae5_idx'),
        ),
        migrations.AddIndex(
            model_name='derivedtaskevent',
            index=models.Index(fields=['event_type', 'created_at'], name='core_derive_event_t_1cf04b_idx'),
        ),
        migrations.AddIndex(
            model_name='externalsyncevent',
            index=models.Index(fields=['user', 'provider', 'status', 'updated_at'], name='core_extern_user_id_e71276_idx'),
        ),
        migrations.AddIndex(
            model_name='externalsyncevent',
            index=models.Index(fields=['idempotency_key'], name='core_extern_idempot_dce064_idx'),
        ),
        migrations.AddIndex(
            model_name='taskcompletionpattern',
            index=models.Index(fields=['user', 'enabled', 'position'], name='core_taskco_user_id_0c1b5e_idx'),
        ),
        migrations.AddConstraint(
            model_name='taskcompletionpattern',
            constraint=models.UniqueConstraint(fields=('user', 'phrase'), name='unique_task_completion_phrase_per_user'),
        ),
        migrations.AddIndex(
            model_name='taskproject',
            index=models.Index(fields=['user', 'active', 'updated_at'], name='core_taskpr_user_id_4f8472_idx'),
        ),
        migrations.AddConstraint(
            model_name='taskproject',
            constraint=models.UniqueConstraint(fields=('user', 'name'), name='unique_task_project_name_per_user'),
        ),
        migrations.AddIndex(
            model_name='taskepic',
            index=models.Index(fields=['project', 'active', 'updated_at'], name='core_taskep_project_ea76c3_idx'),
        ),
        migrations.AddConstraint(
            model_name='taskepic',
            constraint=models.UniqueConstraint(fields=('project', 'name'), name='unique_task_epic_name_per_project'),
        ),
        migrations.AddIndex(
            model_name='derivedtask',
            index=models.Index(fields=['user', 'project', 'created_at'], name='core_derive_user_id_a98675_idx'),
        ),
        migrations.AddIndex(
            model_name='derivedtask',
            index=models.Index(fields=['user', 'source_service', 'source_channel'], name='core_derive_user_id_aaa167_idx'),
        ),
        migrations.AddIndex(
            model_name='derivedtask',
            index=models.Index(fields=['user', 'reference_code'], name='core_derive_user_id_d06303_idx'),
        ),
        migrations.AddIndex(
            model_name='chattasksource',
            index=models.Index(fields=['user', 'service', 'channel_identifier', 'enabled'], name='core_chatta_user_id_01f271_idx'),
        ),
        migrations.AddIndex(
            model_name='chattasksource',
            index=models.Index(fields=['project', 'enabled'], name='core_chatta_project_826bab_idx'),
        ),
        migrations.AddConstraint(
            model_name='taskproviderconfig',
            constraint=models.UniqueConstraint(fields=('user', 'provider'), name='unique_task_provider_config_per_user'),
        ),
    ]
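`AnswerMemory.question_fingerprint` (a 128-char field with a per-user index) suggests that incoming questions are normalized and hashed so semantically identical phrasings collapse to one lookup key. The exact normalization is not shown in this diff; a minimal sketch of one plausible scheme, with the `question_fingerprint` helper name being an assumption:

```python
import hashlib
import re


def question_fingerprint(text: str) -> str:
    """Normalize a question and hash it to a stable lookup key.

    Lowercases, strips punctuation, and collapses whitespace before hashing,
    so "What's the deadline?" and "what s the DEADLINE" map to the same row.
    A sha256 hexdigest is 64 chars, comfortably inside max_length=128.
    This normalization is illustrative, not the application's actual rule.
    """
    normalized = re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```

Matching rows would then be fetched with an equality filter on `(user, question_fingerprint)`, which is exactly what the `core_answer_user_id_9353c7_idx` index added above supports.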
18 core/migrations/0030_chattasksource_settings.py Normal file
@@ -0,0 +1,18 @@
|
||||
# Generated by Django 5.2.11 on 2026-03-02 12:55
|
||||
|
||||
from django.db import migrations, models
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('core', '0029_answermemory_answersuggestionevent_chattasksource_and_more'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.AddField(
|
||||
model_name='chattasksource',
|
||||
name='settings',
|
||||
field=models.JSONField(blank=True, default=dict),
|
||||
),
|
||||
]
|
||||
107 core/migrations/0031_commandvariantpolicy.py Normal file
@@ -0,0 +1,107 @@
# Generated by Django 5.2.11 on 2026-03-02 14:17

import django.db.models.deletion
from django.db import migrations, models


def _backfill_variant_policies(apps, schema_editor):
    CommandProfile = apps.get_model("core", "CommandProfile")
    CommandAction = apps.get_model("core", "CommandAction")
    CommandVariantPolicy = apps.get_model("core", "CommandVariantPolicy")

    for profile in CommandProfile.objects.all().iterator():
        actions = list(CommandAction.objects.filter(profile=profile))
        post_result_enabled = any(
            str(getattr(row, "action_type", "")) == "post_result"
            and bool(getattr(row, "enabled", False))
            for row in actions
        )
        send_status_to_source = (
            str(getattr(profile, "visibility_mode", "") or "") == "status_in_source"
        )
        if str(getattr(profile, "slug", "") or "") == "bp":
            rows = (
                ("bp", "ai", 0),
                ("bp_set", "verbatim", 1),
                ("bp_set_range", "verbatim", 2),
            )
        else:
            rows = (("default", "verbatim", 0),)

        for key, generation_mode, position in rows:
            CommandVariantPolicy.objects.get_or_create(
                profile=profile,
                variant_key=key,
                defaults={
                    "enabled": True,
                    "generation_mode": generation_mode,
                    "send_plan_to_egress": bool(post_result_enabled),
                    "send_status_to_source": bool(send_status_to_source),
                    "send_status_to_egress": False,
                    "position": int(position),
                },
            )


class Migration(migrations.Migration):

    dependencies = [
        ("core", "0030_chattasksource_settings"),
    ]

    operations = [
        migrations.CreateModel(
            name="CommandVariantPolicy",
            fields=[
                (
                    "id",
                    models.BigAutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                ("variant_key", models.CharField(default="default", max_length=64)),
                ("enabled", models.BooleanField(default=True)),
                (
                    "generation_mode",
                    models.CharField(
                        choices=[("ai", "AI"), ("verbatim", "Verbatim")],
                        default="verbatim",
                        max_length=32,
                    ),
                ),
                ("send_plan_to_egress", models.BooleanField(default=True)),
                ("send_status_to_source", models.BooleanField(default=True)),
                ("send_status_to_egress", models.BooleanField(default=False)),
                ("position", models.PositiveIntegerField(default=0)),
                ("created_at", models.DateTimeField(auto_now_add=True)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                (
                    "profile",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="variant_policies",
                        to="core.commandprofile",
                    ),
                ),
            ],
            options={
                "ordering": ["position", "id"],
                "indexes": [
                    models.Index(
                        fields=["profile", "enabled", "variant_key"],
                        name="core_comman_profile_7913f5_idx",
                    )
                ],
                "constraints": [
                    models.UniqueConstraint(
                        fields=("profile", "variant_key"),
                        name="unique_command_variant_policy_per_profile",
                    )
                ],
            },
        ),
        migrations.RunPython(_backfill_variant_policies, migrations.RunPython.noop),
    ]
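The branch in `_backfill_variant_policies` that picks default variant rows can be isolated as a pure function. A sketch (the `default_variant_rows` name is hypothetical; the row tuples are taken verbatim from the migration):

```python
def default_variant_rows(slug):
    # Mirrors the backfill: the "bp" profile gets three variants, the first
    # AI-generated; every other profile gets a single verbatim row.
    # Each tuple is (variant_key, generation_mode, position).
    if slug == "bp":
        return (
            ("bp", "ai", 0),
            ("bp_set", "verbatim", 1),
            ("bp_set_range", "verbatim", 2),
        )
    return (("default", "verbatim", 0),)
```

Because the backfill writes with `get_or_create` keyed on `(profile, variant_key)`, re-running the data migration is idempotent, and the reverse operation is a `noop`.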
18 core/migrations/0032_commandvariantpolicy_store_document.py Normal file
@@ -0,0 +1,18 @@
# Generated by Django 5.2.11 on 2026-03-02 17:38

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('core', '0031_commandvariantpolicy'),
    ]

    operations = [
        migrations.AddField(
            model_name='commandvariantpolicy',
            name='store_document',
            field=models.BooleanField(default=True),
        ),
    ]
236 core/migrations/0033_contactavailability_and_externalchatlink.py Normal file
@@ -0,0 +1,236 @@
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ("core", "0032_commandvariantpolicy_store_document"),
    ]

    operations = [
        migrations.CreateModel(
            name="ContactAvailabilitySettings",
            fields=[
                ("id", models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
                ("enabled", models.BooleanField(default=True)),
                ("show_in_chat", models.BooleanField(default=True)),
                ("show_in_groups", models.BooleanField(default=True)),
                ("inference_enabled", models.BooleanField(default=True)),
                ("retention_days", models.PositiveIntegerField(default=90)),
                ("fade_threshold_seconds", models.PositiveIntegerField(default=900)),
                ("created_at", models.DateTimeField(auto_now_add=True)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                (
                    "user",
                    models.OneToOneField(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="contact_availability_settings",
                        to=settings.AUTH_USER_MODEL,
                    ),
                ),
            ],
        ),
        migrations.CreateModel(
            name="ContactAvailabilityEvent",
            fields=[
                ("id", models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
                (
                    "service",
                    models.CharField(
                        choices=[("signal", "Signal"), ("whatsapp", "WhatsApp"), ("xmpp", "XMPP"), ("instagram", "Instagram"), ("web", "Web")],
                        max_length=255,
                    ),
                ),
                (
                    "source_kind",
                    models.CharField(
                        choices=[
                            ("native_presence", "Native Presence"),
                            ("read_receipt", "Read Receipt"),
                            ("typing_start", "Typing Start"),
                            ("typing_stop", "Typing Stop"),
                            ("message_in", "Message In"),
                            ("message_out", "Message Out"),
                            ("inferred_timeout", "Inferred Timeout"),
                        ],
                        max_length=32,
                    ),
                ),
                (
                    "availability_state",
                    models.CharField(
                        choices=[("available", "Available"), ("unavailable", "Unavailable"), ("unknown", "Unknown"), ("fading", "Fading")],
                        max_length=32,
                    ),
                ),
                ("confidence", models.FloatField(default=0.0)),
                ("ts", models.BigIntegerField(db_index=True)),
                ("payload", models.JSONField(blank=True, default=dict)),
                ("created_at", models.DateTimeField(auto_now_add=True)),
                (
                    "person",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="availability_events",
                        to="core.person",
                    ),
                ),
                (
                    "person_identifier",
                    models.ForeignKey(
                        blank=True,
                        null=True,
                        on_delete=django.db.models.deletion.SET_NULL,
                        related_name="availability_events",
                        to="core.personidentifier",
                    ),
                ),
                (
                    "user",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="contact_availability_events",
                        to=settings.AUTH_USER_MODEL,
                    ),
                ),
            ],
            options={
                "indexes": [
                    models.Index(fields=["user", "person", "ts"], name="core_contac_user_id_0da9b2_idx"),
                    models.Index(fields=["user", "service", "ts"], name="core_contac_user_id_bce271_idx"),
                    models.Index(fields=["user", "availability_state", "ts"], name="core_contac_user_id_1b50b3_idx"),
                ],
            },
        ),
        migrations.CreateModel(
            name="ExternalChatLink",
            fields=[
                ("id", models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
                ("provider", models.CharField(default="codex_cli", max_length=64)),
                ("external_chat_id", models.CharField(max_length=255)),
                ("metadata", models.JSONField(blank=True, default=dict)),
                ("enabled", models.BooleanField(default=True)),
                ("created_at", models.DateTimeField(auto_now_add=True)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                (
                    "person",
                    models.ForeignKey(
                        blank=True,
                        null=True,
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="external_chat_links",
                        to="core.person",
                    ),
                ),
                (
                    "person_identifier",
                    models.ForeignKey(
                        blank=True,
                        null=True,
                        on_delete=django.db.models.deletion.SET_NULL,
                        related_name="external_chat_links",
                        to="core.personidentifier",
                    ),
                ),
                (
                    "user",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="external_chat_links",
                        to=settings.AUTH_USER_MODEL,
                    ),
                ),
            ],
            options={
                "indexes": [
                    models.Index(fields=["user", "provider", "external_chat_id"], name="core_extern_user_id_f4a7b0_idx"),
                    models.Index(fields=["user", "provider", "enabled"], name="core_extern_user_id_7d2295_idx"),
                ],
                "constraints": [
                    models.UniqueConstraint(fields=("user", "provider", "external_chat_id"), name="unique_external_chat_link_per_provider"),
                ],
            },
        ),
        migrations.CreateModel(
            name="ContactAvailabilitySpan",
            fields=[
                ("id", models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name="ID")),
                (
                    "service",
                    models.CharField(
                        choices=[("signal", "Signal"), ("whatsapp", "WhatsApp"), ("xmpp", "XMPP"), ("instagram", "Instagram"), ("web", "Web")],
                        max_length=255,
                    ),
                ),
                (
                    "state",
                    models.CharField(
                        choices=[("available", "Available"), ("unavailable", "Unavailable"), ("unknown", "Unknown"), ("fading", "Fading")],
                        max_length=32,
                    ),
                ),
                ("start_ts", models.BigIntegerField(db_index=True)),
                ("end_ts", models.BigIntegerField(db_index=True)),
                ("confidence_start", models.FloatField(default=0.0)),
                ("confidence_end", models.FloatField(default=0.0)),
                ("payload", models.JSONField(blank=True, default=dict)),
                ("created_at", models.DateTimeField(auto_now_add=True)),
                ("updated_at", models.DateTimeField(auto_now=True)),
                (
                    "closing_event",
                    models.ForeignKey(
                        blank=True,
                        null=True,
                        on_delete=django.db.models.deletion.SET_NULL,
                        related_name="closing_spans",
                        to="core.contactavailabilityevent",
                    ),
                ),
                (
                    "opening_event",
                    models.ForeignKey(
                        blank=True,
                        null=True,
                        on_delete=django.db.models.deletion.SET_NULL,
                        related_name="opening_spans",
                        to="core.contactavailabilityevent",
                    ),
                ),
                (
                    "person",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="availability_spans",
                        to="core.person",
                    ),
                ),
                (
                    "person_identifier",
                    models.ForeignKey(
                        blank=True,
                        null=True,
                        on_delete=django.db.models.deletion.SET_NULL,
                        related_name="availability_spans",
                        to="core.personidentifier",
                    ),
                ),
                (
                    "user",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="contact_availability_spans",
                        to=settings.AUTH_USER_MODEL,
                    ),
                ),
            ],
            options={
                "indexes": [
                    models.Index(fields=["user", "person", "start_ts"], name="core_contac_user_id_9cd15a_idx"),
                    models.Index(fields=["user", "person", "end_ts"], name="core_contac_user_id_88584a_idx"),
                    models.Index(fields=["user", "service", "start_ts"], name="core_contac_user_id_182ffb_idx"),
                ],
            },
        ),
    ]
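The `ContactAvailabilityEvent` model above indexes `(user, person, ts)`, which supports resolving a contact's state at a point in time from their event history. A hedged sketch of that lookup over plain `(ts, availability_state)` pairs (`state_at` is a hypothetical helper, not part of this diff):

```python
def state_at(events, ts):
    """Return the availability_state of the latest event at or before ts.

    events: iterable of (ts, availability_state) pairs, in any order.
    Falls back to "unknown", one of the model's declared choices.
    """
    best = None
    for ev_ts, state in events:
        # Keep the most recent event that is not in the future of ts.
        if ev_ts <= ts and (best is None or ev_ts > best[0]):
            best = (ev_ts, state)
    return best[1] if best else "unknown"

events = [(100, "available"), (200, "fading"), (300, "unavailable")]
```

With the index in place, the equivalent database query can fetch only the single latest row per contact instead of scanning the history.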
Some files were not shown because too many files have changed in this diff