Switch to Quadlet and add agent instructions

commit bbb19f3c2c (parent c400c46e7d)
2026-02-19 01:33:40 +00:00
6 changed files with 629 additions and 4 deletions

AGENTS.md — new file (173 lines)

@@ -0,0 +1,173 @@
# GIA — Agent Knowledge Base
## Overview
GIA is a multi-transport communication platform bridging Signal, WhatsApp, XMPP, and Instagram through a Django web interface. It provides message relay, AI-powered workspace analysis, compose UX, and OSINT search. Stack: Python 3.11, Django 4.x, HTMX, Bulma CSS, SQLite, Redis, Docker Compose. Async runtime uses asyncio + uvloop.
## Structure
```
GIA/
├── app/ # Django project config (settings, urls, asgi, wsgi)
│ ├── settings.py # Main settings (imports local_settings.py at bottom)
│ ├── local_settings.py # Env-driven overrides (secrets, feature flags)
│ ├── urls.py # All URL routing (single flat file, no includes)
│ └── asgi.py # ASGI entrypoint with WebSocket routing
├── core/ # ALL application logic lives here
│ ├── models.py # All models (~1600 lines) — User, Person, AI, Message, etc.
│ ├── forms.py # Django ModelForms using RestrictedFormMixin from mixins
│ ├── admin.py # Admin registrations
│ ├── clients/ # Transport service adapters
│ │ ├── __init__.py # ClientBase ABC (start, message_received, etc.)
│ │ ├── transport.py # Shared transport layer — attachment prep, send, runtime state
│ │ ├── whatsapp.py # WhatsApp client via Neonize (~3100 lines)
│ │ ├── signal.py # Signal client via signal-cli REST API
│ │ ├── xmpp.py # XMPP client via slixmpp
│ │ ├── instagram.py # Instagram client via aiograpi
│ │ └── gateway.py # Gateway HTTP helpers
│ ├── messaging/ # Message processing pipeline
│ │ ├── ai.py # OpenAI integration (AsyncOpenAI)
│ │ ├── history.py # Prompt window builder with adaptive limits
│ │ ├── media_bridge.py # Media attachment resolution
│ │ ├── analysis.py # Conversation analysis
│ │ ├── natural.py # Natural language processing
│ │ ├── replies.py # Reply generation
│ │ └── utils.py # Message formatting helpers
│ ├── modules/
│ │ └── router.py # UnifiedRouter — orchestrates all transport clients
│ ├── views/ # Django views (class-based)
│ │ ├── compose.py # Compose UX (~3400 lines) — send, drafts, thread, media
│ │ ├── workspace.py # AI workspace (~5200 lines) — insights, mitigation, patterns
│ │ ├── osint.py # OSINT/search interface
│ │ └── ... # CRUD views for people, groups, sessions, etc.
│ ├── lib/prompts/ # AI persona prompt templates
│ ├── realtime/ # WebSocket handlers (compose thread)
│ ├── templates/ # Django templates (75 files, partials/ heavy)
│ ├── management/commands/ # ur (unified router), scheduling
│ └── util/logs.py # Custom colored logger — use logs.get_logger("name")
├── Makefile # Docker Compose orchestration commands
├── docker-compose.yml # Services: app, asgi, ur, scheduling, redis, signal-cli
├── Dockerfile # Python 3.11, venv at /venv
├── requirements.txt # Pinned deps (django, openai, neonize, slixmpp, etc.)
├── stack.env # Runtime env vars (from stack.env.example)
└── LLM_CODING_STANDARDS.md # Project-specific coding rules (READ THIS)
```
## Commands
```bash
# All commands run via Docker Compose with stack.env
make build # Build Docker images
make run # Start all services (quadlet manager)
make stop # Stop all services
make log # Tail logs
make compose-run # Start via docker-compose directly
make compose-stop # Stop via docker-compose
make compose-log # Tail via docker-compose
# Database
make migrate # Run Django migrations
make makemigrations # Generate new migrations
make auth # Create superuser
# Testing
make test # Run all tests
make test MODULES=core.tests # Run specific test module
# Inside container (or with venv activated):
python manage.py test core.tests -v 2 # All tests
python manage.py test core.tests.test_foo -v 2 # Single test module
python manage.py test core.tests.test_foo.TestBar -v 2 # Single class
python manage.py test core.tests.test_foo.TestBar.test_method -v 2 # Single test
# Service restarts after code changes
docker-compose restart ur # Restart unified router
docker-compose restart scheduling # Restart scheduler
# uWSGI auto-reloads for app/core code changes
```
## Code Style
### Formatting & Linting (pre-commit enforced)
- **Black**: Line length 88, excludes `core/migrations/`
- **isort**: Profile `black` (compatible grouping)
- **flake8**: Max line 88, ignores E203, E231, E501, E702, W291
- **djhtml**: Template indent 2 spaces (`-t 2`)
- **ripsecrets**: Scans for leaked credentials
### Imports
- Standard library first, then third-party, then Django, then project
- Project imports use absolute paths: `from core.models import Person`
- Within same package, relative OK: `from .models import User` (seen in admin/forms)
- Views import models explicitly: `from core.models import AI, Person, Message`
### Naming
- `snake_case` for functions, variables, modules
- `PascalCase` for classes (Django views, models)
- `UPPER_CASE` for module-level constants and Django settings
- Private helpers prefixed with `_`: `_safe_limit()`, `_service_key()`
- Service names always lowercase strings: `"signal"`, `"whatsapp"`, `"xmpp"`, `"instagram"`
### Logging
- Use `from core.util import logs` then `log = logs.get_logger("name")` for transport/messaging code
- Use `import logging; logger = logging.getLogger(__name__)` for views/models
- Info for lifecycle events, warning/error for failures, debug for high-volume traces
- Debug logs gated behind `GIA_DEBUG_LOGS` env var
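
A minimal sketch of the pattern (the real helper in `core/util/logs.py` adds colors; this stdlib-only approximation shows the call shape and the `GIA_DEBUG_LOGS` gate, nothing more):

```python
import logging
import os

def get_logger(name: str) -> logging.Logger:
    # Approximation of logs.get_logger(): debug output is gated behind
    # the GIA_DEBUG_LOGS env var; everything else starts at INFO.
    log = logging.getLogger(name)
    log.setLevel(logging.DEBUG if os.environ.get("GIA_DEBUG_LOGS") else logging.INFO)
    return log

log = get_logger("transport")
log.info("signal client started")   # lifecycle event
log.debug("raw frame received")     # emitted only when GIA_DEBUG_LOGS is set
```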
### Views
- Class-based views inheriting `View` or `LoginRequiredMixin`
- CRUD views use `mixins.views.ObjectCreate/ObjectUpdate/ObjectDelete` from `django-crud-mixins`
- Forms use `RestrictedFormMixin` for user-scoped queryset filtering
- HTMX-driven partials in `core/templates/partials/`
### Models
- All models in single `core/models.py` (~1600 lines)
- UUIDs as primary keys for `Person` and related models
- `SERVICE_CHOICES` tuple for transport type fields
- Custom `User` model extending `AbstractUser` with billing fields
- Multi-tenant: most models have `user = ForeignKey(User)`
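
Illustrative stand-in for these conventions (not the real schema — field names are hypothetical, and Django is deliberately not imported so the sketch runs anywhere):

```python
import uuid
from dataclasses import dataclass, field

# Mirrors SERVICE_CHOICES from core/models.py: lowercase service strings.
SERVICE_CHOICES = ("signal", "whatsapp", "xmpp", "instagram")

@dataclass
class PersonSketch:
    user_id: int                                        # multi-tenant scope (user FK)
    service: str                                        # one of SERVICE_CHOICES
    id: uuid.UUID = field(default_factory=uuid.uuid4)   # UUID primary key

    def __post_init__(self) -> None:
        if self.service not in SERVICE_CHOICES:
            raise ValueError(f"unknown service: {self.service!r}")
```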
### Async Patterns
- Transport clients are async (`async def send_message_raw(...)`)
- Views bridge sync Django to async transport via `async_to_sync()`
- `orjson` for fast JSON serialization in transport layer
- Redis cache for runtime state, bridge maps, command queues
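
Standalone sketch of the sync-to-async bridge. In views the project uses asgiref's `async_to_sync()`; `asyncio.run()` plays that role here so the example runs without Django, and `send_message_raw` is a simplified stand-in for a transport client method:

```python
import asyncio

async def send_message_raw(service: str, body: str) -> dict:
    # Transport clients are async coroutines.
    await asyncio.sleep(0)  # stand-in for network I/O
    return {"service": service, "body": body, "sent": True}

def send_from_sync_code(service: str, body: str) -> dict:
    # In a real view this would be: async_to_sync(send_message_raw)(service, body)
    return asyncio.run(send_message_raw(service, body))
```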
### Error Handling
- Standard `try/except` with specific exception types
- Django `ValidationError` for model validation
- `get_object_or_404()` in views for missing resources
- `HttpResponseBadRequest` / `HttpResponseNotFound` for view error responses
- No custom exception hierarchy — use built-in Django/Python exceptions
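
Django-free sketch of the convention: catch specific exception types and map them to explicit error responses. `ValidationError` here is a local stand-in for `django.core.exceptions.ValidationError`, and the return tuple approximates an HTTP status plus body:

```python
class ValidationError(Exception):
    pass

def handle_send(payload: dict) -> tuple[int, str]:
    try:
        body = payload["body"]
        if not body.strip():
            raise ValidationError("empty message body")
        return 200, "queued"
    except KeyError:
        # Analogue of returning HttpResponseBadRequest for a missing field.
        return 400, "missing field: body"
    except ValidationError as exc:
        return 400, str(exc)
```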
## LLM Coding Standards (from LLM_CODING_STANDARDS.md)
**MUST READ**: `LLM_CODING_STANDARDS.md` contains binding project rules. Key points:
- Fix root causes; don't paper over with UI-only patches
- Keep behavior symmetric across all transports where protocol permits
- Centralize shared logic — no copy/paste service forks
- Shared attachment prep goes through `core/clients/transport.py`
- Never inject internal blob links as relay body text for attachment-only messages
- After changing `core/clients/*` or router/relay/transport: restart runtime (`make stop && make run`)
- Logging: lifecycle at info, failures at warning/error, high-volume at debug
- Debug diagnostics must be gated (e.g. `WHATSAPP_DEBUG`) and removable in one patch
- When touching large files (2000+ lines): extract minimal reusable helpers, add docstrings
- Update `INSTALL.md` and `README.md` when operational commands/env requirements change
## Anti-Patterns
- **DO NOT** create separate transport-specific media pipelines — use `transport.prepare_outbound_attachments()`
- **DO NOT** add `TODO`/`FIXME` comments — codebase is currently clean of them
- **DO NOT** use `print()` — use the logging system via `logs.get_logger()`
- **DO NOT** modify `core/migrations/` files — Black/linting excludes them for a reason
- **DO NOT** commit `stack.env`, `db.sqlite3`, or any secrets — `ripsecrets` pre-commit hook will block
- **DO NOT** add new models outside `core/models.py` — all models live in one file
- **DO NOT** use type suppression or ignore runtime errors silently
## Key Architecture Notes
- **Unified Router** (`core/modules/router.py`): Management command `python manage.py ur` runs the event loop with all transport clients. Each client inherits `ClientBase` ABC.
- **Transport Layer** (`core/clients/transport.py`): Shared cache-backed runtime state, command queuing, and attachment prep. All outbound media goes through `prepare_outbound_attachments()`.
- **Settings Chain**: `app/settings.py` → imports `app/local_settings.py` (wildcard `*`) → env vars from `stack.env`. Feature flags: `WHATSAPP_ENABLED`, `INSTAGRAM_ENABLED`, `COMPOSE_WS_ENABLED`.
- **Services in docker-compose**: `app` (uWSGI), `asgi` (uvicorn for WebSockets), `ur` (unified router), `scheduling` (APScheduler), `redis`, `signal-cli-rest-api`.
- **No test suite currently**: `core/tests.py` is empty scaffold; `core/tests/` has only `__init__.py`. Tests run via `make test MODULES=...` but need to be written.
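
Sketch of how a `stack.env` feature flag might be read in `local_settings.py`. `env_flag` is a hypothetical helper, not project code; the actual parsing may differ:

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    # stack.env values arrive as strings; treat common truthy spellings as enabled.
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

WHATSAPP_ENABLED = env_flag("WHATSAPP_ENABLED")
COMPOSE_WS_ENABLED = env_flag("COMPOSE_WS_ENABLED")
```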

Makefile

@@ -1,13 +1,30 @@
+QUADLET_MGR := ./scripts/quadlet/manage.sh
 run:
-	docker-compose --env-file=stack.env up -d
+	bash $(QUADLET_MGR) up
 build:
 	docker-compose --env-file=stack.env build
 stop:
-	docker-compose --env-file=stack.env down
+	bash $(QUADLET_MGR) down
 log:
+	bash $(QUADLET_MGR) logs
+status:
+	bash $(QUADLET_MGR) status
+quadlet-install:
+	bash $(QUADLET_MGR) install
+compose-run:
+	docker-compose --env-file=stack.env up -d
+compose-stop:
+	docker-compose --env-file=stack.env down
+compose-log:
+	docker-compose --env-file=stack.env logs -f --names
 test:

docker-compose.yml

@@ -336,7 +336,8 @@ services:
       source: /code/vrun
       target: /var/run
     healthcheck:
-      test: "CMD-SHELL redis-cli -s /var/run/gia-redis.sock ping"
+      test:
+        ["CMD", "redis-cli", "-s", "/var/run/gia-redis.sock", "ping"]
       interval: 2s
       timeout: 2s
       retries: 15

requirements.txt

@@ -26,7 +26,7 @@ pydantic
 git+https://git.zm.is/XF/django-crud-mixins
 # pyroscope-io
 # For caching
-redis
+redis<7
 hiredis
 django-cachalot
 django_redis

scripts/quadlet/manage.sh — new executable file (183 lines)

@@ -0,0 +1,183 @@
#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
STACK_ENV="${STACK_ENV:-$ROOT_DIR/stack.env}"
POD_NAME="gia"
REDIS_CONTAINER="redis_gia"
SIGNAL_CONTAINER="signal"
MIGRATION_CONTAINER="migration_gia"
COLLECTSTATIC_CONTAINER="collectstatic_gia"
APP_CONTAINER="gia"
ASGI_CONTAINER="asgi_gia"
UR_CONTAINER="ur_gia"
SCHED_CONTAINER="scheduling_gia"
REDIS_DATA_DIR="${QUADLET_REDIS_DATA_DIR:-$ROOT_DIR/.podman/gia_redis_data}"
WHATSAPP_DATA_DIR="${QUADLET_WHATSAPP_DATA_DIR:-$ROOT_DIR/.podman/gia_whatsapp_data}"
VRUN_DIR="/code/vrun"
load_env() {
set -a
. "$STACK_ENV"
set +a
}
require_podman() {
if ! command -v podman >/dev/null 2>&1; then
echo "podman not found" >&2
exit 1
fi
}
ensure_dirs() {
mkdir -p "$REDIS_DATA_DIR" "$WHATSAPP_DATA_DIR" "$VRUN_DIR" "$ROOT_DIR/signal-cli-config"
}
rm_if_exists() {
podman rm -f "$1" >/dev/null 2>&1 || true
}
wait_for_redis_socket() {
local sock="$VRUN_DIR/gia-redis.sock"
local i
for i in $(seq 1 60); do
[[ -S "$sock" ]] && return 0
sleep 1
done
echo "redis socket did not appear at $sock" >&2
return 1
}
run_worker_container() {
local name="$1"
local cmd="$2"
local with_uwsgi="${3:-0}"
local with_whatsapp="${4:-0}"
rm_if_exists "$name"
local args=(
--replace
--name "$name"
--pod "$POD_NAME"
--env-file "$STACK_ENV"
--env "SIGNAL_HTTP_URL=http://127.0.0.1:8080"
-v "$REPO_DIR:/code"
-v "$APP_DATABASE_FILE:/conf/db.sqlite3"
-v "$VRUN_DIR:/var/run"
)
if [[ "$with_uwsgi" == "1" ]]; then
args+=( -v "$REPO_DIR/docker/uwsgi.ini:/conf/uwsgi.ini:ro" )
fi
if [[ "$with_whatsapp" == "1" ]]; then
args+=( -v "$WHATSAPP_DATA_DIR:${WHATSAPP_DB_DIR:-/var/tmp/whatsapp}" )
fi
podman run -d "${args[@]}" localhost/xf/gia:prod sh -c "$cmd" >/dev/null
}
run_oneshot_container() {
local name="$1"
local cmd="$2"
local with_whatsapp="${3:-0}"
rm_if_exists "$name"
local args=(
--replace
--name "$name"
--pod "$POD_NAME"
--env-file "$STACK_ENV"
--env "SIGNAL_HTTP_URL=http://127.0.0.1:8080"
-v "$REPO_DIR:/code"
-v "$APP_DATABASE_FILE:/conf/db.sqlite3"
-v "$VRUN_DIR:/var/run"
)
if [[ "$with_whatsapp" == "1" ]]; then
args+=( -v "$WHATSAPP_DATA_DIR:${WHATSAPP_DB_DIR:-/var/tmp/whatsapp}" )
fi
podman run "${args[@]}" localhost/xf/gia:prod sh -c "$cmd" >/dev/null
}
down_stack() {
podman pod rm -f "$POD_NAME" >/dev/null 2>&1 || true
rm_if_exists "$REDIS_CONTAINER"
rm_if_exists "$SIGNAL_CONTAINER"
rm_if_exists "$MIGRATION_CONTAINER"
rm_if_exists "$COLLECTSTATIC_CONTAINER"
rm_if_exists "$APP_CONTAINER"
rm_if_exists "$ASGI_CONTAINER"
rm_if_exists "$UR_CONTAINER"
rm_if_exists "$SCHED_CONTAINER"
}
start_stack() {
require_podman
load_env
ensure_dirs
down_stack
podman pod create --name "$POD_NAME" -p "${APP_PORT:-5006}:8000" -p "8080:8080" >/dev/null
podman run -d \
--replace \
--name "$REDIS_CONTAINER" \
--pod "$POD_NAME" \
-v "$REPO_DIR/docker/redis.conf:/etc/redis.conf:ro" \
-v "$REDIS_DATA_DIR:/data" \
-v "$VRUN_DIR:/var/run" \
docker.io/library/redis:latest \
redis-server /etc/redis.conf >/dev/null
podman run -d \
--replace \
--name "$SIGNAL_CONTAINER" \
--pod "$POD_NAME" \
-e MODE=json-rpc \
-v "$ROOT_DIR/signal-cli-config:/home/.local/share/signal-cli" \
docker.io/bbernhard/signal-cli-rest-api:latest >/dev/null
wait_for_redis_socket
run_oneshot_container "$MIGRATION_CONTAINER" ". /venv/bin/activate && python manage.py migrate --noinput"
run_oneshot_container "$COLLECTSTATIC_CONTAINER" ". /venv/bin/activate && python manage.py collectstatic --noinput"
run_worker_container "$APP_CONTAINER" "if [ \"\$OPERATION\" = \"uwsgi\" ] ; then . /venv/bin/activate && uwsgi --ini /conf/uwsgi.ini ; else . /venv/bin/activate && exec python manage.py runserver 0.0.0.0:8000; fi" 1 1
run_worker_container "$ASGI_CONTAINER" "rm -f /var/run/asgi-gia.sock && . /venv/bin/activate && python -m pip install --disable-pip-version-check -q uvicorn && python -m uvicorn app.asgi:application --uds /var/run/asgi-gia.sock --workers 1" 0 1
run_worker_container "$UR_CONTAINER" ". /venv/bin/activate && python manage.py ur" 1 1
run_worker_container "$SCHED_CONTAINER" ". /venv/bin/activate && python manage.py scheduling" 1 0
}
render_units() {
python3 "$ROOT_DIR/scripts/quadlet/render_units.py" --stack-env "$STACK_ENV"
}
case "${1:-}" in
install)
render_units
;;
up)
start_stack
trap 'down_stack; exit 0' INT TERM
podman logs -f "$APP_CONTAINER" "$ASGI_CONTAINER" "$UR_CONTAINER" "$SCHED_CONTAINER" "$REDIS_CONTAINER" "$SIGNAL_CONTAINER" || true
;;
down)
require_podman
down_stack
;;
restart)
start_stack
;;
status)
require_podman
podman pod ps --format "table {{.Name}}\t{{.Status}}" | grep -E "^$POD_NAME\b" || true
podman ps --format "table {{.Names}}\t{{.Status}}" | grep -E "^($APP_CONTAINER|$ASGI_CONTAINER|$UR_CONTAINER|$SCHED_CONTAINER|$REDIS_CONTAINER|$SIGNAL_CONTAINER)\b" || true
;;
logs)
require_podman
podman logs -f "$APP_CONTAINER" "$ASGI_CONTAINER" "$UR_CONTAINER" "$SCHED_CONTAINER" "$REDIS_CONTAINER" "$SIGNAL_CONTAINER"
;;
*)
echo "Usage: $0 {install|up|down|restart|status|logs}" >&2
exit 2
;;
esac

scripts/quadlet/render_units.py — new executable file (251 lines)

@@ -0,0 +1,251 @@
#!/usr/bin/env python3
from __future__ import annotations
from pathlib import Path
import argparse
import os
def parse_env(path: Path) -> dict[str, str]:
out: dict[str, str] = {}
if not path.exists():
return out
for raw in path.read_text().splitlines():
line = raw.strip()
if not line or line.startswith("#"):
continue
if "=" not in line:
continue
key, value = line.split("=", 1)
out[key.strip()] = value.strip().strip('"').strip("'")
return out
def abs_from(base: Path, raw: str, fallback: str) -> Path:
candidate = (raw or fallback).strip()
if not candidate:
candidate = fallback
p = Path(candidate).expanduser()
if not p.is_absolute():
p = (base / p).resolve()
return p
def write_unit(path: Path, content: str) -> None:
path.write_text(content.strip() + "\n")
def main() -> int:
parser = argparse.ArgumentParser()
parser.add_argument("--stack-env", default="stack.env")
parser.add_argument("--output-dir", default=str(Path.home() / ".config/containers/systemd"))
args = parser.parse_args()
repo_root = Path(__file__).resolve().parents[2]
stack_env_path = abs_from(repo_root, args.stack_env, "stack.env")
env = parse_env(stack_env_path)
repo_dir = abs_from(repo_root, env.get("REPO_DIR", "."), ".")
app_db_file = abs_from(repo_root, env.get("APP_DATABASE_FILE", "./db.sqlite3"), "./db.sqlite3")
redis_data_dir = abs_from(repo_root, env.get("QUADLET_REDIS_DATA_DIR", "./.podman/gia_redis_data"), "./.podman/gia_redis_data")
whatsapp_data_dir = abs_from(repo_root, env.get("QUADLET_WHATSAPP_DATA_DIR", "./.podman/gia_whatsapp_data"), "./.podman/gia_whatsapp_data")
vrun_dir = Path("/code/vrun")
signal_cli_dir = (repo_dir / "signal-cli-config").resolve()
uwsgi_ini = (repo_dir / "docker" / "uwsgi.ini").resolve()
redis_conf = (repo_dir / "docker" / "redis.conf").resolve()
for p in (redis_data_dir, whatsapp_data_dir, vrun_dir, signal_cli_dir):
p.mkdir(parents=True, exist_ok=True)
out_dir = Path(args.output_dir).expanduser().resolve()
out_dir.mkdir(parents=True, exist_ok=True)
env_file = stack_env_path
pod_unit = """
[Unit]
Description=GIA Pod
[Pod]
PodName=gia
[Install]
WantedBy=default.target
"""
target_unit = """
[Unit]
Description=GIA Stack Target
Wants=gia-redis.service gia-signal.service gia-migration.service gia-collectstatic.service gia-app.service gia-asgi.service gia-ur.service gia-scheduling.service
After=gia-redis.service gia-signal.service gia-migration.service gia-collectstatic.service
[Install]
WantedBy=default.target
"""
redis_unit = f"""
[Unit]
Description=GIA Redis
PartOf=gia.target
After=network-online.target
Wants=network-online.target
[Container]
Image=docker.io/library/redis:latest
ContainerName=redis_gia
Pod=gia.pod
Volume={redis_conf}:/etc/redis.conf:ro
Volume={redis_data_dir}:/data
Volume={vrun_dir}:/var/run
Exec=redis-server /etc/redis.conf
[Service]
Restart=always
RestartSec=2
[Install]
WantedBy=gia.target
"""
signal_unit = f"""
[Unit]
Description=GIA Signal API
PartOf=gia.target
After=network-online.target
Wants=network-online.target
[Container]
Image=docker.io/bbernhard/signal-cli-rest-api:latest
ContainerName=signal
Pod=gia.pod
Volume={signal_cli_dir}:/home/.local/share/signal-cli
Environment=MODE=json-rpc
[Service]
Restart=always
RestartSec=2
[Install]
WantedBy=gia.target
"""
def gia_container(name: str, container_name: str, command: str, include_uwsgi: bool, include_whatsapp: bool, after: str, requires: str, one_shot: bool = False) -> str:
lines = [
"[Unit]",
f"Description={name}",
"PartOf=gia.target",
f"After={after}",
f"Requires={requires}",
"",
"[Container]",
"Image=localhost/xf/gia:prod",
f"ContainerName={container_name}",
"Pod=gia.pod",
f"EnvironmentFile={env_file}",
"Environment=SIGNAL_HTTP_URL=http://127.0.0.1:8080",
f"Volume={repo_dir}:/code",
f"Volume={app_db_file}:/conf/db.sqlite3",
f"Volume={vrun_dir}:/var/run",
]
if include_uwsgi:
lines.append(f"Volume={uwsgi_ini}:/conf/uwsgi.ini")
if include_whatsapp:
lines.append(f"Volume={whatsapp_data_dir}:{env.get('WHATSAPP_DB_DIR', '/var/tmp/whatsapp')}")
lines.append(f"Exec={command}")
lines.extend(["", "[Service]"])
if one_shot:
lines.extend([
"Type=oneshot",
"RemainAfterExit=yes",
"TimeoutStartSec=0",
"ExecStartPre=/bin/sh -c 'for i in $(seq 1 60); do [ -S /code/vrun/gia-redis.sock ] && exit 0; sleep 1; done; exit 1'",
])
else:
lines.extend([
"Restart=always",
"RestartSec=2",
])
lines.extend(["", "[Install]", "WantedBy=gia.target"])
return "\n".join(lines)
migration_unit = gia_container(
"GIA Migration",
"migration_gia",
"sh -c '. /venv/bin/activate && python manage.py migrate --noinput'",
include_uwsgi=False,
include_whatsapp=False,
after="gia-redis.service gia-signal.service",
requires="gia-redis.service gia-signal.service",
one_shot=True,
)
collectstatic_unit = gia_container(
"GIA Collectstatic",
"collectstatic_gia",
"sh -c '. /venv/bin/activate && python manage.py collectstatic --noinput'",
include_uwsgi=False,
include_whatsapp=False,
after="gia-migration.service",
requires="gia-migration.service",
one_shot=True,
)
app_unit = gia_container(
"GIA App",
"gia",
"sh -c 'if [ \\\"$OPERATION\\\" = \\\"uwsgi\\\" ] ; then . /venv/bin/activate && uwsgi --ini /conf/uwsgi.ini ; else . /venv/bin/activate && exec python manage.py runserver 0.0.0.0:8000; fi'",
include_uwsgi=True,
include_whatsapp=True,
after="gia-collectstatic.service gia-redis.service gia-signal.service",
requires="gia-collectstatic.service gia-redis.service gia-signal.service",
)
asgi_unit = gia_container(
"GIA ASGI",
"asgi_gia",
"sh -c 'rm -f /var/run/asgi-gia.sock && . /venv/bin/activate && python -m pip install --disable-pip-version-check -q uvicorn && python -m uvicorn app.asgi:application --uds /var/run/asgi-gia.sock --workers 1'",
include_uwsgi=False,
include_whatsapp=True,
after="gia-collectstatic.service gia-redis.service gia-signal.service",
requires="gia-collectstatic.service gia-redis.service gia-signal.service",
)
ur_unit = gia_container(
"GIA Unified Router",
"ur_gia",
"sh -c '. /venv/bin/activate && python manage.py ur'",
include_uwsgi=True,
include_whatsapp=True,
after="gia-collectstatic.service gia-redis.service gia-signal.service",
requires="gia-collectstatic.service gia-redis.service gia-signal.service",
)
scheduling_unit = gia_container(
"GIA Scheduling",
"scheduling_gia",
"sh -c '. /venv/bin/activate && python manage.py scheduling'",
include_uwsgi=True,
include_whatsapp=False,
after="gia-collectstatic.service gia-redis.service gia-signal.service",
requires="gia-collectstatic.service gia-redis.service gia-signal.service",
)
write_unit(out_dir / "gia.pod", pod_unit)
write_unit(out_dir / "gia.target", target_unit)
write_unit(out_dir / "gia-redis.container", redis_unit)
write_unit(out_dir / "gia-signal.container", signal_unit)
write_unit(out_dir / "gia-migration.container", migration_unit)
write_unit(out_dir / "gia-collectstatic.container", collectstatic_unit)
write_unit(out_dir / "gia-app.container", app_unit)
write_unit(out_dir / "gia-asgi.container", asgi_unit)
write_unit(out_dir / "gia-ur.container", ur_unit)
write_unit(out_dir / "gia-scheduling.container", scheduling_unit)
print(f"Wrote Quadlet units to: {out_dir}")
return 0
if __name__ == "__main__":
raise SystemExit(main())