# clawdesk-local-models
The local-models crate enables running AI models directly on your computer, with no external services or internet connection required for inference.
## What It Does (Plain English)
This crate is the engine behind ClawDesk's "Local Models" page. It detects your computer's hardware (CPU, RAM, GPU), recommends AI models that will run well, downloads model files, and manages the local inference server. It's what makes running AI offline possible.
## Key Features
- Hardware detection — Identifies CPU, RAM, GPU capabilities
- Model recommendations — Suggests models matched to your hardware
- Download management — Downloads GGUF model files with progress tracking
- Server management — Starts/stops llama-server for local inference
- Provider integration — Registers as "Local (Built-in)" provider in ClawDesk
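To make the hardware-detection and recommendation steps concrete, here is a minimal sketch. The `recommend_quant_size` function and its RAM thresholds are illustrative assumptions, not the crate's actual policy; CPU counting uses the standard library, whereas the real `hardware.rs` would also probe RAM and GPU via platform APIs.

```rust
use std::thread;

/// Rough model-size tiers by available system RAM in GB.
/// Thresholds are illustrative, not the crate's actual policy.
fn recommend_quant_size(ram_gb: u64) -> &'static str {
    match ram_gb {
        0..=7 => "3B (Q4)",
        8..=15 => "7B-8B (Q4/Q5)",
        16..=31 => "13B-14B (Q4/Q5)",
        _ => "30B+ (Q4)",
    }
}

fn main() {
    // CPU detection via the standard library; GPU/RAM detection in the
    // real crate would require platform-specific APIs.
    let cpus = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    println!("logical CPUs: {cpus}");
    println!("suggested model class for 16 GB RAM: {}", recommend_quant_size(16));
}
```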
## How It Works
### Key Files
| File | Purpose |
|---|---|
| `hardware.rs` | System hardware detection |
| `models.rs` | Model database and recommendations |
| `server.rs` | llama-server management |
| `download.rs` | Model file download with progress |
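A sketch of what `server.rs`-style lifecycle management might look like: building the llama-server invocation without spawning it. The `-m` and `--port` flags are standard llama.cpp server flags, but the exact flag set ClawDesk passes is an assumption here.

```rust
use std::path::Path;
use std::process::Command;

/// Build (but do not spawn) a llama-server invocation.
/// Flag choice (-m, --port) follows upstream llama.cpp conventions;
/// ClawDesk's actual arguments may differ.
fn build_server_command(model: &Path, port: u16) -> Command {
    let mut cmd = Command::new("llama-server");
    cmd.arg("-m").arg(model).arg("--port").arg(port.to_string());
    cmd
}

fn main() {
    let cmd = build_server_command(Path::new("/tmp/model.gguf"), 8080);
    println!("program: {:?}", cmd.get_program());
    // Actually starting the server would be `cmd.spawn()`, with the child
    // handle kept so the process can be stopped later.
}
```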
### Architecture Role
| Layer | Position |
|---|---|
| Services | Local inference management |
## Storage
Models are stored as GGUF files in `~/.clawdesk/models/`.
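Resolving that directory can be sketched as below. Reading `$HOME` directly is a simplification for illustration; the crate may use a home-directory helper instead.

```rust
use std::env;
use std::path::PathBuf;

/// Resolve the model storage directory (~/.clawdesk/models/).
/// Reading $HOME directly is a simplification; falls back to "." if unset.
fn models_dir() -> PathBuf {
    let home = env::var("HOME").unwrap_or_else(|_| ".".to_string());
    PathBuf::from(home).join(".clawdesk").join("models")
}

fn main() {
    println!("models dir: {}", models_dir().display());
}
```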
## Dependencies
- `clawdesk-types` — Model metadata types
- `clawdesk-providers` — Provider trait implementation