clawdesk-local-models

The local-models crate runs AI models directly on your computer, with no external services and no internet connection required.

What It Does (Plain English)

This crate is the engine behind ClawDesk's "Local Models" page. It detects your computer's hardware (CPU, RAM, GPU), recommends AI models that will run well, downloads model files, and manages the local inference server. It's what makes running AI offline possible.
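Hardware detection is the first step in that pipeline. As a minimal sketch of what a detection routine can look like, the standard library already exposes the logical CPU count; RAM and GPU probing need platform-specific APIs, so the struct below is a hypothetical shape rather than the crate's actual hardware.rs types:

```rust
use std::thread;

/// Minimal hardware snapshot (hypothetical; the real hardware.rs
/// types are not shown in this doc).
#[derive(Debug)]
pub struct HardwareInfo {
    pub cpu_cores: usize,
}

pub fn detect_hardware() -> HardwareInfo {
    // Logical CPU count comes straight from the standard library.
    // RAM and GPU detection would require OS-specific calls.
    let cpu_cores = thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);
    HardwareInfo { cpu_cores }
}

fn main() {
    let hw = detect_hardware();
    println!("cpu_cores={}", hw.cpu_cores);
}
```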

Key Features

  • Hardware detection — Identifies CPU, RAM, GPU capabilities
  • Model recommendations — Suggests models matched to your hardware
  • Download management — Downloads GGUF model files with progress tracking
  • Server management — Starts/stops llama-server for local inference
  • Provider integration — Registers as "Local (Built-in)" provider in ClawDesk
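To make the recommendation step concrete, here is a sketch under one simplifying assumption: each model advertises a minimum RAM requirement, and we pick the largest model that still fits. The ModelEntry struct and field names are illustrative, not the crate's actual model database schema:

```rust
/// Hypothetical model database entry; the real models.rs schema
/// is not shown in this doc.
pub struct ModelEntry {
    pub name: &'static str,
    pub min_ram_gb: u64,
}

/// Recommend the largest model whose RAM requirement fits the machine.
pub fn recommend(models: &[ModelEntry], ram_gb: u64) -> Option<&ModelEntry> {
    models
        .iter()
        .filter(|m| m.min_ram_gb <= ram_gb)
        .max_by_key(|m| m.min_ram_gb)
}

fn main() {
    let models = [
        ModelEntry { name: "tiny-1b", min_ram_gb: 4 },
        ModelEntry { name: "mid-7b", min_ram_gb: 8 },
        ModelEntry { name: "big-13b", min_ram_gb: 16 },
    ];
    // With 12 GB of RAM, the 7B model is the best fit.
    let pick = recommend(&models, 12).map(|m| m.name);
    println!("{:?}", pick); // prints Some("mid-7b")
}
```

A real recommender would also weigh GPU VRAM and quantization level, but the fit-the-largest-model shape stays the same.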

How It Works

Key Files

File          Purpose
hardware.rs   System hardware detection
models.rs     Model database and recommendations
server.rs     llama-server management
download.rs   Model file download with progress
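The progress-tracking part of a download can be sketched with nothing but std::io: stream in chunks and invoke a callback with the running byte count. The real download.rs presumably streams over HTTP, which needs an HTTP client not shown here; this is the core loop only:

```rust
use std::io::{Read, Write};

/// Copy `reader` to `writer` in chunks, reporting total bytes so far
/// after each chunk. A sketch of progress tracking, not the crate's
/// actual download code.
pub fn copy_with_progress<R: Read, W: Write>(
    mut reader: R,
    mut writer: W,
    mut on_progress: impl FnMut(u64),
) -> std::io::Result<u64> {
    let mut buf = [0u8; 8192];
    let mut total = 0u64;
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break; // end of stream
        }
        writer.write_all(&buf[..n])?;
        total += n as u64;
        on_progress(total);
    }
    Ok(total)
}

fn main() -> std::io::Result<()> {
    let data = vec![0u8; 20_000]; // stand-in for a GGUF payload
    let mut out = Vec::new();
    let mut last = 0;
    let total = copy_with_progress(&data[..], &mut out, |done| last = done)?;
    println!("downloaded {} of {} bytes", last, total);
    Ok(())
}
```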

Architecture Role

Layer      Position
Services   Local inference management
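Starting the inference server amounts to spawning llama-server as a child process. The sketch below only builds the invocation (using llama.cpp's -m and --port flags) without spawning it, so it runs even where llama-server is not installed; the real server.rs lifecycle handling is not shown here:

```rust
use std::process::Command;

/// Build (but do not spawn) a llama-server invocation.
/// Flag names follow llama.cpp's llama-server CLI; the actual
/// arguments server.rs passes are an assumption.
pub fn llama_server_command(model_path: &str, port: u16) -> Command {
    let mut cmd = Command::new("llama-server");
    cmd.arg("-m")
        .arg(model_path)
        .arg("--port")
        .arg(port.to_string());
    cmd
}

fn main() {
    let cmd = llama_server_command("/tmp/model.gguf", 8080);
    // Inspect the command instead of running it.
    println!("{:?} {:?}", cmd.get_program(), cmd.get_args());
}
```

A real manager would keep the Child handle around so it can stop the server and restart it when the selected model changes.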

Storage

Models are stored in ~/.clawdesk/models/ as GGUF files.
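Resolving that directory can be sketched with the HOME environment variable; a real implementation would also handle Windows, where HOME is typically absent:

```rust
use std::path::PathBuf;

/// Resolve the model storage directory, ~/.clawdesk/models/.
/// Unix-only sketch: relies on the HOME environment variable.
pub fn models_dir() -> Option<PathBuf> {
    std::env::var_os("HOME")
        .map(|home| PathBuf::from(home).join(".clawdesk").join("models"))
}

fn main() {
    if let Some(dir) = models_dir() {
        println!("{}", dir.display());
    }
}
```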

Dependencies

  • clawdesk-types — Model metadata types
  • clawdesk-providers — Provider trait implementation