Package python3-ramalama

RamaLama is a command line tool for working with AI large language models (LLMs)

https://github.com/containers/ramalama

On first run, RamaLama inspects your system for GPU support, falling back to CPU
support if no GPUs are present. It then uses a container engine such as Podman to
pull an OCI image containing all of the software necessary to run an AI Model
for your system's setup. This eliminates the need for users to configure the
system for AI themselves. After this initialization, RamaLama runs the AI Models
within a container based on that OCI image.
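
For example, a typical first session might look like the following; the model
name and transport prefix are illustrative, so consult ramalama-pull(1) and
ramalama-run(1) for the exact forms your version supports:

    # Pull a model from a registry into local storage
    ramalama pull ollama://tinyllama

    # List the models available in local storage
    ramalama list

    # Chat interactively with the downloaded model
    ramalama run tinyllama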

Version: 0.2.0

General Commands

ramalama simple management tool for working with AI Models
ramalama-containers list all RamaLama containers
ramalama-info display RamaLama configuration information
ramalama-list list all downloaded AI Models
ramalama-login login to remote registry
ramalama-logout logout from remote registry
ramalama-ls alias for ramalama-list
ramalama-ps alias for ramalama-containers
ramalama-pull pull AI Models from Model registries to local storage
ramalama-push push AI Models from local storage to remote registries
ramalama-rm remove AI Models from local storage
ramalama-run run specified AI Model as a chatbot
ramalama-serve serve REST API on specified AI Model (see the example below)
ramalama-stop stop named container that is running an AI Model
ramalama-version display version of RamaLama
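
A serving workflow combines several of the commands above. The container name,
model name, and flags below are illustrative; see ramalama-serve(1) and
ramalama-stop(1) for the options your version supports:

    # Serve a REST API for a model in a named, detached container
    ramalama serve --name mymodel --detach tinyllama

    # Verify the container is running
    ramalama containers

    # Stop the named container when done
    ramalama stop mymodel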

File Formats

ramalama.conf This configuration file specifies default configuration options and command-line flags for RamaLama.
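
A minimal ramalama.conf keeps its settings under a [ramalama] table. The keys
below are illustrative of the kind of options it holds; see ramalama.conf(5)
for the authoritative list in your version:

    [ramalama]

    # Container engine used to run AI Models
    engine = "podman"

    # OCI image pulled to run AI Models
    image = "quay.io/ramalama/ramalama:latest"

    # Default transport used when pulling AI Models
    transport = "ollama"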