ramalama.conf - Man Page

These configuration files specify the default configuration options and command-line flags for RamaLama.

Description

RamaLama reads the ramalama.conf file, if it exists, and modifies the defaults for running RamaLama on the host. ramalama.conf uses the TOML format, which can be easily modified and versioned.

RamaLama reads the following paths for global configuration that affects all users.

Paths
/usr/share/ramalama/ramalama.conf
/etc/ramalama/ramalama.conf
/etc/ramalama/ramalama.conf.d/*.conf

For user-specific configuration, it reads:

Paths
$XDG_CONFIG_HOME/ramalama/ramalama.conf
$XDG_CONFIG_HOME/ramalama/ramalama.conf.d/*.conf
$HOME/.config/ramalama/ramalama.conf.d/*.conf (when $XDG_CONFIG_HOME is not set)

Fields specified in ramalama.conf override the default options, as well as options in previously read ramalama.conf files.

Config files in the .d directories are added in alphanumeric sorted order and must end in .conf.
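
For example, a drop-in file (the path and name here are illustrative) could override a single option from an earlier ramalama.conf while leaving everything else untouched:

# /etc/ramalama/ramalama.conf.d/50-engine.conf
[ramalama]
# Overrides any engine value set in earlier configuration files
engine = "docker"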

Environment Variables

If the RAMALAMA_CONF environment variable is set, all system and user config files are ignored and only the specified config file will be loaded.

Format

The TOML format is used as the encoding of the configuration file. Every option is nested under its table. No bare options are used. The format of TOML can be simplified to:

[table1]
option = value

[table2]
option = value

[table3]
option = value

[table3.subtable1]
option = value

Ramalama Table

The ramalama table contains settings to configure and manage the OCI runtime.

ramalama

container=true

Run RamaLama in the default container. The RAMALAMA_IN_CONTAINER environment variable overrides this field.

carimage="registry.access.redhat.com/ubi9-micro:latest"

OCI Model Car image

Image to be used when building and pushing --type=car models.

engine="podman"

Run RamaLama using the specified container engine. Valid options are Podman and Docker. This field can be overridden by the RAMALAMA_CONTAINER_ENGINE environment variable.

host="0.0.0.0"

IP address for llama.cpp to listen on.

image="quay.io/ramalama/ramalama:latest"

OCI container image to run with the specified AI model. The RAMALAMA_IMAGE environment variable overrides this field.

port="8080"

Specify the default port for services to listen on.

runtime="llama.cpp"

Specify the AI runtime to use; valid options are 'llama.cpp' and 'vllm' (default: llama.cpp).

store="$HOME/.local/share/ramalama"

Store AI Models in the specified directory.

transport="ollama"

Specify the default transport to be used for pulling and pushing AI Models. Options: oci, ollama, huggingface. The RAMALAMA_TRANSPORT environment variable overrides this field.
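
Putting the options above together, a minimal ramalama.conf sketch might look like the following; the values are illustrative, not recommendations:

[ramalama]
# Run models directly on the host instead of the default container
container = false

# Use Docker instead of the default Podman engine
engine = "docker"

# Serve on a non-default port
port = "8081"

# Pull and push AI Models via the Hugging Face transport by default
transport = "huggingface"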

Referenced By

ramalama(1).
