ramalama-perplexity - Man Page

calculate the perplexity value of an AI Model

Synopsis

ramalama perplexity [options] model [arg ...]

Model Transports

Transports                Prefix                           Web Site
URL based                 https://, http://, file://       https://web.site/ai.model, file://tmp/ai.model
HuggingFace               huggingface://, hf://, hf.co/    huggingface.co
Ollama                    ollama://                        ollama.com
OCI Container Registries  oci://                           opencontainers.org
                                                           Examples: quay.io, Docker Hub, Artifactory

RamaLama defaults to the Ollama registry transport. This default can be overridden in the ramalama.conf file or via the RAMALAMA_TRANSPORT environment variable; for example, export RAMALAMA_TRANSPORT=huggingface switches RamaLama to the huggingface transport.
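
For example, with the environment variable set, unprefixed model names are resolved through the huggingface transport (the model name below is taken from the Examples section and is used purely for illustration):

export RAMALAMA_TRANSPORT=huggingface
ramalama perplexity granite-moe3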

Modify the transport for an individual model by prefixing the model name with huggingface://, oci://, ollama://, https://, http://, or file://.
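
For example, a transport prefix can be supplied per invocation; the model references below are placeholders, not models guaranteed to exist:

ramalama perplexity ollama://tinyllama
ramalama perplexity hf://myorg/my-model-GGUF
ramalama perplexity oci://quay.io/myorg/my-model:latest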

URL support means if a model is on a web site or even on your local system, you can run it directly.
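
For instance, reusing the placeholder locations from the transport table above:

ramalama perplexity https://web.site/ai.model
ramalama perplexity file://tmp/ai.model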

Options

--help, -h

show this help message and exit

Description

Calculate the perplexity of an AI Model. Perplexity measures how well the model predicts the next token; lower values indicate better predictive performance.
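
As a point of reference, perplexity is conventionally defined as the exponential of the average negative log-likelihood over the N evaluated tokens:

PPL = exp( -(1/N) * sum_{i=1..N} log p(token_i | token_1 ... token_{i-1}) )

A perplexity of 1 would mean the model predicts every token with complete certainty; larger values indicate greater uncertainty.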

Examples

ramalama perplexity granite-moe3

See Also

ramalama(1)

History

Jan 2025, Originally compiled by Eric Curtin <ecurtin@redhat.com>
