Package 'quackingllama'

Title: Process text with Ollama, retrieve structured results, cache them locally in DuckDB
Description: Process text with Ollama, store results in DuckDB.
Authors: Giorgio Comai [aut, cre, cph]
Maintainer: Giorgio Comai <[email protected]>
License: MIT + file LICENSE
Version: 0.0.0.9013
Built: 2025-03-28 18:22:37 UTC
Source: https://github.com/giocomai/quackingllama

Help Index


Disable caching for the current session

Description

Disable caching for the current session

Usage

ql_disable_db()

Value

Nothing, used for its side effects.

See Also

Other database: ql_enable_db(), ql_set_db_options()

Examples

ql_disable_db()
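
A minimal sketch verifying the effect, assuming the "db" option returned by ql_get_db_options() reflects whether caching is currently active:

ql_disable_db()
ql_get_db_options("db")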

Enable storing data in a database for the current session

Description

Enable storing data in a database for the current session

Usage

ql_enable_db(db_type = "DuckDB")

Arguments

db_type

Defaults to DuckDB.

Value

Nothing, used for its side effects.

See Also

Other database: ql_disable_db(), ql_set_db_options()

Examples

ql_enable_db()
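
For instance, one could enable caching and then inspect the resulting settings with ql_get_db_options(), documented below:

ql_enable_db(db_type = "DuckDB")
ql_get_db_options(c("db_type", "db_folder"))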

Generate a response and return the result in a data frame

Description

Generate a response and return the result in a data frame

Usage

ql_generate(
  prompt_df,
  only_cached = FALSE,
  host = NULL,
  message = NULL,
  timeout = NULL,
  error = c("fail", "warn")
)

Arguments

prompt_df

A data frame with all inputs passed to the LLM, typically created with ql_prompt().

only_cached

Defaults to FALSE. If TRUE, only cached responses are returned.

host

The address where the Ollama API can be reached, e.g. http://localhost:11434 for locally deployed Ollama.

timeout

If not set with ql_set_options(), defaults to 300 seconds (5 minutes).

error

Defines how errors should be handled; defaults to "fail", i.e. if an error emerges while querying the LLM, the function stops. If set to "warn", the response is set to NA_character_ and stored in the database. This can be useful, e.g., to proceed when the prompts include a request that routinely times out without giving a response. This does not imply that the model would never give a response: re-running the same query with a longer timeout may work.

Value

A data frame, including a response column, as well as other information returned by the model.

Examples

## Not run: 
ql_prompt("a haiku") |>
  ql_generate()
  
## End(Not run)
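
A further sketch, using only arguments documented above: only_cached = TRUE retrieves previously cached responses without querying the model, while error = "warn" keeps a batch running when individual requests fail or time out.

## Not run: 
# return cached responses only, without querying the model
ql_prompt("a haiku") |>
  ql_generate(only_cached = TRUE)

# store NA_character_ instead of stopping when a request fails or times out
ql_prompt("a haiku") |>
  ql_generate(error = "warn", timeout = 60)

## End(Not run)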

Retrieve database options

Description

Retrieve database options

Usage

ql_get_db_options(options = c("db", "db_type", "db_folder", "db_filename"))

Arguments

options

Available options to retrieve. Defaults to all of "db", "db_type", "db_folder", and "db_filename".

Value

A list with the selected options.

Examples

ql_get_db_options()

## Retrieve only selected option
ql_get_db_options("db_type")

Get available models

Description

Get available models

Usage

ql_get_models(host = "http://localhost:11434")

Arguments

host

Defaults to "http://localhost:11434", where locally deployed Ollama usually responds.

Value

A data frame (a tibble) with details on all locally available models.

Examples

## Not run: 
ql_get_models()

## End(Not run)
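
For instance, to extract only the model names (assuming the returned tibble includes a name column, as the underlying Ollama API response does):

## Not run: 
models_df <- ql_get_models()
models_df$name

## End(Not run)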

Get options

Description

Get options

Usage

ql_get_options(
  options = c("system", "model", "host", "temperature", "seed", "keep_alive", "timeout"),
  system = NULL,
  model = NULL,
  host = NULL,
  temperature = NULL,
  seed = NULL,
  keep_alive = NULL,
  timeout = NULL
)

Arguments

options

A character vector used to filter which options should be returned. Defaults to all available options.

Value

A list with all available options (or only those selected with options).

Examples

ql_set_options(
  model = "llama3.2",
  host = "http://localhost:11434",
  system = "You are a helpful assistant.",
  temperature = 0,
  seed = 42,
  keep_alive = "5m"
)

ql_get_options()
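
Since options accepts a character vector, a subset of the options can be retrieved in the same way:

ql_get_options(options = c("model", "seed"))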

Hash all inputs relevant to the call to the LLM, to be used for caching.

Description

Mostly used internally.

Usage

ql_hash(prompt_df)

Arguments

prompt_df

A data frame with all inputs passed to the LLM, typically created with ql_prompt().

Value

A tibble, such as those returned by ql_prompt(), but always including a hash column.

Examples

ql_prompt("a haiku", hash = FALSE) |> ql_hash()

Generate a data frame with all relevant inputs for the LLM.

Description

Typically passed to ql_generate().

Usage

ql_prompt(
  prompt,
  system = NULL,
  format = NULL,
  model = NULL,
  images = NULL,
  temperature = NULL,
  seed = NULL,
  host = NULL,
  hash = TRUE
)

Arguments

prompt

A prompt for the LLM.

system

System message to pass to the model. See official documentation for details. For example: "You are a helpful assistant."

model

The name of the model, e.g. llama3.2 or phi3.5:3.8b. Run ollama list from the command line to see a list of locally available models.

temperature

Numeric value between 0 and 1 passed to the model. When set to 0 with the same seed, the response to the same prompt is always exactly the same; values closer to 1 make the response more variable and creative. Use 0 for consistent responses; 0.7 is a common choice for creative or interactive tasks.

seed

An integer. When temperature is set to 0 and the seed is constant, the model consistently returns the same response to the same prompt.

host

The address where the Ollama API can be reached, e.g. http://localhost:11434 for locally deployed Ollama.

hash

Defaults to TRUE. If TRUE, adds a column with the hash of all other components of the prompt. Used internally for caching. Can be added separately with ql_hash().

Details

For more details and context about each parameter, see https://github.com/ollama/ollama/blob/main/docs/api.md.

Value

A tibble with all main components of a query, to be passed to ql_generate().

Examples

ql_prompt("a haiku")

Read images in order to pass them to multimodal models

Description

Read images in order to pass them to multimodal models

Usage

ql_read_images(path)

Arguments

path

Path to image file.

Value

A list of character vectors of base64-encoded images.

Examples

if (interactive()) {
  library("quackingllama")

  img_path <- fs::file_temp(ext = "png")

  download.file(
    url = "https://ollama.com/public/ollama.png",
    destfile = img_path
  )

  resp_df <- ql_prompt(
    prompt = "what is this?",
    images = img_path,
    model = "llama3.2-vision"
  ) |>
    ql_generate()

  resp_df

  resp_df$response
}
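
ql_read_images() can also be called directly; a minimal sketch, downloading a test image as in the example above:

if (interactive()) {
  img_path <- fs::file_temp(ext = "png")

  download.file(
    url = "https://ollama.com/public/ollama.png",
    destfile = img_path
  )

  # a list of character vectors of base64-encoded images
  ql_read_images(path = img_path)
}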

Create an httr2 request for both the generate and chat endpoints

Description

Create an httr2 request for both the generate and chat endpoints

Usage

ql_request(
  prompt_df,
  endpoint = "generate",
  host = NULL,
  message = NULL,
  timeout = NULL
)

Arguments

prompt_df

A data frame with all inputs passed to the LLM, typically created with ql_prompt().

endpoint

Defaults to "generate". Must be either "generate" or "chat".

host

The address where the Ollama API can be reached, e.g. http://localhost:11434 for locally deployed Ollama.

timeout

If not set with ql_set_options(), defaults to 300 seconds (5 minutes).

Value

A httr2 request object.

Examples

ql_prompt(prompt = "a haiku")

ql_prompt(prompt = "a haiku") |>
  ql_request() |>
  httr2::req_dry_run()
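
The same request can be built against the chat endpoint and inspected without sending it:

ql_prompt(prompt = "a haiku") |>
  ql_request(endpoint = "chat") |>
  httr2::req_dry_run()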

Set options for the local database and enable caching

Description

Set options for the local database and enable caching

Usage

ql_set_db_options(db_filename = NULL, db_type = "DuckDB", db_folder = ".")

Arguments

db_filename

Name given to the local database file. Defaults to NULL; internally, this falls back to "quackingllama" followed by the name of the model used. Useful for differentiating among different approaches or projects when storing multiple database files in the same folder.

db_type

Defaults to DuckDB.

db_folder

Defaults to ".", i.e. the current working directory.

Value

Nothing, used for its side effects.

See Also

Other database: ql_disable_db(), ql_enable_db()

Examples

ql_set_db_options(db_filename = "testing_ground")
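
A sketch keeping caches from different projects apart by storing the database file in a dedicated folder (here a temporary one, via the fs package used in the examples above):

ql_set_db_options(
  db_filename = "testing_ground",
  db_folder = fs::path_temp()
)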

Set basic options for the current session.

Description

Set basic options for the current session.

Usage

ql_set_options(
  system = NULL,
  model = NULL,
  host = NULL,
  temperature = NULL,
  seed = NULL,
  keep_alive = NULL,
  timeout = NULL
)

Arguments

system

System message to pass to the model. See official documentation for details. For example: "You are a helpful assistant."

model

The name of the model, e.g. llama3.2 or phi3.5:3.8b. Run ollama list from the command line to see a list of locally available models.

host

The address where the Ollama API can be reached, e.g. http://localhost:11434 for locally deployed Ollama.

temperature

Numeric value between 0 and 1 passed to the model. When set to 0 with the same seed, the response to the same prompt is always exactly the same; values closer to 1 make the response more variable and creative. Use 0 for consistent responses; 0.7 is a common choice for creative or interactive tasks.

seed

An integer. When temperature is set to 0 and the seed is constant, the model consistently returns the same response to the same prompt.

keep_alive

Defaults to "5m". Controls controls how long the model will stay loaded into memory following the request.

timeout

Time in seconds before the request times out. Defaults to 300 (corresponding to 5 minutes).

Value

Nothing, used for its side effects. Options can be retrieved with ql_get_options().

Examples

ql_set_options(
  model = "llama3.2",
  host = "http://localhost:11434",
  system = "You are a helpful assistant.",
  temperature = 0,
  seed = 42
)

ql_get_options()
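
Assuming, as the NULL defaults suggest, that arguments left unset remain unchanged, individual options can also be updated on their own; for instance, to raise only the timeout:

ql_set_options(timeout = 600)
ql_get_options("timeout")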