# Configuration Reference

## Overview

IOWarp uses a single YAML file to configure the Chimaera runtime and any modules (ChiMods) that are created at startup via the `compose` section.

When you install IOWarp, a default configuration is created at `~/.chimaera/chimaera.yaml`. You can edit this file directly or override it with an environment variable.
The configuration file is located via (in priority order):
| Source | Priority | Description |
|---|---|---|
| `CHI_SERVER_CONF` env var | 1st | Checked first. |
| `WRP_RUNTIME_CONF` env var | 2nd | Legacy fallback. |
| `~/.chimaera/chimaera.yaml` | 3rd | Default created at install time. |
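The same precedence can be sketched in shell (illustrative only; the runtime performs this lookup internally):

```bash
# Resolve the config path the way the runtime does:
# CHI_SERVER_CONF first, then WRP_RUNTIME_CONF, then the installed default.
CONF="${CHI_SERVER_CONF:-${WRP_RUNTIME_CONF:-$HOME/.chimaera/chimaera.yaml}}"
echo "Using config: $CONF"
```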
```bash
# Use the installed default
chimaera runtime start

# Or override with a custom config
export CHI_SERVER_CONF=/etc/iowarp/chimaera.yaml
chimaera runtime start
```
Size values throughout the file accept: `B`, `KB`, `MB`, `GB`, `TB` (case-insensitive).
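For reference, here is a minimal sketch of how such suffixes are conventionally parsed. The binary multiples (1 KB = 1024 B) are an assumption of this sketch; IOWarp's actual parser may use different factors, so treat it as illustrative only.

```python
import re

# Assumed binary multiples; IOWarp's parser may differ.
_UNITS = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3, "tb": 1024**4}

def parse_size(text: str) -> int:
    """Parse strings like '512MB' or '2tb' into a byte count."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([kmgt]?b)", text.strip(), re.IGNORECASE)
    if not m:
        raise ValueError(f"bad size: {text!r}")
    return int(float(m.group(1)) * _UNITS[m.group(2).lower()])

print(parse_size("512MB"))  # 536870912
```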
## Networking (`networking`)
| Parameter | Default | Description |
|---|---|---|
| `port` | 9413 | ZeroMQ RPC listener port. Must match across all cluster nodes. Can be overridden by the `CHI_PORT` env var. |
| `neighborhood_size` | 32 | Maximum nodes queried when splitting range queries. |
| `hostfile` | (none) | Path to a file listing cluster node IPs/hostnames, one per line. Required for multi-node deployments. |
| `wait_for_restart` | 30 | Seconds to wait for peer nodes during startup. |
| `wait_for_restart_poll_period` | 1 | Seconds between connection retry attempts during startup. |
```yaml
networking:
  port: 9413
  neighborhood_size: 32
  # hostfile: /etc/iowarp/hostfile  # Multi-node only
  wait_for_restart: 30
  wait_for_restart_poll_period: 1
```
Hostfile format (one IP or hostname per line):
```text
192.168.1.10
192.168.1.11
192.168.1.12
```
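For larger clusters, generating the hostfile beats typing it out; the node names below are hypothetical examples:

```bash
# Write a hostfile for four hypothetical nodes (node01..node04)
for i in 1 2 3 4; do
  printf 'node%02d\n' "$i"
done > hostfile
cat hostfile
```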
## Logging (Environment Variables)

Logging is controlled by HLOG, which reads environment variables at process startup. The `logging` section in the YAML config file is reserved for future use and is not currently parsed.
| Variable | Default | Description |
|---|---|---|
| `HSHM_LOG_LEVEL` | info (compile-time default) | Runtime log level threshold. Messages below this level are suppressed. Accepts: debug (0), info (1), success (2), warning (3), error (4), fatal (5). Case-insensitive strings or numeric values. |
| `HSHM_LOG_OUT` | (none; console only) | Path to a log file. When set, all log messages are also written to this file (without ANSI color codes). |
```bash
# Show debug-level output and write to a file
export HSHM_LOG_LEVEL=debug
export HSHM_LOG_OUT=/tmp/chimaera.log
chimaera runtime start
```
HLOG also applies a compile-time threshold (the `HSHM_LOG_LEVEL` CMake define, default `kInfo`). Messages below the compile-time threshold are compiled out entirely and cannot be enabled at runtime. The runtime environment variable can only raise the threshold further (i.e., make output quieter) or match the compile-time level.
Log routing:

- `debug`, `info`, and `success` messages go to stdout.
- `warning`, `error`, and `fatal` messages go to stderr.
- `fatal` messages terminate the process after printing.
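Because the levels are split across the two streams, lower-severity and higher-severity messages can be captured into separate logs. The snippet below demonstrates the mechanism with a stand-in command; substitute `chimaera runtime start` in practice:

```bash
# Stand-in process writing to both streams, captured into separate files
sh -c 'echo "info message"; echo "fatal message" >&2' > out.log 2> err.log
cat out.log   # info message
cat err.log   # fatal message
```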
## Runtime (`runtime`)
| Parameter | Default | Description |
|---|---|---|
| `num_threads` | 4 | Worker threads for task execution. |
| `queue_depth` | 1024 | Task queue depth per worker. |
| `local_sched` | "default" | Local task scheduler algorithm. |
| `first_busy_wait` | 10000 | Microseconds of busy-waiting before a worker sleeps when idle (10 ms). |
```yaml
runtime:
  num_threads: 4
  queue_depth: 1024
  local_sched: "default"
  first_busy_wait: 10000
```

Recommendation: Set `num_threads` to the number of CPU cores on the node.
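To follow that recommendation on Linux, the core count can be queried with `nproc` (GNU coreutils); on macOS, `sysctl -n hw.ncpu` serves the same purpose:

```bash
# Print a runtime section sized to this node's core count
CORES=$(nproc)
printf 'runtime:\n  num_threads: %d\n' "$CORES"
```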
## Compose Section

The `compose` section declaratively creates module pools at runtime startup. Each entry defines one pool.

```yaml
compose:
  - mod_name: wrp_cte_core   # ChiMod shared-library name (e.g., libwrp_cte_core.so)
    pool_name: cte_main      # User-defined pool name
    pool_query: local        # Routing: local, dynamic, broadcast
    pool_id: "512.0"         # Unique pool ID
    # ... module-specific parameters
```

Only `chimaera_bdev` is required. CTE (`wrp_cte_core`) and CAE (`wrp_cae_core`) are optional; remove their entries if you do not need them.
### Common Compose Fields

| Field | Required | Description |
|---|---|---|
| `mod_name` | Yes | Name of the ChiMod shared library (without the `lib` prefix and `.so` suffix). |
| `pool_name` | Yes | User-defined pool name. |
| `pool_query` | Yes | Routing policy (see below). |
| `pool_id` | Yes | Unique pool ID string (format: `"<major>.<minor>"`). |
### `pool_query` Values

| Value | Description |
|---|---|
| `local` | Create the pool on the local node only. |
| `dynamic` | Auto-detect: reuse an existing local pool, or broadcast creation to all nodes. |
| `broadcast` | Create the pool on all nodes in the cluster. |
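As an illustration, a pool that should exist on every node of the cluster would be declared with `broadcast`; the pool name and ID below are hypothetical:

```yaml
compose:
  - mod_name: chimaera_bdev
    pool_name: "ram::cluster_bdev"   # hypothetical name
    pool_query: broadcast            # create on all nodes in the hostfile
    pool_id: "303.0"                 # hypothetical ID
    bdev_type: ram
    capacity: "1GB"
```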
### Block Device ChiMod (`chimaera_bdev`)

Block devices provide the shared memory allocator used by other modules. At least one DRAM block device is required.

| Parameter | Required | Description |
|---|---|---|
| `bdev_type` | Yes | `"ram"` for DRAM-backed, `"file"` for filesystem-backed. |
| `capacity` | Yes | Maximum capacity (e.g., `"512MB"`, `"100GB"`). |
```yaml
compose:
  # DRAM block device (required)
  - mod_name: chimaera_bdev
    pool_name: "ram::chi_default_bdev"
    pool_query: local
    pool_id: "301.0"
    bdev_type: ram
    capacity: "512MB"

  # File-backed block device (optional: for NVMe, HDD, etc.)
  # - mod_name: chimaera_bdev
  #   pool_name: "/mnt/nvme/chi_bdev"
  #   pool_query: local
  #   pool_id: "302.0"
  #   bdev_type: file
  #   capacity: "100GB"
```

For DRAM devices the `pool_name` uses the `ram::<name>` convention. For file-backed devices the `pool_name` is the filesystem path where data is stored.
## CTE ChiMod Parameters (`wrp_cte_core`)

### Storage Tiers (`storage`)

Array of storage targets. At least one entry is required when CTE is enabled.

| Parameter | Required | Description |
|---|---|---|
| `path` | Yes | `ram::<name>` for DRAM storage, or a filesystem path for disk. |
| `bdev_type` | Yes | `"ram"` for memory-backed, `"file"` for filesystem-backed. |
| `capacity_limit` | Yes | Maximum capacity (e.g., `"512MB"`, `"200GB"`). |
| `score` | No | Placement priority (0.0–1.0). Higher = preferred. -1.0 = automatic scoring. |
```yaml
storage:
  # RAM tier: fastest, not persistent
  - path: "ram::cte_cache"
    bdev_type: ram
    capacity_limit: 512MB
    score: 1.0

  # NVMe tier
  - path: /mnt/nvme/cte
    bdev_type: file
    capacity_limit: 200GB
    score: 0.9

  # HDD tier
  - path: /mnt/hdd/cte
    bdev_type: file
    capacity_limit: 2TB
    score: 0.3
```
### Data Placement Engine (`dpe`)

| Parameter | Default | Description |
|---|---|---|
| `dpe_type` | "max_bw" | Placement algorithm: `"max_bw"`, `"round_robin"`, `"random"`. |
### Targets (`targets`)

| Parameter | Default | Description |
|---|---|---|
| `neighborhood` | 1 | Number of storage nodes CTE can buffer to simultaneously. |
| `default_target_timeout_ms` | 30000 | Timeout for storage target operations (ms). |
| `poll_period_ms` | 5000 | How often to rescan targets for bandwidth/capacity stats (ms). |
### Performance Tuning (`performance`)

All fields are optional and override compile-time defaults.

| Parameter | Default | Description |
|---|---|---|
| `stat_targets_period_ms` | 50 | Periodic StatTargets interval (ms). |
| `max_concurrent_operations` | 64 | Max concurrent I/O operations. |
| `score_threshold` | 0.7 | Score above which blobs are reorganized. |
| `score_difference_threshold` | 0.05 | Min score delta to trigger reorganization. |
| `flush_metadata_period_ms` | 5000 | Metadata flush interval (ms). |
| `flush_data_period_ms` | 10000 | Data flush interval (ms). |
| `flush_data_min_persistence` | 1 | Min persistence level (1 = temp-nonvolatile). |
| `transaction_log_capacity` | "32MB" | Write-ahead log capacity. |
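To make the two score knobs concrete, here is one plausible reading of how they interact, written as a toy predicate. This is our interpretation for illustration only, not IOWarp's actual reorganization logic.

```python
# Toy model of the two reorganization thresholds (illustrative interpretation):
# a blob is a candidate for reorganization only when its newly computed score
# clears score_threshold AND has moved by at least score_difference_threshold.
def should_reorganize(new_score: float, current_score: float,
                      score_threshold: float = 0.7,
                      score_difference_threshold: float = 0.05) -> bool:
    return (new_score > score_threshold and
            abs(new_score - current_score) >= score_difference_threshold)

print(should_reorganize(0.9, 0.5))   # True: above threshold, large delta
print(should_reorganize(0.9, 0.88))  # False: delta below 0.05
print(should_reorganize(0.6, 0.1))   # False: below score_threshold
```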
## CAE ChiMod Parameters (`wrp_cae_core`)

| Parameter | Required | Description |
|---|---|---|
| `pool_name` | Yes | User-defined pool name. |
| `pool_query` | Yes | Routing policy (`local`, `dynamic`, `broadcast`). |
| `pool_id` | Yes | Unique pool ID. Default CAE pool ID is `"400.0"`. |
```yaml
- mod_name: wrp_cae_core
  pool_name: wrp_cae_core_pool
  pool_query: local
  pool_id: "400.0"
```
## Complete Examples

### Minimal Single-Node

```yaml
networking:
  port: 9413

runtime:
  num_threads: 4

compose:
  - mod_name: chimaera_bdev
    pool_name: "ram::chi_default_bdev"
    pool_query: local
    pool_id: "301.0"
    bdev_type: ram
    capacity: "512MB"

  - mod_name: wrp_cte_core
    pool_name: cte_main
    pool_query: local
    pool_id: "512.0"
    storage:
      - path: "ram::cte_ram_tier1"
        bdev_type: ram
        capacity_limit: 512MB
        score: 1.0
    dpe:
      dpe_type: max_bw
```
### Multi-Tier RAM + NVMe + HDD

```yaml
networking:
  port: 9413

runtime:
  num_threads: 16
  queue_depth: 1024

compose:
  - mod_name: chimaera_bdev
    pool_name: "ram::chi_default_bdev"
    pool_query: local
    pool_id: "301.0"
    bdev_type: ram
    capacity: "2GB"

  - mod_name: wrp_cte_core
    pool_name: cte_main
    pool_query: local
    pool_id: "512.0"
    storage:
      - path: "ram::cte_cache"
        bdev_type: ram
        capacity_limit: 512MB
        score: 1.0
      - path: /mnt/nvme/cte
        bdev_type: file
        capacity_limit: 200GB
        score: 0.9
      - path: /mnt/hdd/cte
        bdev_type: file
        capacity_limit: 2TB
        score: 0.3
    dpe:
      dpe_type: max_bw
    targets:
      neighborhood: 1
      default_target_timeout_ms: 30000
      poll_period_ms: 5000
```
### Multi-Node Cluster (4 nodes)

```yaml
networking:
  port: 9413
  neighborhood_size: 32
  hostfile: /etc/iowarp/hostfile

runtime:
  num_threads: 8
  queue_depth: 1024

compose:
  - mod_name: chimaera_bdev
    pool_name: "ram::chi_default_bdev"
    pool_query: local
    pool_id: "301.0"
    bdev_type: ram
    capacity: "2GB"

  - mod_name: wrp_cte_core
    pool_name: cte_main
    pool_query: dynamic
    pool_id: "512.0"
    storage:
      - path: /mnt/storage
        bdev_type: file
        capacity_limit: 1TB
        score: 0.8
    dpe:
      dpe_type: max_bw
    targets:
      neighborhood: 4
      default_target_timeout_ms: 30000
      poll_period_ms: 5000
```
### Docker Deployment

IOWarp uses `memfd_create()` for shared memory on Linux, so no special `/dev/shm` configuration is needed. Only `mem_limit` matters for resource control.
```yaml
# docker-compose.yml
services:
  iowarp:
    image: iowarp/deploy-cpu:latest
    container_name: iowarp
    hostname: iowarp
    volumes:
      - ./chimaera.yaml:/home/iowarp/.chimaera/chimaera.yaml:ro
    ports:
      - "9413:9413"
    mem_limit: 8g
    command: ["chimaera", "runtime", "start"]
    restart: unless-stopped
```
For multi-node Docker deployments, mount a shared hostfile and set the `networking.hostfile` path accordingly. See HPC Cluster for details.