Configuration

YAML-based configuration with profile layering

Overview

intu uses YAML configuration files to define runtime settings, destinations, and global options. The root configuration file is intu.yaml. Profile-specific overrides such as intu.dev.yaml and intu.prod.yaml are layered on top of the base configuration, letting you vary settings per environment without duplicating shared values.

intu.yaml

The root configuration file defines the runtime, destinations, and global settings for the project.

yaml
runtime:
  name: intu
  profile: dev
  log_level: info
  storage:
    driver: memory
    postgres_dsn: ${INTU_POSTGRES_DSN}

channels_dir: channels

destinations:
  kafka-output:
    type: kafka
    kafka:
      brokers:
        - ${INTU_KAFKA_BROKER}
      topic: output-topic

kafka:
  brokers:
    - ${INTU_KAFKA_BROKER}

runtime

Top-level runtime settings that control how intu operates.

name string
Application name. Defaults to intu.
profile string
Active profile name (e.g. dev, prod). Determines which override file is loaded.
log_level string
Logging verbosity: debug, info, warn, or error.
storage object
Message storage backend configuration.
storage.driver string
Storage driver: memory or postgres.
storage.postgres_dsn string
PostgreSQL connection string. Supports ${VAR} environment variable substitution.

channels_dir

Path to the directory containing channel subdirectories, relative to the project root. Defaults to channels.
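As an illustration, a minimal project with a single channel might be laid out as follows. The `adt-inbound` directory and its `transformer.ts`/`validator.ts` files mirror the channel example later on this page; the exact file names are up to you.

```text
project-root/
├── intu.yaml              # base configuration
├── intu.dev.yaml          # dev profile overrides
├── intu.prod.yaml         # prod profile overrides
└── channels/              # channels_dir (default)
    └── adt-inbound/
        ├── channel.yaml   # channel definition
        ├── transformer.ts
        └── validator.ts
```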

destinations

A map of named destination definitions. Each destination has a type and type-specific configuration. Channels reference these destinations by name.

yaml
destinations:
  kafka-output:
    type: kafka
    kafka:
      brokers:
        - ${INTU_KAFKA_BROKER}
      topic: output-topic
  file-archive:
    type: file
    file:
      path: /data/archive
      format: json

kafka

Global Kafka settings shared across sources and destinations that use Kafka. Individual source or destination blocks can override these values.
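A sketch of how that layering can look: one destination falls back to the global brokers, while another supplies its own. The `audit-log` destination and its broker address are illustrative, not part of the examples above.

```yaml
# Global defaults used by any Kafka source or destination
kafka:
  brokers:
    - ${INTU_KAFKA_BROKER}

destinations:
  kafka-output:
    type: kafka
    kafka:
      # No brokers listed here: inherits the global kafka.brokers value
      topic: output-topic
  audit-log:
    type: kafka
    kafka:
      # Destination-level brokers override the global block
      brokers:
        - audit-kafka:9092
      topic: audit-topic
```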

Profile Layering

Profiles allow environment-specific configuration without duplicating shared values. The base intu.yaml is always loaded first, then a profile override file is merged on top.

Profile files follow the naming convention intu.<profile>.yaml:

Example: intu.dev.yaml

yaml
runtime:
  log_level: debug
  storage:
    driver: memory

Example: intu.prod.yaml

yaml
runtime:
  log_level: warn
  storage:
    driver: postgres
    postgres_dsn: ${INTU_POSTGRES_DSN}

Set the active profile using the INTU_PROFILE environment variable or the --profile CLI flag:

bash
# Via environment variable
export INTU_PROFILE=prod
intu validate --dir .

# Via CLI flag
intu validate --dir . --profile prod

Merge behavior: Profile values are deep-merged into the base configuration. Scalar values in the profile override the base; arrays and maps are merged recursively. This means you only need to specify the values that differ from the base.

Environment Variables

intu supports environment variable substitution in YAML files using the ${VAR} syntax. Variables can be defined in a .env file in the project root or set in the shell environment.

text
# .env
INTU_KAFKA_BROKER=localhost:9092
INTU_POSTGRES_DSN=postgres://user:pass@localhost:5432/intu

Reference variables in any YAML configuration file:

yaml
kafka:
  brokers:
    - ${INTU_KAFKA_BROKER}

Tip: Add .env to your .gitignore to keep secrets out of version control. Use a .env.example file with placeholder values to document required variables.
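Based on the variables used on this page, a matching .env.example might look like:

```text
# .env.example — placeholder values; copy to .env and fill in real settings
INTU_KAFKA_BROKER=localhost:9092
INTU_POSTGRES_DSN=postgres://user:pass@localhost:5432/intu
```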

Destinations

Named destinations are defined at the root level of intu.yaml and referenced by channels. Each destination has a type field and a corresponding configuration block.

Type Description
kafka Publish messages to a Kafka topic
http Send messages via HTTP POST to an endpoint
tcp Send messages over a TCP connection (including MLLP)
file Write messages to files on disk
database Insert messages into a database table
sftp Upload messages to a remote SFTP server
smtp Send messages as email via SMTP
channel Route messages to another intu channel
dicom Send DICOM messages to a PACS or modality
jms Publish messages to a JMS queue or topic
fhir Submit resources to a FHIR server
direct Invoke a function or script directly

Each destination type has its own configuration block nested under the type key:

yaml
destinations:
  api-endpoint:
    type: http
    http:
      url: https://api.example.com/messages
      method: POST
      headers:
        Authorization: Bearer ${API_TOKEN}
        Content-Type: application/json
      timeout: 30s

  hl7-receiver:
    type: tcp
    tcp:
      host: 192.168.1.100
      port: 2575
      mllp: true

  fhir-server:
    type: fhir
    fhir:
      base_url: https://fhir.example.com/r4
      auth:
        type: bearer
        token: ${FHIR_TOKEN}

Channel Configuration

Each channel is defined in its own channel.yaml file within a subdirectory of channels/. A channel configuration specifies the source, transformer, validator, and destination(s) for the pipeline.

See the Sources, Transformers, and Destinations documentation for detailed configuration of each component type.

yaml
id: adt-inbound
name: ADT Inbound
description: Receives ADT messages via MLLP and routes to Kafka
enabled: true
tags:
  - hl7
  - adt
group: inbound

source:
  type: tcp
  tcp:
    port: 2575
    mllp: true

transformer: transformer.ts
validator: validator.ts

destinations:
  - kafka-output