Sample configuration for a Micro-sized cluster with NGINX Ingress controller (micro, no HA)
Warning
Beginning with ITRS Analytics 2.14.2, the Micro t-shirt size is now supported for production deployments. This update is not backward compatible. To use Micro in production, you must upgrade to ITRS Analytics version 2.14.2 or newer.
Download this sample Micro-sized cluster with NGINX Ingress controller configuration, provided by ITRS, for installations with High Availability (HA) disabled.
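As a sketch of how the downloaded file is typically used (the release name, chart reference, namespace, and file name below are assumptions for illustration, not part of the sample), the values file would be passed to the Obcerv Helm chart at install or upgrade time:

```shell
# Hypothetical release name, chart reference, and namespace; the values file
# name is whatever you saved the download as.
helm upgrade --install obcerv itrs/obcerv \
  --namespace itrs --create-namespace \
  -f obcerv-micro-nginx.yaml
```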
# Example ITRS Analytics configuration for a micro-sized instance handling 10k entities, 200k time series, 3k metrics/sec,
# no HA, and with the NGINX Ingress controller.
#
# The resource requests total ~11 cores and ~50 GiB memory, not including optional Linkerd resources.
#
# It is recommended to use a storage class with defaults of 3000 IOPS and 125 MiB/s throughput.
# However, for higher-volume installations (i.e. more time series) it is recommended to use a storage
# class with increased IOPS for the Timescale workload (see below).
#
# Approximate disk requirements for 7 days of retention, depending on the amount and type of data being ingested:
#   - Timescale:
#     - 100 GiB data disk
#     - 40 GiB WAL disk
#   - Kafka: 100 GiB
#   - Kafka controller: 1 GiB
#   - Postgres: 3 GiB
#   - ClickHouse Keeper: 2 GiB
#   - ClickHouse Traces: 30 GiB
#   - Loki: 5 GiB
#   - etcd: 1 GiB
#   - Downsampled Metrics:
#     - Raw: 2 GiB
#     - Bucketed: 2 GiB
#
# Approximate disk requirements for default retention, depending on the amount and type of data being ingested:
#   - Timescale:
#     - 500 GiB data disk
#     - 40 GiB WAL disk
#   - Kafka: 100 GiB
#   - Kafka controller: 1 GiB
#   - Postgres: 3 GiB
#   - ClickHouse Keeper: 2 GiB
#   - ClickHouse Traces: 30 GiB
#   - Loki: 10 GiB
#   - etcd: 1 GiB
#   - Downsampled Metrics:
#     - Raw: 5 GiB
#     - Bucketed: 5 GiB
#
apps:
  externalHostname: "obcerv.mydomain.internal"
  ingress:
    annotations:
      kubernetes.io/ingress.class: "nginx"
      nginx.org/mergeable-ingress-type: "master"
ingestion:
  externalHostname: "obcerv-ingestion.mydomain.internal"
  ingress:
    annotations:
      kubernetes.io/ingress.class: "nginx"
      nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
  # If OTel traces are ingested, the resources of the trace-ingestion workload should be overwritten with the following:
  # traces:
  #   jvmOpts: "-XX:InitialRAMPercentage=65 -XX:MaxRAMPercentage=65 -XX:MaxDirectMemorySize=80M"
  #   resources:
  #     requests:
  #       memory: "1500Mi"
  #       cpu: "50m"
  #     limits:
  #       memory: "2500Mi"
  #       cpu: "1"
iam:
  ingress:
    annotations:
      kubernetes.io/ingress.class: "nginx"
      nginx.org/mergeable-ingress-type: "minion"
kafka:
  resources:
    limits:
      memory: "3Gi"
    requests:
      memory: "3Gi"
  diskSize: "100Gi"
timescale:
  dataDiskSize: "100Gi"
  walDiskSize: "40Gi"
  resources:
    limits:
      memory: "6Gi"
    requests:
      memory: "6Gi"
  # For higher-volume installations, it is recommended to use a storage class with increased IOPS, more memory and dedicated timeseries disks.
  #dataStorageClass: "timescale"
  #walStorageClass: "timescale"
  #sharedBuffersPercentage: 40
  #dataDiskSize: "100Gi"
  #walDiskSize: "40Gi"
  #timeseriesDiskCount: 4
  #timeseriesDiskSize: "125Gi"
  #resources:
  #  requests:
  #    memory: "8Gi"
  #  limits:
  #    memory: "8Gi"
loki:
  diskSize: "5Gi"
downsampledMetricsStream:
  diskSize: "2Gi"
  bucketedDiskSize: "2Gi"
clickhouse:
  traces:
    diskSize: "30Gi"
    resources:
      limits:
        cpu: "2"
        memory: "6Gi"
      requests:
        cpu: "2"
        memory: "6Gi"
# For higher-volume installations, it is recommended to run additional sinkd replicas.
#sinkd:
#  replicas: 2
#  rawReplicas: 2
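The comments above recommend a storage class with increased IOPS for the Timescale workload in higher-volume installations. As one illustrative sketch, assuming AWS EBS gp3 volumes (whose defaults are the 3000 IOPS and 125 MiB/s mentioned above) and the AWS EBS CSI driver, a dedicated class matching the commented-out `dataStorageClass: "timescale"` setting might look like the following; the exact provisioner and parameters depend on your cloud and CSI driver:

```yaml
# Hypothetical StorageClass for the Timescale disks; values are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: timescale
provisioner: ebs.csi.aws.com        # AWS EBS CSI driver (assumption)
parameters:
  type: gp3
  iops: "6000"                      # raised from the gp3 default of 3000
  throughput: "250"                 # MiB/s, raised from the default of 125
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```

`WaitForFirstConsumer` defers volume creation until the Timescale pod is scheduled, which keeps the volume in the same availability zone as the node.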