Indexer cluster deployment
Indexer cluster deployment overview
Index replication offers benefits such as data availability, data fidelity, data recovery, disaster recovery, and (in multisite clusters) search affinity. It helps ensure that an indexer is always available to ingest incoming data and that the indexed data remains searchable, and it provides fault tolerance if a peer node fails.
An indexer cluster consists of:
A single manager node to manage the cluster.
Several to many peer nodes to index data and maintain multiple copies of it, providing redundancy and data availability.
One or more search heads to coordinate searches across the set of peer nodes and provide a unified search experience.
For example, a basic, single-site indexer cluster might contain three peer nodes and support a replication factor of 3.
Deploy a cluster
Configure the manager node with the CLI
/opt/splunk/bin/splunk edit cluster-config -mode manager -replication_factor 4 -search_factor 3 -secret your_key -cluster_label cluster1
/opt/splunk/bin/splunk restart
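The edit cluster-config command writes these settings to the [clustering] stanza of server.conf on the manager node. A rough sketch of the result (attribute names assume Splunk 8.1 or later; the secret is stored as pass4SymmKey and is encrypted after the restart):
# Resulting server.conf on the manager node (approximate)
[clustering]
mode = manager
replication_factor = 4
search_factor = 3
pass4SymmKey = your_key
cluster_label = cluster1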
Configure a peer node with the CLI
/opt/splunk/bin/splunk edit cluster-config -mode peer -manager_uri https://<manager-ip>:8089 -secret your_key -replication_port 9100
/opt/splunk/bin/splunk restart
Verify the cluster configuration with the CLI
/opt/splunk/bin/splunk list cluster-config
# Manager node
/opt/splunk/bin/splunk list cluster-peers
/opt/splunk/bin/splunk show cluster-status
Configure the search head with the CLI
/opt/splunk/bin/splunk edit cluster-config -mode searchhead -manager_uri https://<manager-ip>:8089 -secret your_key
/opt/splunk/bin/splunk restart
All cluster configuration data is stored in server.conf:
/opt/splunk/etc/system/local/server.conf
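For example, a peer node configured with the commands above ends up with stanzas along these lines (setting names assume Splunk 8.1 or later; older releases use mode = slave and master_uri instead):
# Example server.conf stanzas on a peer node (approximate)
[replication_port://9100]

[clustering]
mode = peer
manager_uri = https://<manager-ip>:8089
pass4SymmKey = your_key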
Restarting Indexer Cluster Components:
Restart the manager node using
/opt/splunk/bin/splunk restart
Restart the search head using
/opt/splunk/bin/splunk restart
Perform a rolling restart of peer nodes:
# Optionally set the percentage of peers to restart at a time (default is 10)
/opt/splunk/bin/splunk edit cluster-config -percent_peers_to_restart 20
# Start a rolling restart of all peer nodes
/opt/splunk/bin/splunk rolling-restart cluster-peers
# Start a searchable rolling restart, so searches keep running during the restart
/opt/splunk/bin/splunk rolling-restart cluster-peers -searchable true
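The -percent_peers_to_restart flag simply persists a [clustering] setting on the manager node; a minimal server.conf sketch of the same change:
# server.conf on the manager node
[clustering]
percent_peers_to_restart = 20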
Enable or disable maintenance mode on the manager node:
/opt/splunk/bin/splunk enable maintenance-mode
/opt/splunk/bin/splunk show maintenance-mode
/opt/splunk/bin/splunk disable maintenance-mode
Remove Excess Buckets
Using the manager node dashboard (Splunk Web)
Using the CLI:
/opt/splunk/bin/splunk list excess-buckets [index-name]
/opt/splunk/bin/splunk remove excess-buckets [index-name]
List of commands and parameters related to clustering
/opt/splunk/bin/splunk help clustering
Configuration Bundle Deployment
Deployed from the manager node using Splunk Web or the CLI
Initiates a rolling restart of all peer nodes if needed
/opt/splunk/bin/splunk validate cluster-bundle --check-restart
/opt/splunk/bin/splunk apply cluster-bundle
/opt/splunk/bin/splunk show cluster-bundle-status
/opt/splunk/bin/splunk rollback cluster-bundle
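The bundle is built from the manager node's configuration bundle directory. A minimal sketch, assuming Splunk 8.1 or later ($SPLUNK_HOME/etc/manager-apps; older releases use etc/master-apps) and a hypothetical app name all_cluster_indexes:
# On the manager node: stage shared settings, then validate and apply the bundle
mkdir -p /opt/splunk/etc/manager-apps/all_cluster_indexes/local
cp indexes.conf /opt/splunk/etc/manager-apps/all_cluster_indexes/local/
/opt/splunk/bin/splunk validate cluster-bundle --check-restart
/opt/splunk/bin/splunk apply cluster-bundle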
Make data rebalance search-safe
Manager node (edit server.conf, under the [clustering] stanza):
searchable_rebalance = true
rebalance_search_completion_timeout = 360
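With those settings in place, data rebalancing is started from the manager node. A minimal CLI sketch (an optional -index flag limits rebalancing to a single index):
# On the manager node
/opt/splunk/bin/splunk rebalance cluster-data -action start
/opt/splunk/bin/splunk rebalance cluster-data -action status
/opt/splunk/bin/splunk rebalance cluster-data -action stop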
Best practice: Forward manager node data to the indexer layer
Ensure necessary indexes exist on the indexers:
Check if indexes like _audit and _internal are present on both the manager node and the indexers.
If custom indexes exist only on the manager node, make sure to create the same indexes on the indexers to hold the corresponding manager data.
Configure the manager node as a forwarder:
Create an outputs.conf file on the manager node.
Configure load-balanced forwarding across the set of peer nodes.
Turn off indexing on the manager node so that it does not retain the data locally in addition to forwarding it to the peers.
Note: Ensure that the manager node is also set up as a search head in the indexer cluster. This allows it to perform searches and access the data it forwards to the peers.
Here is an example outputs.conf file:
# Turn off indexing
[indexAndForward]
index = false
[tcpout]
defaultGroup = my_peers_nodes
forwardedindex.filter.disable = true
indexAndForward = false
[tcpout:my_peers_nodes]
server=10.10.10.1:9997,10.10.10.2:9997,10.10.10.3:9997
This example assumes that each peer node's receiving port is set to 9997.
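If a peer is not already listening on that port, receiving can be enabled on each peer with the CLI (or an equivalent [splunktcp://9997] stanza in inputs.conf):
# On each peer node
/opt/splunk/bin/splunk enable listen 9997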
Configure the peers for index replication:
Ensure that all necessary indexes are available on the peers.
If you need to install apps or change configurations, apply the changes to all peers in a consistent manner.
If you need to add indexes (including indexes defined by an app), configure all peers with the same set of indexes, typically by distributing a common indexes.conf through the configuration bundle (see the sketch after the note below).
Note: Once the peers are configured, the data forwarded from the manager node is indexed on the peers and replicated across the cluster like any other incoming data.
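As a sketch of what such a shared index definition might look like (the index name manager_data is hypothetical; repFactor = auto is what makes the index replicated across the peers):
# indexes.conf distributed to every peer via the configuration bundle
[manager_data]
homePath   = $SPLUNK_DB/manager_data/db
coldPath   = $SPLUNK_DB/manager_data/colddb
thawedPath = $SPLUNK_DB/manager_data/thaweddb
repFactor  = auto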
Forwarder Outputs Example
[tcpout]
defaultGroup = my_peers_nodes
[tcpout:my_peers_nodes]
useACK = true
server=10.10.10.1:9997,10.10.10.2:9997,10.10.10.3:9997
autoLBFrequency = 60
autoLBVolume = 1048576