
Elasticsearch – Migration between Versions



In any system or software, when we upgrade to a newer version, we need to follow a few steps to preserve the application settings, configuration and data. These steps are required to keep the application stable on the new version and to maintain the integrity of the data (that is, to prevent it from getting corrupted).

You need to follow these steps to upgrade Elasticsearch −

  • Read the upgrade documentation for the version you are upgrading to.

  • Test the upgraded version in your non-production environments, such as UAT, E2E, SIT or DEV.

  • Note that rolling back to a previous Elasticsearch version is not possible without a data backup. Hence, a data backup is recommended before upgrading to a higher version.

  • We can upgrade using either a full cluster restart or a rolling upgrade. A rolling upgrade, which upgrades one node at a time with no service outage, is only supported from newer versions (such as the last minor release of the previous major version); older versions require a full cluster restart. A quick way to check which path applies is shown after this list.
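
Which upgrade path applies depends on the version you are currently running. A quick way to confirm it is to query the root endpoint of any node; the sketch below is a minimal example using Python's requests library and assumes an unsecured node on localhost:9200 (adjust host, port and authentication for your cluster).

# Check the running Elasticsearch version before choosing an upgrade path.
# Assumes an unsecured node on localhost:9200; adjust for your environment.
import requests

response = requests.get("http://localhost:9200")
response.raise_for_status()
print("Current Elasticsearch version:", response.json()["version"]["number"])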

Steps for Upgrade

  • Test the upgrade in a dev environment before upgrading your production cluster.

  • Back up your data. You cannot roll back to an earlier version unless you have a snapshot of your data (a snapshot sketch follows this list).

  • Consider closing machine learning jobs before you start the upgrade process. While machine learning jobs can continue to run during a rolling upgrade, it increases the overhead on the cluster during the upgrade process.

  • Upgrade the components of your Elastic Stack in the following order −

    • Elasticsearch
    • Kibana
    • Logstash
    • Beats
    • APM Server
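
For the backup step above, a snapshot is the mechanism Elasticsearch supports. The sketch below registers a shared filesystem repository and takes a snapshot of all indices using Python's requests library; the repository name my_backup, the location /mnt/backups and the localhost:9200 endpoint are illustrative assumptions, and the location must be listed under path.repo in elasticsearch.yml.

# Take a snapshot of all indices before upgrading.
# Repository name, location and endpoint are placeholders.
import requests

BASE = "http://localhost:9200"

# Register a shared filesystem snapshot repository.
requests.put(
    BASE + "/_snapshot/my_backup",
    json={"type": "fs", "settings": {"location": "/mnt/backups"}},
).raise_for_status()

# Create a snapshot of all indices and wait until it completes.
requests.put(
    BASE + "/_snapshot/my_backup/pre_upgrade_snapshot",
    params={"wait_for_completion": "true"},
    json={"indices": "*", "include_global_state": True},
).raise_for_status()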

Upgrading from 6.6 or Earlier

To upgrade directly to Elasticsearch 7.1.0 from versions 6.0-6.6, you must manually reindex any 5.x indices you need to carry forward, and perform a full cluster restart.
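
The reindex itself can be driven with the reindex API. A minimal sketch, using Python's requests library, where old-index-5x and reindexed-index are placeholder names and localhost:9200 is an assumed endpoint −

# Copy a 5.x index into a new index so it can be carried into 7.x.
# Index names and the endpoint are placeholders.
import requests

response = requests.post(
    "http://localhost:9200/_reindex",
    params={"wait_for_completion": "true"},
    json={
        "source": {"index": "old-index-5x"},
        "dest": {"index": "reindexed-index"},
    },
)
response.raise_for_status()
print(response.json())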

Full Cluster Restart

The process of a full cluster restart involves shutting down every node in the cluster, upgrading each node to 7.x and then restarting the cluster.

Following are the high-level steps that need to be carried out for a full cluster restart (a sketch of the preparation calls follows the list) −

  • Disable shard allocation
  • Stop indexing and perform a synced flush
  • Shut down all nodes
  • Upgrade all nodes
  • Upgrade any plugins
  • Start each upgraded node
  • Wait for all nodes to join the cluster and report a status of yellow
  • Re-enable allocation
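
The preparation steps above map to a handful of REST calls. Below is a minimal sketch using Python's requests library against an assumed localhost:9200 endpoint; note that the synced flush endpoint is specific to 6.x/7.x (it was deprecated in 7.6 and removed in 8.0, where a plain flush is used instead).

# Prepare the cluster for a full cluster restart.
# Assumes an unsecured node on localhost:9200.
import requests

BASE = "http://localhost:9200"

# 1. Disable shard allocation so shards are not rebalanced while nodes stop.
requests.put(
    BASE + "/_cluster/settings",
    json={"persistent": {"cluster.routing.allocation.enable": "primaries"}},
).raise_for_status()

# 2. Stop indexing, then perform a synced flush (6.x/7.x only).
requests.post(BASE + "/_flush/synced")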

Once allocation is re-enabled, the cluster starts allocating the replica shards to the data nodes. At this point, it is safe to resume indexing and searching, but your cluster will recover more quickly if you can wait until all primary and replica shards have been successfully allocated and the status of all nodes is green.
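
Re-enabling allocation and waiting for green status can be scripted in the same way; the sketch below resets the allocation setting and blocks until the cluster reports green or a timeout expires (endpoint and timeout are illustrative).

# Re-enable shard allocation and wait for the cluster to turn green.
import requests

BASE = "http://localhost:9200"

# Setting the value to null reverts it to the default (allocation enabled).
requests.put(
    BASE + "/_cluster/settings",
    json={"persistent": {"cluster.routing.allocation.enable": None}},
).raise_for_status()

# Block until the cluster reports green, or give up after five minutes.
health = requests.get(
    BASE + "/_cluster/health",
    params={"wait_for_status": "green", "timeout": "5m"},
).json()
print(health["status"])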
