
Elasticsearch disk space management and migration


Mounting and formatting the volume

  1. Create a new volume on https://openstack.stfc.ac.uk/project/instances/

  2. Attach it to the Elasticsearch instance

    • At this point the volume is just an empty, unformatted block device
    • Note the device path shown in the volume details - the steps below assume it was attached at /dev/vdb, which should be the default for the first additional volume
  3. mkfs /dev/vdb - Create a filesystem on the device

    • This uses all defaults, which means an ext2 filesystem on CentOS 7
  4. mkdir /mnt/external - Create the mount point; /mnt/external is used here, but any directory works

  5. Add an entry to /etc/fstab so the volume is mounted on boot

    • /dev/vdb /mnt/external ext2 defaults 0 0 - a simple mount entry
  6. Refresh the mounts with sudo mount -a (a consolidated sketch of these commands follows this list)
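
The commands from the steps above, collected into one sequence. This is a minimal sketch that assumes the volume was attached at /dev/vdb and that /mnt/external is the chosen mount point:

```sh
# Create a filesystem on the newly attached volume (mkfs defaults to ext2 on CentOS 7)
sudo mkfs /dev/vdb

# Create the mount point
sudo mkdir -p /mnt/external

# Persist the mount across reboots
echo '/dev/vdb /mnt/external ext2 defaults 0 0' | sudo tee -a /etc/fstab

# Mount everything listed in /etc/fstab and confirm the new filesystem is visible
sudo mount -a
df -h /mnt/external
```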

Changing the Elasticsearch path.data

  1. service elasticsearch stop - Stop the service
  2. mkdir /mnt/external/elasticsearch && rsync --info=progress2 -ah /var/lib/elasticsearch/ /mnt/external/elasticsearch
    • This will take some time; for a 75 GB /var/lib/elasticsearch it took about 45 minutes at roughly 25 MB/s
  3. Open /etc/elasticsearch/elasticsearch.yml and look for the path.data setting. Change the path to /mnt/external/elasticsearch
  4. service elasticsearch start - Start the service again (see the sketch after this list)
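
Taken together, the migration looks roughly like the sketch below. It assumes the mount point from the previous section and a standard package install; the grep line is only a quick check that the setting was edited as intended:

```sh
# Stop Elasticsearch before touching its data directory
sudo service elasticsearch stop

# Copy the existing data to the new volume, preserving permissions and ownership
sudo mkdir /mnt/external/elasticsearch
sudo rsync --info=progress2 -ah /var/lib/elasticsearch/ /mnt/external/elasticsearch

# After editing /etc/elasticsearch/elasticsearch.yml, the data path should read:
#   path.data: /mnt/external/elasticsearch
sudo grep 'path.data' /etc/elasticsearch/elasticsearch.yml

# Start the service again and check that it comes back up
sudo service elasticsearch start
sudo service elasticsearch status
```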

If you don't copy the data over, the Elasticsearch service will start as a fresh installation, meaning there won't be any accounts in it and you'll have to generate the default accounts (elastic, kibana_system, etc.) manually. A guide for that can be found here. To avoid redeploying configurations to the monitored nodes, make sure to use the same passwords as the ones stored in Keeper.
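
If the built-in accounts do have to be regenerated, Elasticsearch 6.x/7.x ships an elasticsearch-setup-passwords tool that can set them interactively, which makes it possible to re-enter the passwords stored in Keeper. A minimal sketch, assuming a standard RPM install path; the linked guide remains the authoritative reference:

```sh
# Set the built-in account passwords (elastic, kibana_system, etc.) interactively,
# re-entering the values stored in Keeper so monitored nodes keep working.
sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
```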

Previous incidents

https://github.com/autoreduction/autoreduce-documents/blob/master/on-call/post-mortems/2021-06-15.md
