
Restore a Replica Set from a Backup

Overview

You can restore a replica set from the artifacts captured by Ops Manager Backup. You can restore either from a stored snapshot or from a point in time between snapshots within the last 24 hours. If you restore from a point in time, Ops Manager Backup creates a custom snapshot for the selected point by applying the oplog to the previous regular snapshot. Point-in-time recovery takes longer than recovery from a stored snapshot.

When you select a snapshot to restore, Ops Manager creates a link to download the snapshot as a tar file. The link is available for one download only and times out after an hour. You can optionally have Ops Manager scp the tar file directly to your system. The scp delivery method requires additional configuration but provides faster delivery. Windows does not include scp, and setting it up is outside the scope of this manual.

You can restore either to new hardware or to existing hardware. If you restore to existing hardware, use a different data directory than the one used previously.

Sequence

The sequence used here to restore a replica set is to download the restore file and distribute it to each server, restore the primary, and then restore the secondaries. For additional approaches to restoring replica sets, see the procedure from the MongoDB Manual to Restore a Replica Set from a Backup.

Prerequisites

Oplog Size

To seed each replica set member, you will use the seedSecondary.sh script included in the backup restore file. When you run the script, you will provide the replica set’s oplog size, in gigabytes. If you do not have the size, see the section titled “Check the Size of the Oplog” on the Troubleshoot Replica Sets page of the MongoDB manual.
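For example, you can check the configured oplog size from the mongo shell on a current member before you shut the set down. The output reports the configured oplog size in megabytes, which you can convert to gigabytes for the script:

db.printReplicationInfo()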

Client Requests

You must ensure that the MongoDB deployment does not receive client requests during restoration. You must either:

  • restore to new systems with new hostnames and reconfigure your application code once the new deployment is running, or
  • ensure that the MongoDB deployment will not receive client requests while you restore data.

Procedures

Select and Download the Snapshot

1

Select the Backups tab and then select Replica Set Status.

2

Click the name of the replica set to restore.

Ops Manager displays your selection’s stored snapshots.

3

Select the snapshot from which to restore.

To select a stored snapshot, click the Restore this snapshot link next to the snapshot.

To select a custom snapshot, click the Restore button at the top of the page. In the resulting page, select a snapshot as the starting point. Then select the Use Custom Point In Time checkbox and enter the point in time in the Date and Time fields. Ops Manager includes all operations up to but not including the point in time. For example, if you select 12:00, the last operation in the restore is 11:59:59 or earlier. Click Next.

4

Select HTTP as the delivery method for the snapshot.

In the Delivery Method field, select Pull via Secure HTTP (HTTPS).

Optionally, you can instead choose SCP as the delivery method. See Retrieve a Snapshot with SCP Delivery for the SCP delivery option’s configuration. If you choose SCP, you must provide the hostname and port of the server to receive the files and provide access to the server through a username and password or through an SSH key. Follow the instructions on the Ops Manager screen.

5

Finalize the request.

Click Finalize Request and confirm your identity via two-factor verification. Then click Finalize Request again.

6

Retrieve the snapshot.

Ops Manager creates a one-time link to a tar file of the snapshot. The link is available for one download and times out after an hour.

To download the snapshot, select the Ops Manager Backup tab and then select Restore Jobs. When the restore job completes, select the download link next to the snapshot.

If you chose SCP as the delivery method, the files are copied to the server directory you specified. To verify that the files are complete, see the section on how to validate an SCP restore.

7

Copy the snapshot to each server to restore.

Restore the Primary

You must have a copy of the snapshot on the server that provides the primary:

1

Shut down the entire replica set.

Shut down the replica set’s mongod processes using one of the following methods, depending on your configuration:

  • Automated Deployment:

    If you use Ops Manager Automation to manage the replica set, you must shut down through the Ops Manager console. See Shut Down MongoDB Processes.

  • Non-Automated Deployment on MongoDB 2.6 or Later:

    Connect to each member of the set and issue the following:

    use admin
    db.shutdownServer()
    
  • Non-Automated Deployment on MongoDB 2.4 or earlier:

    Connect to each member of the set and issue the following:

    use admin
    db.shutdownServer( { force: true } )
    
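If you manage the processes yourself (a non-automated deployment), a non-interactive alternative is to issue the shutdown from the system prompt on each member. This is a minimal sketch that assumes each member listens on the default port 27017; the shell may report that the connection was closed, which is expected when the server shuts down:

mongo --port 27017 admin --eval "db.shutdownServer()"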
2

Restore the snapshot data files to the primary.

Extract the data files to the location where the mongod instance will access them through the dbpath setting. If you are restoring to existing hardware, use a different data directory than the one used previously. The following are example commands:

tar -xvf <backup-restore-name>.tar.gz
mv <backup-restore-name> /data
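
If the mongod process runs under a dedicated service account rather than the user performing the restore, also make sure that account owns the restored files. A minimal sketch, assuming a user and group named mongodb and the /data directory used above:

# assumes mongod runs as the "mongodb" user and group
chown -R mongodb:mongodb /data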
3

Start the primary with the new dbpath.

For example:

mongod --dbpath /<path-to-data> --replSet <replica-set-name> --logpath /<path-to-data>/mongodb.log --fork
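
If you prefer a configuration file over command-line options, the equivalent settings in the YAML format introduced in MongoDB 2.6 would look like the following sketch; the paths, replica set name, and file location are placeholders to fill in:

# example configuration file, e.g. /etc/mongod.conf
storage:
  dbPath: /<path-to-data>
systemLog:
  destination: file
  path: /<path-to-data>/mongodb.log
replication:
  replSetName: <replica-set-name>
processManagement:
  fork: true

You would then start the process with mongod --config /etc/mongod.conf.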
4

Connect to the primary and initiate the replica set.

For example, first issue the following to connect:

mongo

And then issue rs.initiate():

rs.initiate()
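
After the initiation completes, the member elects itself primary of a one-member replica set. You can confirm this in the same shell; the member’s stateStr should read PRIMARY before you continue:

rs.status()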
5

Restart the primary as a standalone, without the --replSet option.

Use the following sequence:

  1. Shut down the process using one of the following methods:

    • Automated Deployment:

      Shut down through the Ops Manager console. See Shut Down MongoDB Processes.

    • Non-Automated Deployment on MongoDB 2.6 or Later:

      use admin
      db.shutdownServer()
      
    • Non-Automated Deployment on MongoDB 2.4 or earlier:

      use admin
      db.shutdownServer( { force: true } )
      
  2. Restart the process as a standalone:

    mongod --dbpath /<path-to-data> --logpath /<path-to-data>/mongodb.log --fork
    
6

Connect to the primary and drop the oplog.

For example, first issue the following to connect:

mongo

Then switch to the local database and issue db.oplog.rs.drop() to drop the oplog:

use local
db.oplog.rs.drop()
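
If you want to confirm that the oplog collection is gone before reseeding it, list the collections in the local database; oplog.rs should no longer appear:

db.getCollectionNames()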
7

Run the seedSecondary.sh script on the primary.

The seedSecondary.sh script re-creates the oplog collection and seeds it with the timestamp of the snapshot’s creation. This allows each secondary to catch up with the primary without requiring a full initial sync. The script is customized by Ops Manager for this particular snapshot and is included in the backup restore file.

To run the script, issue the following command at the system prompt, where <mongod-port> is the port of the mongod instance and <oplog-size-in-gigabytes> is the size of the replica set’s oplog:

./seedSecondary.sh <mongod-port> <oplog-size-in-gigabytes>
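
For example, if the mongod listens on the default port 27017 and the replica set uses a 2 gigabyte oplog (both values here are only illustrative), the invocation would be:

./seedSecondary.sh 27017 2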
8

Restart the primary as part of a replica set.

Use the following sequence:

  1. Shut down the process using one of the following methods:

    • Automated Deployment:

      Shut down through the Ops Manager console. See Shut Down MongoDB Processes.

    • Non-Automated Deployment on MongoDB 2.6 or Later:

      use admin
      db.shutdownServer()
      
    • Non-Automated Deployment on MongoDB 2.4 or earlier:

      use admin
      db.shutdownServer( { force: true } )
      
  2. Restart the process as part of a replica set:

    mongod --dbpath /<path-to-data> --replSet <replica-set-name>
    
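Once the member is running again with the --replSet option, you can verify that it has resumed its role. Connect with the mongo shell:

mongo

Then check the replica set status; the member should report a state of PRIMARY:

rs.status()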

Restore Each Secondary

After you have restored the primary, you can restore the secondaries. You must have a copy of the snapshot on each server that provides a secondary:

1

Connect to the server where you will create the new secondary.

2

Restore the snapshot data files to the secondary.

Extract the data files to the location where the mongod instance will access them through the dbpath setting. If you are restoring to existing hardware, use a different data directory than the one used previously. The following are example commands:

tar -xvf <backup-restore-name>.tar.gz
mv <backup-restore-name> /data
3

Start the secondary as a standalone, without the --replSet option.

Use the following sequence:

  1. Shut down the process using one of the following methods:

    • Automated Deployment:

      Shut down through the Ops Manager console. See Shut Down MongoDB Processes.

    • Non-Automated Deployment on MongoDB 2.6 or Later:

      use admin
      db.shutdownServer()
      
    • Non-Automated Deployment on MongoDB 2.4 or earlier:

      use admin
      db.shutdownServer( { force: true } )
      
  2. Restart the process as a standalone:

    mongod --dbpath /<path-to-data> --logpath /<path-to-data>/mongodb.log --fork
    
4

Run the seedSecondary.sh script on the secondary.

The seedSecondary.sh script re-creates the oplog collection and seeds it with the timestamp of the snapshot’s creation. This allows the secondary to catch up with the primary without requiring a full initial sync. The script is customized by Ops Manager for this particular snapshot and is included in the backup restore file.

To run the script, issue the following command at the system prompt, where <mongod-port> is the port of the mongod instance and <oplog-size-in-gigabytes> is the size of the replica set’s oplog:

./seedSecondary.sh <mongod-port> <oplog-size-in-gigabytes>
5

Restart the secondary as part of the replica set.

Use the following sequence:

  1. Shut down the process using one of the following methods:

    • Automated Deployment:

      Shut down through the Ops Manager console. See Shut Down MongoDB Processes.

    • Non-Automated Deployment on MongoDB 2.6 or Later:

      use admin
      db.shutdownServer()
      
    • Non-Automated Deployment on MongoDB 2.4 or earlier:

      use admin
      db.shutdownServer( { force: true } )
      
  2. Restart the process as part of a replica set:

    mongod --dbpath /<path-to-data> --replSet <replica-set-name>
    
6

Connect to the primary and add the secondary to the replica set.

Connect to the primary and use rs.add() to add the secondary to the replica set.

rs.add("<host>:<port>")

Repeat this operation for each member of the set.
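
For example, with a hypothetical secondary listening on replica2.example.net, port 27017:

rs.add("replica2.example.net:27017")

After you have added all members, running rs.status() on the primary should eventually show each of them in the SECONDARY state.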