Restore a Sharded Cluster from a Snapshot¶
When you restore a cluster from a snapshot, Ops Manager provides you with restore files for the selected restore point.
To learn about the restore process, see Restore Overview.
Changed in Ops Manager 3.6: Point-in-Time Restores
Prior to Ops Manager 3.6, the Backup Daemon created the complete point-in-time restore on its host. Starting in 3.6, you download a client-side tool along with your snapshot. This tool downloads and applies the oplog to a snapshot on your client system, reducing the network and storage demands on your Ops Manager deployment.
Considerations¶
Review change to BinData BSON sub-type¶
The BSON specification changed the default subtype for the BSON binary datatype (BinData) from 2 to 0. Some binary data stored in a snapshot may be BinData subtype 2. The Backup Agent automatically detects and converts snapshot data in BinData subtype 2 to BinData subtype 0. If your application code expects BinData subtype 2, you must update your application code to work with BinData subtype 0.
See also
The notes in the BSON specification explain the specifics of this change.
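The difference between the two subtypes is visible in the wire encoding itself. The following sketch hand-encodes a single BSON binary element using only the Python standard library (the `bson_binary_element` helper is written here for illustration, not part of any MongoDB driver):

```python
import struct

def bson_binary_element(name: bytes, payload: bytes, subtype: int) -> bytes:
    """Encode one BSON binary element (type 0x05) for illustration only."""
    if subtype == 2:
        # Old-style subtype 2 wraps the payload in a redundant int32 length prefix.
        data = struct.pack("<i", len(payload)) + payload
    else:
        # Subtype 0, the current default, stores the payload bytes directly.
        data = payload
    return (b"\x05" + name + b"\x00"
            + struct.pack("<i", len(data)) + bytes([subtype]) + data)

payload = b"\x01\x02\x03"
old = bson_binary_element(b"k", payload, 2)   # legacy BinData subtype 2
new = bson_binary_element(b"k", payload, 0)   # converted BinData subtype 0
print(len(old) - len(new))  # 4: the extra int32 length prefix of subtype 2
```

Application code that inspects the subtype byte (offset 7 in this element) must accept 0 rather than 2 after a restore.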
Restore using settings given in restoreInfo.txt¶
The backup restore file includes a metadata file named restoreInfo.txt. This file captures the options the database used when the snapshot was taken. The database must be run with the listed options after it has been restored. This file contains:
- Group name
- Replica Set name
- Cluster ID (if applicable)
- Snapshot timestamp (as Timestamp at UTC)
- Last Oplog applied (as a BSON Timestamp at UTC)
- MongoDB version
- Storage engine type
- mongod startup options used on the database when the snapshot was taken
- Encryption (only appears if encryption is enabled on the snapshot)
- Master Key UUID (only appears if encryption is enabled on the snapshot)
If restoring from an encrypted backup, you must have a certificate provisioned for this Master Key.
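As a purely illustrative sketch, the fields listed above might appear in restoreInfo.txt roughly as follows (every value here is invented, and the exact layout of the file may differ between Ops Manager versions):

```
Group Name: My Project
Replica Set: myCluster-shard-0
Snapshot timestamp: Fri Oct 26 12:00:00 GMT 2018
Last Oplog Applied: (1540555198, 1)
MongoDB Version: 4.0.2
Storage Engine: wiredTiger
mongod options: { "net": { "port": 27017 }, "storage": { "engine": "wiredTiger" } }
```

Whatever the listed options are, start the restored mongod processes with those same options.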
Snapshots when Agent Cannot Stop Balancer¶
Ops Manager displays a warning next to cluster snapshots taken while the balancer is enabled. If you restore from such a snapshot, you run the risk of lost or orphaned data. For more information, see Snapshots when Agent Cannot Stop Balancer.
Secure Copy (SCP) Delivery¶
Important
Restore delivery via SCP was removed in Ops Manager 4.0.
Prerequisites¶
Restore from Encrypted Backup Requires Same Master Key¶
To restore from an encrypted backup, you need the same master key used to encrypt the backup, plus either the same certificate that is on the Backup Daemon host or a new certificate provisioned with that key from the KMIP host.
If the snapshot is encrypted, the restore panel displays the KMIP master key ID and the KMIP server information. You can also find this information when you view the snapshot itself, as well as in the restoreInfo.txt file.
Disable Client Requests to MongoDB during Restore¶
You must ensure that the MongoDB deployment does not receive client requests during restoration. Either:
- Restore to new systems with new hostnames, and reconfigure your application code once the new deployment is running, or
- Ensure that the existing MongoDB deployment does not receive client requests while you restore data.
Restore a Snapshot¶
- Automatic Restore
- Manual Restore
To have Ops Manager automatically restore the snapshot:
Click Backup, then the Overview tab.¶
Click the deployment, then click Restore or Download.¶
Select the restore point.¶
Choose the point from which you want to restore your backup.
| Restore Type | Description | Action |
| --- | --- | --- |
| Snapshot | Allows you to choose one stored snapshot. | Select an existing snapshot to restore. |
| Point In Time | Allows you to choose a date and time as your restore time objective for your snapshot. By default, the Oplog Store stores 24 hours of data. | Select a Date and Time. |
Example
If you select 12:00, the last operation in the restore is 11:59:59 or earlier.
Important
You must enable cluster checkpoints to perform a PIT restore on a sharded cluster. If no checkpoints that include your date and time are available, Ops Manager asks you to choose another point in time. You cannot perform a PIT restore that covers any time prior to the latest backup resync. For the conditions that cause a resync, see Resync a Backup.
Click Next.
If you chose Point In Time, a list of Checkpoints closest to the time you selected appears. You may choose one of the listed checkpoints to start your point in time restore, or click Choose another point in time to remove the list of checkpoints and select another date and time from the menus.
Choose to restore the files to another cluster.¶
Click Choose Cluster to Restore to.
Complete the following fields:
| Field | Action |
| --- | --- |
| Project | Select a project to which you want to restore the snapshot. |
| Cluster to Restore to | Select a cluster to which you want to restore the snapshot. Ops Manager must manage the target sharded cluster. |
Warning
Automation removes all existing data from the cluster. All backup data and snapshots for the existing cluster are preserved.
Click Restore.
Ops Manager notes how much storage space the restore requires.
Click Restore.¶
Click Backup, then the Overview tab.¶
Click the deployment, then click Restore or Download.¶
Select the restore point.¶
Choose the point from which you want to restore your backup.
| Restore Type | Description | Action |
| --- | --- | --- |
| Snapshot | Allows you to choose one stored snapshot. | Select an existing snapshot to restore. |
| Point In Time | Allows you to choose a date and time as your restore time objective for your snapshot. By default, the Oplog Store stores 24 hours of data. | Select a Date and Time. |
Example
If you select 12:00, the last operation in the restore is 11:59:59 or earlier.
Important
You must enable cluster checkpoints to perform a PIT restore on a sharded cluster. If no checkpoints that include your date and time are available, Ops Manager asks you to choose another point in time. You cannot perform a PIT restore that covers any time prior to the latest backup resync. For the conditions that cause a resync, see Resync a Backup.
Click Next.
If you chose Point In Time, a list of Checkpoints closest to the time you selected appears. You may choose one of the listed checkpoints to start your point in time restore, or click Choose another point in time to remove the list of checkpoints and select another date and time from the menus.
Once you have selected a checkpoint, apply the oplog to this snapshot to bring your snapshot to the date and time you selected. The oplog is applied for all operations up to but not including the selected time.
Click Download to restore the files manually.¶
Configure the snapshot download.¶
Configure the following download options:
| Field | Action |
| --- | --- |
| Pull Restore Usage Limit | Select how many times the link can be used. If you select No Limit, the link is re-usable until it expires. |
| Restore Link Expiration (in hours) | Select the number of hours until the link expires. The default value is 1. The maximum value is the number of hours until the selected snapshot expires. |
Click Finalize Request.
If you use 2FA, Ops Manager prompts you for your 2FA code. Enter your 2FA code, then click Finalize Request.
Retrieve the snapshots.¶
Ops Manager creates links to the snapshot. By default, these links are available for an hour and can be used just once.
To download the snapshots:
- If you closed the restore panel, click Backup, then Restore History.
- When the restore job completes, a (get link) button appears for each shard and for one of the config servers.
- Click:
- The copy button to the right of the link to copy the link to use it later, or
- Download to download the snapshot immediately.
Extra step for point-in-time restores
For point-in-time and oplog timestamp restores, additional instructions are shown. The final step shows the full command you must run using the mongodb-backup-restore-util. It includes all of the necessary options to ensure a full restore.
Select and copy the mongodb-backup-restore-util command provided under Run Binary with PIT Options.
Restore the snapshot data files to the destination host.¶
Extract the snapshot archive for the config server and for each shard to a temporary location.
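As a sketch, extraction looks like the following (the archive and directory names below are invented stand-ins; real snapshot archives are named after your project and cluster, and the first three lines only create a demo archive so the commands are runnable as-is):

```shell
# Demo setup: create a stand-in snapshot archive (replace with your download).
mkdir -p snapshot-demo && echo demo > snapshot-demo/storage.bson
tar -czf myCluster-shard0-1540555200.tar.gz snapshot-demo

# Extract the snapshot archive to a temporary staging location.
mkdir -p restore-staging
tar -xzf myCluster-shard0-1540555200.tar.gz -C restore-staging
ls restore-staging/snapshot-demo
```

Repeat the extraction once for the config server archive and once per shard archive.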
Run the MongoDB Backup Restore Utility (Point-in-Time Restore Only).¶
Download the MongoDB Backup Restore Utility to your host.
Note
If you closed the restore panel, click Backup, then More and then Download MongoDB Backup Restore Utility.
Start a mongod instance using the extracted snapshot directory as the data directory.
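The invocation might look like the following template (both values are placeholders you must fill in; any additional options should match those recorded in restoreInfo.txt):

```shell
mongod --dbpath <temporary-snapshot-directory> --port <ephemeral-port>
```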
Run the MongoDB Backup Restore Utility on your destination host. Run it once for the config server and each shard.
Pre-configured mongodb-backup-restore-util command
Ops Manager provides the mongodb-backup-restore-util with the appropriate options for your restore on the restore panel under Run Binary with PIT Options. You should copy the mongodb-backup-restore-util command provided in the Ops Manager Application; for the options marked "pre-configured" below, the copied command already has the correct values filled in.
The mongodb-backup-restore-util command uses the following options:
| Option | Required | Description |
| --- | --- | --- |
| --https | Optional | Use if you need TLS/SSL to connect to the --oplogSourceAddr. |
| --host | Required | Provide the hostname or IP address for the host that serves the mongod to which the oplog should be applied. Pre-configured in the copied command. |
| --port | Required | Provide the port for the host that serves the mongod to which the oplog should be applied. Pre-configured in the copied command. |
| --opStart | Required | Provide the BSON timestamp for the first oplog entry you want to include in the restore. Pre-configured in the copied command. |
| --opEnd | Required | Provide the BSON timestamp for the last oplog entry you want to include in the restore. Pre-configured in the copied command. |
| --logFile | Optional | Provide a path, including file name, where the MBRU log is written. |
| --oplogSourceAddr | Required | Provide the URL for the Ops Manager resource endpoint. Pre-configured in the copied command. |
| --apiKey | Required | Provide your Ops Manager Agent API Key. Pre-configured in the copied command. |
| --groupId | Required | Provide the group ID. Pre-configured in the copied command. |
| --rsId | Required | Provide the replica set ID. Pre-configured in the copied command. |
| --whitelist | Optional | Provide a list of databases and/or collections to which you want to limit the restore. |
| --blacklist | Optional | Provide a list of databases and/or collections that you want to exclude from the restore. |
| --seedReplSetMember | Optional | Use if you need a replica set member to re-create the oplog collection and seed it with the correct timestamp. Requires --oplogSizeMB and --seedTargetPort. |
| --oplogSizeMB | Conditional | Provide the oplog size in MB. Required if --seedReplSetMember is set. |
| --seedTargetPort | Conditional | Provide the port for the replica set's primary. This may be different from the ephemeral port used. Required if --seedReplSetMember is set. |
| --ssl | Optional | Use if you need TLS/SSL to apply oplogs to the mongod. Requires --sslCAFile and --sslPEMKeyFile. |
| --sslCAFile | Conditional | Provide the path to the CA file. Required if --ssl is set. |
| --sslPEMKeyFile | Conditional | Provide the path to the PEM certificate file. Required if --ssl is set. |
| --sslPEMKeyFilePwd | Conditional | Provide the password for the PEM certificate file specified in --sslPEMKeyFile. Required if --ssl is set. |
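Assembled from the required options, the command has roughly the following shape (every value below is a placeholder; always prefer the exact command Ops Manager generates for your restore, which has these values pre-configured):

```shell
mongodb-backup-restore-util \
  --host <hostname> --port <ephemeral-port> \
  --opStart <oplog-start-timestamp> --opEnd <oplog-end-timestamp> \
  --oplogSourceAddr <ops-manager-resource-url> \
  --apiKey <agent-api-key> --groupId <group-id> --rsId <replica-set-id>
```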
Copy the completed snapshots to restore to other hosts.¶
- For the config server, copy the restored config server database to the working database path of each replica set member.
- For each shard, copy the restored shard database to the working database path of each replica set member.
Unmanage the Sharded Cluster.¶
Before attempting to restore the data manually, remove the sharded cluster from Automation.
Restore the Sharded Cluster Manually.¶
Follow the tutorial from the MongoDB Manual to restore the sharded cluster.
Reimport the Sharded Cluster.¶
To manage the sharded cluster with automation again, import the sharded cluster back into Ops Manager.
Start the Sharded Cluster Balancer.¶
Once a restore completes, the sharded cluster balancer is turned off. To start the balancer:
- Click Deployment.
- Click the ellipsis icon on the card for your desired sharded cluster.
- Click Manage Balancer.
- Click the pencil icon to the right of Set the Balancer State.
- Toggle to Yes.
- Click Save.
- Click Review & Deploy to save the changes.