Before you attempt any downgrade, familiarize yourself with the content of this document, particularly the Downgrade Recommendations and Checklist and the procedure for downgrading sharded clusters.
When downgrading, consider the following:
To downgrade, use the latest version in the 3.0 series.
Follow the downgrade procedure that matches your deployment: standalone mongod instance, replica set, or sharded cluster, as described in the sections below.
Text Index Version Check

If you have version 3 text indexes (i.e. the default version for text indexes in MongoDB 3.2), drop the version 3 text indexes before downgrading MongoDB. After the downgrade, recreate the dropped text indexes.

To determine the version of your text indexes, run db.collection.getIndexes() to view index specifications. For text indexes, the method returns the version information in the field textIndexVersion. For example, the following shows that the text index on the quotes collection is version 3.
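A representative sketch of the output, assuming a text index on a quote field; every field except textIndexVersion is illustrative:

   db.quotes.getIndexes()
   [
      {
         "v" : 1,
         "key" : { "_fts" : "text", "_ftsx" : 1 },
         "name" : "quote_text",
         "ns" : "test.quotes",
         "weights" : { "quote" : 1 },
         "default_language" : "english",
         "language_override" : "language",
         "textIndexVersion" : 3
      }
   ]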
2dsphere Index Version Check

If you have version 3 2dsphere indexes (i.e. the default version for 2dsphere indexes in MongoDB 3.2), drop the version 3 2dsphere indexes before downgrading MongoDB. After the downgrade, recreate the 2dsphere indexes.
To determine the version of your 2dsphere indexes, run db.collection.getIndexes() to view index specifications. For 2dsphere indexes, the method returns the version information in the field 2dsphereIndexVersion. For example, the following shows that the 2dsphere index on the locations collection is version 3.
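A representative sketch, assuming a 2dsphere index on a geo field; every field except 2dsphereIndexVersion is illustrative:

   db.locations.getIndexes()
   [
      {
         "v" : 1,
         "key" : { "geo" : "2dsphere" },
         "name" : "geo_2dsphere",
         "ns" : "test.locations",
         "2dsphereIndexVersion" : 3
      }
   ]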
Partial Indexes

Before downgrading MongoDB, drop any partial indexes; partial indexes are new in MongoDB 3.2 and are not supported by 3.0.
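Partial indexes carry a partialFilterExpression field in their specification, so one way to locate them is a sketch like the following (a hypothetical helper, not a documented command):

   // List partial indexes in the current database by checking each
   // index specification for a partialFilterExpression field.
   db.getCollectionNames().forEach(function(coll) {
      db.getCollection(coll).getIndexes().forEach(function(idx) {
         if (idx.partialFilterExpression) {
            printjson({ collection: coll, index: idx.name });
         }
      });
   });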
Downgrade a Standalone mongod Instance

The following steps outline the procedure to downgrade a standalone mongod from version 3.2 to 3.0.

1. Download the latest 3.0 binaries. For the downgrade, use the latest release in the 3.0 series.

2. Restart with the latest 3.0 mongod instance.

   Important: If your mongod instance is using the WiredTiger storage engine, you must include the --storageEngine option (or storage.engine if using the configuration file) with the 3.0 binary.

   Shut down your mongod instance. Replace the existing binary with the downloaded 3.0 mongod binary and restart.
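For example, a minimal sketch of restarting a WiredTiger standalone with the 3.0 binary; the dbpath is an assumption for illustration:

   mongod --dbpath /var/lib/mongodb --storageEngine wiredTiger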
Downgrade a 3.2 Replica Set

The following steps outline a “rolling” downgrade process for the replica set. The “rolling” downgrade process minimizes downtime by downgrading the members individually while the other members are available:
1. Downgrade each secondary member of the replica set, one at a time:

   1. Shut down the mongod. See Stop mongod Processes for instructions on safely terminating mongod processes.

   2. Replace the 3.2 binary with the 3.0 binary and restart.

      Important: If your mongod instance is using the WiredTiger storage engine, you must include the --storageEngine option (or storage.engine if using the configuration file) with the 3.0 binary.

   3. Wait for the member to recover to SECONDARY state before downgrading the next secondary. To check the member's state, use the rs.status() method in the mongo shell.

2. Step down the primary. Use rs.stepDown() in the mongo shell to step down the primary and force the normal failover procedure.

   rs.stepDown() expedites the failover procedure and is preferable to shutting down the primary directly.
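For example (the argument, in seconds, prevents the stepped-down member from seeking reelection for that period; the value shown is illustrative):

   rs.stepDown(120)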
3. Replace and restart the former primary mongod.

   When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the previous primary, replace the mongod binary with the 3.0 binary, and start the new instance.

   Important: If your mongod instance is using the WiredTiger storage engine, you must include the --storageEngine option (or storage.engine if using the configuration file) with the 3.0 binary.
Replica set failover is not instant but will render the set unavailable to writes and interrupt reads until the failover process completes. Typically this takes 10 seconds or more. You may wish to plan the downgrade during a predetermined maintenance window.
Downgrade a 3.2 Sharded Cluster

While the downgrade is in progress, you cannot make changes to the collection metadata. For example, during the downgrade, do not do any of the following:

sh.enableSharding()
sh.shardCollection()
sh.addShard()
db.createCollection()
db.collection.drop()
db.dropDatabase()
Considerations

Turn off the balancer in the sharded cluster, as described in Disable the Balancer.

For each replica set shard, downgrade the mongod secondaries before downgrading the primary. To downgrade the primary, run replSetStepDown and then downgrade. For details on downgrading a replica set, see Downgrade a 3.2 Replica Set.
If the sharded cluster uses 3 mirrored mongod instances for the config servers, downgrade all three instances in reverse order of their listing in the --configdb option for mongos. For example, if mongos has the following --configdb listing:
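(The listing below is a reconstruction for illustration; the confserver hostnames come from this example, while the domain suffix and port are assumptions.)

   --configdb confserver1.example.net:27019,confserver2.example.net:27019,confserver3.example.net:27019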
Downgrade first confserver3, then confserver2, and lastly, confserver1.

If your mongod instance is using the WiredTiger storage engine, you must include the --storageEngine option (or storage.engine if using the configuration file) with the 3.0 binary.
Once the downgrade of sharded cluster components is complete, re-enable the balancer.
Procedure

Disable the balancer. Turn off the balancer in the sharded cluster, as described in Disable the Balancer.
Check the minOpTimeUpdaters value.

If the sharded cluster uses CSRS, for each shard, check the minOpTimeUpdaters value to see if it is zero. A minOpTimeUpdaters value of zero indicates that there are no migrations in progress. A non-zero value indicates either that a migration is in progress or that a previously completed migration failed to clear the value; in that case, the value must be cleared before downgrading.

To check the value, for each shard, connect to the primary member (or, if the shard is a standalone, connect to the standalone) and query the system.version collection in the admin database for the minOpTimeRecovery document:
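A minimal sketch of the query, projecting just the field of interest:

   db.getSiblingDB("admin").system.version.findOne(
      { _id: "minOpTimeRecovery" },
      { minOpTimeUpdaters: 1 }
   )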
If minOpTimeUpdaters is non-zero, clear the value by stepping down the current primary; the value is cleared when a new primary gets elected. If the shard is a standalone, restart the shard to clear the value.
Prepare the CSRS config servers to be downgraded.

If the sharded cluster uses CSRS:

1. Update all the secondary members of the config server replica set to have 0 for votes and priority.

   Connect a mongo shell to the primary and run:
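The following is a minimal sketch of such a reconfiguration; identifying the secondaries by comparing each member's host against the current primary is one possible approach:

   // Set votes and priority to 0 on every member except the current primary.
   var cfg = rs.conf();
   var primary = rs.isMaster().primary;
   for (var i = 0; i < cfg.members.length; i++) {
      if (cfg.members[i].host !== primary) {
         cfg.members[i].votes = 0;
         cfg.members[i].priority = 0;
      }
   }
   rs.reconfig(cfg);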
2. Step down the primary using replSetStepDown against the admin database. Ensure enough time for the secondaries to catch up.

   Connect a mongo shell to the primary and run:
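A sketch of the command; the step-down and catch-up windows (in seconds) are illustrative values:

   db.adminCommand( { replSetStepDown: 360, secondaryCatchUpPeriodSecs: 300 } )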
3. Shut down all the members of the CSRS, the mongos instances, and the shards.

4. Restart each CSRS member as a standalone mongod; i.e. without the --replSet option or, if using a configuration file, replication.replSetName.

5. If the CSRS member is running with the WiredTiger storage engine, use mongodump to dump the config database, then shut down the member.
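For example, a sketch of restarting a member as a standalone and dumping the config database; the port and paths are assumptions:

   mongod --configsvr --port 27019 --dbpath /data/configdb
   mongodump --port 27019 --db config --out /data/config-dump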
Include all other options as required by your deployment.
6. Start a new mongod instance that will run with the MMAPv1 storage engine, using a new data directory. The user running mongod must have read and write permissions for the data directory.

   Note: mongod with MMAPv1 will not start with data files created with a different storage engine.

   Start the new instance with --storageEngine mmapv1 and without the --replSet option or, if using a configuration file, replication.replSetName. Use mongorestore --drop to restore the config dump to the new MMAPv1 mongod.
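A sketch of the MMAPv1 start and restore; the port and paths are assumptions carried over from the dump step above:

   mongod --configsvr --storageEngine mmapv1 --port 27019 --dbpath /data/configdb-mmapv1
   mongorestore --port 27019 --drop /data/config-dump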
Optionally, once the sharded cluster is online and working as expected, delete the WiredTiger data directories.
7. If the CSRS member is already running with the MMAPv1 storage engine, restart it as a standalone mongod; i.e. without the --replSet option or, if using a configuration file, replication.replSetName.
Restart the mongos instances.

Important: As the config servers changed from a replica set to three mirrored mongod instances, update the --configdb setting. All mongos instances must use the same --configdb string.
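For example, a sketch of the updated invocation, reusing the illustrative hostnames and port from the earlier --configdb example:

   mongos --configdb confserver1.example.net:27019,confserver2.example.net:27019,confserver3.example.net:27019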
Downgrade the mongos instances.

Downgrade the binaries and restart.
Downgrade the config servers.

Downgrade the binaries and restart. Downgrade in reverse order of their listing in the --configdb option for mongos.

If your mongod instance is using the WiredTiger storage engine, you must include the --storageEngine option (or storage.engine if using the configuration file) with the 3.0 binary.
For each shard, remove the minOpTimeRecovery document from the admin.system.version collection using the following remove operation. If the shard is a replica set, issue the remove operation on the primary of the replica set for each shard:
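A sketch of the remove operation; the majority write concern is one reasonable choice, not a requirement taken from this page:

   db.getSiblingDB("admin").system.version.remove(
      { _id: "minOpTimeRecovery" },
      { writeConcern: { w: "majority" } }
   )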
Note: If the cluster is running with authentication enabled, you must have a user with the proper privileges to remove the minOpTimeRecovery document from the admin.system.version collection. The following operation creates a downgrade user on the admin database with the proper privileges:
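A sketch of creating such a user; the password is a placeholder, and the __system role is one role that carries the needed privileges on system collections (a more narrowly scoped custom role may suit your security policy better):

   db.getSiblingDB("admin").createUser(
      {
         user: "downgrade",
         pwd: "<replace with a strong password>",
         roles: [ { role: "__system", db: "admin" } ]
      }
   )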
Downgrade the shards.

For each replica set shard, downgrade the mongod binaries and restart. If your mongod instance is using the WiredTiger storage engine, you must include the --storageEngine option (or storage.engine if using the configuration file) with the 3.0 binary.

Downgrade the mongod secondaries before downgrading the primary. To downgrade the primary, run replSetStepDown and then downgrade. For details on downgrading a replica set, see Downgrade a 3.2 Replica Set.
Optionally, drop the local database from the SCCC members if it exists.
Once the downgrade of sharded cluster components is complete, re-enable the balancer.