Before you attempt any downgrade, familiarize yourself with the content of this document.
Once upgraded to 3.6, if you need to downgrade, we recommend downgrading to the latest patch release of 3.4.
Optional but Recommended. Create a backup of your database.
While the downgrade is in progress, you cannot make changes to the collection metadata. For example, during the downgrade, do not do any of the following:
sh.enableSharding()
sh.shardCollection()
sh.addShard()
db.createCollection()
db.collection.drop()
db.dropDatabase()
Before downgrading the binaries, you must downgrade the feature compatibility version and remove any 3.6 features incompatible with 3.4 or earlier versions as outlined below. These steps are necessary only if featureCompatibilityVersion has ever been set to "3.6".
Connect a mongo shell to the mongos instance. Downgrade the featureCompatibilityVersion to "3.4".
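In a mongo shell connected to the mongos, the downgrade step looks like:

```javascript
db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )
```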
The setFeatureCompatibilityVersion command performs writes to an internal system collection and is idempotent. If for any reason the command does not complete successfully, retry the command on the mongos instance.
To ensure that all members of the sharded cluster reflect the updated featureCompatibilityVersion, connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:
Tip
For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.
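The check can be run from a mongo shell connected to each member:

```javascript
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
```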
All members should return a result that includes:
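On a member that has fully downgraded, the relevant portion of the result is:

```javascript
"featureCompatibilityVersion" : { "version" : "3.4" }
```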
If any member returns a featureCompatibilityVersion that includes either a version value of "3.6" or a targetVersion field, wait for the member to reflect version "3.4" before proceeding.
For more information on the returned featureCompatibilityVersion value, see View FeatureCompatibilityVersion.
Remove all persisted features that are incompatible with 3.4. For example, if you have defined any view definitions, document validators, or partial index filters that use 3.6 query features such as $jsonSchema or $expr, you must remove them.
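As a sketch, assuming a view named salesView and a collection named orders are the offending objects (both names hypothetical), the cleanup might look like:

```javascript
// List view definitions and collection options in the current database
// to inspect them for 3.6 operators such as $jsonSchema or $expr
db.getCollectionInfos()

// Drop a view (hypothetical name) whose pipeline uses $expr
db.salesView.drop()

// Clear a $jsonSchema validator from a collection (hypothetical name)
db.runCommand( { collMod: "orders", validator: {} } )
```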
Warning
Before proceeding with the downgrade procedure, ensure that all members, including delayed replica set members in the sharded cluster, reflect the prerequisite changes. That is, check the featureCompatibilityVersion and the removal of incompatible features for each node before downgrading.
Using either a package manager or a manual download, get the latest release in the 3.4 series. If using a package manager, add a new repository for the 3.4 binaries, then perform the actual downgrade process.
Turn off the balancer as described in Disable the Balancer.
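In a mongo shell connected to the mongos, for example:

```javascript
sh.stopBalancer()      // disable the balancer
sh.getBalancerState()  // should now return false
```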
Downgrade the mongos instances. Downgrade the binaries and restart.
Downgrade the shards one at a time. If the shards are replica sets, for each shard:
Downgrade the secondary members of the replica set one at a time: shut down the mongod instance and replace the 3.6 binary with the 3.4 binary.
Note
If you do not perform a clean shutdown, errors may result that prevent the mongod process from starting.
Forcibly terminating the mongod process may cause inaccurate results for db.collection.count() and db.stats() as well as lengthen startup time the next time that the mongod process is restarted.
This applies whether you attempt to terminate the mongod process from the command line via kill or similar, or whether you use your platform's initialization system to issue a stop command, like sudo systemctl stop mongod or sudo service mongod stop.
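A clean shutdown can be performed from a mongo shell connected to the member, for example:

```javascript
// Issue a clean shutdown against the admin database
db.getSiblingDB("admin").shutdownServer()
```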
Restart the member with the --shardsvr and --port command line options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.
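For example, with placeholder values for the port and data directory:

```sh
mongod --shardsvr --port 27018 --dbpath /var/lib/mongodb --bind_ip localhost
```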
Or if using a configuration file, update the file to include sharding.clusterRole: shardsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start:
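A minimal configuration-file fragment (the port and bind address are placeholder values):

```yaml
sharding:
  clusterRole: shardsvr
net:
  port: 27018
  bindIp: localhost
```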
Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, you can issue rs.status() in the mongo shell.
Repeat for each secondary member.
Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:
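For example:

```javascript
// Step down and remain ineligible for election for 120 seconds
rs.stepDown( 120 )
```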
When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, downgrade the stepped-down primary:
Shut down the stepped-down primary and replace the mongod binary with the 3.4 binary. Restart with the --shardsvr and --port command line options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.
Or if using a configuration file, update the file to include sharding.clusterRole: shardsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.4 binary:
If the config servers are replica sets:
Downgrade the secondary members of the replica set one at a time: shut down the secondary mongod instance and replace the 3.6 binary with the 3.4 binary. Restart with the --configsvr and --port options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.
If using a configuration file, update the file to specify sharding.clusterRole: configsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.4 binary:
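A minimal configuration-file fragment for a config server member (the port and bind address are placeholder values):

```yaml
sharding:
  clusterRole: configsvr
net:
  port: 27019
  bindIp: localhost
```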
Include any other configuration as appropriate for your deployment.
Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, issue rs.status() in the mongo shell.
Repeat for each secondary member.
Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:
When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the stepped-down primary and replace the mongod binary with the 3.4 binary. Restart with the --configsvr and --port options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.
If using a configuration file, update the file to specify sharding.clusterRole: configsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.4 binary:
Once the downgrade of sharded cluster components is complete, re-enable the balancer.
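In a mongo shell connected to the mongos, for example:

```javascript
sh.setBalancerState(true)  // re-enable the balancer
```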