Changed in version 3.4: The balancer process has moved from the mongos instances to the primary member of the config server replica set.
This page describes common administrative procedures related to balancing. For an introduction to balancing, see Sharded Cluster Balancer. For lower level information on balancing, see Cluster Balancer.
Important
Use the version of the mongo shell that corresponds to the version of the sharded cluster. For example, do not use a 3.2 or earlier version of the mongo shell against a 3.4 sharded cluster.
sh.getBalancerState() checks if the balancer is enabled (i.e. that the balancer is permitted to run). sh.getBalancerState() does not check if the balancer is actively balancing chunks.
To see if the balancer is enabled in your sharded cluster, issue the following command, which returns a boolean:
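   sh.getBalancerState()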
You can also see if the balancer is enabled using sh.status(). The currently-enabled field indicates whether the balancer is enabled, while the currently-running field indicates if the balancer is currently running.
To see if the balancer process is active in your cluster:
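For example, connect to any mongos in the cluster and run the sh.isBalancerRunning() helper, which returns true while a balancing round is in progress:

   sh.isBalancerRunning()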
The default chunk size for a sharded cluster is 64 megabytes. In most situations, the default size is appropriate for splitting and migrating chunks. For information on how chunk size affects deployments, see Chunk Size.
Changing the default chunk size affects chunks that are processed during migrations and auto-splits but does not retroactively affect all chunks.
To configure default chunk size, see Modify Chunk Size in a Sharded Cluster.
In some situations, particularly when your data set grows slowly and a migration can impact performance, it is useful to ensure that the balancer is active only at certain times. The following procedure specifies the activeWindow, which is the timeframe during which the balancer will be able to migrate chunks:
Issue the following command to switch to the config database.
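   use config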
Ensure that the balancer is not stopped. The balancer will not activate in the stopped state. To ensure that the balancer is not stopped, use sh.startBalancer(), as in the following:
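   sh.startBalancer()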
The balancer will not start if you are outside of the activeWindow timeframe.
Starting in MongoDB 4.2, sh.startBalancer() also enables auto-splitting for the sharded cluster.
Set the activeWindow using update(), as in the following:
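   use config
   db.settings.update(
      { _id: "balancer" },
      // the start and stop keys hold the window boundaries
      { $set: { activeWindow : { start : "<start-time>", stop : "<end-time>" } } },
      { upsert: true }
   )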
Replace <start-time> and <end-time> with time values using two-digit hour and minute values (i.e. HH:MM) that specify the beginning and end boundaries of the balancing window.
For HH values, use hour values ranging from 00 - 23. For MM values, use minute values ranging from 00 - 59. MongoDB evaluates the start and stop times relative to the time zone of the member which is serving as a primary in the config server replica set.
Note
The balancer window must be sufficient to complete the migration of all data inserted during the day.
As data insert rates can change based on activity and usage patterns, it is important to ensure that the balancing window you select will be sufficient to support the needs of your deployment.
Do not use the sh.startBalancer() method when you have set an activeWindow.
If you have set the balancing window and wish to remove the schedule so that the balancer is always running, use $unset to clear the activeWindow, as in the following:
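   use config
   db.settings.update({ _id : "balancer" }, { $unset : { activeWindow : true } })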
By default, the balancer may run at any time and only moves chunks as needed. To disable the balancer for a short period of time and prevent all migration, use the following procedure:
Connect to any mongos in the cluster using the mongo shell and issue the following operation to disable the balancer. If a migration is in progress, the system will complete the in-progress migration before stopping.
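   sh.stopBalancer()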
Starting in MongoDB 4.2, sh.stopBalancer() also disables auto-splitting for the sharded cluster.
To verify that the balancer will not start, issue the following command, which returns false if the balancer is disabled:
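   sh.getBalancerState()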
Optionally, to verify no migrations are in progress after disabling, issue the following operation in the mongo shell:
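   use config
   // poll until the current balancing round completes
   while( sh.isBalancerRunning() ) {
      print("waiting...");
      sleep(1000);
   }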
Note
To disable the balancer from a driver, use the balancerStop command against the admin database, as in the following:
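   db.adminCommand( { balancerStop: 1 } )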
Use this procedure if you have disabled the balancer and are ready to re-enable it:
Connect to any mongos in the cluster using the mongo shell. From the mongo shell, issue:
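   sh.startBalancer()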
Note
To enable the balancer from a driver, use the balancerStart command against the admin database, as in the following:
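   db.adminCommand( { balancerStart: 1 } )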
Starting in MongoDB 4.2, sh.startBalancer() also enables auto-splitting for the sharded cluster.
If MongoDB migrates a chunk during a backup, you can end up with an inconsistent snapshot of your sharded cluster. Never run a backup while the balancer is active. To ensure that the balancer is inactive during your backup operation:
If you turn the balancer off while it is in the middle of a balancing round, the shut down is not instantaneous. The balancer completes the chunk move in-progress and then ceases all further balancing rounds.
Before starting a backup operation, confirm that the balancer is not active. You can use the following command to determine if the balancer is active:
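   !sh.getBalancerState() && !sh.isBalancerRunning()

The expression evaluates to true only when the balancer is both disabled and not currently running, in which case it is safe to begin the backup.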
When the backup procedure is complete you can reactivate the balancer process.
You can disable balancing for a specific collection with the sh.disableBalancing() method. You may want to disable the balancer for a specific collection to support maintenance operations or atypical workloads, for example, during data ingestions or data exports.
When you disable balancing on a collection, MongoDB will not interrupt in-progress migrations.
To disable balancing on a collection, connect to a mongos with the mongo shell and call the sh.disableBalancing() method.
For example:
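   sh.disableBalancing("students.grades")

Here "students.grades" is an example namespace; substitute the namespace of your own collection.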
The sh.disableBalancing() method accepts as its parameter the full namespace of the collection.
You can enable balancing for a specific collection with the sh.enableBalancing() method.
When you enable balancing for a collection, MongoDB will not immediately begin balancing data. However, if the data in your sharded collection is not balanced, MongoDB will be able to begin distributing the data more evenly.
To enable balancing on a collection, connect to a mongos with the mongo shell and call the sh.enableBalancing() method.
For example:
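   sh.enableBalancing("students.grades")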
The sh.enableBalancing() method accepts as its parameter the full namespace of the collection.
To confirm whether balancing for a collection is enabled or disabled, query the collections collection in the config database for the collection namespace and check the noBalance field. For example:
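   // "students.grades" is an example namespace
   db.getSiblingDB("config").collections.findOne( { _id : "students.grades" } ).noBalance;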
This operation will return a null error, true, false, or no output. A null error indicates that the collection namespace is incorrect. If the result is true, balancing is disabled. If the result is false, balancing is enabled currently but has been disabled in the past for the collection; balancing of this collection will begin the next time the balancer runs. If the operation returns no output, balancing is enabled currently and has never been disabled in the past for this collection.
You can also see if the balancer is enabled using sh.status(). The currently-enabled field indicates if the balancer is enabled.
During chunk migration, the _secondaryThrottle value determines when the migration proceeds with the next document in the chunk.
In the config.settings collection:

If the _secondaryThrottle setting for the balancer is set to a write concern, each document move during chunk migration must receive the requested acknowledgement before proceeding with the next document.

If the _secondaryThrottle setting for the balancer is set to true, each document move during chunk migration must receive acknowledgement from at least one secondary before the migration proceeds with the next document in the chunk. This is equivalent to a write concern of { w: 2 }.

If the _secondaryThrottle setting is unset, the migration process does not wait for replication to a secondary and instead continues with the next document. This is the default behavior for WiredTiger starting in MongoDB 3.4.
To change the _secondaryThrottle setting, connect to a mongos instance and directly update the _secondaryThrottle value in the settings collection of the config database. For example, from a mongo shell connected to a mongos, issue the following command:
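   use config
   db.settings.update(
      { "_id" : "balancer" },
      // { w: "majority" } is one example write concern; substitute your own
      { $set : { "_secondaryThrottle" : { "w": "majority" } } },
      { upsert : true }
   )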
The effects of changing the _secondaryThrottle setting may not be immediate. To ensure an immediate effect, stop and restart the balancer to enable the selected value of _secondaryThrottle.
For more information on the replication behavior during various steps of chunk migration, see Chunk Migration and Replication.
For the moveChunk command, you can use the command's _secondaryThrottle and writeConcern options to specify the behavior during the command. For details, see the moveChunk command.
The _waitForDelete setting of the balancer and the moveChunk command affects how the balancer migrates multiple chunks from a shard. By default, the balancer does not wait for the on-going migration's delete phase to complete before starting the next chunk migration. To have the delete phase block the start of the next chunk migration, you can set _waitForDelete to true.
For details on chunk migration, see Chunk Migration. For details on the chunk migration queuing behavior, see Asynchronous Chunk Migration Cleanup.
The _waitForDelete setting is generally for internal testing purposes. To change the balancer's _waitForDelete value, connect to a mongos instance and update the _waitForDelete value in the settings collection of the config database, as in the following:
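   use config
   db.settings.update(
      { "_id" : "balancer" },
      { $set : { "_waitForDelete" : true } },
      { upsert : true }
   )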
Once set to true, to revert to the default behavior, connect to a mongos instance and update or unset the _waitForDelete field in the settings collection of the config database:
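   use config
   db.settings.update(
      { "_id" : "balancer", "_waitForDelete" : true },
      { $unset : { "_waitForDelete" : "" } }
   )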
By default, MongoDB cannot move a chunk if the number of documents in the chunk is greater than 1.3 times the result of dividing the configured chunk size by the average document size.
Starting in MongoDB 4.4, by specifying the balancer setting attemptToBalanceJumboChunks to true, the balancer can migrate these large chunks as long as they have not been labeled as jumbo.
To set the balancer's attemptToBalanceJumboChunks setting, connect to a mongos instance and directly update the config.settings collection. For example, from a mongo shell connected to a mongos instance, issue the following command:
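   // upsert the balancer document in config.settings
   db.getSiblingDB("config").settings.updateOne(
      { _id: "balancer" },
      { $set: { attemptToBalanceJumboChunks : true } },
      { upsert: true }
   )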
When the balancer attempts to move the chunk, if the queue of writes that modify any documents being migrated surpasses 500MB of memory, the migration will fail. For details on the migration procedure, see Chunk Migration Procedure.
If the chunk you want to move is labeled jumbo, you can manually clear the jumbo flag to have the balancer attempt to migrate the chunk.
Alternatively, you can use the moveChunk command with forceJumbo: true to manually migrate chunks that exceed the size limit (with or without the jumbo label). However, when you run moveChunk with forceJumbo: true, write operations to the collection may block for a long period of time during the migration.
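For example, a minimal sketch, where test.mycollection, the find document, and the destination shard0001 are placeholders for your own namespace, chunk-locating query, and target shard:

   db.adminCommand( {
      moveChunk : "test.mycollection",   // full namespace of the sharded collection
      find : { _id : 1 },                // a document in the chunk to move
      to : "shard0001",                  // destination shard
      forceJumbo : true                  // migrate even if the chunk exceeds the size limit
   } )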
By default shards have no constraints in storage size. However, you can set a maximum storage size for a given shard in the sharded cluster. When selecting potential destination shards, the balancer ignores shards where a migration would exceed the configured maximum storage size.
The shards collection in the config database stores configuration data related to shards.
To limit the storage size for a given shard, use the db.collection.updateOne() method with the $set operator to create the maxSize field and assign it an integer value. The maxSize field represents the maximum storage size for the shard in megabytes.
The following operation sets a maximum size on a shard of 1024 megabytes:
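   use config
   db.shards.updateOne( { _id : "shard0000" }, { $set : { maxSize : 1024 } } )

Here "shard0000" is an example shard _id; query the shards collection for the _id values in your cluster.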
This value includes the mapped size of all data files on the shard, including the local and admin databases.
By default, maxSize is not specified, allowing shards to consume the total amount of available space on their machines if necessary.
You can also set maxSize when adding a shard.
To set maxSize when adding a shard, set the addShard command's maxSize parameter to the maximum size in megabytes. The following command run in the mongo shell adds a shard with a maximum size of 125 megabytes:
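   db.runCommand( { addShard : "example.net:34008", maxSize : 125 } )

The host "example.net:34008" is a placeholder for your shard's hostname and port.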