The following checklist, along with the Development Checklist, provides recommendations to help you avoid issues in your production MongoDB deployment.
- Avoid using NFS drives for your dbPath. Using NFS drives can result in degraded and unstable performance. See: Remote Filesystems for more information.
- Changed in version 3.4: The replication oplog window no longer needs to cover the time needed to restore a replica set member via initial sync, as the oplog records are pulled during the data copy. However, the member being restored must have enough disk space in the local database to temporarily store these oplog records for the duration of this data copy stage. With earlier versions of MongoDB, the replication oplog window should cover the time needed to restore a replica set member by initial sync.
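For reference, a sketch of checking a member's current oplog size and window from the shell (host and port are placeholders; older deployments use the mongo shell binary instead of mongosh):

```sh
# Print the configured oplog size and the time span it currently covers
# for the member you connect to:
mongosh --host mongodb0.example.net --port 27017 \
  --eval "rs.printReplicationInfo()"
```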
w:"majority"
write concern for availability and durability.mongod
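To illustrate, a write acknowledged by a majority of data-bearing members might be issued as follows; the database, collection, and wtimeout values are illustrative, not part of the checklist:

```sh
# Insert one document and wait for majority acknowledgment;
# wtimeout bounds how long the write waits for that acknowledgment.
mongosh --eval '
  db.getSiblingDB("test").products.insertOne(
    { sku: "abc123", qty: 100 },
    { writeConcern: { w: "majority", wtimeout: 5000 } }
  )
'
```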
- Ensure full bidirectional network connectivity between all mongod instances.
- Ensure that mongod instances have 0 or 1 votes.
routers in accordance with the Production Configuration guidelines.mongod
, mongos
, and config servers.tcp_keepalive_time
) to 100-120. The TCP idle timeout on the Azure load balancer is too slow for MongoDB’s connection pooling behavior. See:
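A sketch of making that change on Linux (the runtime setting does not survive a reboot unless persisted in /etc/sysctl.conf):

```sh
# View the current keepalive time in seconds:
sysctl net.ipv4.tcp_keepalive_time

# Set it to 120 seconds for the running system:
sudo sysctl -w net.ipv4.tcp_keepalive_time=120

# To persist across reboots, add this line to /etc/sysctl.conf:
#   net.ipv4.tcp_keepalive_time = 120
```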
- Adjust the readahead settings on the devices storing your database files to suit your use case. MongoDB commercial support can provide advice and guidance on alternate readahead configurations.
- If using tuned on RHEL / CentOS, you must customize your tuned profile. Many of the tuned profiles that ship with RHEL / CentOS can negatively impact performance with their default settings. Customize your chosen tuned profile to:
  - Disable transparent hugepages. See Transparent Huge Pages Settings for instructions.
  - Set readahead between 8 and 32 regardless of storage media type.
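As a sketch, a custom profile can inherit a stock profile and override only the harmful settings; the profile name virtual-guest-no-thp and the parent profile are illustrative:

```sh
# Create a custom tuned profile that inherits an existing profile
# and disables transparent hugepages:
sudo mkdir -p /etc/tuned/virtual-guest-no-thp
sudo tee /etc/tuned/virtual-guest-no-thp/tuned.conf <<'EOF'
[main]
include=virtual-guest

[vm]
transparent_hugepages=never
EOF

# Activate the customized profile:
sudo tuned-adm profile virtual-guest-no-thp
```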
- Use the noop or deadline disk schedulers for SSD drives.
- Use the noop disk scheduler for virtualized drives in guest VMs.
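To check or change the scheduler for a device, roughly the following (sda is a placeholder; the echo takes effect immediately but does not persist across reboots):

```sh
# List available schedulers; the active one appears in brackets:
cat /sys/block/sda/queue/scheduler

# Switch the running system to the noop scheduler:
echo noop | sudo tee /sys/block/sda/queue/scheduler
```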
- Disable NUMA or set vm.zone_reclaim_mode to 0 and run mongod instances with node interleaving. See: MongoDB and NUMA Hardware for more information.
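A sketch of an interleaved start (the config file path is the common default; adjust for your installation):

```sh
# Disable zone reclaim for the running system:
sudo sysctl -w vm.zone_reclaim_mode=0

# Start mongod with memory allocation interleaved across all NUMA nodes:
numactl --interleave=all mongod --config /etc/mongod.conf
```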
- Adjust the ulimit values on your hardware to suit your use case. If multiple mongod or mongos instances are running under the same user, scale the ulimit values accordingly. See: UNIX ulimit Settings for more information.
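For illustration, per-user limits are typically raised in /etc/security/limits.conf; the mongod user name and the 64000 figures here are assumptions modeled on common starting values:

```sh
# Inspect the limits of the current shell session:
ulimit -a

# Example /etc/security/limits.conf entries for a dedicated mongod user:
#   mongod  soft  nofile  64000
#   mongod  hard  nofile  64000
#   mongod  soft  nproc   64000
#   mongod  hard  nproc   64000
```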
- Use noatime for the dbPath mount point.
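A sketch of the corresponding /etc/fstab entry (device, mount point, and filesystem type are placeholders):

```sh
# Example /etc/fstab line mounting the dbPath volume with noatime:
#   /dev/sdb1  /var/lib/mongodb  xfs  defaults,noatime  0  0

# Apply without a reboot after editing /etc/fstab:
sudo mount -o remount /var/lib/mongodb
```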
- Configure sufficient file handles (fs.file-max), kernel pid limit (kernel.pid_max), maximum threads per process (kernel.threads-max), and maximum number of memory map areas per process (vm.max_map_count) for your deployment. For large systems, the following values provide a good starting point:
  - fs.file-max value of 98000,
  - kernel.pid_max value of 64000,
  - kernel.threads-max value of 64000, and
  - vm.max_map_count value of 128000
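Expressed as /etc/sysctl.conf entries, those starting points would be:

```sh
# Example /etc/sysctl.conf entries for the starting values above:
#   fs.file-max = 98000
#   kernel.pid_max = 64000
#   kernel.threads-max = 64000
#   vm.max_map_count = 128000

# Load settings from /etc/sysctl.conf without a reboot:
sudo sysctl -p
```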
- On Windows, consider disabling NTFS "last access time" updates. This is analogous to disabling atime on Unix-like systems.
- In the absence of disk space monitoring, or as a precaution:
  - Create a dummy 4 GB file on the storage.dbPath drive to ensure available space if the disk becomes full.
  - A combination of cron+df can alert when disk space hits a high-water mark, if no other monitoring tool is available.
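A rough sketch of both precautions; the paths, file name, and 90% threshold are illustrative:

```sh
# Reserve headroom: a dummy 4 GB file on the dbPath volume that can be
# deleted in an emergency to free space immediately:
sudo fallocate -l 4G /var/lib/mongodb/DO_NOT_DELETE_headroom

# Cron-friendly check: log a warning when the dbPath filesystem exceeds
# 90% usage (run from crontab, e.g. every 5 minutes):
df -P /var/lib/mongodb \
  | awk 'NR==2 && $5+0 > 90 {print "dbPath volume at " $5 " capacity"}' \
  | logger -t disk-check
```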