This document provides a collection of hard and soft limitations of the MongoDB system.
BSON Document Size

The maximum BSON document size is 16 megabytes.
The maximum document size helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. See mongofiles and the documentation for your driver for more information about GridFS.
Nested Depth for BSON Documents

MongoDB supports no more than 100 levels of nesting for BSON documents.
Database Name Case Sensitivity

Since database names are case insensitive in MongoDB, database names cannot differ only by the case of the characters.
Restrictions on Database Names for Windows

For MongoDB deployments running on Windows, database names cannot contain any of the following characters:

/\. "$*<>:|?

Also database names cannot contain the null character.
Restrictions on Database Names for Unix and Linux Systems

For MongoDB deployments running on Unix and Linux systems, database names cannot contain any of the following characters:

/\. "$

Also database names cannot contain the null character.
Length of Database Names

Database names cannot be empty and must have fewer than 64 characters.
Restriction on Collection Names

Collection names should begin with an underscore or a letter character, and cannot:

- contain the $.
- be an empty string (e.g. "").
- contain the null character.
- begin with the system. prefix. (Reserved for internal use.)

If your collection name includes special characters, such as the underscore character, or begins with numbers, then to access the collection use the db.getCollection() method in the mongo shell or a similar method for your driver.
Namespace Length:

- For featureCompatibilityVersion (fCV) set to "4.4" or greater, MongoDB raises the limit on collection/view namespace to 255 bytes. For a collection or a view, the namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>).
- For fCV set to "4.2" or earlier, the maximum length of the collection/view namespace remains 120 bytes.

Restrictions on Field Names
- Field names cannot contain the null character.
- Top-level field names cannot start with the dollar sign ($) character.

Otherwise, starting in MongoDB 3.6, the server permits storage of field names that contain dots (i.e. .) and dollar signs (i.e. $).
Important
The MongoDB Query Language cannot always meaningfully express queries over documents whose field names contain these characters (see SERVER-30575).
Until support is added in the query language, the use of $ and . in field names is not recommended and is not supported by the official MongoDB drivers.
MongoDB does not support duplicate field names

The MongoDB Query Language is undefined over documents with duplicate field names. BSON builders may support creating a BSON document with duplicate field names. While the BSON builder may not throw an error, inserting these documents into MongoDB is not supported even if the insert succeeds. For example, inserting a BSON document with duplicate field names through a MongoDB driver may result in the driver silently dropping the duplicate values prior to insertion.
Namespace Length

For featureCompatibilityVersion (fCV) set to "4.4" or greater, MongoDB raises the limit on collection/view namespace to 255 bytes. For a collection or a view, the namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>). For fCV set to "4.2" or earlier, the maximum length of the collection/view namespace remains 120 bytes.
Index Key Limit

Changed in version 4.2

Starting in version 4.2, MongoDB removes the Index Key Limit for featureCompatibilityVersion (fCV) set to "4.2" or greater.
For MongoDB 2.6 through MongoDB versions with fCV set to "4.0" or earlier, the total size of an index entry, which can include structural overhead depending on the BSON type, must be less than 1024 bytes.
When the Index Key Limit applies:

- MongoDB will not create an index on a collection if the index entry for an existing document exceeds the index key limit.
- Reindexing operations will error if the index entry for an indexed field exceeds the index key limit. Reindexing operations occur as part of the compact command as well as the db.collection.reIndex() method. Because these operations drop all the indexes from a collection and then recreate them sequentially, the error from the index key limit prevents these operations from rebuilding any remaining indexes for the collection.
- MongoDB will not insert into an indexed collection any document with an indexed field whose corresponding index entry would exceed the index key limit, and instead, will return an error. Previous versions of MongoDB would insert but not index such documents.
- Updates to the indexed field will error if the updated value causes the index entry to exceed the index key limit. If an existing document contains an indexed field whose index entry exceeds the limit, any update that results in the relocation of that document on disk will error.
- mongorestore and mongoimport will not insert documents that contain an indexed field whose corresponding index entry would exceed the index key limit.
- In MongoDB 2.6, secondary members of replica sets will continue to replicate documents with an indexed field whose corresponding index entry exceeds the index key limit on initial sync but will print warnings in the logs. Secondary members also allow index build and rebuild operations on a collection that contains an indexed field whose corresponding index entry exceeds the index key limit but with warnings in the logs. With mixed version replica sets where the secondaries are version 2.6 and the primary is version 2.4, secondaries will replicate documents inserted or updated on the 2.4 primary, but will print error messages in the log if the documents contain an indexed field whose corresponding index entry exceeds the index key limit.
- For existing sharded collections, chunk migration will fail if the chunk has a document containing an indexed field whose index entry exceeds the index key limit.

Number of Indexes per Collection
A single collection can have no more than 64 indexes.
Index Name Length

Changed in version 4.2

Starting in version 4.2, MongoDB removes the Index Name Length Limit for MongoDB versions with featureCompatibilityVersion (fCV) set to "4.2" or greater.
In previous versions of MongoDB or MongoDB versions with fCV set to "4.0" or earlier, fully qualified index names, which include the namespace and the dot separators (i.e. <database name>.<collection name>.$<index name>), cannot be longer than 127 bytes.
By default, <index name> is the concatenation of the field names and index type. You can explicitly specify the <index name> to the createIndex() method to ensure that the fully qualified index name does not exceed the limit.
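For example, a minimal sketch of naming an index explicitly in the mongo shell (the collection and field names are illustrative):

```javascript
// Explicitly name the index so the fully qualified name stays within the limit.
db.products.createIndex(
  { category: 1, brand: 1, price: -1 },
  { name: "cat_brand_price" }  // overrides the default concatenated name
)
```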
Number of Indexed Fields in a Compound Index

There can be no more than 32 fields in a compound index.
Queries cannot use both text and Geospatial Indexes

You cannot combine the $text query, which requires a special text index, with a query operator that requires a different type of special index. For example, you cannot combine the $text query with the $near operator.
Fields with 2dsphere Indexes can only hold Geometries

Fields with 2dsphere indexes must hold geometry data in the form of coordinate pairs or GeoJSON data. If you attempt to insert a document with non-geometry data in a 2dsphere indexed field, or build a 2dsphere index on a collection where the indexed field has non-geometry data, the operation will fail.
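A minimal sketch of what a 2dsphere-indexed field will and will not accept (the places collection and values are illustrative):

```javascript
db.places.createIndex({ loc: "2dsphere" })

// Succeeds: a GeoJSON point.
db.places.insertOne({ loc: { type: "Point", coordinates: [ -73.97, 40.77 ] } })

// Succeeds: a legacy coordinate pair.
db.places.insertOne({ loc: [ -73.97, 40.77 ] })

// Fails: non-geometry data in the 2dsphere-indexed field.
db.places.insertOne({ loc: "not a geometry" })
```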
See also

The unique indexes limit in Sharding Operational Restrictions.
NaN values returned from Covered Queries by the WiredTiger Storage Engine are always of type double

If the value of a field returned from a query that is covered by an index is NaN, the type of that NaN value is always double.
Multikey Index

Multikey indexes cannot cover queries over array field(s).
Geospatial Index

Geospatial indexes cannot cover a query.
Memory Usage in Index Builds

createIndexes supports building one or more indexes on a collection. createIndexes uses a combination of memory and temporary files on disk to complete index builds. The default limit on memory usage for createIndexes is 200 megabytes (for versions 4.2.3 and later) and 500 megabytes (for versions 4.2.2 and earlier), shared between all indexes built using a single createIndexes command. Once the memory limit is reached, createIndexes uses temporary disk files in a subdirectory named _tmp within the --dbpath directory to complete the build.
You can override the memory limit by setting the maxIndexBuildMemoryUsageMegabytes server parameter. Setting a higher memory limit may result in faster completion of index builds. However, setting this limit too high relative to the unused RAM on your system can result in memory exhaustion and server shutdown.
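As a sketch, the parameter can be set at runtime through setParameter (the value shown is illustrative):

```javascript
// Raise the index build memory limit for this mongod to 1024 MB.
db.adminCommand({ setParameter: 1, maxIndexBuildMemoryUsageMegabytes: 1024 })
```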
Changed in version 4.2.

- For feature compatibility version (fcv) "4.2", the index build memory limit applies to all index builds.
- For feature compatibility version (fcv) "4.0", the index build memory limit only applies to foreground index builds.

Index builds may be initiated either by a user command such as Create Index or by an administrative process such as an initial sync. Both are subject to the limit set by maxIndexBuildMemoryUsageMegabytes.

An initial sync operation populates only one collection at a time and has no risk of exceeding the memory limit. However, it is possible for a user to start index builds on multiple collections in multiple databases simultaneously and potentially consume an amount of memory greater than the limit set in maxIndexBuildMemoryUsageMegabytes.
Tip
To minimize the impact of building an index on replica sets and sharded clusters with replica set shards, use a rolling index build procedure as described on Rolling Index Builds on Replica Sets.
Collation and Index Types

The following index types only support simple binary comparison and do not support collation:

- text indexes,
- 2d indexes, and
- geoHaystack indexes.
Tip

To create a text, a 2d, or a geoHaystack index on a collection that has a non-simple collation, you must explicitly specify {collation: {locale: "simple"} } when creating the index.
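A minimal sketch, assuming a collection whose default collation is French:

```javascript
// The collection has a non-simple default collation.
db.createCollection("articles", { collation: { locale: "fr" } })

// The text index must opt back into the simple collation explicitly.
db.articles.createIndex(
  { body: "text" },
  { collation: { locale: "simple" } }
)
```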
Hidden Indexes

- You cannot hide the _id index.
- You cannot use hint() on a hidden index.

Maximum Number of Documents in a Capped Collection
If you specify a maximum number of documents for a capped collection using the max parameter to create, the limit must be less than 2^32 documents. If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.
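A sketch of creating a capped collection with a document cap (the collection name and sizes are illustrative):

```javascript
db.createCollection("eventlog", {
  capped: true,
  size: 100000,  // required: maximum size in bytes
  max: 5000      // optional: maximum document count, must be less than 2^32
})
```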
Number of Members of a Replica Set

Replica sets can have up to 50 members.
Number of Voting Members of a Replica Set

Replica sets can have up to 7 voting members. For replica sets with more than 7 total members, see Non-Voting Members.
Maximum Size of Auto-Created Oplog

If you do not explicitly specify an oplog size (i.e. with oplogSizeMB or --oplogSize), MongoDB will create an oplog that is no larger than 50 gigabytes. [1]

[1] Starting in MongoDB 4.0, the oplog can grow past its configured size limit to avoid deleting the majority commit point.
Sharded Clusters

Sharded clusters have the restrictions and thresholds described here.

Sharding Operational Restrictions

$where does not permit references to the db object from the $where function. This is uncommon in un-sharded collections.

The geoSearch command is not supported in sharded environments.
Covered Queries in Sharded Clusters

Starting in MongoDB 3.0, an index cannot cover a query on a sharded collection when run against a mongos if the index does not contain the shard key, with the following exception for the _id index: If a query on a sharded collection only specifies a condition on the _id field and returns only the _id field, the _id index can cover the query when run against a mongos even if the _id field is not the shard key.

In previous versions, an index cannot cover a query on a sharded collection when run against a mongos.
Sharding Existing Collection Data Size

An existing collection can only be sharded if its size does not exceed specific limits. These limits can be estimated based on the average size of all shard key values, and the configured chunk size.
Important
These limits only apply for the initial sharding operation. Sharded collections can grow to any size after successfully enabling sharding.
Use the following formulas to calculate the theoretical maximum collection size.
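A sketch of the calculation, using the 16777216-byte maximum BSON document size, the average size of the shard key values, and the configured chunk size (this reconstruction is consistent with the table below):

```
maxSplits = 16777216 (bytes) / <average size of shard key values in bytes>
maxCollectionSize (MB) = maxSplits * (chunkSize / 2)
```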
Note

The maximum BSON document size is 16 MB or 16777216 bytes.
All conversions should use base-2 scale, e.g. 1024 kilobytes = 1 megabyte.
If maxCollectionSize is less than or nearly equal to the target collection, increase the chunk size to ensure successful initial sharding. If there is doubt as to whether the result of the calculation is too 'close' to the target collection size, it is likely better to increase the chunk size.

After successful initial sharding, you can reduce the chunk size as needed. If you later reduce the chunk size, it may take time for all chunks to split to the new size. See Modify Chunk Size in a Sharded Cluster for instructions on modifying chunk size.
This table illustrates the approximate maximum collection sizes using the formulas described above:
Average Size of Shard Key Values | 512 bytes | 256 bytes | 128 bytes | 64 bytes |
---|---|---|---|---|
Maximum Number of Splits | 32,768 | 65,536 | 131,072 | 262,144 |
Max Collection Size (64 MB Chunk Size) | 1 TB | 2 TB | 4 TB | 8 TB |
Max Collection Size (128 MB Chunk Size) | 2 TB | 4 TB | 8 TB | 16 TB |
Max Collection Size (256 MB Chunk Size) | 4 TB | 8 TB | 16 TB | 32 TB |
Single Document Modification Operations in Sharded Collections

All update() and remove() operations for a sharded collection that specify the justOne or multi: false option must include the shard key or the _id field in the query specification. update() and remove() operations specifying justOne or multi: false in a sharded collection which do not contain either the shard key or the _id field return an error.
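A sketch, assuming a collection orders sharded on { custId: 1 }:

```javascript
// Succeeds: the query includes the shard key.
db.orders.remove({ custId: 42, status: "stale" }, { justOne: true })

// Errors: neither the shard key nor _id appears in the query.
db.orders.remove({ status: "stale" }, { justOne: true })
```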
Unique Indexes in Sharded Collections

MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.

See Unique Constraints on Arbitrary Fields for an alternate approach.
Maximum Number of Documents Per Chunk to Migrate

By default, MongoDB cannot move a chunk if the number of documents in the chunk is greater than 1.3 times the result of dividing the configured chunk size by the average document size. db.collection.stats() includes the avgObjSize field, which represents the average document size in the collection.
For chunks that are too large to migrate, starting in MongoDB 4.4:

- A new balancer setting attemptToBalanceJumboChunks allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo. See Balance Chunks that Exceed Size Limit for details.
- The moveChunk command can specify a new option forceJumbo to allow for the migration of chunks that are too large to move. The chunks may or may not be labeled jumbo.

Shard Key Size
Starting in version 4.4, MongoDB removes the limit on the shard key size.

For MongoDB 4.2 and earlier, a shard key cannot exceed 512 bytes.
Shard Key Index Type

A shard key index can be an ascending index on the shard key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a hashed index.

A shard key index cannot be an index that specifies a multikey index, a text index, or a geospatial index on the shard key fields.
Shard Key Selection is Immutable in MongoDB 4.2 and Earlier

Changed in version 4.4

Starting in MongoDB 4.4, you can refine a collection's shard key by adding a suffix field or fields to the existing key. See refineCollectionShardKey.

In MongoDB 4.2 and earlier, once you shard a collection, the selection of the shard key is immutable; i.e. you cannot select a different shard key for that collection.
If you must change a shard key:

- Dump all data from MongoDB into an external format.
- Drop the original sharded collection.
- Configure sharding using the new shard key.
- Pre-split the shard key range to ensure initial even distribution.
- Restore the dumped data into MongoDB.
Monotonically Increasing Shard Keys Can Limit Insert Throughput

For clusters with high insert volumes, a shard key with monotonically increasing or decreasing values can affect insert throughput. If your shard key is the _id field, be aware that the default values of the _id fields are ObjectIds, which have generally increasing values.
When inserting documents with monotonically increasing shard keys, all inserts belong to the same chunk on a single shard. The system eventually divides the chunk range that receives all write operations and migrates its contents to distribute data more evenly. However, at any moment the cluster directs insert operations only to a single shard, which creates an insert throughput bottleneck.
If the operations on the cluster are predominately read operations and updates, this limitation may not affect the cluster.
To avoid this constraint, use a hashed shard key or select a field that does not increase or decrease monotonically.
Hashed shard keys and hashed indexes store hashes of keys with ascending values.
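A minimal sketch of avoiding the hotspot with a hashed shard key (the namespace is illustrative, and sharding is assumed to be enabled on the database already):

```javascript
// Distribute inserts across shards by hashing the monotonically increasing _id.
sh.shardCollection("mydb.events", { _id: "hashed" })
```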
Sort Operations

If MongoDB cannot use an index or indexes to obtain the sort order, MongoDB must perform a blocking sort operation on the data. The name refers to the requirement that the SORT stage reads all input documents before returning any output documents, blocking the flow of data for that specific query.
If MongoDB requires using more than 100 megabytes of system memory for the blocking sort operation, MongoDB returns an error unless the query specifies cursor.allowDiskUse() (New in MongoDB 4.4). allowDiskUse() allows MongoDB to use temporary files on disk to store data exceeding the 100 megabyte system memory limit while processing a blocking sort operation.
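A sketch of opting in to disk use for a large blocking sort (MongoDB 4.4 or later; the collection and field names are illustrative):

```javascript
db.events.find({ status: "open" })
  .sort({ ts: -1 })    // no supporting index, so this is a blocking sort
  .allowDiskUse()      // permit temporary files past the 100 MB memory limit
```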
Changed in version 4.4: For MongoDB 4.2 and prior, blocking sort operations could not exceed 32 megabytes of system memory.
For more information on sorts and index use, see Sort and Index Use.
Aggregation Pipeline Operation

Pipeline stages have a limit of 100 megabytes of RAM. If a stage exceeds this limit, MongoDB will produce an error. To allow for the handling of large datasets, use the allowDiskUse option to enable aggregation pipeline stages to write data to temporary files.
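A sketch of enabling disk use for an aggregation (the collection and field names are illustrative):

```javascript
db.orders.aggregate(
  [
    { $group: { _id: "$custId", total: { $sum: "$amount" } } },
    { $sort: { total: -1 } }
  ],
  { allowDiskUse: true }  // stages may spill to temporary files on disk
)
```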
Changed in version 3.4.
The $graphLookup stage must stay within the 100 megabyte memory limit. If allowDiskUse: true is specified for the aggregate() operation, the $graphLookup stage ignores the option. If there are other stages in the aggregate() operation, the allowDiskUse: true option is in effect for these other stages.
Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.
See also

$sort and Memory Restrictions and $group Operator and Memory.
Aggregation and Read Concern

- The $out stage cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $out stage in the pipeline.
- The $merge stage cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $merge stage in the pipeline.

2d Geospatial queries cannot use the $or operator
See $or and 2d Index Internals.
Geospatial Queries

For spherical queries, use the 2dsphere index. Using a 2d index for spherical queries may lead to incorrect results, such as for spherical queries that wrap around the poles.
Geospatial Coordinates

- Valid longitude values are between -180 and 180, both inclusive.
- Valid latitude values are between -90 and 90, both inclusive.

Area of GeoJSON Polygons
For $geoIntersects or $geoWithin, if you specify a single-ringed polygon that has an area greater than a single hemisphere, include the custom MongoDB coordinate reference system in the $geometry expression; otherwise, $geoIntersects or $geoWithin queries for the complementary geometry. For all other GeoJSON polygons with areas greater than a hemisphere, $geoIntersects or $geoWithin queries for the complementary geometry.
Multi-document Transactions

For multi-document transactions:

- For feature compatibility version (fcv) "4.4" or greater, you can create collections and indexes in transactions. For details, see Create Collections and Indexes In a Transaction.

  Note: You cannot create new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.

- You cannot read/write to collections in the config, admin, or local databases.
- You cannot write to system.* collections.
- You cannot return the supported operation's query plan (i.e. explain).
- For cursors created outside of a transaction, you cannot call getMore inside the transaction.
- For cursors created in a transaction, you cannot call getMore outside the transaction.
- You cannot specify killCursors as the first operation in a transaction.

Changed in version 4.4.
The following operations are not allowed in transactions:

- Creating new collections or indexes when using fcv "4.2" or lower. With fcv "4.4" or greater, you can create collections and indexes in transactions unless the transaction is a cross-shard write transaction. For details, see Create Collections and Indexes In a Transaction.
- Explicit creation of collections, e.g. the db.createCollection() method, and indexes, e.g. the db.collection.createIndexes() and db.collection.createIndex() methods, when using a read concern level other than "local".
- The listCollections and listIndexes commands and their helper methods.
- Other non-CRUD and non-informational operations, such as createUser, getParameter, count, etc. and their helpers.

Transactions have a lifetime limit as specified by transactionLifetimeLimitSeconds. The default is 60 seconds.
Write Command Batch Limit Size

100,000 writes are allowed in a single batch operation, defined by a single request to the server.

Changed in version 3.6: The limit raises from 1,000 to 100,000 writes. This limit also applies to legacy OP_INSERT messages.
The Bulk() operations in the mongo shell and comparable methods in the drivers do not have this limit.
Views

The view definition pipeline cannot include the $out or the $merge stage. If the view definition includes a nested pipeline (e.g. the view definition includes a $lookup or $facet stage), this restriction applies to the nested pipelines as well.
Views have the following operation restrictions:
Projection Restrictions

$-Prefixed Field Path Restriction

Starting in MongoDB 4.4, the find and findAndModify projection cannot project a field that starts with $, with the exception of the DBRef fields.
For example, starting in MongoDB 4.4, the following operation is invalid:
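A sketch of such an invalid operation (the collection and field names are illustrative):

```javascript
// Rejected in 4.4+: the projection references $-prefixed field paths.
db.inventory.find( {}, { "$instock.warehouse": 0, "$item": 0, "detail.$price": 1 } )
```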
MongoDB already has a restriction where top-level field names cannot start with the dollar sign ($).

In earlier versions, MongoDB ignores the $-prefixed field projections.
$ Positional Operator Placement Restriction

Starting in MongoDB 4.4, the $ projection operator can only appear at the end of the field path; e.g. "field.$" or "fieldA.fieldB.$".
For example, starting in MongoDB 4.4, the following operation is invalid:
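A sketch of such an invalid operation (names illustrative):

```javascript
// Rejected in 4.4+: the field path continues past the $ projection operator.
db.inventory.find( { "instock.qty": { $gt: 25 } }, { "instock.$.qty": 1 } )
```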
To resolve, remove the component of the field path that follows the $ projection operator.

In previous versions, MongoDB ignores the part of the path that follows the $; i.e. the projection is treated as "instock.$".
Empty Field Name Projection Restriction

Starting in MongoDB 4.4, find and findAndModify projection cannot include a projection of an empty field name.
For example, starting in MongoDB 4.4, the following operation is invalid:
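A sketch of such an invalid operation:

```javascript
// Rejected in 4.4+: the empty string is not a valid field name in a projection.
db.inventory.find( {}, { "": 0 } )
```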
In previous versions, MongoDB treats the inclusion/exclusion of the empty field as it would the projection of non-existing fields.
Path Collision: Embedded Documents and Its Fields

Starting in MongoDB 4.4, it is illegal to project an embedded document with any of the embedded document's fields.
For example, consider a collection inventory with documents that contain a size field:
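One such document might look like this (values illustrative):

```javascript
db.inventory.insertOne( { item: "journal", size: { h: 10, w: 15.25, uom: "cm" } } )
```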
Starting in MongoDB 4.4, the following operation fails with a Path collision error because it attempts to project both the size document and the size.uom field:
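A sketch of the colliding projection:

```javascript
// Rejected in 4.4+: projects the embedded document and one of its fields.
db.inventory.find( {}, { size: 1, "size.uom": 1 } )
```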
In previous versions, the lattermost projection between the embedded document and its fields determines the projection:

- The projection document { "size.uom": 1, size: 1 } produces the same result as the projection document { size: 1 }.
- The projection document { "size.uom": 1, size: 1, "size.h": 1 } produces the same result as the projection document { "size.uom": 1, "size.h": 1 }.

Path Collision: $slice of an Array and Embedded Fields

Starting in MongoDB 4.4, find and findAndModify projection cannot contain both a $slice of an array and a field embedded in the array.
For example, consider a collection inventory that contains an array field instock:
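One such document might look like this (values illustrative):

```javascript
db.inventory.insertOne( { item: "journal", instock: [ { warehouse: "A", qty: 35 }, { warehouse: "C", qty: 15 } ] } )
```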
Starting in MongoDB 4.4, the following operation fails with a Path collision error:
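A sketch of the colliding projection:

```javascript
// Rejected in 4.4+: a $slice of instock combined with a field embedded in instock.
db.inventory.find( {}, { "instock": { $slice: 1 }, "instock.warehouse": 0 } )
```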
In previous versions, the projection applies both projections and returns the first element ($slice: 1) in the instock array but suppresses the warehouse field in the projected element. Starting in MongoDB 4.4, to achieve the same result, use the db.collection.aggregate() method with two separate $project stages.
$ Positional Operator and $slice Restriction

Starting in MongoDB 4.4, find and findAndModify projection cannot include a $slice projection expression as part of a $ projection expression.
For example, starting in MongoDB 4.4, the following operation is invalid:
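A sketch of such an invalid operation:

```javascript
// Rejected in 4.4+: $slice nested inside a positional ($) projection.
db.inventory.find( { "instock.qty": { $gt: 25 } }, { "instock.$": { $slice: 1 } } )
```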
MongoDB already has a restriction where top-level field names cannot start with the dollar sign ($).

In previous versions, MongoDB returns the first element (instock.$) in the instock array that matches the query condition; i.e. the positional projection "instock.$" takes precedence and the $slice: 1 is a no-op. The "instock.$": { $slice: 1 } does not exclude any other document field.
Sessions and $external Username Limit

Changed in version 3.6.3: To use sessions with $external authentication users (i.e. Kerberos, LDAP, x.509 users), the usernames cannot be greater than 10k bytes.
Session Idle Timeout

Sessions that receive no read or write operations for 30 minutes or that are not refreshed using refreshSessions within this threshold are marked as expired and can be closed by the MongoDB server at any time. Closing a session kills any in-progress operations and open cursors associated with the session. This includes cursors configured with noCursorTimeout or a maxTimeMS greater than 30 minutes.
Consider an application that issues a db.collection.find(). The server returns a cursor along with a batch of documents defined by the cursor.batchSize() of the find(). The session refreshes each time the application requests a new batch of documents from the server. However, if the application takes longer than 30 minutes to process the current batch of documents, the session is marked as expired and closed. When the application requests the next batch of documents, the server returns an error as the cursor was killed when the session was closed.
For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation within an explicit session using Session.startSession() and periodically refresh the session using the refreshSessions command. For example:
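A sketch of this pattern in the mongo shell (the database and collection names are illustrative):

```javascript
var session = db.getMongo().startSession()
var sessionId = session.getSessionId().id

var cursor = session.getDatabase("examples").getCollection("data").find().noCursorTimeout()
var refreshTimestamp = new Date()  // record when the operation started

while (cursor.hasNext()) {
  // Refresh the session if more than 5 minutes have passed since the last refresh.
  if ((new Date() - refreshTimestamp) / 1000 > 300) {
    print("refreshing session")
    db.adminCommand({ refreshSessions: [sessionId] })
    refreshTimestamp = new Date()
  }
  // Process the next document normally.
  var doc = cursor.next()
}
```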
In the example operation, the db.collection.find() method is associated with an explicit session. The cursor is configured with noCursorTimeout() to prevent the server from closing the cursor if idle. The while loop includes a block that uses refreshSessions to refresh the session every 5 minutes. Since the session will never exceed the 30 minute idle timeout, the cursor can remain open indefinitely.
For MongoDB drivers, defer to the driver documentation for instructions and syntax for creating sessions.