$unionWith Stage
MongoDB 4.4 adds the $unionWith aggregation stage, providing the ability to combine pipeline results from multiple collections into a single result set.
For details, see $unionWith.
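As an illustration, a minimal sketch of the new stage (collection and field names are assumptions, not taken from this page):

```javascript
// Combine this year's and last year's orders into a single result set,
// then group the union by customer.
db.orders_2020.aggregate([
  { $unionWith: { coll: "orders_2019" } },
  { $group: { _id: "$customerId", total: { $sum: "$amount" } } }
])
```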
Starting in version 4.4, MongoDB provides the following new operators that allow users to define custom aggregation expressions:
With the addition of these new operators, you can use aggregation to write custom JavaScript expressions instead of relying on mapReduce and $where.
Note
Even before version 4.4, various map-reduce expressions could also be rewritten using other aggregation pipeline operators, such as $group, $merge, etc., without requiring custom functions.
For more information, see Map-Reduce to Aggregation Pipeline.
| Operator | Description |
|---|---|
| $accumulator | Returns the result of a user-defined accumulator operator. |
| $binarySize | Returns the size of a given string or binary data value's content in bytes. |
| $bsonSize | Returns the size in bytes of a given document (i.e. BSON type Object) when encoded as BSON. |
| $first | Returns the first element in an array. |
| $function | Defines a custom aggregation expression. |
| $last | Returns the last element in an array. |
| $isNumber | Returns boolean true if the specified expression resolves to a numeric BSON type (integer, decimal, double, or long); returns false otherwise. |
| $replaceOne | Replaces the first instance of a matched string in a given input. |
| $replaceAll | Replaces all instances of a matched string in a given input. |
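For example, a hedged sketch of $function computing a field with custom JavaScript (collection and field names are illustrative):

```javascript
db.players.aggregate([
  {
    $addFields: {
      greeting: {
        $function: {
          body: function(name) { return "Hello, " + name; },  // custom JavaScript expression
          args: [ "$name" ],
          lang: "js"
        }
      }
    }
  }
])
```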
$out
Starting in MongoDB 4.4, $out can output to a collection in a different database. In earlier versions, $out can only output to a collection in the same database where the aggregation is run.
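A minimal sketch of the new document form of $out (database and collection names are assumptions):

```javascript
// Run the aggregation in the "sales" database but write the results
// to the "q1orders" collection of the separate "reporting" database.
db.getSiblingDB("sales").orders.aggregate([
  { $match: { quarter: "2020Q1" } },
  { $out: { db: "reporting", coll: "q1orders" } }
])
```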
$indexStats
Starting in MongoDB 4.4 (also available starting in 4.2.4), $indexStats includes the following fields in its output:
$merge
Starting in MongoDB 4.4, $merge can output to the same collection that is being aggregated. You can also output to a collection which appears in other stages of the pipeline, such as $lookup.
Versions of MongoDB prior to 4.4 did not allow $merge to output to the same collection as the collection being aggregated.
Warning
When $merge outputs to the same collection that is being aggregated, documents may get updated multiple times or the operation may result in an infinite loop. This behavior occurs when the update performed by $merge changes the physical location of documents stored on disk. When the physical location of a document changes, $merge may view it as an entirely new document, resulting in additional updates. For more information on this behavior, see Halloween Problem.
$planCacheStats Changes
Starting in version 4.4:
- The $planCacheStats stage can be run on mongos instances as well as on mongod instances. In 4.2, the $planCacheStats stage can only run on mongod instances.
- $planCacheStats includes new fields: the host field and, when run against a mongos, the shard field.
- The mongo shell provides the method PlanCache.list() as a wrapper for the $planCacheStats aggregation stage.
- MongoDB removes the deprecated planCacheListPlans and planCacheListQueryShapes commands, and the PlanCache.getPlansByQuery() and PlanCache.listQueryShapes() methods. Use $planCacheStats or PlanCache.list() instead.
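For instance, a sketch of the two interfaces (the orders collection name is illustrative):

```javascript
// $planCacheStats must be the first stage in the pipeline.
db.orders.aggregate([ { $planCacheStats: {} } ])

// PlanCache.list() is the new shell wrapper for the same information.
db.orders.getPlanCache().list()
```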
$collStats Changes
Starting in MongoDB 4.4, $collStats accepts the queryExecStats field as an argument document. Providing this field returns the following fields in the output:
The collectionScans field contains an embedded document bearing the following fields:
| Field Name | Description |
|---|---|
| total | A 64-bit integer giving the total number of queries that performed a collection scan. The total consists of queries that did and did not use a tailable cursor. |
| nonTailable | A 64-bit integer giving the number of queries that performed a collection scan that did not use a tailable cursor. |
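A minimal sketch of requesting these statistics (the collection name is an assumption):

```javascript
db.orders.aggregate([ { $collStats: { queryExecStats: {} } } ])
```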
explain Changes
Starting in MongoDB 4.4, when you run the db.collection.explain().aggregate() method in executionStats and allPlansExecution modes, each pipeline stage listed in the explain output includes nReturned and executionTimeMillisEstimate.
Starting in MongoDB 4.4, a secondary performing initial sync can attempt to resume the sync process if interrupted by a transient (i.e. temporary) network error, collection drop, or collection rename. The sync source must also run MongoDB 4.4 to support resumable initial sync. If the sync source runs MongoDB 4.2 or earlier, the secondary must restart the initial sync process as if it encountered a non-transient network error.
By default, the secondary tries to resume initial sync for 24 hours. MongoDB 4.4 adds the initialSyncTransientErrorRetryPeriodSeconds server parameter for controlling the amount of time the secondary attempts to resume initial sync. If the secondary cannot successfully resume the initial sync process during the configured time period, it selects a new healthy source from the replica set and restarts the initial synchronization process from the beginning.
Prior to MongoDB 4.4, the secondary would restart the entire initial sync if it encountered an error during the process.
Starting in MongoDB 4.4, sync from sources send a continuous stream of oplog entries to their syncing secondaries.
Prior to MongoDB 4.4, secondaries fetched batches of oplog entries by issuing a request to their sync from source and waiting for a response. This required a network roundtrip for each batch of oplog entries. MongoDB 4.4 adds the oplogFetcherUsesExhaust startup parameter for disabling streaming replication and using the older replication behavior.
For details, see Streaming Replication.
Starting in MongoDB 4.4, the rollback directory for a collection is named after the collection's UUID rather than the collection namespace.
For details, see Rollback Data.
Starting in MongoDB 4.4, you can specify the minimum number of hours to preserve an oplog entry. The mongod only removes an oplog entry if:
- the oplog has reached the maximum configured size, and
- the oplog entry is older than the configured number of hours.
By default MongoDB does not set a minimum oplog retention period and automatically truncates the oplog starting with the oldest entries to maintain the configured maximum oplog size.
To configure the minimum oplog retention period when starting the mongod, either:
- Add the storage.oplogMinRetentionHours setting to the mongod configuration file, or
- Add the --oplogMinRetentionHours command line option.

To configure the minimum oplog retention period on a running mongod, use replSetResizeOplog. Setting the minimum oplog retention period while the mongod is running overrides any values set on startup. You must update the value of the corresponding configuration file setting or command line option to persist those changes through a server restart.
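For example, a hedged sketch of changing the retention period on a running mongod (the size and retention values are illustrative):

```javascript
// Resize the oplog to 16000 MB and keep entries for at least 24 hours.
db.adminCommand({ replSetResizeOplog: 1, size: 16000, minRetentionHours: 24 })
```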
Important
The oplog can grow without constraint so as to retain oplog entries for the configured number of hours. This may result in reduction or exhaustion of system disk space due to a combination of high write volume and large retention period.
Requires featureCompatibilityVersion 4.4+
Each mongod in the replica set or sharded cluster must have featureCompatibilityVersion set to at least 4.4 to start index builds simultaneously across replica set members.
MongoDB 4.4 running featureCompatibilityVersion: "4.2" builds indexes on the primary before replicating the index build to secondaries.
Starting with MongoDB 4.4, index builds on a replica set or sharded cluster build simultaneously across all data-bearing replica set members. For sharded clusters, the index build occurs only on shards containing data for the collection being indexed. The primary requires a minimum number of data-bearing voting members (i.e. a commit quorum), including itself, that must complete the build before marking the index as ready for use.
By default, index builds use a commit quorum of all data-bearing voting members. To start an index build with a non-default commit quorum, MongoDB 4.4 adds the commitQuorum parameter to createIndexes or its shell helpers db.collection.createIndex() and db.collection.createIndexes().
To modify the quorum required for an in-progress index build, MongoDB 4.4 introduces the new setIndexCommitQuorum command.
See Index Builds in Replicated Environments for more information.
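A minimal sketch of both commands (collection, index, and quorum values are assumptions):

```javascript
// Start an index build that only needs a majority of data-bearing
// voting members to finish before the index is marked ready.
db.runCommand({
  createIndexes: "inventory",
  indexes: [ { key: { sku: 1 }, name: "sku_1" } ],
  commitQuorum: "majority"
})

// Change the commit quorum of the in-progress build.
db.runCommand({
  setIndexCommitQuorum: "inventory",
  indexes: [ "sku_1" ],
  commitQuorum: "votingMembers"
})
```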
replSetReconfig
Starting in MongoDB 4.4, the replSetReconfig command waits until a majority of voting members install the replica configuration before returning success. A voting member is any replica member where members[n].votes is 1, including arbiters. First, the operation waits until the current configuration is committed before installing the new configuration on the primary. The operation then waits until a majority of voting members install the new configuration before returning successfully. See Reconfiguration Waits Until a Majority of Members Install the Replica Configuration for more information.
replSetReconfig waits indefinitely for a majority of voting members to install the configuration by default. MongoDB 4.4 also adds the optional maxTimeMS parameter to replSetReconfig for specifying the maximum amount of time to wait for the operation to return successfully.
Starting in MongoDB 4.4, the replSetReconfig command allows adding or removing no more than 1 voting member at a time. To add or remove multiple voting members, issue a series of replSetReconfig or rs.reconfig() operations to add or remove one member at a time. See Reconfiguration Can Add or Remove No More than One Voting Member at a Time for more information.
replSetGetConfig
Starting in MongoDB 4.4, the replSetGetConfig command can specify a new option commitmentStatus: true when run on the primary. When run with the option, the command includes a commitmentStatus field in its output. This output field indicates whether the replica set's previous reconfig has been committed, so that the replica set is ready to be reconfigured again. For more information, see the replSetGetConfig command.

MongoDB 4.4 adds the term field to the replica set configuration document. Replica set members use term and version to achieve consensus on the "newest" replica configuration. Setting featureCompatibilityVersion (fCV): "4.4" implicitly performs a replSetReconfig to add the term field to the configuration document and blocks until the new configuration propagates to a majority of replica set members. Similarly, downgrading to fCV: "4.2" implicitly performs a reconfiguration to remove the term field.

Starting in MongoDB 4.4, you can specify the preferred initial sync source using the initialSyncSourceReadPreference parameter. You can only set this parameter on mongod startup, using either the setParameter configuration file setting or the --setParameter command line option.
initialSyncSourceReadPreference supports the following read preference modes:
- primary
- primaryPreferred (Default for voting replica set members)
- secondary
- secondaryPreferred
- nearest (Default for newly added or non-voting replica set members)

If the replica set has disabled chaining, the default initialSyncSourceReadPreference read preference mode is primary.
You cannot specify a tag set or maxStalenessSeconds to initialSyncSourceReadPreference.
Starting in version 4.4, MongoDB provides mirrored reads to pre-warm electable secondary members’ cache with the most recently accessed data. With mirrored reads, the primary can mirror a subset of operations that it receives and send them to a subset of electable secondaries. Pre-warming the cache of a secondary can help restore performance more quickly after an election.
Note
The primary’s response to the client is not affected by the mirror reads. The mirrored reads are “fire-and-forget” operations by the primary; i.e., the primary does not await the response for the mirrored reads.
MongoDB 4.4 adds the following mirrored reads parameter. You can set the parameter at startup using the setParameter configuration file setting or the --setParameter command line option, or at runtime with the setParameter command:

| Parameter | Description |
|---|---|
| mirrorReads | Specifies the mirrored reads setting as a document of the form { samplingRate: <double> }, where samplingRate is the sampling rate of supported reads that the primary mirrors to a subset of electable secondaries. |
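For example, a hedged sketch of raising the sampling rate at runtime (the rate is illustrative):

```javascript
db.adminCommand({ setParameter: 1, mirrorReads: { samplingRate: 0.10 } })
```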
The serverStatus command and its corresponding mongo shell method db.serverStatus() return mirroredReads information if you explicitly specify the field's inclusion in the operation.
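Under that assumption, either of the following invocations should include the field (a sketch):

```javascript
db.runCommand({ serverStatus: 1, mirroredReads: 1 })
// or
db.serverStatus({ mirroredReads: 1 })
```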
Starting in 4.4, MongoDB provides the refineCollectionShardKey command. With the new command, you can refine a collection’s shard key by adding a suffix field or fields to the existing key. Refining a collection’s shard key allows for a more fine-grained data distribution and can address situations where the existing key has led to jumbo (i.e. indivisible)
chunks due to insufficient cardinality.
For example, you may have an existing orders collection with the shard key { customer_id: 1 }. You can change the shard key by adding a suffix order_id field to the shard key so that {
customer_id: 1, order_id: 1 } becomes the new shard key, allowing data distribution by both customer_id and order_id fields.
To use the refineCollectionShardKey command, the sharded cluster must have feature compatibility version (fcv) of 4.4. For more information, see the refineCollectionShardKey command.
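Continuing the example above, a hedged sketch of the command (the database name is an assumption):

```javascript
// Run against the admin database on a mongos. The existing shard key
// fields must form the prefix of the new key.
db.adminCommand({
  refineCollectionShardKey: "test.orders",
  key: { customer_id: 1, order_id: 1 }
})
```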
Note
After you refine the shard key, it may be that not all documents in the collection have the suffix field(s). To populate the missing shard key field(s), see Missing Shard Key.
Before refining the shard key, ensure that all or most documents in the collection have the suffix fields, if possible, to avoid having to populate the field afterwards.
In earlier versions, once you select a shard key, you cannot modify the shard key.
To minimize latencies, mongos instances, by default, can use hedged reads. With hedged reads, the mongos instances can route read operations to multiple members for each queried shard and return results from the first respondent per shard. By default, mongos instances support using hedged reads. To turn off a mongos instance's support for hedged reads, set the readHedgingMode parameter for the mongos.
Hedged reads are specified per operation as part of the read preference. Non-primary read preferences support hedged reads. Read preference nearest specifies hedged read by default.
MongoDB 4.4 adds the following parameters for hedged reads:

| Parameter | Description |
|---|---|
| readHedgingMode | Enables or disables a mongos instance's support for hedged reads. |
| maxTimeMSForHedgedReads | Specifies the maximum time limit (in milliseconds) for the additional read sent to hedge a read operation. |
To specify hedged read for a read preference, MongoDB 4.4 introduces the Hedged Read Option. To set using a MongoDB driver, refer to the driver read preference API documentation.
The following mongo shell methods can accept hedge options to enable hedged read for the specified read preference:
- Mongo.setReadPref()
- cursor.readPref()
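As an illustration, a sketch of requesting a hedged read from the shell, assuming cursor.readPref() accepts the hedge options document as its third argument (the collection name is also an assumption):

```javascript
db.orders.find({ status: "shipped" }).readPref("secondaryPreferred", null, { enabled: true })
```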
serverStatus and its corresponding mongo shell method db.serverStatus() return hedgingMetrics.

balancerCollectionStatus Command
MongoDB 4.4 adds the balancerCollectionStatus command and the mongo shell helper method sh.balancerCollectionStatus(), which return information about whether the chunks of a sharded collection are balanced (i.e. do not need to be moved) as of the time the command is run or need to be moved. With the command, users can verify that initial chunk creation and migration has finished.

mongos Startup Procedure
Starting with MongoDB 4.4, mongos adds the following new default startup behavior:
- mongos will now preload a sharded cluster's routing table on startup, rather than doing so on-demand for the first incoming client connection.
- mongos will now prewarm its connection pool to shard hosts on startup, rather than doing so on-demand for incoming client connections.

This behavior results in faster servicing of initial client connections after a mongos instance is started or restarted. In particular, this allows sites that employ multiple mongos instances to restart them as necessary, or add new ones, without initial client requests to those instances needing to wait on connection establishment.
Both routing table preloading and connection pool prewarming are enabled by default.
MongoDB 4.4 adds the following parameters for controlling this behavior:
- loadRoutingTableOnStartup — Default: true (Enabled). Controls whether the mongos preloads the routing table on startup.
- warmMinConnectionsInShardingTaskExecutorPoolOnStartup — Default: true (Enabled). Controls whether the mongos prewarms its connection pools to shard hosts on startup.
- warmMinConnectionsInShardingTaskExecutorPoolOnStartupWaitMS — Default: 2000 (2 seconds). Specifies how long the mongos waits at startup for connection pool prewarming to complete; after this period, connections to the mongos are allowed regardless of established connection pool size.

Running flushRouterConfig is no longer required after executing the movePrimary or dropDatabase commands. These two commands now automatically refresh a sharded cluster's routing table as needed when run. Manually issuing the flushRouterConfig command is still recommended in the cases described under flushRouterConfig Considerations.
Starting in MongoDB 4.4, you can shard a collection using a compound shard key with a single hashed field. Prior to 4.4, MongoDB did not support compound shard keys with a hashed field.
Compound hashed sharding supports features like zone sharding, where the prefix (i.e. first) non-hashed field or fields support zone ranges while the hashed field supports more even distribution of the sharded data. For example, the following operation shards a collection on a compound hashed shard key that supports zoned sharding:
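The referenced operation is not shown above; a minimal sketch (namespace and field names are assumptions):

```javascript
// Non-hashed prefix field "region" can back zone ranges; the hashed
// "customerId" field spreads documents evenly within each zone.
sh.shardCollection("sales.orders", { region: 1, customerId: "hashed" })
```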
Compound hashed sharding also supports shard keys with a hashed prefix for resolving data distribution issues related to monotonically increasing fields. For example, the following operation shards a collection on a compound hashed shard key where the hashed field is the shard key prefix:
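A sketch under the same assumptions about names:

```javascript
// Hashing the monotonically increasing "timestamp" prefix avoids
// routing all new inserts to a single chunk.
sh.shardCollection("monitoring.events", { timestamp: "hashed", clientId: 1 })
```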
Starting in MongoDB 4.4, the following changes improve chunk migrations and orphaned document cleanup resiliency during failover:
- Chunk ranges awaiting cleanup after a chunk migration are now persisted in the config.rangeDeletions collection and replicated throughout the shard. In the event of a failover, the shard's new primary reads the documents in the config.rangeDeletions collection and resumes deleting the corresponding ranges. The document that describes a range awaiting cleanup is deleted from the config.rangeDeletions collection after the range is deleted.
- The cleanupOrphaned command no longer deletes orphaned documents from a shard. Instead, cleanupOrphaned waits for orphaned documents that are scheduled for deletion from a shard to be deleted.

Set the disableResumableRangeDeleter parameter to true on a shard's primary to pause range deletion on the shard.
Starting in MongoDB 4.4, the config server primary, by default, checks for index inconsistencies across the shards for sharded collections. The command serverStatus returns the field shardedIndexConsistency to report on index inconsistencies when run on the config server primary.
To configure the index consistency checks, MongoDB provides the following parameters:
| Parameter | Description |
|---|---|
| enableShardedIndexConsistencyCheck | Enable or disable the index consistency checks. |
| shardedIndexConsistencyCheckIntervalMS | The interval at which the config server's primary checks the index consistency of sharded collections. |
removeShard Operations
Starting in MongoDB 4.4, you can have more than one removeShard operation in progress.
In earlier versions, removeShard returns an error if another removeShard operation is in progress.
For chunks that are too large to migrate, starting in MongoDB 4.4:
- The new balancer setting attemptToBalanceJumboChunks allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo. See Balance Chunks that Exceed Size Limit for details.
- The moveChunk command can specify a new option forceJumbo to allow for the migration of chunks that are too large to move. The chunks may or may not be labeled jumbo.

Starting in 4.4, if there is a stale chunk, the catalog cache is only refreshed when routers access a shard that previously had or currently has that chunk.
Prior to MongoDB 4.4, any stale chunk caused the entire chunk distribution for a collection to be marked as stale and forced all routers who contact the shard to refresh their shard catalog cache. MongoDB 4.4 adds the enableFinerGrainedCatalogCacheRefresh startup parameter for disabling catalog cache refresh for only a targeted shard and using the older catalog cache refresh behavior. The enableFinerGrainedCatalogCacheRefresh parameter defaults to true.
system.sessions Collection
Starting in version 4.4 (and 4.2.7), MongoDB automatically splits the system.sessions collection into at least 1024 chunks and distributes the chunks uniformly across shards in the cluster.
Starting in MongoDB 4.4, as part of making find and findAndModify projection consistent with aggregation's $project stage:
- find and findAndModify projection can accept aggregation expressions and aggregation syntax, including the use of literals and aggregation variables. With the use of aggregation expressions and syntax, you can project new fields or project existing fields with new values.
- find and findAndModify projection can specify embedded fields using the nested form, e.g. { field: { nestedfield: 1 } }, as well as dot notation. In earlier versions, you can only use the dot notation.

For more information, see:
- db.collection.find()
- db.collection.findOneAndDelete()
- db.collection.findOneAndReplace()
- db.collection.findOneAndUpdate()
- db.collection.findAndModify()
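As an illustration of the new projection capabilities, a sketch (collection and field names are assumptions):

```javascript
// Project an existing field, a computed field built from an aggregation
// expression, and an embedded field using the nested form.
db.inventory.find(
  {},
  {
    item: 1,
    discountPrice: { $multiply: [ "$price", 0.9 ] },
    dimensions: { uom: 1 }
  }
)
```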
$meta Operator

$meta Keyword Support
Starting in MongoDB 4.4, the $meta operator adds support for retrieving the indexKey metadata. The indexKey metadata is for debugging purposes only and not for application logic. See $meta for more information.

{ $meta: "textScore" } Usage with find()
Starting in version 4.4, MongoDB changes how { $meta: "textScore" } can be used with find operations. For more information, see Text Score Metadata $meta: "textScore".
Starting in MongoDB 4.4 with feature compatibility version (fcv) "4.4", you can create collections and indexes inside a multi-document transaction unless the transaction is a cross-shard write transaction.
When creating a collection inside a transaction:
- You can implicitly create a collection, such as with an insert operation against a non-existing collection, or an update or findAndModify operation with upsert: true against a non-existing collection.
- You can explicitly create a collection using the create command or its helper db.createCollection().

When creating an index inside a transaction, the index to create must be either on a non-existing collection (the collection is created as part of the operation) or on a new empty collection created earlier in the same transaction.
For more details, see Create Collections and Indexes In a Transaction.
MongoDB 4.4 adds a new parameter shouldMultiDocTxnCreateCollectionAndIndexes which can enable (default) or disable collection and index creation inside a transaction. When setting the parameter for a sharded cluster, set the parameter on all shards.
For explicit creation of a collection or an index inside a transaction, the transaction read concern level must be "local". Explicit creation is through:
| Command | Method |
|---|---|
| create | db.createCollection() |
| createIndexes | db.collection.createIndex(), db.collection.createIndexes() |
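A hedged sketch of creating a collection and an index inside a transaction (database, collection, and field names are assumptions):

```javascript
const session = db.getMongo().startSession();
const sessionDb = session.getDatabase("mydb");

// Explicit creation inside a transaction requires read concern "local".
session.startTransaction({ readConcern: { level: "local" }, writeConcern: { w: "majority" } });
try {
  sessionDb.createCollection("auditLog");                      // explicit collection creation
  sessionDb.auditLog.createIndex({ createdAt: 1 });            // index on the newly created collection
  sessionDb.auditLog.insertOne({ createdAt: new Date(), op: "example" });
  session.commitTransaction();
} catch (e) {
  session.abortTransaction();
  throw e;
} finally {
  session.endSession();
}
```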
$sort Stability Changes
Starting in MongoDB 4.4, the sort() method now uses the same sort algorithm as the $sort aggregation stage. With this change, queries which perform a sort() on fields that contain duplicate values are much more likely to result in inconsistent sort orders for those values.
To guarantee sort stability when using sort() on duplicate values, include an additional field in your sort that contains exclusively unique values.
This can be accomplished easily by adding the _id field to your sort.
See Sort Stability for more information.
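For example, a sketch that adds _id as a tie-breaker (collection and field names are assumptions):

```javascript
// borough may contain duplicates; _id is unique, so the overall order is stable.
db.restaurants.find().sort({ borough: 1, _id: 1 })
```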
mongokerberos
MongoDB Enterprise 4.4 provides a new mongokerberos tool for validating your platform's Kerberos configuration for use with MongoDB, and for testing end-to-end client authentication through Kerberos. When run, mongokerberos returns a report indicating any issues encountered, and provides potential advice for resolving them. mongokerberos is available in MongoDB Enterprise only.
Starting in version 4.4, MongoDB enables, by default, the use of OCSP (Online Certificate Status Protocol) to check for certificate revocation. The use of OCSP eliminates the need to periodically download a Certificate Revocation List (CRL) and restart the mongod/mongos with the updated CRL.
In versions 4.0 and 4.2, the use of OCSP is available only through the use of system certificate store on Windows or macOS.
As part of its OCSP support, MongoDB 4.4 supports the following on Linux:
- OCSP stapling: mongod and mongos instances attach or "staple" the OCSP status response to their certificates when providing these certificates to clients during the TLS/SSL handshake. By including the OCSP status response with the certificates, OCSP stapling obviates the need for clients to make a separate request to retrieve the OCSP status of the provided certificates.

MongoDB 4.4 adds the following OCSP parameters. You can set these parameters at startup using the setParameter configuration file setting or the --setParameter command line option:
| Parameter | Description |
|---|---|
| ocspEnabled | Enables or disables OCSP support. |
| ocspStaplingRefreshPeriodSecs | Specifies the number of seconds to wait before refreshing the stapled OCSP status response. Changed in version 4.4.1: renamed from ocspValidationRefreshPeriodSecs. |
| tlsOCSPStaplingTimeoutSecs | Specifies the maximum number of seconds the mongod/mongos instance should wait to receive the OCSP status response for its certificates. |
| tlsOCSPVerifyTimeoutSecs | Specifies the maximum number of seconds that the mongod/mongos should wait for the OCSP response when verifying client certificates. |
Starting in MongoDB 4.4, mongod/mongos logs a warning on connection if the presented x.509 certificate expires within 30 days of the mongod/mongos system clock. Specifically, the following connections to a mongod or mongos can trigger x.509 certificate expiry warnings:
- A mongo shell or an application using a MongoDB driver establishing a TLS connection or performing x.509 client authentication with a certificate expiring in less than 30 days (i.e. the certificate specified to mongo --tlsCertificateKeyFile or tlsCertificateKeyFile).
- A mongod cluster member performing x.509 membership authentication with a certificate expiring in less than 30 days (i.e. the certificate specified to net.tls.clusterFile, net.tls.clusterCertificateSelector, mongod --tlsClusterFile, or mongod --tlsClusterCertificateSelector).
- A mongos cluster member performing x.509 membership authentication with a certificate expiring within 30 days (i.e. the certificate specified to net.tls.clusterFile, net.tls.clusterCertificateSelector, mongos --tlsClusterFile, or mongos --tlsClusterCertificateSelector).

The warning log message resembles the following:
Consider proactively renewing client x.509 certificates nearing expiration to ensure continued connectivity to the cluster.
MongoDB 4.4 adds the tlsX509ExpirationWarningThresholdDays parameter for controlling certificate expiration warning threshold. Set the parameter to 0 to disable the warning. For complete documentation, see tlsX509ExpirationWarningThresholdDays.
On CentOS 8 and RHEL 8, MongoDB 4.4 (as well as versions 4.2, 4.0, and 3.6) support TLS1.3.
A mongod, mongos, or mongoldap returns an error if one of the user to Distinguished Name (DN) mappings cannot be evaluated due to networking or authentication failures to the LDAP server.
The mongod, mongos, or mongoldap rejects the connection request and does not check the remaining mappings, if any.
To specify the user to DN mapping, see:
Starting in MongoDB 4.4, mongod / mongos instances now output all log messages in structured JSON format. Log entries are written as a series of key-value pairs, where each key indicates a log message field type, such as “severity”, and each corresponding value records the associated logging information for that field type, such as “informational”.
This includes log output sent to the file, syslog, and stdout
(standard out) log destinations, as well as the output of the getLog command.
Previously, log entries were output as plaintext.
The following log messages in JSON format indicate that a mongod is listening and ready for connections:
Structured logging with key-value pairs allows for efficient log analysis by automated tools or log ingestion services, and makes programmatic log parsing easier and more powerful.
When working with MongoDB structured logging, the third-party jq command-line utility is a useful tool that allows for easy pretty-printing of log entries, and powerful key-based matching and filtering.
jq is an open-source JSON parser, and is available for Linux, Windows, and macOS.
For more information on structured logging, including a detailed examination of log entry components as well as command-line parsing examples, see Log Messages.
Starting in MongoDB 4.4, the ldapQueryPassword setParameter command accepts either a string or an array of strings. If set to an array, each password is tried until one succeeds. This can be used to perform a rollover of the LDAP account password without downtime for MongoDB.
MongoDB 4.4 adds support for the following platforms:
MongoDB 4.4 removes support for the following platforms:
See Supported Platforms for the full list of platforms and architectures supported in MongoDB 4.4.
Starting in MongoDB 4.4, the mongo shell supports using AWS IAM credentials to authenticate to a MongoDB Atlas cluster that has been configured for AWS IAM authentication.
Authenticating in this manner uses the new MONGODB-AWS authentication mechanism, and requires that you provide an AWS access key ID and a secret access key, which may be specified in the connection string or on the command-line via the --username and --password options.
Additionally, if you are using an AWS session token for authentication with temporary credentials when using an AssumeRole request, or when working with AWS resources that specify this value such as Lambda, you may provide that session token in the connection string using the AWS_SESSION_TOKEN authMechanismProperties value, or on the command-line via the --awsIamSessionToken option.
Alternatively, if the AWS access key ID, secret access key, or session token are defined on your platform using their respective AWS IAM environment variables the mongo shell will use these environment variable values to authenticate; you do not need to specify them in the connection string.
See Connection String Authentication Options for usage, and Connecting to an Atlas Cluster using MONGODB-AWS for examples.
Starting in MongoDB 4.4, the documentation for the following tools has been migrated to the MongoDB Database Tools project:
The MongoDB Database Tools use the Apache License, Version 2.0. See mongodb/mongo-tools for the source code.
Note
For documentation on previous versions of the listed tools, reference that version of the MongoDB server manual.
Quick links to older documentation:
mongokerberos Kerberos Validation Tool
MongoDB Enterprise 4.4 provides a new mongokerberos tool for validating your platform's Kerberos configuration for use with MongoDB, and for testing end-to-end client authentication through Kerberos. When run, mongokerberos returns a report indicating any issues encountered, and provides potential advice for resolving them. mongokerberos is available in MongoDB Enterprise only.
See the mongokerberos reference page for more information.
mongoreplay Removed from MongoDB Packaging
Starting in MongoDB 4.4, mongoreplay is removed from MongoDB packaging. mongoreplay and its related documentation are migrated to the mongodb-labs GitHub project. Projects in mongodb-labs are experimental and not officially supported by MongoDB.
Quick links to older documentation
Starting in version 4.4, the Windows MSI installer for both Community and Enterprise editions does not include the MongoDB Database Tools (mongoimport, mongoexport, etc). To download and install the MongoDB Database Tools on Windows, see Installing the MongoDB Database Tools.
If you were relying on the MongoDB 4.2 or previous MSI installer to install the Database Tools along with the MongoDB Server, you must now download the Database Tools separately.
MongoDB 4.4 adds support for creating compound indexes with a single hashed field. MongoDB 4.2 and earlier only supported single field hashed indexes.
The following operation creates a compound hashed index on country and _id:
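The original snippet is not shown above; one plausible form (the collection name and the choice of which field is hashed are assumptions):

```javascript
db.places.createIndex({ country: 1, _id: "hashed" })
```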
Compound hashed indexes require featureCompatibilityVersion set to 4.4.
dropIndexes Can Abort In-Progress Index Builds
If an index specified to dropIndexes is still building, dropIndexes attempts to abort the in-progress build. Aborting an index build has the same effect as dropping the built index. Prior to MongoDB 4.4, dropIndexes would return an error if the collection had any in-progress index builds. This behavior also applies to the shell helpers db.collection.dropIndex() and db.collection.dropIndexes().
To abort in-progress index builds this way:
- The indexes specified to dropIndexes / db.collection.dropIndexes() must be the entire set of in-progress builds associated to a given index builder, i.e. the indexes built by a single createIndexes or db.collection.createIndexes() operation.
- The index specified to db.collection.dropIndex() must be the only index associated to the index builder, i.e. the indexes built by a single createIndexes or db.collection.createIndexes() operation.

To drop a specific index out of a set of related in-progress builds, wait until the index builds complete and specify that index to dropIndexes or its shell helpers.
For more complete documentation, see:
- The dropIndexes command.
- The db.collection.dropIndexes() method.
- The db.collection.dropIndex() method.

drop() Can Abort In-Progress Index Builds
Starting in MongoDB 4.4, the db.collection.drop() method and drop command abort any in-progress index builds on the target collection before dropping the collection. Prior to MongoDB 4.4, attempting to drop a collection with in-progress index builds results in an error, and the collection is not dropped.
For replica sets or shard replica sets, aborting an index on the primary does not simultaneously abort secondary index builds. MongoDB attempts to abort the in-progress builds for the specified indexes on the primary and if successful creates an associated abort oplog entry. Secondary members with replicated in-progress builds wait for a commit or abort oplog entry from the primary before either committing or aborting the index build.
dropDatabase Can Abort In-Progress Index Builds
Starting in MongoDB 4.4, the db.dropDatabase() method and dropDatabase command abort any in-progress index builds on collections in the target database before dropping the database. Aborting an index build has the same effect as dropping the built index. Prior to MongoDB 4.4, attempting to drop a database that contains a collection with an in-progress index build results in an error, and the database is not dropped.
geoHaystack Index and geoSearch Command
MongoDB 4.4 deprecates the geoHaystack index and the geoSearch command. Use a 2d index with $geoNear or $geoWithin instead.
MongoDB removes the following command(s) and mongo shell helper(s):
| Removed Command | Removed Helper | Alternatives |
|---|---|---|
| cloneCollection | db.cloneCollection() | |
| planCacheListPlans | PlanCache.getPlansByQuery() | See also $planCacheStats Changes. |
| planCacheListQueryShapes | PlanCache.listQueryShapes() | See also $planCacheStats Changes. |
Starting with MongoDB 4.4, mongod and mongos support TCP Fast Open (TFO) connections by default. TFO requires that both the client and the mongod/mongos host machines support and enable TFO:
The following Windows operating systems support TFO:
Linux operating systems running Linux Kernel 3.7 or later can support inbound TFO connections.
Linux operating systems running Linux Kernel 4.11 or later can support both inbound and outbound TFO connections.
Set the value of /proc/sys/net/ipv4/tcp_fastopen to enable support for inbound and/or outbound TFO connections:
- 1 to enable only outbound TFO connections
- 2 to enable only inbound TFO connections
- 3 to enable inbound and outbound TFO connections

MongoDB 4.4 adds the following parameters for controlling TFO:
| Parameter | Description |
|---|---|
| tcpFastOpenServer | Default: true. Enables or disables support for inbound TFO connections to the mongod/mongos. |
| tcpFastOpenClient | Default: true. Linux operating systems only. Enables or disables support for outbound TFO connections from the mongod/mongos. |
| tcpFastOpenQueueSize | Default: 1024. Controls the size of the queue of pending TFO connections. |
MongoDB 4.4 adds the following counters to the output of serverStatus and db.serverStatus():
| Counter | Description |
|---|---|
| network.tcpFastOpen.kernelSetting | Linux only. Indicates kernel support for TFO. |
| network.tcpFastOpen.serverSupported | Indicates operating system support for incoming TFO connections. |
| network.tcpFastOpen.clientSupported | Indicates operating system support for outgoing TFO connections. |
| network.tcpFastOpen.accepted | Indicates the total number of accepted incoming TFO connections to the mongod/mongos since the mongod/mongos last started. |
A complete discussion of TFO is outside the scope of this documentation. For more information on TFO, start with the following external resources:
If MongoDB cannot use an index or indexes to obtain the sort order for a given cursor.sort() operation, MongoDB must perform a blocking sort on the data. A blocking sort indicates that MongoDB must consume and process all input documents to the sort before returning results. Blocking sorts do not block concurrent operations on the collection or database.
Prior to MongoDB 4.4, MongoDB returned an error if a blocking sort operation required more than 32 megabytes of system memory. Starting in MongoDB 4.4, blocking sort operations increase the limit on system memory to use for the sort operation to 100 megabytes. For blocking sort operations which require more than 100 megabytes of system memory, MongoDB returns an error unless the query specifies cursor.allowDiskUse() (New in MongoDB 4.4).
For more information on sorting and index use, see Sort and Index Use.
find Can Use Temporary Files To Support Large Non-Indexed Sorts
MongoDB 4.4 adds a new option allowDiskUse to the find command. With allowDiskUse: true, the operation can use temporary files on disk when processing a non-indexed ("blocking") sort operation that exceeds the 100 megabyte memory limit. Prior to MongoDB 4.4, a find operation with a blocking sort failed if it exceeded the memory limit while processing the sort.
For the db.collection.find() shell method with cursor.sort(), MongoDB 4.4 adds the cursor.allowDiskUse() cursor modifier.
allowDiskUse and cursor.allowDiskUse() have no effect if MongoDB can satisfy the sort using an index, or if the blocking sort requires less than 100 megabytes of memory.
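A minimal sketch in the shell (collection and field names are assumptions):

```javascript
// Allow the non-indexed sort to spill to temporary files on disk
// if it needs more than 100 megabytes of memory.
db.sales.find({ region: "EMEA" }).sort({ amount: -1 }).allowDiskUse()
```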
For instructions on enabling allowDiskUse for queries issued through a MongoDB driver, defer to the documentation for your preferred MongoDB 4.4-compatible driver.
Starting in MongoDB 4.4:
- With featureCompatibilityVersion (fCV) "4.4" or greater, MongoDB raises the limit on collection/view namespace to 255 bytes. For a collection or a view, the namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>).
- With fCV "4.2" or earlier, the maximum length of the collection/view namespace remains 120 bytes.

Starting in MongoDB 4.4:
- $currentOp and the currentOp command include dataThroughputAverage and dataThroughputLastSecond information for validate operations in progress.
- Log messages for validate operations in progress include dataThroughputAverage and dataThroughputLastSecond information.
compact Behavior Change
Starting in MongoDB 4.4, compact only blocks the following metadata operations:
- db.collection.drop
- db.collection.createIndex and db.collection.createIndexes
- db.collection.dropIndex and db.collection.dropIndexes

compact does not block MongoDB CRUD Operations for the database it is currently operating on.
Previously, compact blocked all operations for the database it was operating on, including MongoDB CRUD Operations, and was therefore only appropriate for use during scheduled maintenance periods.
force Option
Starting in MongoDB 4.4, the force flag forces compact to run on the primary in a replica set.
Previously, the force option, when set to true, enabled compact to run on the primary in a replica set; if set to false, compact returned an error when run on a primary.
mongod --repair Behavior Change
Starting in MongoDB 4.4, the mongod --repair rebuilds all indexes for the following:
In earlier versions of MongoDB, the mongod --repair option rebuilt all indexes for all collections.
serverStatus Output Change
serverStatus returns flowControl.locksPerKiloOp instead of flowControl.locksPerOp.
serverStatus includes the following new fields in its output:
- metrics.aggStageCounters (Also available in 4.2.6+ and 4.0.19+)
replSetGetStatus Output Change
replSetGetStatus returns the following new fields:
Starting in MongoDB 4.4, the mongo shell method db.auth(<username>, <password>) prompts for the password if you do not pass in the password or the passwordPrompt() method for the <password>.
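For instance (the user name is an assumption):

```javascript
// Omitting the password makes the shell prompt for it.
db.getSiblingDB("admin").auth("appUser")
// Equivalent explicit form:
db.getSiblingDB("admin").auth("appUser", passwordPrompt())
```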
$natural Sort on Views
Starting in MongoDB 4.4, you can specify a $natural sort when running a find operation against a view.
Starting in MongoDB 4.4, mongod and mongos processes running on Linux will now log a backtrace for each of their running threads upon receipt of a SIGUSR2 signal. This backtrace can be analyzed for diagnostic information or provided to MongoDB support as needed. This functionality is currently available only on the x86_64 architecture. For more information on using this feature, see Generate a Backtrace.
Starting in MongoDB 4.4, FTDC now reports utilization data for a mongod running in a container from the perspective of the container, as opposed to the host operating system. See Full Time Diagnostic Data Capture for more information.
ulimit Startup Warning
Starting in MongoDB 4.4, mongod will log a startup warning if a platform's configured ulimit value for number of open files is under 64000. Previously, a warning would only be logged if this value was under 1000. See Recommended ulimit Settings for more information.
replanReason Database Profiler Output Field
MongoDB 4.4 adds the replanReason field to database profiler output and diagnostic log messages. The replanReason field contains the reason the query system evicted a cached plan.
dbStats and collStats Output
The dbStats command and its mongo shell helper db.stats() return:
- totalSize, which is the sum of storageSize and indexSize.

The collStats command, its mongo shell helper db.collection.stats(), and the $collStats aggregation stage return:
- totalSize, which is the sum of storageSize and totalIndexSize.
- freeStorageSize, which is the amount of storage available for reuse.

Starting in MongoDB 4.4, the following database commands can accept a hint argument to specify the index to use:
- The delete command and the associated mongo shell methods db.collection.deleteOne() and db.collection.deleteMany().
- The findAndModify command and the associated mongo shell methods.
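A minimal sketch of supplying a hint (collection, filter, and index are assumptions):

```javascript
// Use the { status: 1 } index for the delete.
db.orders.deleteMany({ status: "archived" }, { hint: { status: 1 } })

// findOneAndUpdate also accepts a hint option in 4.4.
db.orders.findOneAndUpdate(
  { status: "pending" },
  { $set: { status: "processing" } },
  { hint: { status: 1 } }
)
```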
mongos
Starting in MongoDB 4.4, MongoDB allows JavaScript execution on mongos instances. To disable JavaScript execution on a mongos instance:
- Set the security.javascriptEnabled configuration option to false, or
- Use the --noscripting command-line option.

Earlier versions of MongoDB do not allow JavaScript execution on mongos instances.
Requires featureCompatibilityVersion 4.4+
Each mongod in the replica set or sharded cluster must have featureCompatibilityVersion set to at least 4.4 to configure global default read and write concern.
Starting in MongoDB 4.4, replica sets and sharded clusters support configuring global default read and write concern settings. Clients which do not explicitly specify a given read or write concern setting inherit the corresponding global default setting.
To configure default global default read or write concern, MongoDB adds the setDefaultRWConcern administrative command. For replica sets, issue the command against the primary member. For sharded clusters, issue the command from a mongos.
To retrieve the global default read or write concern settings, MongoDB adds the getDefaultRWConcern administrative command.
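A minimal sketch of setting and reading the defaults (the chosen concerns are illustrative):

```javascript
// Issue against the primary (replica set) or a mongos (sharded cluster).
db.adminCommand({
  setDefaultRWConcern: 1,
  defaultReadConcern: { level: "majority" },
  defaultWriteConcern: { w: "majority" }
})

// Inspect the currently configured defaults.
db.adminCommand({ getDefaultRWConcern: 1 })
```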
Starting in MongoDB 4.4, read concern objects may include a provenance field, indicating where the read concern originated.
The following table shows the possible read concern provenance values and their significance:
| Provenance | Description |
|---|---|
| clientSupplied | The read concern was specified in the application. |
| customDefault | The read concern originated from a custom defined default value. See setDefaultRWConcern. |
| implicitDefault | The read concern originated from the server in absence of all other read concern specifications. |
If a read operation is logged or profiled, the operation entry contains the read concern object, including the provenance field.
MongoDB does not recommend specifying the provenance field in requests to the server. This field should only be used for diagnostic purposes.
Starting in MongoDB 4.4, write concern objects may include a provenance field, indicating where the write concern originated.
The following table shows the possible write concern provenance values and their significance:
| Provenance | Description |
|---|---|
| clientSupplied | The write concern was specified in the application. |
| customDefault | The write concern originated from a custom defined default value. See setDefaultRWConcern. |
| getLastErrorDefaults | The write concern originated from the replica set's settings.getLastErrorDefaults field. |
| implicitDefault | The write concern originated from the server in absence of all other write concern specifications. |
If a write operation is logged or profiled, the operation entry contains the write concern object, including the provenance field.
MongoDB does not recommend specifying the provenance field in requests to the server. This field should only be used for diagnostic purposes.
currentOp Output
- $currentOp includes dataThroughputAverage and dataThroughputLastSecond information when reporting on validate operations in progress.
- The currentOp command includes dataThroughputAverage and dataThroughputLastSecond information when reporting on validate operations in progress.

mongod
MongoDB 4.4 Enterprise introduces two new configuration settings to enhance the initial connection to a KMIP server:
To control the number of times the mongod retries a failed initial connection to the KMIP server:
- Use the security.kmip.connectRetries configuration option, or
- Use the mongod --kmipConnectRetries command-line option.

To control the timeout, in milliseconds, to wait for the initial response from the KMIP server before giving up or retrying:
- Use the security.kmip.connectTimeoutMS configuration option, or
- Use the mongod --kmipConnectTimeoutMS command-line option.

These settings are available in MongoDB Enterprise only.
mongod
The new processUmask startup option for mongod allows you to set permissions through umask for groups and other users when honorSystemUmask is set to false.
mapReduce Ignores verbose Option
Starting with MongoDB 4.4, the mapReduce command and the db.collection.mapReduce() shell method ignore the verbose option.
explain Support for mapReduce
Starting with MongoDB 4.4, you can use the explain command or the db.collection.explain() shell method to preview the results of mapReduce or db.collection.mapReduce().
explain Results
Starting in version 4.4:
- The explain results on sharded collections include a serverInfo object for the mongos in addition to the serverInfo objects returned for each shard. This is also available in versions 4.2.2, 4.0.14, and 3.6.16.
- The explain results include the serverInfo object even when optimizedPipeline is true. In previous versions of MongoDB, explain results would occasionally not include the serverInfo object when optimizedPipeline was true. This is also available in versions 4.2.2, 4.0.14, and 3.6.16.
- The explain results include the nReturned and executionTimeMillisEstimate fields for each pipeline stage when you run the db.collection.explain().aggregate() method in executionStats and allPlansExecution modes.
comment Option Available to all Database Commands
Starting in MongoDB 4.4, all database commands support specifying a comment field.

Example
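The original snippet is not reproduced here; a hedged sketch of attaching a comment to a command (the command and comment text are illustrative):

```javascript
db.runCommand({
  find: "restaurants",
  filter: { cuisine: "Italian" },
  comment: "query issued by the nightly report job"
})
```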
Once set, the comment appears alongside records of the command in the following locations:
- mongod log messages, in the attr.command.cursor.comment field.
- Database profiler output, in the command.comment field.
- currentOp output, in the command.comment field.

A comment can be any valid BSON type (string, integer, object, array, etc.).
$ Operator
Starting in MongoDB 4.4, when using the positional $ operator, you can specify different array fields between the query document and projection document.
For example, if you insert the following document into a collection:
Starting in MongoDB 4.4, you can use the following query to project only the first element from field b for a document that matches the query specified on field a:
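The original documents and query are not shown above; a sketch with assumed field names:

```javascript
// Hypothetical document: both arrays have the same length.
db.example.insertOne({ _id: 1, a: [ 1, 2, 3 ], b: [ 4, 5, 6 ] })

// Query on array field "a", but project the positional element from array field "b".
db.example.find({ a: 2 }, { "b.$": 1 })
```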
Important
To ensure expected behavior, the arrays used in the query document and the projection document must be the same length. If the arrays are different lengths, the operation may error in certain scenarios.
In previous versions of MongoDB, this operation fails because the array field being limited must appear in the query document.
Some changes can affect compatibility and may require user actions. For a detailed list of compatibility changes, see Compatibility Changes in MongoDB 4.4.
Feature Compatibility Version
To upgrade from a 4.2 deployment, the 4.2 deployment must have featureCompatibilityVersion set to 4.2. To check the version:
For specific details on verifying and setting the featureCompatibilityVersion as well as information on other prerequisites/considerations for upgrades, refer to the individual upgrade instructions:
If you need guidance on upgrading to 4.4, MongoDB offers major version upgrade services to help ensure a smooth transition without interruption to your MongoDB application.
| In Version | Issues | Status |
|---|---|---|
| 4.4.0 | SERVER-45042: MongoDB Server Installation MSI for both Community and Enterprise no longer contains binaries for the MongoDB Database Tools. For more information, see Tools Changes. | Unresolved |
| 4.4.0 | SERVER-49694: On a sharded cluster, nearest reads or hedged reads may not be routed to a near shard replica. | Fixed in 4.4.1 |
| 4.4.0 | WT-6623: Set the connection level file id in recovery file scan | Unresolved |
To report an issue, see https://github.com/mongodb/mongo/wiki/Submit-Bug-Reports for instructions on how to file a JIRA ticket for the MongoDB server or one of the related projects.