This page lists production considerations for running transactions. These apply whether you run transactions on replica sets or sharded clusters. For considerations specific to sharded clusters, see also Production Considerations (Sharded Clusters).
To use transactions on MongoDB 4.2 deployments (replica sets and sharded clusters), clients must use MongoDB drivers updated for MongoDB 4.2.
Distributed Transactions and Multi-Document Transactions
Starting in MongoDB 4.2, the two terms are synonymous: multi-document transactions, whether on sharded clusters or replica sets, are also known as distributed transactions.
To use transactions, the featureCompatibilityVersion for all members of the deployment must be at least:
| Deployment | Minimum featureCompatibilityVersion |
| --- | --- |
| Replica Set | 4.0 |
| Sharded Cluster | 4.2 |
To check the fCV for a member, connect to the member and run the following command:
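```javascript
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
```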
For more information, see the setFeatureCompatibilityVersion reference page.
By default, a transaction must have a runtime of less than one minute. You can modify this limit using the transactionLifetimeLimitSeconds parameter for the mongod instances. For sharded clusters, the parameter must be modified for all shard replica set members. Transactions that exceed this limit are considered expired and are aborted by a periodic cleanup process.
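For example, a minimal sketch of raising the limit at runtime with setParameter (the 90-second value here is illustrative):

```javascript
// Allow transactions to run for up to 90 seconds before the periodic cleanup
// process considers them expired. For sharded clusters, run this against
// every shard replica set member.
db.adminCommand( { setParameter: 1, transactionLifetimeLimitSeconds: 90 } )
```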
For sharded clusters, you can also specify a maxTimeMS limit on commitTransaction. For more information, see Sharded Clusters Transactions Time Limit.
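As an illustration, the commitTransaction command accepts a maxTimeMS field when issued against the admin database; the lsid and txnNumber values below are placeholders that drivers normally manage for you:

```javascript
// Commit the current transaction, allowing at most 5 seconds for the commit.
db.adminCommand( {
    commitTransaction: 1,
    lsid: session.getSessionId(),   // placeholder: supplied by the driver/session
    txnNumber: NumberLong(1),       // placeholder: supplied by the driver/session
    autocommit: false,
    writeConcern: { w: "majority" },
    maxTimeMS: 5000
} )
```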
To prevent storage cache pressure from negatively impacting performance:

- When you abandon a transaction, abort the transaction.
- When you encounter an error during an individual operation in the transaction, abort and retry the transaction.

The transactionLifetimeLimitSeconds limit also ensures that expired transactions are aborted periodically to relieve storage cache pressure.
You cannot run transactions on a sharded cluster that has a shard with writeConcernMajorityJournalDefault set to false (such as a shard with a voting member that uses the in-memory storage engine).
Transactions whose write operations span multiple shards will error and abort if any transaction operation reads from or writes to a shard that contains an arbiter.
See also 3-Member Primary-Secondary-Arbiter Architecture for transaction restrictions on shards that have disabled read concern majority.
For a three-member replica set with a primary-secondary-arbiter (PSA) architecture or a sharded cluster with three-member PSA shards, you may have disabled read concern "majority" to avoid cache pressure.
"snapshot"
for the transaction. You can only use read concern "local"
or "majority"
for the transaction. If you use read concern "snapshot"
, the transaction errors and aborts.
"majority"
.You can specify read concern "local"
or "majority"
or "snapshot"
even in the replica set has disabled read concern “majority”.
However, if you are planning to transition to a sharded cluster with disabled read concern majority shards, you may wish to avoid using read concern "snapshot"
.
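For reference, the read concern for a transaction is specified when the transaction starts; a minimal sketch in the shell (the read and write concern levels here are illustrative):

```javascript
const session = db.getMongo().startSession();

// Use "local" or "majority" if any shard involved in the transaction has
// read concern "majority" disabled; "snapshot" would error and abort there.
session.startTransaction( {
    readConcern: { level: "majority" },
    writeConcern: { w: "majority" }
} );
```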
Tip
To check whether read concern "majority" is disabled, you can run db.serverStatus() on the mongod instances and check the storageEngine.supportsCommittedReads field. If false, read concern "majority" is disabled.
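For example, from the shell:

```javascript
// Prints false if read concern "majority" is disabled on this mongod.
db.serverStatus().storageEngine.supportsCommittedReads
```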
By default, transactions wait up to 5 milliseconds to acquire locks required by the operations in the transaction. If the transaction cannot acquire its required locks within the 5 milliseconds, the transaction aborts.
Transactions release all locks upon abort or commit.
Tip
When creating or dropping a collection immediately before starting a transaction, if the collection is accessed within the transaction, issue the create or drop operation with write concern "majority" to ensure that the transaction can acquire the required locks.
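For instance, a minimal sketch that creates a hypothetical hr.employees collection before a transaction touches it:

```javascript
// Create the collection with write concern "majority" so a transaction
// started immediately afterwards can acquire the locks it needs.
db.getSiblingDB("hr").createCollection(
    "employees",
    { writeConcern: { w: "majority" } }
)
```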
You can use the maxTransactionLockRequestTimeoutMillis parameter to adjust how long transactions wait to acquire locks. Increasing maxTransactionLockRequestTimeoutMillis allows operations in the transaction to wait the specified time to acquire the required locks. This can help obviate transaction aborts on momentary concurrent lock acquisitions, like fast-running metadata operations. However, this could possibly delay the abort of deadlocked transaction operations.
You can also use operation-specific timeouts by setting maxTransactionLockRequestTimeoutMillis to -1.
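A sketch of both settings with setParameter (the 20 millisecond value is illustrative):

```javascript
// Wait up to 20 ms (instead of the 5 ms default) for transaction locks.
db.adminCommand( { setParameter: 1, maxTransactionLockRequestTimeoutMillis: 20 } )

// Or use each operation's own timeout when acquiring locks.
db.adminCommand( { setParameter: 1, maxTransactionLockRequestTimeoutMillis: -1 } )
```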
If a multi-document transaction is in progress, new DDL operations that affect the same database(s) or collection(s) wait behind the transaction. While these pending DDL operations exist, new transactions that access the same database(s) or collection(s) as the pending DDL operations cannot obtain the required locks and will abort after waiting maxTransactionLockRequestTimeoutMillis. In addition, new non-transaction operations that access the same database(s) or collection(s) will block until they reach their maxTimeMS limit.
Consider the following scenarios:
While an in-progress transaction is performing various CRUD operations on the employees collection in the hr database, an administrator issues the db.collection.createIndex() DDL operation against the employees collection. createIndex() requires an exclusive collection lock on the collection.

Until the in-progress transaction completes, the createIndex() operation must wait to obtain the lock. Any new transaction that affects the employees collection and starts while the createIndex() is pending must wait until after createIndex() completes.

The pending createIndex() DDL operation does not affect transactions on other collections in the hr database. For example, a new transaction on the contractors collection in the hr database can start and complete as normal.
While an in-progress transaction is performing various CRUD operations on the employees collection in the hr database, an administrator issues the collMod DDL operation against the contractors collection in the same database. collMod requires a database lock on the parent hr database.

Until the in-progress transaction completes, the collMod operation must wait to obtain the lock. Any new transaction that affects the hr database or any of its collections and starts while the collMod is pending must wait until after collMod completes.
In either scenario, if the DDL operation remains pending for more than maxTransactionLockRequestTimeoutMillis, pending transactions waiting behind that operation abort. That is, the value of maxTransactionLockRequestTimeoutMillis must at least cover the time required for the in-progress transaction and the pending DDL operation to complete.
If a transaction is in progress and a write outside the transaction modifies a document that an operation in the transaction later tries to modify, the transaction aborts because of a write conflict.
If a transaction is in progress and has taken a lock to modify a document, when a write outside the transaction tries to modify the same document, the write waits until the transaction ends.
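A minimal sketch of retrying a transaction that aborts because of a write conflict, assuming a hypothetical hr.employees collection and relying on the TransientTransactionError label that write-conflict aborts carry:

```javascript
const session = db.getMongo().startSession();
const employees = session.getDatabase("hr").employees;

while (true) {
    session.startTransaction( { writeConcern: { w: "majority" } } );
    try {
        employees.updateOne( { _id: 1 }, { $set: { status: "Inactive" } } );
        session.commitTransaction();
        break;  // committed successfully
    } catch (error) {
        session.abortTransaction();
        // Retry only errors labeled transient (for example, write conflicts);
        // rethrow everything else.
        if ( !error.hasOwnProperty("errorLabels") ||
             !error.errorLabels.includes("TransientTransactionError") ) {
            throw error;
        }
    }
}
session.endSession();
```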
Read operations inside a transaction can return stale data. That is, read operations inside a transaction are not guaranteed to see writes performed by other committed transactions or non-transactional writes. For example, consider the following sequence:

1. A transaction is in progress.
2. A write outside the transaction deletes a document.
3. A read operation inside the transaction can still read the now-deleted document, since the operation uses a snapshot from before the write.
To avoid stale reads inside transactions for a single document, you can use the db.collection.findOneAndUpdate() method. For example:
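A minimal sketch inside an open transaction, assuming a hypothetical hr.employees collection; the $set below simply touches the document so that it is locked and read at its current state, and an outside modification causes a write conflict (aborting the transaction) rather than a stale read:

```javascript
// Inside a transaction already started on `session`:
const employees = session.getDatabase("hr").employees;

const employeeDoc = employees.findOneAndUpdate(
    { _id: 1, status: "Active" },
    { $set: { lastChecked: new Date() } },   // illustrative touch field
    { returnNewDocument: true }
);
```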
Chunk migration acquires exclusive collection locks during certain stages.
If an ongoing transaction has a lock on a collection and a chunk migration that involves that collection starts, these migration stages must wait for the transaction to release the locks on the collection, thereby impacting the performance of chunk migrations.
If a chunk migration interleaves with a transaction (for instance, if a transaction starts while a chunk migration is already in progress and the migration completes before the transaction takes a lock on the collection), the transaction errors during the commit and aborts.
Depending on how the two operations interleave, some sample errors include (the error messages have been abbreviated):
an error from cluster data placement change ... migration commit in progress for <namespace>
Cannot find shardId the chunk belonged to at cluster time ...
During the commit for a transaction, outside read operations may try to read the same documents that will be modified by the transaction. If the transaction writes to multiple shards, then during the commit attempt across the shards, outside reads that use read concern "snapshot" or "linearizable", or that are part of causally consistent sessions (i.e. include afterClusterTime), wait for all writes of the transaction to be visible.

To use transactions on MongoDB 4.2 deployments (replica sets and sharded clusters), clients must use MongoDB drivers updated for MongoDB 4.2.
On sharded clusters with multiple mongos instances, performing transactions with drivers updated for MongoDB 4.0 (instead of MongoDB 4.2) will fail and can result in errors, including:
Note
Your driver may return a different error. Refer to your driver’s documentation for details.
| Error Code | Error Message |
| --- | --- |
| 251 | cannot continue txnId -1 for session ... with txnId 1 |
| 50940 | cannot commit with no participants |