Read Isolation, Consistency, and Recency

Isolation Guarantees

Read Uncommitted

Depending on the read concern, clients can see the results of writes before the writes are durable:

  • Regardless of a write’s write concern, other clients using "local" or "available" read concern can see the result of a write operation before the write operation is acknowledged to the issuing client.
  • Clients using "local" or "available" read concern can read data which may be subsequently rolled back during replica set failovers (see the sketch following this list).
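For illustration, here is a minimal pymongo sketch (the connection string and the test.orders collection are hypothetical) showing how a client selects the read concern these visibility rules refer to:

    # A minimal sketch, assuming a replica set at localhost and a
    # hypothetical test.orders collection.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern

    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")

    # "local" reads return the node's most recent data, which may not yet be
    # acknowledged to the writer and can be rolled back after a failover.
    orders_local = client.test.get_collection(
        "orders", read_concern=ReadConcern("local"))

    # "majority" reads return only data acknowledged by a majority of the
    # replica set members, so the documents read cannot be rolled back.
    orders_majority = client.test.get_collection(
        "orders", read_concern=ReadConcern("majority"))

    print(orders_local.find_one({"status": "pending"}))
    print(orders_majority.find_one({"status": "pending"}))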

For operations in a multi-document transaction, when a transaction commits, all data changes made in the transaction are saved and visible outside the transaction. That is, a transaction will not commit some of its changes while rolling back others.

Until a transaction commits, the data changes made in the transaction are not visible outside the transaction.

However, when a transaction writes to multiple shards, not all outside read operations need to wait for the result of the committed transaction to be visible across the shards. For example, if a transaction is committed and write 1 is visible on shard A but write 2 is not yet visible on shard B, an outside read at read concern "local" can read the results of write 1 without seeing write 2.

Read uncommitted is the default isolation level and applies to mongod standalone instances as well as to replica sets and sharded clusters.

Read Uncommitted And Single Document Atomicity

Write operations are atomic with respect to a single document; i.e. if a write is updating multiple fields in the document, a read operation will never see the document with only some of the fields updated. However, although a client may not see a partially updated document, read uncommitted means that concurrent read operations may still see the updated document before the changes are made durable.

With a standalone mongod instance, a set of read and write operations to a single document is serializable. With a replica set, a set of read and write operations to a single document is serializable only in the absence of a rollback.

Read Uncommitted And Multiple Document Write

When a single write operation (e.g. db.collection.updateMany()) modifies multiple documents, the modification of each document is atomic, but the operation as a whole is not atomic.

When performing multi-document write operations, whether through a single write operation or multiple write operations, other operations may interleave.

For situations that require atomicity of reads and writes to multiple documents (in a single or multiple collections), MongoDB supports multi-document transactions, as sketched below:

  • In version 4.0, MongoDB supports multi-document transactions on replica sets.
  • In version 4.2, MongoDB introduces distributed transactions, which adds support for multi-document transactions on sharded clusters and incorporates the existing support for multi-document transactions on replica sets.

For details regarding transactions in MongoDB, see the Transactions page.
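As an illustration, here is a minimal pymongo sketch of a multi-document transaction, assuming a MongoDB 4.0+ replica set and a hypothetical test.accounts collection:

    # A minimal sketch, assuming a MongoDB 4.0+ replica set and a
    # hypothetical test.accounts collection seeded with documents "A" and "B".
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern
    from pymongo.write_concern import WriteConcern

    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
    accounts = client.test.accounts

    with client.start_session() as session:
        # All writes inside the transaction commit together or not at all.
        with session.start_transaction(
                read_concern=ReadConcern("snapshot"),
                write_concern=WriteConcern("majority")):
            accounts.update_one({"_id": "A"}, {"$inc": {"balance": -100}},
                                session=session)
            accounts.update_one({"_id": "B"}, {"$inc": {"balance": 100}},
                                session=session)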

Important

In most cases, a multi-document transaction incurs a greater performance cost than single-document writes, and the availability of multi-document transactions should not be a replacement for effective schema design. For many scenarios, the denormalized data model (embedded documents and arrays) will continue to be optimal for your data and use cases. That is, for many scenarios, modeling your data appropriately will minimize the need for multi-document transactions.

For additional transactions usage considerations (such as runtime limit and oplog size limit), see also Production Considerations.

Without isolating the multi-document write operations, MongoDB exhibits the following behavior:

  1. Non-point-in-time read operations. Suppose a read operation begins at time t1 and starts reading documents. A write operation then commits an update to one of the documents at some later time t2. The reader may see the updated version of the document, and therefore does not see a point-in-time snapshot of the data.
  2. Non-serializable operations. Suppose a read operation reads a document d1 at time t1 and a write operation updates d1 at some later time t3. This introduces a read-write dependency such that, if the operations were to be serialized, the read operation must precede the write operation. But also suppose that the write operation updates document d2 at time t2 and the read operation subsequently reads d2 at some later time t4. This introduces a write-read dependency which would instead require the read operation to come after the write operation in a serializable schedule. There is a dependency cycle which makes serializability impossible.
  3. Reads may miss matching documents that are updated during the course of the read operation.

Cursor Snapshot

MongoDB cursors can return the same document more than once in some situations. As a cursor returns documents, other operations may interleave with the query. If one of these operations changes the indexed field on the index used by the query, then the cursor could return the same document more than once.

If your collection has a field or fields that are never modified, you can use a unique index on this field or these fields so that the query will return each document no more than once. Query with hint() to explicitly force the query to use that index.
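As an illustration, a minimal pymongo sketch, assuming a hypothetical users collection whose username field is never modified after insert:

    # A minimal sketch, assuming a hypothetical test.users collection whose
    # username field is never modified after insert.
    from pymongo import ASCENDING, MongoClient

    client = MongoClient()
    users = client.test.users

    # A unique index on the immutable field ensures the cursor returns each
    # matching document at most once, even if other indexed fields change
    # while the query runs.
    users.create_index([("username", ASCENDING)], unique=True)

    # hint() forces the query to use that index.
    for doc in users.find({"age": {"$gte": 18}}).hint([("username", ASCENDING)]):
        print(doc)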

Monotonic Writes

MongoDB provides monotonic write guarantees, by default, for standalone mongod instances and replica sets.

For monotonic writes and sharded clusters, see Causal Consistency.

Real Time Order

New in version 3.4.

For read and write operations on the primary, issuing read operations with "linearizable" read concern and write operations with "majority" write concern enables multiple threads to perform reads and writes on a single document as if a single thread performed these operations in real time; that is, the corresponding schedule for these reads and writes is considered linearizable.
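As an illustrative sketch (the connection string and the test.config collection are hypothetical), a pymongo client might pair the two concerns like this:

    # A minimal sketch, assuming a replica set and a hypothetical
    # test.config collection; run against the primary.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern
    from pymongo.write_concern import WriteConcern

    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
    db = client.test

    # Write with "majority" write concern...
    db.get_collection(
        "config", write_concern=WriteConcern("majority", wtimeout=10000)
    ).update_one({"_id": "feature-flag"}, {"$set": {"enabled": True}}, upsert=True)

    # ...then read the same document with "linearizable" read concern.
    # Linearizable reads are only valid against the primary; bound the wait
    # for a majority of members with max_time_ms.
    cursor = db.get_collection(
        "config", read_concern=ReadConcern("linearizable")
    ).find({"_id": "feature-flag"}).max_time_ms(10000)
    print(next(cursor, None))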

Causal Consistency

New in version 3.6.

If an operation logically depends on a preceding operation, there is a causal relationship between the operations. For example, a write operation that deletes all documents based on a specified condition and a subsequent read operation that verifies the delete operation have a causal relationship.

With causally consistent sessions, MongoDB executes causal operations in an order that respects their causal relationships, and clients observe results that are consistent with the causal relationships.
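As a sketch of the delete-then-verify example above (the test.events collection and the filter are hypothetical), a causally consistent pymongo session with "majority" read and write concerns might look like this:

    # A minimal sketch, assuming a replica set and a hypothetical test.events
    # collection; the session orders the read causally after the delete.
    from pymongo import MongoClient
    from pymongo.read_concern import ReadConcern
    from pymongo.write_concern import WriteConcern

    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
    events = client.get_database(
        "test",
        read_concern=ReadConcern("majority"),
        write_concern=WriteConcern("majority", wtimeout=1000)).events

    with client.start_session(causal_consistency=True) as s:
        # Write: delete all documents matching a condition.
        events.delete_many({"expired": True}, session=s)
        # Causally dependent read: verify the delete within the same session.
        assert events.count_documents({"expired": True}, session=s) == 0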

Client Sessions and Causal Consistency Guarantees

To provide causal consistency, MongoDB 3.6 enables causal consistency in client sessions. A causally consistent session denotes that the associated sequence of read operations with "majority" read concern and write operations with "majority" write concern have a causal relationship that is reflected by their ordering. Applications must ensure that only one thread at a time executes these operations in a client session.

For causally related operations:

  1. A client starts a client session.

    Important

    Client sessions only guarantee causal consistency for:

    • Read operations with "majority" read concern; i.e. the returned data has been acknowledged by a majority of the replica set members and is durable.
    • Write operations with "majority" write concern; i.e. write operations that request acknowledgement that the operation has been applied to a majority of the replica set’s voting members.

    For more information on causal consistency and various read and write concerns, see Causal Consistency and Read and Write Concerns.

  2. As the client issues a sequence of read operations with "majority" read concern and write operations (with "majority" write concern), the client includes the session information with each operation.
  3. For each read operation with "majority" read concern and write operation with "majority" write concern associated with the session, MongoDB returns the operation time and the cluster time, even if the operation errors. The client session keeps track of the operation time and the cluster time.

    Note

    MongoDB does not return the operation time and the cluster time for unacknowledged (w: 0) write operations. Unacknowledged writes do not imply any causal relationship.

    Although MongoDB returns the operation time and the cluster time for read operations and acknowledged write operations in a client session, only the read operations with "majority" read concern and write operations with "majority" write concern can guarantee causal consistency. For details, see Causal Consistency and Read and Write Concerns.

  4. The associated client session tracks these two time fields.

    Note

    Operations can be causally consistent across different sessions. MongoDB drivers and the mongo shell provide methods to advance the operation time and the cluster time for a client session. So, a client can advance the cluster time and the operation time of one client session to be consistent with the operations of another client session.

Causal Consistency Guarantees

The following describes the causal consistency guarantees provided by causally consistent sessions for read operations with "majority" read concern and write operations with "majority" write concern.

Read your writes

Read operations reflect the results of write operations that precede them.

Monotonic reads

Read operations do not return results that correspond to an earlier state of the data than a preceding read operation.

For example, if in a session:

  • write1 precedes write2,
  • read1 precedes read2, and
  • read1 returns results that reflect write2

then read2 cannot return results of write1.

Monotonic writes

Write operations that must precede other writes are executed before those other writes.

For example, if write1 must precede write2 in a session, the state of the data at the time of write2 must reflect the state of the data post write1. Other writes can interleave between write1 and write2, but write2 cannot occur before write1.

Writes follow reads

Write operations that must occur after read operations are executed after those read operations. That is, the state of the data at the time of the write must incorporate the state of the data of the preceding read operations.

Read Preference

These guarantees hold across all members of the MongoDB deployment. For example, if, in a causally consistent session, you issue a write with "majority" write concern followed by a read that reads from a secondary (i.e. read preference secondary) with "majority" read concern, the read operation will reflect the state of the database after the write operation.

Isolation

Operations within a causally consistent session are not isolated from operations outside the session. If a concurrent write operation interleaves between the session’s write and read operations, the session’s read operation may return results that reflect a write operation that occurred after the session’s write operation.

MongoDB Drivers

Tip

Applications must ensure that only one thread at a time executes these operations in a client session.

Clients require MongoDB drivers updated for MongoDB 3.6 or later:

  • Java 3.6+
  • Python 3.6+
  • C 1.9+
  • C# 2.5+
  • Node 3.0+
  • Ruby 2.5+
  • Perl 2.0+
  • PHPC 1.4+
  • Scala 2.2+

Examples

Important

Causally consistent sessions can only guarantee causal consistency for reads with "majority" read concern and writes with "majority" write concern.

Consider a collection items that maintains the current and historical data for various items. Only the historical data has a non-null end date. If the sku value for an item changes, the document with the old sku value needs to be updated with the end date, after which the new document is inserted with the current sku value. The client can use a causally consistent session to ensure that the update occurs before the insert. The example is shown below for several drivers: Python, Java, PHP, asynchronous Python, C, C#, Perl, and Swift (synchronous and asynchronous).

    with client.start_session(causal_consistency=True) as s1:
        current_date = datetime.datetime.today()
        items = client.get_database(
            'test', read_concern=ReadConcern('majority'),
            write_concern=WriteConcern('majority', wtimeout=1000)).items
        items.update_one(
            {'sku': "111", 'end': None},
            {'$set': {'end': current_date}}, session=s1)
        items.insert_one(
            {'sku': "nuts-111", 'name': "Pecans",
             'start': current_date}, session=s1)
    // Example 1: Use a causally consistent session to ensure that the update occurs before the insert.
    ClientSession session1 = client.startSession(ClientSessionOptions.builder().causallyConsistent(true).build());
    Date currentDate = new Date();
    MongoCollection<Document> items = client.getDatabase("test")
            .withReadConcern(ReadConcern.MAJORITY)
            .withWriteConcern(WriteConcern.MAJORITY.withWTimeout(1000, TimeUnit.MILLISECONDS))
            .getCollection("test");
    
    items.updateOne(session1, eq("sku", "111"), set("end", currentDate));
    
    Document document = new Document("sku", "nuts-111")
            .append("name", "Pecans")
            .append("start", currentDate);
    items.insertOne(session1, document);
    $items = $client->selectDatabase(
        'test',
        [
            'readConcern' => new \MongoDB\Driver\ReadConcern(\MongoDB\Driver\ReadConcern::MAJORITY),
            'writeConcern' => new \MongoDB\Driver\WriteConcern(\MongoDB\Driver\WriteConcern::MAJORITY, 1000),
        ]
    )->items;
    
    $s1 = $client->startSession(
        [ 'causalConsistency' => true ]
    );
    
    $currentDate = new \MongoDB\BSON\UTCDateTime();
    
    $items->updateOne(
        [ 'sku' => '111', 'end' => [ '$exists' => false ] ],
        [ '$set' => [ 'end' => $currentDate ] ],
        [ 'session' => $s1 ]
    );
    $items->insertOne(
        [ 'sku' => '111-nuts', 'name' => 'Pecans', 'start' => $currentDate ],
        [ 'session' => $s1 ]
    );
      async with await client.start_session(causal_consistency=True) as s1:
          current_date = datetime.datetime.today()
          items = client.get_database(
              'test', read_concern=ReadConcern('majority'),
              write_concern=WriteConcern('majority', wtimeout=1000)).items
          await items.update_one(
              {'sku': "111", 'end': None},
              {'$set': {'end': current_date}}, session=s1)
          await items.insert_one(
              {'sku': "nuts-111", 'name': "Pecans",
               'start': current_date}, session=s1)
    
     /* Use a causally-consistent session to run some operations. */
    
     wc = mongoc_write_concern_new ();
     mongoc_write_concern_set_wmajority (wc, 1000);
     mongoc_collection_set_write_concern (coll, wc);
    
     rc = mongoc_read_concern_new ();
     mongoc_read_concern_set_level (rc, MONGOC_READ_CONCERN_LEVEL_MAJORITY);
     mongoc_collection_set_read_concern (coll, rc);
    
     session_opts = mongoc_session_opts_new ();
     mongoc_session_opts_set_causal_consistency (session_opts, true);
    
     session1 = mongoc_client_start_session (client, session_opts, &error);
     if (!session1) {
        fprintf (stderr, "couldn't start session: %s\n", error.message);
        goto cleanup;
     }
    
     /* Run an update_one with our causally-consistent session. */
     update_opts = bson_new ();
     res = mongoc_client_session_append (session1, update_opts, &error);
     if (!res) {
        fprintf (stderr, "couldn't add session to opts: %s\n", error.message);
        goto cleanup;
     }
    
     query = BCON_NEW ("sku", "111");
     update = BCON_NEW ("$set", "{", "end",
          BCON_DATE_TIME (bson_get_monotonic_time ()), "}");
     res = mongoc_collection_update_one (coll,
    		       query,
    		       update,
    		       update_opts,
    		       NULL, /* reply */
    		       &error);
    
     if (!res) {
        fprintf (stderr, "update failed: %s\n", error.message);
        goto cleanup;
     }
    
     /* Run an insert with our causally-consistent session */
     insert_opts = bson_new ();
     res = mongoc_client_session_append (session1, insert_opts, &error);
     if (!res) {
        fprintf (stderr, "couldn't add session to opts: %s\n", error.message);
        goto cleanup;
     }
    
     insert = BCON_NEW ("sku", "nuts-111", "name", "Pecans",
          "start", BCON_DATE_TIME (bson_get_monotonic_time ()));
     res = mongoc_collection_insert_one (coll, insert, insert_opts, NULL, &error);
     if (!res) {
        fprintf (stderr, "insert failed: %s\n", error.message);
        goto cleanup;
     }
    using (var session1 = client.StartSession(new ClientSessionOptions { CausalConsistency = true }))
    {
        var currentDate = DateTime.UtcNow.Date;
        var items = client.GetDatabase(
            "test",
            new MongoDatabaseSettings
            {
                ReadConcern = ReadConcern.Majority,
                WriteConcern = new WriteConcern(
                        WriteConcern.WMode.Majority,
                        TimeSpan.FromMilliseconds(1000))
            })
            .GetCollection<BsonDocument>("items");
    
        items.UpdateOne(session1,
            Builders<BsonDocument>.Filter.And(
                Builders<BsonDocument>.Filter.Eq("sku", "111"),
                Builders<BsonDocument>.Filter.Eq("end", BsonNull.Value)),
            Builders<BsonDocument>.Update.Set("end", currentDate));
    
        items.InsertOne(session1, new BsonDocument
        {
            {"sku", "nuts-111"},
            {"name", "Pecans"},
            {"start", currentDate}
        });
    }
    my $s1 = $conn->start_session({ causalConsistency => 1 });
    $items = $conn->get_database(
        "test", {
            read_concern => { level => 'majority' },
            write_concern => { w => 'majority', wtimeout => 10000 },
        }
    )->get_collection("items");
    $items->update_one(
        {
            sku => 111,
            end  => undef
        },
        {
            '$set' => { end => $current_date}
        },
        {
            session => $s1
        }
    );
    $items->insert_one(
        {
            sku => "nuts-111",
            name  => "Pecans",
            start => $current_date
        },
        {
            session => $s1
        }
    );
    let s1 = client1.startSession(options: ClientSessionOptions(causalConsistency: true))
    let currentDate = Date()
    var dbOptions = MongoDatabaseOptions(
        readConcern: .majority,
        writeConcern: try .majority(wtimeoutMS: 1000)
    )
    let items = client1.db("test", options: dbOptions).collection("items")
    try items.updateOne(
        filter: ["sku": "111", "end": .null],
        update: ["$set": ["end": .datetime(currentDate)]],
        session: s1
    )
    try items.insertOne(["sku": "nuts-111", "name": "Pecans", "start": .datetime(currentDate)], session: s1)
    let s1 = client1.startSession(options: ClientSessionOptions(causalConsistency: true))
    let currentDate = Date()
    var dbOptions = MongoDatabaseOptions(
        readConcern: .majority,
        writeConcern: try .majority(wtimeoutMS: 1000)
    )
    let items = client1.db("test", options: dbOptions).collection("items")
    let result1 = items.updateOne(
        filter: ["sku": "111", "end": .null],
        update: ["$set": ["end": .datetime(currentDate)]],
        session: s1
    ).flatMap { _ in
        items.insertOne(["sku": "nuts-111", "name": "Pecans", "start": .datetime(currentDate)], session: s1)
    }

    If another client needs to read all current sku values, you can advance the cluster time and the operation time to that of the other session to ensure that this client is causally consistent with the other session and read after the two writes:

    with client.start_session(causal_consistency=True) as s2:
        s2.advance_cluster_time(s1.cluster_time)
        s2.advance_operation_time(s1.operation_time)
    
        items = client.get_database(
            'test', read_preference=ReadPreference.SECONDARY,
            read_concern=ReadConcern('majority'),
            write_concern=WriteConcern('majority', wtimeout=1000)).items
        for item in items.find({'end': None}, session=s2):
            print(item)
    // Example 2: Advance the cluster time and the operation time to that of the other session to ensure that
    // this client is causally consistent with the other session and read after the two writes.
    ClientSession session2 = client.startSession(ClientSessionOptions.builder().causallyConsistent(true).build());
    session2.advanceClusterTime(session1.getClusterTime());
    session2.advanceOperationTime(session1.getOperationTime());
    
    items = client.getDatabase("test")
            .withReadPreference(ReadPreference.secondary())
            .withReadConcern(ReadConcern.MAJORITY)
            .withWriteConcern(WriteConcern.MAJORITY.withWTimeout(1000, TimeUnit.MILLISECONDS))
            .getCollection("items");
    
    for (Document item: items.find(session2, eq("end", BsonNull.VALUE))) {
        System.out.println(item);
    }
    $s2 = $client->startSession(
        [ 'causalConsistency' => true ]
    );
    $s2->advanceClusterTime($s1->getClusterTime());
    $s2->advanceOperationTime($s1->getOperationTime());
    
    $items = $client->selectDatabase(
        'test',
        [
            'readPreference' => new \MongoDB\Driver\ReadPreference(\MongoDB\Driver\ReadPreference::RP_SECONDARY),
            'readConcern' => new \MongoDB\Driver\ReadConcern(\MongoDB\Driver\ReadConcern::MAJORITY),
            'writeConcern' => new \MongoDB\Driver\WriteConcern(\MongoDB\Driver\WriteConcern::MAJORITY, 1000),
        ]
    )->items;
    
    $result = $items->find(
        [ 'end' => [ '$exists' => false ] ],
        [ 'session' => $s2 ]
    );
    foreach ($result as $item) {
        var_dump($item);
    }
      async with await client.start_session(causal_consistency=True) as s2:
          s2.advance_cluster_time(s1.cluster_time)
          s2.advance_operation_time(s1.operation_time)
    
          items = client.get_database(
              'test', read_preference=ReadPreference.SECONDARY,
              read_concern=ReadConcern('majority'),
              write_concern=WriteConcern('majority', wtimeout=1000)).items
          async for item in items.find({'end': None}, session=s2):
              print(item)
    
     /* Make a new session, session2, and make it causally-consistent
    * with session1, so that session2 will read session1's writes. */
     session2 = mongoc_client_start_session (client, session_opts, &error);
     if (!session2) {
        fprintf (stderr, "couldn't start session: %s\n", error.message);
        goto cleanup;
     }
    
     /* Set the cluster time for session2 to session1's cluster time */
     cluster_time = mongoc_client_session_get_cluster_time (session1);
     mongoc_client_session_advance_cluster_time (session2, cluster_time);
    
     /* Set the operation time for session2 to session1's operation time */
     mongoc_client_session_get_operation_time (session1, &timestamp, &increment);
     mongoc_client_session_advance_operation_time (session2,
    				 timestamp,
    				 increment);
    
     /* Run a find on session2, which should now find all writes done
    * inside of session1 */
     find_opts = bson_new ();
     res = mongoc_client_session_append (session2, find_opts, &error);
     if (!res) {
        fprintf (stderr, "couldn't add session to opts: %s\n", error.message);
        goto cleanup;
     }
    
     find_query = BCON_NEW ("end", BCON_NULL);
     read_prefs = mongoc_read_prefs_new (MONGOC_READ_SECONDARY);
     cursor = mongoc_collection_find_with_opts (coll,
    			      find_query,
    			      find_opts,
    			      read_prefs);
    
     while (mongoc_cursor_next (cursor, &result)) {
        json = bson_as_json (result, NULL);
        fprintf (stdout, "Document: %s\n", json);
        bson_free (json);
     }
    
     if (mongoc_cursor_error (cursor, &error)) {
        fprintf (stderr, "cursor failure: %s\n", error.message);
        goto cleanup;
     }
    using (var session2 = client.StartSession(new ClientSessionOptions { CausalConsistency = true }))
    {
        session2.AdvanceClusterTime(session1.ClusterTime);
        session2.AdvanceOperationTime(session1.OperationTime);
    
        var items = client.GetDatabase(
            "test",
            new MongoDatabaseSettings
            {
                ReadPreference = ReadPreference.Secondary,
                ReadConcern = ReadConcern.Majority,
                WriteConcern = new WriteConcern(WriteConcern.WMode.Majority, TimeSpan.FromMilliseconds(1000))
            })
            .GetCollection<BsonDocument>("items");
    
        var filter = Builders<BsonDocument>.Filter.Eq("end", BsonNull.Value);
        foreach (var item in items.Find(session2, filter).ToEnumerable())
        {
            // process item
        }
    }
    my $s2 = $conn->start_session({ causalConsistency => 1 });
    $s2->advance_cluster_time( $s1->cluster_time );
    $s2->advance_operation_time( $s1->operation_time );
    
    $items = $conn->get_database(
        "test", {
            read_preference => 'secondary',
            read_concern => { level => 'majority' },
            write_concern => { w => 'majority', wtimeout => 10000 },
        }
    )->get_collection("items");
    $cursor = $items->find( { end => undef }, { session => $s2 } );
    
    for my $item ( $cursor->all ) {
        say join(" ", %$item);
    }
    try client2.withSession(options: ClientSessionOptions(causalConsistency: true)) { s2 in
        // The cluster and operation times are guaranteed to be non-nil since we already used s1 for operations above.
        s2.advanceClusterTime(to: s1.clusterTime!)
        s2.advanceOperationTime(to: s1.operationTime!)
    
        dbOptions.readPreference = .secondary
        let items2 = client2.db("test", options: dbOptions).collection("items")
        for item in try items2.find(["end": .null], session: s2) {
            print(item)
        }
    }
    let options = ClientSessionOptions(causalConsistency: true)
    let result2: EventLoopFuture<Void> = client2.withSession(options: options) { s2 in
        // The cluster and operation times are guaranteed to be non-nil since we already used s1 for operations above.
        s2.advanceClusterTime(to: s1.clusterTime!)
        s2.advanceOperationTime(to: s1.operationTime!)
    
        dbOptions.readPreference = .secondary
        let items2 = client2.db("test", options: dbOptions).collection("items")
    
        return items2.find(["end": .null], session: s2).flatMap { cursor in
            cursor.forEach { item in
                print(item)
            }
        }
    }

    Limitations

    The following operations that build in-memory structures are not causally consistent:

    • collStats
    • $collStats with latencyStats option
    • $currentOp (returns an error if the operation is associated with a causally consistent client session)
    • createIndexes
    • dbHash (starting in MongoDB 4.2)
    • dbStats
    • getMore (returns an error if the operation is associated with a causally consistent client session)
    • $indexStats
    • mapReduce (starting in MongoDB 4.2)
    • ping (returns an error if the operation is associated with a causally consistent client session)
    • serverStatus (returns an error if the operation is associated with a causally consistent client session)
    • validate (starting in MongoDB 4.2)