$group
Groups input documents by the specified _id expression and, for each distinct grouping, outputs a document. The _id field of each output document contains the unique group by value. The output documents can also contain computed fields that hold the values of some accumulator expression.
Note

$group does not order its output documents.
The $group stage has the following prototype form:
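```javascript
{
  $group:
    {
      _id: <expression>, // Group By Expression
      <field1>: { <accumulator1> : <expression1> },
      ...
    }
}
```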
| Field | Description |
| --- | --- |
| _id | Required. If you specify an _id value of null, or any other constant value, the $group stage calculates accumulated values for all the input documents as a whole. |
| field | Optional. Computed using the accumulator operators. |
The _id and the accumulator operators can accept any valid expression. For more information on expressions, see Expressions.
The <accumulator> operator must be one of the following accumulator operators:

- $accumulator
- $addToSet
- $avg
- $first
- $last
- $max
- $mergeObjects
- $min
- $push
- $stdDevPop
- $stdDevSamp
- $sum
$group Operator and Memory
The $group stage has a limit of 100 megabytes of RAM. By default, if the stage exceeds this limit, $group returns an error. To allow for the handling of large datasets, set the allowDiskUse option to true. This flag enables $group operations to write to temporary files. For more information, see the db.collection.aggregate() method and the aggregate command.
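For example, a sketch of passing the option through db.collection.aggregate() (the sales collection and field names here are illustrative):

```javascript
db.sales.aggregate(
  [
    // A high-cardinality grouping like this can exceed the 100 MB limit.
    { $group: { _id: "$item", totalQuantity: { $sum: "$quantity" } } }
  ],
  // Let the $group stage write to temporary files if needed.
  { allowDiskUse: true }
)
```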
If a pipeline sorts and groups by the same field and the $group stage only uses the $first accumulator operator, consider adding an index on the grouped field which matches the sort order. In some cases, the $group stage can use the index to quickly find the first document of each group.
Example

If a collection named foo contains an index { x: 1, y: 1 }, the following pipeline can use that index to find the first document of each group:
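(An illustrative pipeline of this shape:)

```javascript
db.foo.aggregate([
  // The sort matches the { x: 1, y: 1 } index.
  { $sort: { x: 1, y: 1 } },
  // With only $first, the stage can take each group's first document in index order.
  { $group: { _id: { x: "$x" }, y: { $first: "$y" } } }
])
```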
From the mongo shell, create a sample collection named sales with the following documents:
The following aggregation operation uses the $group stage to count the number of documents in the sales collection:
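```javascript
db.sales.aggregate( [
  // Group all documents into one group (_id: null) and add 1 per document.
  { $group: { _id: null, count: { $sum: 1 } } }
] )
```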
The operation returns the following result:
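(With the eight sample documents above:)

```javascript
{ "_id" : null, "count" : 8 }
```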
This aggregation operation is equivalent to the following SQL statement:
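```sql
SELECT COUNT(*) AS count FROM sales
```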
The following aggregation operation uses the $group stage to retrieve the distinct item values from the sales collection:
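```javascript
db.sales.aggregate( [
  // Grouping on item with no accumulators yields the distinct item values.
  { $group: { _id: "$item" } }
] )
```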
The operation returns the following result:
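(With the sample documents above; document order is not guaranteed:)

```javascript
{ "_id" : "abc" }
{ "_id" : "jkl" }
{ "_id" : "xyz" }
{ "_id" : "def" }
```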
The following aggregation operation groups documents by the item field, calculating the total sale amount per item and returning only the items with a total sale amount greater than or equal to 100:
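```javascript
db.sales.aggregate([
  // First Stage: compute the total sale amount per item.
  {
    $group: {
      _id: "$item",
      totalSaleAmount: { $sum: { $multiply: [ "$price", "$quantity" ] } }
    }
  },
  // Second Stage: keep only items whose total is at least 100.
  { $match: { totalSaleAmount: { $gte: 100 } } }
])
```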
First Stage: The $group stage groups the documents by item to retrieve the distinct item values. This stage returns the totalSaleAmount for each item.

Second Stage: The $match stage filters the resulting documents to only return items with a totalSaleAmount greater than or equal to 100.

The operation returns the following result:
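(With the sample documents above; order may vary:)

```javascript
{ "_id" : "abc", "totalSaleAmount" : 170 }
{ "_id" : "xyz", "totalSaleAmount" : 150 }
{ "_id" : "def", "totalSaleAmount" : 112.5 }
```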
This aggregation operation is equivalent to the following SQL statement:
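(MySQL-style; HAVING on a select alias is not accepted by every SQL dialect.)

```sql
SELECT item,
       SUM(price * quantity) AS totalSaleAmount
FROM   sales
GROUP  BY item
HAVING totalSaleAmount >= 100
```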
From the mongo shell, create a sample collection named sales with the following documents:
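(The same representative documents as in the earlier examples; this example additionally relies on the date field:)

```javascript
db.sales.insertMany([
  { _id: 1, item: "abc", price: 10,  quantity: 2,  date: ISODate("2014-03-01T08:00:00Z") },
  { _id: 2, item: "jkl", price: 20,  quantity: 1,  date: ISODate("2014-03-01T09:00:00Z") },
  { _id: 3, item: "xyz", price: 5,   quantity: 10, date: ISODate("2014-03-15T09:00:00Z") },
  { _id: 4, item: "xyz", price: 5,   quantity: 20, date: ISODate("2014-04-04T11:21:39.736Z") },
  { _id: 5, item: "abc", price: 10,  quantity: 10, date: ISODate("2014-04-04T21:23:13.331Z") },
  { _id: 6, item: "def", price: 7.5, quantity: 5,  date: ISODate("2014-06-04T05:08:13Z") },
  { _id: 7, item: "def", price: 7.5, quantity: 10, date: ISODate("2014-09-10T08:43:00Z") },
  { _id: 8, item: "abc", price: 10,  quantity: 5,  date: ISODate("2014-02-15T14:12:12Z") }
])
```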
The following pipeline calculates the total sale amount, average sale quantity, and sale count for each day in the year 2014:
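```javascript
db.sales.aggregate([
  // First Stage: only documents dated in 2014.
  {
    $match: {
      date: { $gte: new ISODate("2014-01-01"), $lt: new ISODate("2015-01-01") }
    }
  },
  // Second Stage: group by calendar day and accumulate per group.
  {
    $group: {
      _id: { $dateToString: { format: "%Y-%m-%d", date: "$date" } },
      totalSaleAmount: { $sum: { $multiply: [ "$price", "$quantity" ] } },
      averageQuantity: { $avg: "$quantity" },
      count: { $sum: 1 }
    }
  },
  // Third Stage: highest total sale amount first.
  { $sort: { totalSaleAmount: -1 } }
])
```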
First Stage: The $match stage filters the documents to only pass documents from the year 2014 to the next stage.

Second Stage: The $group stage groups the documents by date and calculates the total sale amount, average quantity, and total count of the documents in each group.

Third Stage: The $sort stage sorts the results by the total sale amount for each group in descending order.

The operation returns the following results:
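(With the sample documents above; days with equal totals may appear in either order:)

```javascript
{ "_id" : "2014-04-04", "totalSaleAmount" : 200, "averageQuantity" : 15, "count" : 2 }
{ "_id" : "2014-09-10", "totalSaleAmount" : 75, "averageQuantity" : 10, "count" : 1 }
{ "_id" : "2014-03-15", "totalSaleAmount" : 50, "averageQuantity" : 10, "count" : 1 }
{ "_id" : "2014-02-15", "totalSaleAmount" : 50, "averageQuantity" : 5, "count" : 1 }
{ "_id" : "2014-03-01", "totalSaleAmount" : 40, "averageQuantity" : 1.5, "count" : 2 }
{ "_id" : "2014-06-04", "totalSaleAmount" : 37.5, "averageQuantity" : 5, "count" : 1 }
```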
This aggregation operation is equivalent to the following SQL statement:
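```sql
SELECT date,
       SUM(price * quantity) AS totalSaleAmount,
       AVG(quantity)         AS averageQuantity,
       COUNT(*)              AS count
FROM   sales
WHERE  date >= '2014-01-01' AND date < '2015-01-01'
GROUP  BY date
ORDER  BY totalSaleAmount DESC
```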
Group by null

The following aggregation operation specifies a group _id of null, calculating the total sale amount, average quantity, and count of all documents in the collection.
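```javascript
db.sales.aggregate([
  {
    $group: {
      // _id: null groups every input document together.
      _id: null,
      totalSaleAmount: { $sum: { $multiply: [ "$price", "$quantity" ] } },
      averageQuantity: { $avg: "$quantity" },
      count: { $sum: 1 }
    }
  }
])
```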
The operation returns the following result:
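(With the sample documents above:)

```javascript
{ "_id" : null, "totalSaleAmount" : 452.5, "averageQuantity" : 7.875, "count" : 8 }
```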
This aggregation operation is equivalent to the following SQL statement:
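```sql
SELECT SUM(price * quantity) AS totalSaleAmount,
       AVG(quantity)         AS averageQuantity,
       COUNT(*)              AS count
FROM   sales
```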
See also

$count

db.collection.countDocuments(), which wraps the $group aggregation stage with a $sum expression.
From the mongo shell, create a sample collection named books with the following documents:
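(A representative set; each document carries a title, an author, and a copies count:)

```javascript
db.books.insertMany([
  { _id: 8751, title: "The Banquet", author: "Dante", copies: 2 },
  { _id: 8752, title: "Divine Comedy", author: "Dante", copies: 1 },
  { _id: 8645, title: "Eclogues", author: "Dante", copies: 2 },
  { _id: 7000, title: "The Odyssey", author: "Homer", copies: 10 },
  { _id: 7020, title: "Iliad", author: "Homer", copies: 10 }
])
```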
Group title by author

The following aggregation operation pivots the data in the books collection to have titles grouped by authors.
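```javascript
db.books.aggregate([
  // $push collects every title for each author into an array.
  { $group: { _id: "$author", books: { $push: "$title" } } }
])
```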
The operation returns the following documents:
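(With the representative books data; group order may vary:)

```javascript
{ "_id" : "Homer", "books" : [ "The Odyssey", "Iliad" ] }
{ "_id" : "Dante", "books" : [ "The Banquet", "Divine Comedy", "Eclogues" ] }
```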
Group Documents by author

The following aggregation operation groups documents by author:
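```javascript
db.books.aggregate([
  // First Stage: push each full document ($$ROOT) into its author's group.
  { $group: { _id: "$author", books: { $push: "$$ROOT" } } },
  // Second Stage: total the copies across each author's books array.
  { $addFields: { totalCopies: { $sum: "$books.copies" } } }
])
```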
First Stage: $group uses the $$ROOT system variable to group the entire documents by authors. This stage passes the following documents to the next stage:
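(With the representative books data:)

```javascript
{ "_id" : "Dante",
  "books" : [
    { "_id" : 8751, "title" : "The Banquet", "author" : "Dante", "copies" : 2 },
    { "_id" : 8752, "title" : "Divine Comedy", "author" : "Dante", "copies" : 1 },
    { "_id" : 8645, "title" : "Eclogues", "author" : "Dante", "copies" : 2 }
  ]
}
{ "_id" : "Homer",
  "books" : [
    { "_id" : 7000, "title" : "The Odyssey", "author" : "Homer", "copies" : 10 },
    { "_id" : 7020, "title" : "Iliad", "author" : "Homer", "copies" : 10 }
  ]
}
```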
Second Stage: $addFields adds a field to the output containing the total copies of books for each author.
Note

The resulting documents must not exceed the BSON Document Size limit of 16 megabytes.
The operation returns the following documents:
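(With the representative books data:)

```javascript
{ "_id" : "Dante",
  "books" : [
    { "_id" : 8751, "title" : "The Banquet", "author" : "Dante", "copies" : 2 },
    { "_id" : 8752, "title" : "Divine Comedy", "author" : "Dante", "copies" : 1 },
    { "_id" : 8645, "title" : "Eclogues", "author" : "Dante", "copies" : 2 }
  ],
  "totalCopies" : 5
}
{ "_id" : "Homer",
  "books" : [
    { "_id" : 7000, "title" : "The Odyssey", "author" : "Homer", "copies" : 10 },
    { "_id" : 7020, "title" : "Iliad", "author" : "Homer", "copies" : 10 }
  ],
  "totalCopies" : 20
}
```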
See also

The Aggregation with the Zip Code Data Set tutorial provides an extensive example of the $group operator in a common use case.