Question:

What kind of partitioning does df.repartition use when no column arguments are given?

索锐藻
2023-03-14

My question is: how does Spark repartition when there is no key? I couldn't dig any deeper into the source code to find where Spark itself does this.

def repartition(self, numPartitions, *cols):
    """
    Returns a new :class:`DataFrame` partitioned by the given partitioning expressions. The
    resulting DataFrame is hash partitioned.

    :param numPartitions:
        can be an int to specify the target number of partitions or a Column.
        If it is a Column, it will be used as the first partitioning column. If not specified,
        the default number of partitions is used.

    .. versionchanged:: 1.6
       Added optional arguments to specify the partitioning columns. Also made numPartitions
       optional if partitioning columns are specified.

    >>> df.repartition(10).rdd.getNumPartitions()
    10
    >>> data = df.union(df).repartition("age")
    >>> data.show()
    +---+-----+
    |age| name|
    +---+-----+
    |  5|  Bob|
    |  5|  Bob|
    |  2|Alice|
    |  2|Alice|
    +---+-----+
    >>> data = data.repartition(7, "age")
    >>> data.show()
    +---+-----+
    |age| name|
    +---+-----+
    |  2|Alice|
    |  5|  Bob|
    |  2|Alice|
    |  5|  Bob|
    +---+-----+
    >>> data.rdd.getNumPartitions()
    7
    """
    if isinstance(numPartitions, int):
        if len(cols) == 0:
            return DataFrame(self._jdf.repartition(numPartitions), self.sql_ctx)
        else:
            return DataFrame(
                self._jdf.repartition(numPartitions, self._jcols(*cols)), self.sql_ctx)
    elif isinstance(numPartitions, (basestring, Column)):
        cols = (numPartitions, ) + cols
        return DataFrame(self._jdf.repartition(self._jcols(*cols)), self.sql_ctx)
    else:
        raise TypeError("numPartitions should be an int or Column")

For example, calling these lines works fine, but I don't know what it is actually doing. Is it a hash of the whole row? Maybe the first column of the DataFrame?

# assuming: import pyspark.sql.functions as sf
df_2 = df_1\
       .where(sf.col('some_column') == 1)\
       .repartition(32)\
       .alias('df_2')

1 answer

漆雕欣德
2023-03-14

By default, when no partitioning columns are specified, Spark does not partition based on the data's contents; it distributes rows evenly across the partitions in a round-robin fashion (this shows up as RoundRobinPartitioning in the physical plan).
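To make the contrast concrete, here is a minimal pure-Python sketch (not Spark's actual implementation, which lives in Scala on the JVM) of the two strategies: round-robin assignment for `repartition(n)` versus hash assignment for `repartition(n, col)`:

```python
# Illustrative sketch only -- not Spark's code.

def round_robin_partition(rows, num_partitions):
    """repartition(n): deal rows out in turn, ignoring their contents."""
    parts = [[] for _ in range(num_partitions)]
    for i, row in enumerate(rows):
        parts[i % num_partitions].append(row)
    return parts

def hash_partition(rows, num_partitions, key):
    """repartition(n, col): route each row by the hash of its key column."""
    parts = [[] for _ in range(num_partitions)]
    for row in rows:
        parts[hash(key(row)) % num_partitions].append(row)
    return parts

rows = [(2, "Alice"), (5, "Bob"), (2, "Alice"), (5, "Bob")]

# Round robin: partition sizes stay balanced regardless of the values.
print([len(p) for p in round_robin_partition(rows, 2)])  # [2, 2]

# Hash: rows with equal keys always land in the same partition.
print(hash_partition(rows, 2, key=lambda r: r[0]))
```

The round-robin case explains why `repartition(32)` balances partition sizes without inspecting any column at all.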

The repartitioning algorithm behind df.repartition performs a full shuffle of the data and distributes it evenly among the partitions. To reduce shuffling, prefer df.coalesce, which only merges existing partitions.
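As a rough sketch of why `coalesce` is cheaper (again plain Python, assuming only that coalesce merges contiguous partitions instead of re-routing every row):

```python
def coalesce(partitions, target):
    """Merge contiguous partitions down to `target` partitions.
    In the Spark analogy, rows are only combined locally, so no
    full shuffle across the cluster is required."""
    n = len(partitions)
    merged = [[] for _ in range(target)]
    for i, part in enumerate(partitions):
        # Contiguous grouping: partition i goes into bucket i*target//n.
        merged[i * target // n].extend(part)
    return merged

print(coalesce([[1, 2], [3, 4], [5, 6], [7, 8]], 2))  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```

Note that coalesce can only reduce the partition count this way; it cannot rebalance skewed partitions the way a full repartition shuffle does.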

Here is a good explanation of how DataFrame repartitioning works: https://medium.com/@mrpowers/management-spark-partitions-with-coalesce-and-repartition-4050C57AD5C4
