Create a partitioned Parquet file in ADLS:
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

spark = SparkSession.builder.appName("ParquetPartition").getOrCreate()

# Define an explicit schema for the sample data
schema = StructType([
    StructField("name", StringType(), True),
    StructField("age", IntegerType(), True)])

data = [('Alice', 25), ('Bob', 30), ('Charlie', 35), ('David', 40)]
df = spark.createDataFrame(data=data, schema=schema)

# Write the DataFrame as a Snappy-compressed Parquet table at the
# given ADLS mount path, partitioned on the 'age' column
df.write.option("compression", "snappy")\
    .option("path", "/mnt/ADLS/parquet/partition/")\
    .partitionBy('age').format('parquet').saveAsTable("parqtbl")
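
To confirm the on-disk layout, you can list the output directory. The following is a minimal sketch, assuming this runs in a Databricks workspace where dbutils is available; each distinct age value should appear as its own age=<value>/ subdirectory:

# List the output directory; expect one subdirectory per distinct age,
# e.g. age=25/, age=30/, age=35/, age=40/ (assumes Databricks dbutils)
for entry in dbutils.fs.ls("/mnt/ADLS/parquet/partition/"):
    print(entry.name)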
Then, read the data back and check whether the partition counts match:
# Read the Parquet files directly from the ADLS path
df = spark.read.format('parquet').load('/mnt/ADLS/parquet/partition/')
num_partitions_adls = df.rdd.getNumPartitions()

# Read the same data through the saved table
df = spark.table("parqtbl")
num_partitions_df = df.rdd.getNumPartitions()

if num_partitions_adls == num_partitions_df:
    print("The number of partitions in ADLS is same as number of partitions after reading as dataframe.")
else:
    print("The number of partitions in ADLS is different from number of partitions after reading as dataframe.")
Running the code above compares the two counts and prints the result. Note that rdd.getNumPartitions() reports the number of RDD partitions Spark created when reading the data (its read splits), which is not necessarily the same as the number of age=<value> partition directories on disk.
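
If you also want to compare against the on-disk partition count, one minimal sketch is shown below. Since the writer creates exactly one directory per distinct value of the partition column, the distinct count of 'age' equals the number of age=<value> directories; num_partitions_df is the variable computed above:

# One directory is written per distinct partition value, so the distinct
# count of the partition column equals the number of age=<value> directories
num_disk_partitions = df.select("age").distinct().count()
print(f"On-disk partition directories: {num_disk_partitions}")
print(f"RDD partitions after reading: {num_partitions_df}")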