[SPARK-56154][SQL] Preserve ENUM logical annotation through Parquet read-write #55047

Open
xiaoxuandev wants to merge 1 commit into apache:master from xiaoxuandev:fix-56154

Conversation

@xiaoxuandev
Contributor

What changes were proposed in this pull request?

When Spark reads a Parquet file containing BINARY(ENUM) columns, the ENUM logical type annotation is dropped, and on write the columns are emitted as BINARY(STRING). This patch preserves the ENUM annotation through read-write roundtrips by storing it in StructField metadata.

The changes:

  1. In ParquetToSparkSchemaConverter (read path): detect EnumLogicalTypeAnnotation on PrimitiveColumnIO and store a metadata tag on the StructField. This applies to OPTIONAL and REQUIRED fields. REPEATED ENUM is intentionally not supported because the write path cannot propagate element-level metadata through ArrayType conversion.
  2. In SparkToParquetSchemaConverter (write path): when writing StringType columns, check for the ENUM metadata tag and emit LogicalTypeAnnotation.enumType() instead of stringType().
  3. Extract hardcoded metadata key/value strings into constants (PARQUET_LOGICAL_TYPE_KEY, PARQUET_ENUM_TYPE) in the ParquetSchemaConverter companion object. The metadata key uses the __ prefix convention for internal metadata.
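The read-path/write-path tagging scheme described above can be sketched as follows. This is a language-agnostic illustration in Python, not the PR's actual Scala code: the real change lives in Spark's ParquetToSparkSchemaConverter and SparkToParquetSchemaConverter, and the metadata key's concrete value below is an assumption (the PR only states that the key uses the __ prefix convention).

```python
# Illustrative sketch of the metadata-tagging scheme; the key/value strings
# are assumed placeholders, not Spark's actual constants.
PARQUET_LOGICAL_TYPE_KEY = "__parquet_logical_type"  # assumed value
PARQUET_ENUM_TYPE = "ENUM"                           # assumed value


def read_path_metadata(parquet_annotation: str) -> dict:
    """Read path: when the Parquet primitive column carries an ENUM logical
    type annotation, record a tag in the Spark field's metadata."""
    if parquet_annotation == "ENUM":
        return {PARQUET_LOGICAL_TYPE_KEY: PARQUET_ENUM_TYPE}
    return {}


def write_path_annotation(spark_type: str, metadata: dict) -> str:
    """Write path: for StringType fields, emit the ENUM annotation instead of
    STRING when the metadata tag is present; otherwise keep STRING."""
    if (spark_type == "string"
            and metadata.get(PARQUET_LOGICAL_TYPE_KEY) == PARQUET_ENUM_TYPE):
        return "ENUM"
    return "STRING"
```

Composing the two functions models the roundtrip: an ENUM-annotated column read into a tagged field writes back as ENUM, while an untagged string field writes back as STRING.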

Why are the changes needed?

Without this fix, Parquet files with ENUM-annotated columns lose their annotation after a Spark read-write cycle. This breaks downstream systems that depend on the ENUM annotation (e.g., Avro ecosystem tooling), causes schema inconsistency in data pipelines, and fails schema validation in schema registry scenarios.

Does this PR introduce any user-facing change?

Yes. Parquet ENUM logical type annotations are now preserved when reading and writing Parquet files through Spark. Previously, all StringType columns were written as BINARY(STRING) regardless of the original annotation.

How was this patch tested?

  • Roundtrip test: creates a Parquet file with BINARY(ENUM) schema, reads with Spark, verifies metadata tag, writes back, and verifies the output file retains the ENUM annotation.
  • REQUIRED ENUM test: verifies roundtrip preservation for required (non-nullable) ENUM fields.
  • Mixed columns test: verifies ENUM and STRING columns coexist correctly in the same schema without interference.
  • Negative test: verifies that plain StringType columns without ENUM metadata continue to write as BINARY(STRING).
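The roundtrip, mixed-columns, and negative scenarios above can be simulated with a small self-contained sketch. This is a Python model of the behavior under test, not Spark's test code; the metadata key is an assumed placeholder.

```python
# Self-contained model of the tested behavior: a tagged ENUM field survives a
# read-write roundtrip, while untagged string fields keep STRING.
KEY, ENUM = "__parquet_logical_type", "ENUM"  # assumed key/value


def roundtrip(fields):
    """fields: list of (name, parquet_annotation) pairs.
    Read step tags ENUM columns in field metadata; write step chooses the
    output annotation from that tag."""
    read = [(name, {KEY: ENUM} if ann == "ENUM" else {})
            for name, ann in fields]
    return [(name, "ENUM" if meta.get(KEY) == ENUM else "STRING")
            for name, meta in read]


# Mixed columns: ENUM and STRING coexist without interference.
result = roundtrip([("status", "ENUM"), ("name", "STRING")])
```

The negative case falls out of the same model: a field that was never tagged carries empty metadata, so the write step emits STRING.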

Was this patch authored or co-authored using generative AI tooling?

Yes, co-authored with Kiro.
