Commit f154d35

Fix linter formatting in lakeflow-pipelines template (#3171)
## Changes

Replaced multi-line `return` statements with single-line format in two transformation templates to fix linter formatting warnings:

- `sample_trips_{{.project_name}}.py.tmpl`: collapsed a multi-line Spark DataFrame operation onto one line
- `sample_zones_{{.project_name}}.py.tmpl`: collapsed a multi-line Spark aggregation operation onto one line

Removed the corresponding exclusion from `ruff.toml`, so these files are now formatted by ruff. The templates now match the cli-pipelines format.

## Why

The hand-wrapped multi-line statements did not match the formatter's output and triggered linter warnings. The single-line statements fit within the configured 150-character line length, so collapsing them resolves the warnings while keeping the functionality identical.

## Tests

- Verified functional equivalence between the old and new formats
- Updated the recorded acceptance-test output to match the new format
1 parent 9138476 commit f154d35
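As a standalone sanity check of the line-length reasoning (not part of the commit itself), the two collapsed single-line `return` statements from the diffs below can be measured against ruff's configured 150-character limit, assuming the standard 4-space indent:

```python
# Hedged sanity check: the collapsed single-line statements from this commit,
# measured against ruff's configured line-length = 150 (4-space indent assumed).
trips_line = (
    '    return spark.read.table("samples.nyctaxi.trips")'
    '.withColumn("trip_distance_km", utils.distance_km(col("trip_distance")))'
)
zones_line = (
    '    return spark.read.table("sample_trips_my_lakeflow_pipelines")'
    '.groupBy(col("pickup_zip")).agg(sum("fare_amount").alias("total_fare"))'
)

LINE_LENGTH_LIMIT = 150  # from ruff.toml in this repo

for line in (trips_line, zones_line):
    # Both single-line forms stay under the limit, so ruff will not re-wrap them.
    assert len(line) <= LINE_LENGTH_LIMIT, f"{len(line)} chars exceeds limit"
print("line lengths:", len(trips_line), len(zones_line))
```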

File tree: 5 files changed (+5 / -24 lines)


`acceptance/bundle/templates/lakeflow-pipelines/python/output/my_lakeflow_pipelines/resources/my_lakeflow_pipelines_pipeline/transformations/sample_trips_my_lakeflow_pipelines.py` (1 addition, 4 deletions)

```diff
@@ -10,7 +10,4 @@
 
 @dlt.table
 def sample_trips_my_lakeflow_pipelines():
-    return (
-        spark.read.table("samples.nyctaxi.trips")
-        .withColumn("trip_distance_km", utils.distance_km(col("trip_distance")))
-    )
+    return spark.read.table("samples.nyctaxi.trips").withColumn("trip_distance_km", utils.distance_km(col("trip_distance")))
```
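The "functional equivalence" claim rests on the fact that a parenthesized multi-line method chain is the same Python expression as its single-line form. A minimal standalone illustration with plain strings (no Spark needed):

```python
# Multi-line chain wrapped in parentheses (the removed format)...
multi_line = (
    "samples.nyctaxi.trips"
    .upper()
    .replace(".", "/")
)

# ...and the same chain on one line (the added format).
single_line = "samples.nyctaxi.trips".upper().replace(".", "/")

# Python parses both layouts into the identical expression, so the results match.
assert multi_line == single_line == "SAMPLES/NYCTAXI/TRIPS"
```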

`acceptance/bundle/templates/lakeflow-pipelines/python/output/my_lakeflow_pipelines/resources/my_lakeflow_pipelines_pipeline/transformations/sample_zones_my_lakeflow_pipelines.py` (1 addition, 7 deletions)

```diff
@@ -10,10 +10,4 @@
 @dlt.table
 def sample_zones_my_lakeflow_pipelines():
     # Read from the "sample_trips" table, then sum all the fares
-    return (
-        spark.read.table("sample_trips_my_lakeflow_pipelines")
-        .groupBy(col("pickup_zip"))
-        .agg(
-            sum("fare_amount").alias("total_fare")
-        )
-    )
+    return spark.read.table("sample_trips_my_lakeflow_pipelines").groupBy(col("pickup_zip")).agg(sum("fare_amount").alias("total_fare"))
```
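For readers skimming the collapsed line, the aggregation it expresses (group trips by `pickup_zip`, sum `fare_amount` as `total_fare`) can be sketched in plain Python with a hypothetical in-memory stand-in for the Spark table:

```python
from collections import defaultdict

# Hypothetical stand-in rows for the sample_trips table (illustrative data only).
sample_trips = [
    {"pickup_zip": 94105, "fare_amount": 12.5},
    {"pickup_zip": 94105, "fare_amount": 7.0},
    {"pickup_zip": 10001, "fare_amount": 20.0},
]

# Plain-Python equivalent of
# .groupBy(col("pickup_zip")).agg(sum("fare_amount").alias("total_fare")).
total_fare = defaultdict(float)
for row in sample_trips:
    total_fare[row["pickup_zip"]] += row["fare_amount"]

print(dict(total_fare))  # {94105: 19.5, 10001: 20.0}
```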

`libs/template/templates/lakeflow-pipelines/template/{{.project_name}}/resources/{{.project_name}}_pipeline/transformations/sample_trips_{{.project_name}}.py.tmpl` (1 addition, 4 deletions)

```diff
@@ -10,7 +10,4 @@ from utilities import utils
 
 @dlt.table
 def sample_trips_{{ .project_name }}():
-    return (
-        spark.read.table("samples.nyctaxi.trips")
-        .withColumn("trip_distance_km", utils.distance_km(col("trip_distance")))
-    )
+    return spark.read.table("samples.nyctaxi.trips").withColumn("trip_distance_km", utils.distance_km(col("trip_distance")))
```

`libs/template/templates/lakeflow-pipelines/template/{{.project_name}}/resources/{{.project_name}}_pipeline/transformations/sample_zones_{{.project_name}}.py.tmpl` (1 addition, 7 deletions)

```diff
@@ -10,10 +10,4 @@ from pyspark.sql.functions import col, sum
 @dlt.table
 def sample_zones_{{ .project_name }}():
     # Read from the "sample_trips" table, then sum all the fares
-    return (
-        spark.read.table("sample_trips_{{ .project_name }}")
-        .groupBy(col("pickup_zip"))
-        .agg(
-            sum("fare_amount").alias("total_fare")
-        )
-    )
+    return spark.read.table("sample_trips_{{ .project_name }}").groupBy(col("pickup_zip")).agg(sum("fare_amount").alias("total_fare"))
```

`ruff.toml` (1 addition, 2 deletions)

```diff
@@ -2,6 +2,5 @@ line-length = 150
 
 
 exclude = [
-    "tagging.py", # tagging.py is synced from universe in the `openapi/tagging` directory and follows different format rules.
-    "acceptance/bundle/templates/lakeflow-pipelines/**/*.py" # files are manually formatted
+    "tagging.py" # tagging.py is synced from universe in the `openapi/tagging` directory and follows different format rules.
 ]
```
