I'm trying to understand how schema registration works using the Glue.py plugin
You run your dbt logic, e.g.
{{ config(materialized='external', location='s3://mybucket/hello', glue_register=true, ...) }}
select 1,2,3 from source
After which it looks like the table is materialized as follows: a parquet file is written to S3 -> a view is built on top of this location using select * from 's3://mybucket/*/*.parquet' -> columns are extracted from this view -> that schema is registered in Glue.
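If I'm reading this right, the flow is roughly the sketch below. The database name, file paths, type mapping and exact Glue calls are my own illustration, not the plugin's actual code, and it assumes the httpfs extension plus AWS credentials are set up:

```python
# Rough sketch of the registration flow as I understand it.
# Names, paths and the type mapping are illustrative only.
import boto3
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")

# 1. Materialize the model as a parquet file in the external location.
con.execute("""
    COPY (SELECT 1 AS a, 2 AS b, 3 AS c)
    TO 's3://mybucket/hello/part-0.parquet' (FORMAT PARQUET)
""")

# 2. Build a view over the location using a glob pattern.
con.execute("CREATE OR REPLACE VIEW hello AS SELECT * FROM 's3://mybucket/*/*.parquet'")

# 3. Extract column names and types from that view.
columns = con.execute("DESCRIBE hello").fetchall()  # rows: (name, type, ...)

# 4. Register that schema in Glue.
TYPE_MAP = {"INTEGER": "int", "BIGINT": "bigint", "VARCHAR": "string", "DOUBLE": "double"}
boto3.client("glue").create_table(
    DatabaseName="my_database",
    TableInput={
        "Name": "hello",
        "TableType": "EXTERNAL_TABLE",
        "StorageDescriptor": {
            "Columns": [
                {"Name": name, "Type": TYPE_MAP.get(dtype, "string")}
                for name, dtype, *_ in columns
            ],
            "Location": "s3://mybucket/hello/",
            "InputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat",
            "SerdeInfo": {
                "SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
            },
        },
    },
)
```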
This works neatly when all files have the same columns.
However, when you add new columns to your materialized parquet files (which Glue/Athena supports) and save those in a new partition, the next time you run this model it will still register the old schema (likely because of the */*.parquet glob over the S3 location) and seems to ignore the new schema.
Wouldn't it be better to register the schema of your last-run model? Is this a matter of reordering/adapting the macros in materializations/external.sql? Or would this be undesirable?
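What I had in mind, roughly (the path and names below are placeholders, not plugin internals): describe the partition that was just written, or the model's own select, and hand that schema to the Glue registration step instead of the schema of the glob view:

```python
# Hypothetical sketch: take the schema from the file that was just written
# rather than from the */*.parquet view, so Glue always gets the newest
# columns. The path is a placeholder.
import duckdb

con = duckdb.connect()
just_written = "s3://mybucket/hello/dt=2024-01-01/part-0.parquet"

latest_schema = con.execute(
    f"DESCRIBE SELECT * FROM read_parquet('{just_written}')"
).fetchall()
# -> pass latest_schema to the Glue create_table/update_table step instead
#    of the columns extracted from the glob view
```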
The external file is written to the proper location, with the proper latest schema. However, after writing the file it creates a view using read_location (in our case: s3://mybucket/*/*/*/hello.parquet), which picks up the first (I think, but certainly not the last) file in the Hive structure, in our case an old partition from 2018. Based on this view, the Glue table is then registered with the old 2018 schema.
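If I understand DuckDB's multi-file reading correctly, that matches how a parquet glob binds its schema from the first file it sees. A small local reproduction (file layout made up), including read_parquet's union_by_name option, which at least surfaces the new column in the view, though it doesn't by itself change what gets registered:

```python
# Local reproduction of the behaviour above: two partitions, where only the
# newer one has column c. File layout is made up for the example.
import os
import duckdb

os.makedirs("p/dt=2018", exist_ok=True)
os.makedirs("p/dt=2024", exist_ok=True)

con = duckdb.connect()
con.execute("COPY (SELECT 1 AS a, 2 AS b) TO 'p/dt=2018/old.parquet' (FORMAT PARQUET)")
con.execute("COPY (SELECT 1 AS a, 2 AS b, 3 AS c) TO 'p/dt=2024/new.parquet' (FORMAT PARQUET)")

# The glob binds its schema from the first file, so only a and b show up here.
print(con.execute("DESCRIBE SELECT * FROM 'p/*/*.parquet'").fetchall())

# union_by_name unifies the schemas across files, so c appears as well
# (NULL for the older partition).
print(con.execute(
    "DESCRIBE SELECT * FROM read_parquet('p/*/*.parquet', union_by_name=true)"
).fetchall())
```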