AWS Glue is a serverless, scalable data integration service that makes it easy to discover, prepare, move, and integrate data from multiple sources. AWS recently announced Glue 5.1, a new version of AWS Glue that accelerates data integration workloads in AWS. AWS Glue 5.1 upgrades the Spark engine to Apache Spark 3.5.6, giving you a newer Spark release along with newer dependent libraries so you can develop, run, and scale your data integration workloads and get insights sooner.
In this post, we describe what’s new in AWS Glue 5.1, key highlights of Spark and related libraries, and how to get started with AWS Glue 5.1.
What’s new in AWS Glue 5.1
The following updates are included in AWS Glue 5.1:
Runtime and library upgrades
AWS Glue 5.1 upgrades the runtime to Spark 3.5.6, Python 3.11, and Scala 2.12.18 with new improvements from the open source versions. AWS Glue 5.1 also updates support for open table format libraries to Apache Hudi 1.0.2, Apache Iceberg 1.10.0, and Delta Lake 3.3.2 so you can solve advanced use cases around performance, cost, governance, and privacy in your data lakes.
Support for new Apache Iceberg features
AWS Glue 5.1 adds support for Apache Iceberg materialized views and Apache Iceberg format version 3.0. AWS Glue 5.1 also adds support for data writes into Iceberg and Hive tables with Spark-native fine-grained access control with AWS Lake Formation.
Apache Iceberg materialized views are especially useful when you need to accelerate frequently run queries on large datasets by pre-computing expensive aggregations. To learn more about Apache Iceberg materialized views, refer to Introducing Apache Iceberg materialized views in AWS Glue Data Catalog.
Apache Iceberg format version 3.0 is the latest Iceberg format version defined in the Iceberg Table Spec. The following features are supported:
Create an Iceberg V3 format table
To create an Iceberg V3 format table, set the format-version table property to 3 when creating the table. The following is a sample PySpark script (replace amzn-s3-demo-bucket with your S3 bucket name):
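A minimal sketch of such a script, assuming the job is configured with an Iceberg catalog named `glue_catalog` and a database `db` (the catalog, database, and table names are illustrative):

```python
# DDL for a new Iceberg table; setting format-version to 3 selects the V3 spec.
# Catalog (glue_catalog), database (db), and table names are illustrative.
create_v3_ddl = """
CREATE TABLE glue_catalog.db.sample_v3_table (
    id   BIGINT,
    name STRING
)
USING iceberg
LOCATION 's3://amzn-s3-demo-bucket/sample_v3_table/'
TBLPROPERTIES ('format-version' = '3')
"""

# In a Glue 5.1 PySpark job or notebook session:
# spark.sql(create_v3_ddl)
```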
To migrate from the V2 format to V3, use ALTER TABLE ... SET TBLPROPERTIES to update the format-version. The following is a sample PySpark script:
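A sketch of the upgrade statement, using the same illustrative `glue_catalog.db` identifiers as above:

```python
# Upgrade an existing Iceberg V2 table to the V3 format in place.
# The table identifier is illustrative; note this change cannot be rolled back.
upgrade_ddl = """
ALTER TABLE glue_catalog.db.sample_table
SET TBLPROPERTIES ('format-version' = '3')
"""

# In a Glue 5.1 PySpark job or notebook session:
# spark.sql(upgrade_ddl)
```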
You can’t roll back from V3 to V2, so carefully verify that all your Iceberg clients support the Iceberg V3 format. Once upgraded, older versions can’t correctly read the newer format, because Iceberg table format versions are not forward-compatible.
Create a table with row lineage tracking enabled
To create a table with row lineage tracking enabled, set the table property row-lineage to true. The following is a sample PySpark script:
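A sketch of such a script, reusing the illustrative `glue_catalog.db` identifiers; row lineage is a V3 feature, so format-version 3 is set alongside it:

```python
# Create an Iceberg V3 table with row lineage tracking enabled.
# Row lineage is an Iceberg V3 feature, so format-version 3 is required.
row_lineage_ddl = """
CREATE TABLE glue_catalog.db.lineage_table (
    id   BIGINT,
    name STRING
)
USING iceberg
TBLPROPERTIES (
    'format-version' = '3',
    'row-lineage' = 'true'
)
"""

# In a Glue 5.1 PySpark job or notebook session:
# spark.sql(row_lineage_ddl)
```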
In tables with row lineage tracking enabled, row IDs are managed at the metadata level to track row changes over time and support auditing.
Extended support for AWS Lake Formation permissions
Fine-grained access control with Lake Formation has been supported for read operations by native Spark DataFrames and Spark SQL since Glue 5.0. Glue 5.1 extends fine-grained access control to write operations.
Full-Table Access (FTA) control in Apache Spark was introduced for Apache Hive and Iceberg tables in Glue 5.0. Glue 5.1 extends FTA support to Apache Hudi and Delta Lake tables.
S3A by default
AWS Glue 5.1 uses S3A as the default S3 connector. This change aligns with the recent Amazon EMR adoption of S3A as the default connector and brings enhanced performance and advanced features to Glue workloads. For more details about the S3A connector’s capabilities and optimizations, see Optimize Amazon EMR runtime for Apache Spark with EMR S3A.
Note when migrating from Glue 5.0 to Glue 5.1: if neither spark.hadoop.fs.s3a.endpoint nor spark.hadoop.fs.s3a.endpoint.region is set, S3A defaults to the us-east-2 Region, which can cause issues. To mitigate issues caused by this change, set the spark.hadoop.fs.s3a.endpoint.region Spark configuration when using the S3A file system in AWS Glue 5.1.
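For example, one way to pin the Region is to pass it as a Spark configuration through the job’s `--conf` parameter (the Region value below is illustrative; use the Region of your S3 bucket):

```
--conf spark.hadoop.fs.s3a.endpoint.region=us-east-1
```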
Dependent library upgrades
AWS Glue 5.1 upgrades the runtime to Spark 3.5.6, Python 3.11, and Scala 2.12.18 with upgraded dependent libraries.
The following table lists dependency upgrades:
| Dependency | Version in AWS Glue 5.0 | Version in AWS Glue 5.1 |
| --- | --- | --- |
| Spark | 3.5.4 | 3.5.6 |
| Hadoop | 3.4.1 | 3.4.1 |
| Scala | 2.12.18 | 2.12.18 |
| Hive | 2.3.9 | 2.3.9 |
| EMRFS | 2.69.0 | 2.73.0 |
| Arrow | 12.0.1 | 12.0.1 |
| Iceberg | 1.7.1 | 1.10.0 |
| Hudi | 0.15.0 | 1.0.2 |
| Delta Lake | 3.3.0 | 3.3.2 |
| Java | 17 | 17 |
| Python | 3.11 | 3.11.14 |
| boto3 | 1.34.131 | 1.40.61 |
| AWS SDK for Java | 2.29.52 | 2.35.5 |
| AWS Glue Data Catalog Client | 4.5.0 | 4.9.0 |
| EMR DynamoDB Connector | 5.6.0 | 5.7.0 |
The following are database connector (JDBC driver) upgrades:
| Driver | Connector version in AWS Glue 5.0 | Connector version in AWS Glue 5.1 |
| --- | --- | --- |
| MySQL | 8.0.33 | 8.0.33 |
| Microsoft SQL Server | 10.2.0 | 10.2.0 |
| Oracle Databases | 23.3.0.23.09 | 23.3.0.23.09 |
| PostgreSQL | 42.7.3 | 42.7.3 |
| Amazon Redshift | redshift-jdbc42-2.1.0.29 | redshift-jdbc42-2.1.0.29 |
The following are Spark connector upgrades:
| Driver | Connector version in AWS Glue 5.0 | Connector version in AWS Glue 5.1 |
| --- | --- | --- |
| Amazon Redshift | 6.4.0 | 6.4.2 |
| OpenSearch | 1.2.0 | 1.2.0 |
| MongoDB | 10.3.0 | 10.3.0 |
| Snowflake | 3.0.0 | 3.1.1 |
| BigQuery | 0.32.2 | 0.32.2 |
| AzureCosmos | 4.33.0 | 4.33.0 |
| AzureSQL | 1.3.0 | 1.3.0 |
| Vertica | 3.3.5 | 3.3.5 |
Get started with AWS Glue 5.1
You can start using AWS Glue 5.1 through AWS Glue Studio, the AWS Glue console, the latest AWS SDK, and the AWS Command Line Interface (AWS CLI).
To start using AWS Glue 5.1 jobs in AWS Glue Studio, open the AWS Glue job and, on the Job Details tab, choose the version Glue 5.1 – Supports Spark 3.5, Scala 2, Python 3.
To start using AWS Glue 5.1 in an AWS Glue Studio notebook or an interactive session through a Jupyter notebook, set 5.1 in the %glue_version magic:
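For example, in the first cell of the notebook:

```
%glue_version 5.1
```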
The following output shows that the session is set to use AWS Glue 5.1:
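A typical confirmation looks like the following (the exact wording may vary across interactive sessions kernel versions):

```
Setting Glue version to: 5.1
```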
Spark Troubleshooting with Glue 5.1
To accelerate Apache Spark troubleshooting and job performance optimization for your Glue 5.1 ETL jobs, you can use the newly launched Apache Spark troubleshooting agent. Traditional Spark troubleshooting requires extensive manual analysis of logs, performance metrics, and error patterns to identify root causes and optimization opportunities. The agent simplifies this process through natural language prompts, automated workload analysis, and intelligent code recommendations. The agent has three main components: an MCP-compatible AI assistant in your development environment for interaction, the MCP proxy for AWS that handles secure communication between your client and the MCP server, and an Amazon SageMaker Unified Studio managed MCP server (preview) that provides specialized Spark troubleshooting and upgrade tools for Glue 5.1 jobs.
To set up the agent, follow the instructions to set up the resources and MCP configuration: Setup for Apache Spark Troubleshooting agent. Then you can launch your preferred MCP client and interact with the troubleshooting tools through conversation.
The following is an illustration of how you can use the Apache Spark troubleshooting agent with the Kiro CLI to debug a Glue 5.1 job run.
For more information and video walkthroughs on how to use the Apache Spark troubleshooting agent, refer to Apache Spark Troubleshooting agent for Amazon EMR.
Conclusion
In this post, we discussed the key features and benefits of AWS Glue 5.1. You can create new AWS Glue jobs on AWS Glue 5.1 or migrate your existing AWS Glue jobs to benefit from the improvements.
We would like to thank the many engineers and leaders who helped build Glue 5.1 to support customers with a performance-optimized Spark runtime and deliver new capabilities.
About the authors
