At AWS re:Invent 2024, we launched a no-code zero-ETL integration between Amazon DynamoDB and Amazon SageMaker Lakehouse, simplifying how organizations handle data analytics and AI workflows. This integration alleviates the traditional challenges of building and maintaining complex extract, transform, and load (ETL) pipelines for transforming NoSQL data into analytics-ready formats, which previously required significant time and resources while introducing potential system vulnerabilities. Organizations can now seamlessly combine the strength of DynamoDB in handling fast, concurrent transactions with rapid analytical processing through the zero-ETL integration. For example, an ecommerce platform storing user session data and cart information in DynamoDB can now analyze this data in near real time without building custom pipelines. Gaming companies using DynamoDB for player data can instantly analyze user behavior as events occur, enabling real-time insights into game balance and player engagement patterns.
The zero-ETL capability uses built-in change data capture (CDC) to automatically synchronize data updates and schema changes between DynamoDB and SageMaker Lakehouse tables. By using the Apache Iceberg format, the integration provides reliable performance with ACID transaction support and efficient large-scale data handling. Data scientists can train ML models on fresh data and data analysts can generate reports using current information, with typical synchronization latency in minutes rather than hours.
In this post, we share how you can set up this zero-ETL integration from DynamoDB to your SageMaker Lakehouse environment.
Solution overview
We use a SageMaker Lakehouse catalog, AWS Lake Formation, Amazon Athena, AWS Glue, and Amazon SageMaker Unified Studio for this integration. The following is the reference data flow diagram for the zero-ETL integration.
The workflow consists of the following components:
- The recently launched zero-ETL integration capability within the AWS Glue console enables direct integration between DynamoDB and SageMaker Lakehouse, storing data in Iceberg format. This streamlined approach opens up new possibilities for data teams by creating a large-scale, open, and secure data ecosystem without traditional ETL processing overhead.
- When building a SageMaker Lakehouse architecture, you can use an Amazon Simple Storage Service (Amazon S3) based managed catalog as your zero-ETL target, providing seamless data integration without transformation overhead. This approach creates a strong foundation for your SageMaker Lakehouse implementation while maintaining the cost-effectiveness and scalability inherent to Amazon S3 storage, enabling efficient analytics and machine learning workflows.
- Organizations can use a Redshift Managed Storage (RMS) based managed catalog when they need high-performance SQL analytics and multi-table transactions. This approach uses RMS for storage while keeping data in the Iceberg format, providing an optimal balance of performance and flexibility.
- After you establish your Lakehouse infrastructure, you can access it through various analytics engines, including AWS services like Athena, Amazon Redshift, AWS Glue, and Amazon EMR as independent services. For a more streamlined experience, SageMaker Unified Studio offers centralized analytics management, where you can query your data from a single unified interface.
Prerequisites
In this section, we walk through the steps to set up your solution resources and confirm your permission settings.
Create a SageMaker Unified Studio domain, project, and IAM role
Before you begin, you need an AWS Identity and Access Management (IAM) role for enabling the zero-ETL integration. In this post, we use SageMaker Unified Studio, which offers a unified data platform experience. It automatically manages the required Lake Formation permissions on data and catalogs for you.
You must first create a SageMaker Unified Studio domain, an administrative entity that controls user access, permissions, and resources for teams working within the SageMaker Unified Studio environment. Note down the SageMaker Unified Studio URL after you create the domain. You will use this URL later to log in to the SageMaker Unified Studio portal and query your data across multiple engines.
Then, you create a SageMaker Unified Studio project, an integrated development environment (IDE) that provides a unified experience for data processing, analytics, and AI development. As part of project creation, an IAM role is automatically generated. This role will be used when you access SageMaker Unified Studio later. For more details on how to create a SageMaker Unified Studio domain and project, refer to An integrated experience for all your data and AI with Amazon SageMaker Unified Studio.
Prepare a sample dataset within DynamoDB
To implement this solution, you need a DynamoDB table, which can either come from your existing resources or be created using the sample data file that you can import from an S3 bucket. For this post, we guide you through importing sample data from an S3 bucket into a new DynamoDB table, providing a practical foundation for the concepts discussed.
To create a sample table in DynamoDB, complete the following steps:
- Download the fictional ecommerce_customer_behavior.csv dataset. This dataset captures customer behavior and interactions on an ecommerce platform.
- On the Amazon S3 console, open the S3 bucket used by the SageMaker Unified Studio project.
- Upload the CSV file you downloaded.

- Select the uploaded file to view its details page.

- Copy the value for S3 URI and make a note of it; you'll use this path in the next DynamoDB table creation step.

Create a DynamoDB table
Complete the following steps to create a DynamoDB table from a file in Amazon S3, using the import from Amazon S3 functionality. Then you can enable the settings on the DynamoDB table required for zero-ETL integration.
- On the DynamoDB console, choose Imports from S3 in the navigation pane.
- Choose Import from S3.

- Enter the S3 URI from the earlier step for Source S3 URL, select CSV for Import file format, and choose Next.

- Provide the table name as ecommerce_customer_behavior, the partition key as customer_id, and the sort key as product_id, then choose Next.

- Use the default table settings, then choose Next to review the details.

- Review the settings and choose Import.

It will take a few minutes for the import status to change from Importing to Completed.


When the import is complete, you should be able to see the table created on the Tables page.
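If you prefer to script the import instead of using the console, the DynamoDB ImportTable API performs the same operation. The following is a minimal boto3 sketch; the bucket name is a placeholder, the key matches the CSV uploaded earlier, and string attribute types are assumed because CSV imports load columns as strings by default:
import boto3

dynamodb = boto3.client("dynamodb")

# Start an import from the CSV file in Amazon S3 into a new DynamoDB table
response = dynamodb.import_table(
    S3BucketSource={
        "S3Bucket": "amzn-s3-demo-bucket",  # placeholder; use the project bucket
        "S3KeyPrefix": "ecommerce_customer_behavior.csv",
    },
    InputFormat="CSV",
    TableCreationParameters={
        "TableName": "ecommerce_customer_behavior",
        "AttributeDefinitions": [
            {"AttributeName": "customer_id", "AttributeType": "S"},
            {"AttributeName": "product_id", "AttributeType": "S"},
        ],
        "KeySchema": [
            {"AttributeName": "customer_id", "KeyType": "HASH"},
            {"AttributeName": "product_id", "KeyType": "RANGE"},
        ],
        "BillingMode": "PAY_PER_REQUEST",
    },
)
print(response["ImportTableDescription"]["ImportStatus"])  # IN_PROGRESS until the import completes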

- Select the ecommerce_customer_behavior table and choose Edit PITR.

- Select Turn on point-in-time recovery and choose Save changes.
This is required for setting up zero-ETL with DynamoDB as the source.
On the Backups tab, you should see the status for PITR as On.
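If you script your table configuration, PITR can also be turned on with the AWS SDK. The following is a minimal boto3 sketch; the table name matches the one created earlier:
import boto3

dynamodb = boto3.client("dynamodb")

# Turn on point-in-time recovery (PITR), which the zero-ETL integration requires on the source table
dynamodb.update_continuous_backups(
    TableName="ecommerce_customer_behavior",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)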

- Additionally, you need to apply a table policy to allow access for the zero-ETL integration. On the Permissions tab, copy the following code under Resource-based policy for table:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TablePolicy01",
            "Effect": "Allow",
            "Principal": {
                "Service": "glue.amazonaws.com"
            },
            "Action": [
                "dynamodb:ExportTableToPointInTime",
                "dynamodb:DescribeExport",
                "dynamodb:DescribeTable"
            ],
            "Resource": "*"
        }
    ]
}
This policy grants access on all resources, which shouldn't be used in a production workload. To deploy this setup in production, restrict it to only the specific zero-ETL integration resources by adding a condition to the resource-based policy.
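If you manage table policies in code instead, the DynamoDB PutResourcePolicy API attaches the same policy. A minimal boto3 sketch, assuming a placeholder Region and account ID in the table ARN:
import json

import boto3

dynamodb = boto3.client("dynamodb")

# Placeholder table ARN; substitute your own Region and account ID
table_arn = "arn:aws:dynamodb:us-east-1:111122223333:table/ecommerce_customer_behavior"

# The same policy shown above, attached programmatically
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TablePolicy01",
            "Effect": "Allow",
            "Principal": {"Service": "glue.amazonaws.com"},
            "Action": [
                "dynamodb:ExportTableToPointInTime",
                "dynamodb:DescribeExport",
                "dynamodb:DescribeTable",
            ],
            "Resource": "*",
        }
    ],
}

dynamodb.put_resource_policy(ResourceArn=table_arn, Policy=json.dumps(policy))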
Now that you have used the Amazon S3 import method to load a CSV file and create a DynamoDB table, you can enable zero-ETL integration on the table.
Validate permission settings
To validate that the catalog permission settings are appropriate, complete the following steps:
- On the AWS Glue console, choose Databases in the navigation pane.

- Check for the database salesmarketing_XXX.

- Choose Catalog settings in the navigation pane, and save the permissions.
The following code is an example of permissions for catalog settings:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam:::root"
            },
            "Action": "glue:CreateInboundIntegration",
            "Resource": "arn:aws:glue:::database/salesmarketing_XXX"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "glue.amazonaws.com"
            },
            "Action": "glue:AuthorizeInboundIntegration",
            "Resource": "arn:aws:glue:::database/salesmarketing_XXX"
        }
    ]
}
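The console's catalog settings correspond to the Data Catalog resource policy, which can also be set through the AWS Glue API. A minimal boto3 sketch, assuming the policy JSON above (with the elided account ID filled in) is saved in a local file named catalog_policy.json:
import boto3

glue = boto3.client("glue")

# Read the catalog permissions policy from a local file
with open("catalog_policy.json") as f:
    policy_json = f.read()

# Apply it as the Data Catalog resource policy for this account
glue.put_resource_policy(PolicyInJson=policy_json)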
Now you're ready to create your zero-ETL integration.
Create a zero-ETL integration
Complete the following steps to create a zero-ETL integration:
- On the AWS Glue console, choose Zero-ETL integrations in the navigation pane.

- Choose Create zero-ETL integration to create a new configuration.

- Choose Amazon DynamoDB as the source type.

- Under Source details, choose ecommerce_customer_behavior for DynamoDB table.


- Under Target details, provide the following information:
- For AWS account, choose Use the current account.
- For Data warehouse or catalog, enter the account ID of your default catalog.
- For Target database, enter salesmarketing_XXX.
- For Target IAM role, enter datazone_usr_role_XXX.

- Under Output settings, select Unnest all fields and Use primary keys from DynamoDB tables, leave Configure target table name as the default value (ecommerce_customer_behavior), then choose Next.

- Enter zetl-ecommerce-customer-behavior for Name under Integration details, then choose Next.

- Choose Create and launch integration to launch the integration.

The status should be Creating after the integration is successfully initiated.
The status will change to Active in approximately a minute.
Verify that the SageMaker Lakehouse table exists. This process can take up to 15 minutes to complete, because the default refresh interval from DynamoDB is set to 15 minutes.
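The integration can also be created programmatically. The following is a sketch of how this might look with the AWS Glue CreateIntegration API; the ARNs are placeholders, and the console additionally wires up target resource properties (such as the target IAM role) that you would otherwise need to configure yourself:
import boto3

glue = boto3.client("glue")

# Placeholder ARNs; substitute your Region, account ID, and database name
source_arn = "arn:aws:dynamodb:us-east-1:111122223333:table/ecommerce_customer_behavior"
target_arn = "arn:aws:glue:us-east-1:111122223333:database/salesmarketing_XXX"

response = glue.create_integration(
    IntegrationName="zetl-ecommerce-customer-behavior",
    SourceArn=source_arn,
    TargetArn=target_arn,
)
print(response["Status"])  # expected to move from CREATING to ACTIVE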

Validate the SageMaker Lakehouse table
You can now query your SageMaker Lakehouse table, created through the zero-ETL integration, using various query engines. Complete the following steps to verify that you can see the table in SageMaker Unified Studio:
- Log in to the SageMaker Unified Studio portal using the single sign-on (SSO) option.

- Choose your project to view its details page.

- Choose Data in the navigation pane.

- Verify that you can see the Iceberg table in the SageMaker Lakehouse catalog. You can also confirm this from the AWS Glue side, as shown in the sketch that follows.
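The following is a quick boto3 sketch that looks up the replicated table in the AWS Glue Data Catalog, using the database and table names from this walkthrough:
import boto3

glue = boto3.client("glue")

# Look up the replicated table in the Data Catalog
table = glue.get_table(
    DatabaseName="salesmarketing_XXX",
    Name="ecommerce_customer_behavior",
)["Table"]

# Glue typically records Iceberg tables with a table_type of ICEBERG in the parameters
print(table["Name"], table.get("Parameters", {}).get("table_type"))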

Query with Athena
In this section, we show how you can use Athena to query the SageMaker Lakehouse table from SageMaker Unified Studio. On the project page, locate the ecommerce_customer_behavior table in the catalog, and on the options menu (three dots), choose Query with Athena.
This creates a SELECT query against the SageMaker Lakehouse table in a new window, and you should see the query results as shown in the following screenshot.
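The same query also runs outside of SageMaker Unified Studio through the Athena API. A minimal boto3 sketch, assuming an Athena workgroup with a query result location already configured (the workgroup name is a placeholder):
import time

import boto3

athena = boto3.client("athena")

# Start the query against the Lakehouse table
execution = athena.start_query_execution(
    QueryString="SELECT * FROM ecommerce_customer_behavior LIMIT 10",
    QueryExecutionContext={"Database": "salesmarketing_XXX"},
    WorkGroup="primary",  # placeholder workgroup
)
query_id = execution["QueryExecutionId"]

# Poll until the query reaches a terminal state
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows)} rows (including the header row)")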
Query with Amazon Redshift
You can also query the SageMaker Lakehouse table from SageMaker Unified Studio using Amazon Redshift. Complete the following steps:
- Choose the connection at the top right.
- Choose Redshift (Lakehouse) from the list of connections.
- Choose the awsdatacatalog database.
- Choose the salesmarketing schema.
- Choose the Choose button.

The results will be shown in the Amazon Redshift Query Editor.
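For programmatic access outside the query editor, the Amazon Redshift Data API can run the same query. A sketch, assuming a Redshift Serverless workgroup (the workgroup name is a placeholder) and the three-part awsdatacatalog notation for Lakehouse tables:
import boto3

redshift_data = boto3.client("redshift-data")

# Query the Lakehouse table through the linked awsdatacatalog database
response = redshift_data.execute_statement(
    WorkgroupName="my-workgroup",  # placeholder
    Database="dev",
    Sql='SELECT * FROM "awsdatacatalog"."salesmarketing_XXX"."ecommerce_customer_behavior" LIMIT 10;',
)

# The statement ID can be passed to describe_statement and get_statement_result
print(response["Id"])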
Query with Amazon EMR Serverless
You can query the Lakehouse table using Amazon EMR Serverless, which uses Apache Spark's processing capabilities. Complete the following steps:
- On the project page, choose Compute in the navigation pane.
- Choose Add compute on the Data processing tab to create an EMR Serverless compute associated with the project.

- You can create new compute resources or connect to existing resources. For this example, choose Create new compute resources.

- Choose EMR Serverless.

- Enter a compute name (for example, Sales-Marketing), choose the latest release of EMR Serverless, and choose Add compute.
It will take some time to create the compute.
You should see the status as Started for the compute. It's now ready to be used as your compute option for querying through a Jupyter notebook.
- Choose the Build menu and choose JupyterLab.
It will take some time to set up the workspace for running JupyterLab.
After the JupyterLab space is set up, you should see a page similar to the following screenshot.
- Choose the new folder icon to create a new folder.

- Name the folder lakehouse_zetl_lab.

- Navigate to the folder you just created and create a notebook under this folder.
- Choose the Python 3 (ipykernel) notebook on the Launcher tab, and rename the notebook to query_lakehouse_table.

You can observe that the notebook shows local Python as the default language and compute. The two drop-down menus just above the first cell in the Jupyter notebook show the connection type and the compute for the selected connection type.
- Choose PySpark as the connection, and choose the EMR Serverless application as the compute.

- Enter the following sample code to query the table using Spark SQL:
import sys
from pyspark.sql import SparkSession
from pyspark.sql.functions import *

# Set the current database
spark.catalog.setCurrentDatabase("salesmarketing_XXX")

# Execute the SQL query and store the results in a DataFrame
df = spark.sql("select * from ecommerce_customer_behavior limit 10")

# Display the results
df.show()

You should see the Spark DataFrame results.
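From here, you can run regular Spark transformations against the table. For example, the following small aggregation sketch counts interactions per customer; the column names are assumptions based on the sample dataset:
from pyspark.sql import functions as F

# Hypothetical aggregation over the sample schema: interactions per customer
summary = (
    spark.table("ecommerce_customer_behavior")
    .groupBy("customer_id")
    .agg(F.count("*").alias("interactions"))
    .orderBy(F.desc("interactions"))
)
summary.show(10)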
Clean up
To avoid incurring future charges, delete the SageMaker domain, DynamoDB table, AWS Glue resources, and other objects created in this post.
Conclusion
This post demonstrated how you can establish a zero-ETL connection from DynamoDB to SageMaker Lakehouse, making your data accessible in Iceberg format without building custom data pipelines. We showed how you can analyze this DynamoDB data through various compute engines within SageMaker Unified Studio. This streamlined approach alleviates traditional data movement complexities and enables more efficient data analysis workflows directly from your DynamoDB tables.
Try out this solution for your own use case, and share your feedback in the comments.
About the authors
Narayani Ambashta is an Analytics Specialist Solutions Architect at AWS, specializing in the automotive and manufacturing sector, where she guides strategic customers in developing modern data and AI strategies. With over 15 years of cross-industry experience, she specializes in big data architecture, real-time analytics, and AI/ML technologies, helping organizations implement modern data architectures. Her expertise spans lakehouse, generative AI, and IoT platforms, enabling customers to drive digital transformation initiatives. When not architecting modern solutions, she enjoys staying active through sports and yoga.
Raj Ramasubbu is a Senior Analytics Specialist Solutions Architect focused on big data and analytics and AI/ML with AWS. He helps customers architect and build highly scalable, performant, and secure cloud-based solutions on AWS. Raj provided technical expertise and leadership in building data engineering, big data analytics, business intelligence, and data science solutions for over 18 years prior to joining AWS. He has helped customers in various industry verticals, including healthcare, medical devices, life sciences, retail, asset management, car insurance, residential REIT, agriculture, title insurance, supply chain, document management, and real estate.
Yadgiri Pottabhathini is a Senior Analytics Specialist Solutions Architect in the media and entertainment sector. He specializes in assisting enterprise customers with their data and analytics cloud transformation initiatives, while providing guidance on accelerating their generative AI adoption through the development of data foundations and modern data strategies that use open-source frameworks and technologies.
Junpei Ozono is a Sr. Go-to-market (GTM) Data & AI Solutions Architect at AWS in Japan. He drives technical market creation for data and AI solutions while collaborating with global teams to develop scalable GTM motions. He guides organizations in designing and implementing innovative data-driven architectures powered by AWS services, helping customers accelerate their cloud transformation journey through modern data and AI solutions. His expertise spans modern data architectures, including data mesh, data lakehouse, and generative AI, enabling customers to build scalable and innovative solutions on AWS.