Analyzing your data catalog: Query SageMaker Catalog metadata with SQL


As your data and machine learning (ML) assets grow, tracking which assets lack documentation or monitoring asset registration trends becomes difficult without custom reporting infrastructure. You need visibility into your catalog's health without the overhead of managing ETL jobs. The metadata export feature of Amazon SageMaker provides this capability. Converting catalog asset metadata into Apache Iceberg tables stored in Amazon S3 Tables removes the need to build and maintain custom ETL pipelines. Your team can then query asset metadata directly using standard SQL tools. You can now answer governance questions about asset registration trends, classification status, and metadata completeness using standard SQL queries through tools like Amazon Athena, Amazon SageMaker Unified Studio notebooks, and BI systems.

This automated approach reduces ETL development time and gives your team visibility into catalog health, compliance gaps, and asset lifecycle patterns. The exported tables include technical metadata, business metadata, project ownership details, and timestamps, partitioned by snapshot date to enable time travel queries and historical analysis. Teams can use this capability to proactively monitor catalog health, identify gaps in documentation, track asset lifecycle patterns, and make sure that governance policies are consistently applied.

How metadata export works

After you enable the metadata export feature, it runs automatically on a daily schedule:

  1. SageMaker Catalog creates the infrastructure — An Amazon Simple Storage Service (Amazon S3) table bucket named aws-sagemaker-catalog is created with an asset_metadata namespace and an empty asset table.
  2. Daily snapshots are captured — A scheduled job runs once per day around midnight (local time per AWS Region) to export updated asset metadata.
  3. Metadata is structured and partitioned — The export captures technical metadata (resource_id, resource_type), business metadata (asset_name, business_description), project ownership details, and timestamps, partitioned by snapshot_date for query performance.
  4. Data becomes queryable — Within 24 hours, the asset table appears in Amazon SageMaker Unified Studio under the aws-sagemaker-catalog bucket and becomes accessible through Amazon Athena, Studio notebooks, or external BI tools.
  5. Teams query using standard SQL — Data teams can now answer questions like "How many assets were registered last month?" or "Which assets lack business descriptions?" without building custom ETL pipelines.

The export evaluates catalog assets and their metadata properties in the Region, converting them into Apache Iceberg table format. The data flows directly into downstream analytics operations, with no separate ETL or batch processes to maintain. The exported metadata becomes part of a queryable data lake that supports time-travel queries and historical analysis.
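The kind of governance question in step 5 can be sketched against a local mock of the exported table. This is a minimal illustration only: the asset IDs, names, and descriptions are fabricated, the schema is simplified to three of the exported columns, and SQLite stands in for Athena.

```python
import sqlite3

# In-memory mock of a few columns of the exported asset_metadata.asset table
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE asset (
        asset_id TEXT,
        resource_name TEXT,
        business_description TEXT
    )
""")
conn.executemany(
    "INSERT INTO asset VALUES (?, ?, ?)",
    [
        ("a-1", "orders", "Raw order events"),
        ("a-2", "customers", None),    # missing description
        ("a-3", "clickstream", None),  # missing description
    ],
)

# Governance question: which assets lack business descriptions?
rows = conn.execute("""
    SELECT asset_id, resource_name
    FROM asset
    WHERE business_description IS NULL
    ORDER BY asset_id
""").fetchall()
print(rows)  # [('a-2', 'customers'), ('a-3', 'clickstream')]
```

Against the real export, the same `WHERE business_description IS NULL` predicate runs unchanged in Athena on asset_metadata.asset.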

In this post, we demonstrate how to use the metadata export capability in Amazon SageMaker Catalog and perform analytics on these tables. We explore the following specific use cases:

  • Audit historical changes to analyze what an asset looked like at a specific point in time.
  • Monitor asset growth to view how the data catalog has grown over the last 30 days.
  • Track metadata improvements to see which assets gained descriptions or ownership over time.

Solution overview

Figure 1 – SageMaker Catalog export to S3 Tables

The architecture consists of three key components:

  1. Amazon SageMaker Catalog exports asset metadata daily to Amazon S3.
  2. S3 Tables stores metadata as Apache Iceberg tables in the aws-sagemaker-catalog bucket with ACID compliance and time travel.
  3. Query engines (Amazon Athena, Amazon Redshift, and Apache Spark) access metadata using standard SQL from the asset_metadata.asset table.

What metadata is exported?

SageMaker Catalog exports the following metadata in the asset_metadata.asset table:

Metadata Type | Fields | Description
Technical metadata | resource_id, resource_type_enum, account_id, region | Resource identifiers (ARN), types (GlueTable, RedshiftTable, S3Collection), and location
Namespace hierarchy | catalog, namespace, resource_name | Organizational structure for assets
Business metadata | asset_name, business_description | Human-readable names and descriptions
Ownership | extended_metadata['owningEntityId'] | Asset ownership information
Timestamps | asset_created_time, asset_updated_time, snapshot_time | Creation, last-update, and snapshot times
Custom metadata | extended_metadata['form-name.field-name'] | User-defined metadata forms as key-value pairs

The snapshot_time column supports point-in-time analysis and querying of historical catalog states.

Prerequisites

To follow along with this post, you must have an Amazon SageMaker Unified Studio domain. For domain setup instructions, refer to the SageMaker Unified Studio Getting started guide.

After you complete the prerequisites, complete the following steps.

  1. Add this policy to your IAM user or role to enable metadata export. If using SageMaker Unified Studio to query the catalog, add this policy to the AmazonSageMakerAdminIAMExecutionRole managed role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "datazone:GetDataExportConfiguration",
        "datazone:PutDataExportConfiguration"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3tables:CreateTableBucket",
        "s3tables:PutTableBucketPolicy"
      ],
      "Resource": "arn:aws:s3tables:*:*:bucket/aws-sagemaker-catalog"
    }
  ]
}
  2. Grant describe and select permissions for SageMaker Catalog with AWS Lake Formation. This step can be performed in the AWS Lake Formation console.
    1. Select Permissions -> Data permissions and choose Grant.


      Figure 2 – AWS Lake Formation grant permission

    2. Under Principal type, select Principals, IAM users and roles, and choose the AWS managed AmazonSageMakerAdminIAMExecutionRole execution role.
    3. Choose Named Data Catalog resources.
    4. Under Catalogs, search for and select :s3tablecatalog/aws-sagemaker-catalog.
    5. Under Databases, select the asset_metadata database.

      Figure 3 – AWS Lake Formation catalog, database, and table


      Figure 4 – AWS Lake Formation grant permission

    6. For Table, select asset.
    7. Under Table permissions, check Select and Describe.
    8. Choose Grant to save the permissions.

Enable data export using the AWS CLI

Configure metadata export using the PutDataExportConfiguration API. The Amazon DataZone service automatically creates an S3 table bucket named aws-sagemaker-catalog with an asset_metadata namespace and schedules a daily export job. Asset metadata is exported once daily around midnight local time per AWS Region.

The SageMaker domain identifier is available on the domain detail page in the AWS Management Console. The asset table can take up to 24 hours to appear in the S3 Tables console or the Data tab in SageMaker Unified Studio.

Use the following AWS CLI command to enable SageMaker Catalog export:

aws datazone put-data-export-configuration --domain-identifier <domain-identifier> --region <region> --enable-export

Use this AWS CLI command to validate that the configuration is enabled:

aws datazone get-data-export-configuration --domain-identifier <domain-identifier> --region <region>
{
    "isExportEnabled": true,
    "status": "COMPLETED",
    "s3TableBucketArn": "arn:aws:s3tables:::bucket/aws-sagemaker-catalog",
    "createdAt": "2025-11-26T18:24:02.150000+00:00",
    "updatedAt": "2026-02-23T19:33:40.987000+00:00"
}
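When scripting this validation step, the JSON response can be checked programmatically before proceeding. A minimal sketch using only the Python standard library; the response body here simply mirrors the example output above.

```python
import json

# Example response from `aws datazone get-data-export-configuration`
# (copied from the sample output above; real values vary per account)
response_text = """
{
    "isExportEnabled": true,
    "status": "COMPLETED",
    "s3TableBucketArn": "arn:aws:s3tables:::bucket/aws-sagemaker-catalog",
    "createdAt": "2025-11-26T18:24:02.150000+00:00",
    "updatedAt": "2026-02-23T19:33:40.987000+00:00"
}
"""
config = json.loads(response_text)

# Fail fast if the export is not enabled and completed
assert config["isExportEnabled"] is True
assert config["status"] == "COMPLETED"
print(config["s3TableBucketArn"])
```

In practice you would pipe the CLI output into such a check (for example via `subprocess.run`) rather than paste it inline.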

Access the exported asset table

  1. Navigate to Amazon SageMaker domains in the AWS Management Console.
  2. Select your domain and choose Open.

    Figure 5 – Open Amazon SageMaker Unified Studio

  3. In SageMaker Unified Studio, choose a project from the Select a project dropdown list.
  4. To query SageMaker Catalog data, select Build in the menu bar and then choose Query Editor. To create a new project, follow the instructions in the Amazon SageMaker Unified Studio User Guide.

    Figure 6 – Open SageMaker Unified Studio Query Editor

The asset_metadata.asset table is available in the Data explorer. Use the Data explorer to view the schema and query the data for analytics.

  1. Expand Catalogs in the Data explorer. Then, select and expand s3tablecatalog, aws-sagemaker-catalog, asset_metadata, and asset.
  2. Test querying the catalog with SELECT * FROM asset_metadata.asset LIMIT 10;.

Figure 7 – Query SageMaker catalog

Queries for observability and analytics

With setup complete, run queries to gain insights into catalog usage and changes. To monitor asset growth and view how the data catalog has grown over the last 5 days:

SELECT
    DATE(snapshot_time) AS date,
    COUNT(*) AS total_assets
FROM asset_metadata.asset
WHERE DATE(snapshot_time) >= CURRENT_DATE - INTERVAL '5' DAY
GROUP BY DATE(snapshot_time)
ORDER BY date DESC;


Figure 8 – Query asset growth
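The per-day aggregation can be reproduced locally against a mock of the exported table to see the shape of the result. The asset IDs, snapshot times, and counts are fabricated, and SQLite stands in for Athena (so the `INTERVAL` date filter is omitted here).

```python
import sqlite3

# In-memory mock of asset_metadata.asset: one row per asset per daily snapshot
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE asset (asset_id TEXT, snapshot_time TEXT)")
conn.executemany(
    "INSERT INTO asset VALUES (?, ?)",
    [
        ("a-1", "2026-03-07T00:05:00"), ("a-2", "2026-03-07T00:05:00"),
        ("a-1", "2026-03-08T00:05:00"), ("a-2", "2026-03-08T00:05:00"),
        ("a-3", "2026-03-08T00:05:00"),  # new asset registered on day 2
    ],
)

# Same shape as the Athena query: total assets per snapshot date
rows = conn.execute("""
    SELECT DATE(snapshot_time) AS date, COUNT(*) AS total_assets
    FROM asset
    GROUP BY DATE(snapshot_time)
    ORDER BY date DESC
""").fetchall()
print(rows)  # [('2026-03-08', 3), ('2026-03-07', 2)]
```

Because every daily snapshot re-exports all assets, the per-date count is the total catalog size on that date, and the day-over-day difference is net growth.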

Use the catalog to track metadata changes and determine which assets gained descriptions or ownership over time. Use this query to identify assets that gained business descriptions over the past 5 days by comparing today's snapshot with the earlier snapshot.

SELECT
    t.asset_id,
    t.resource_name,
    p.business_description as description_before,
    t.business_description as description_now
FROM asset_metadata.asset t
JOIN asset_metadata.asset p ON t.asset_id = p.asset_id
WHERE DATE(t.snapshot_time) = CURRENT_DATE
    AND DATE(p.snapshot_time) = CURRENT_DATE - INTERVAL '5' DAY
    AND p.business_description IS NULL
    AND t.business_description IS NOT NULL;
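The self-join above compares two snapshots of the same table. A local sketch with fixed snapshot dates (fabricated data, SQLite in place of Athena) shows the mechanics:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE asset (
        asset_id TEXT, resource_name TEXT,
        business_description TEXT, snapshot_time TEXT
    )
""")
conn.executemany(
    "INSERT INTO asset VALUES (?, ?, ?, ?)",
    [
        # Earlier snapshot: no description yet
        ("a-1", "orders", None, "2026-03-03"),
        # Later snapshot: description added in the interim
        ("a-1", "orders", "Raw order events", "2026-03-08"),
    ],
)

# Assets that gained a description between the two snapshots;
# t = today's snapshot, p = the prior snapshot, as in the Athena query
rows = conn.execute("""
    SELECT t.asset_id, t.resource_name,
           p.business_description AS description_before,
           t.business_description AS description_now
    FROM asset t
    JOIN asset p ON t.asset_id = p.asset_id
    WHERE DATE(t.snapshot_time) = '2026-03-08'
      AND DATE(p.snapshot_time) = '2026-03-03'
      AND p.business_description IS NULL
      AND t.business_description IS NOT NULL
""").fetchall()
print(rows)  # [('a-1', 'orders', None, 'Raw order events')]
```

The literal dates here take the place of the `CURRENT_DATE` arithmetic in the Athena version.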

Examine asset values at a specific point in time using this query to retrieve metadata from any snapshot date.

SELECT
     asset_id,
     resource_name,
     business_description,
     extended_metadata['owningEntityId'] as owner,
     snapshot_time
FROM asset_metadata.asset
WHERE asset_id = 'your-asset-id'
     AND DATE(snapshot_time) = DATE('2025-11-26');

Clean up resources

To avoid ongoing costs, clean up the resources created in this walkthrough:

  1. Disable metadata export:

Disable the daily metadata export to stop new snapshots (the --no-enable-export form follows the standard AWS CLI convention for negating the --enable-export boolean flag):

aws datazone put-data-export-configuration \
  --domain-identifier <domain-identifier> \
  --no-enable-export

  2. Delete S3 Tables resources:

Optionally, delete the S3 Tables namespace containing the exported metadata to remove historical snapshots and stop storage charges. For instructions on how to delete S3 tables, see Deleting an Amazon S3 table in the Amazon Simple Storage Service User Guide.

Conclusion

In this post, you enabled the metadata export feature of SageMaker Catalog and used SQL queries to gain visibility into your asset inventory. The feature converts asset metadata into Apache Iceberg tables partitioned by snapshot date, so you can perform time-travel queries, monitor catalog growth, track metadata completeness, and audit historical asset states. This provides a repeatable, low-overhead way to maintain catalog health and meet governance requirements over time.

To learn more about Amazon SageMaker Catalog, see the Amazon SageMaker Catalog documentation. To explore Apache Iceberg table formats and time-travel queries, see the Amazon S3 Tables documentation.


About the Authors

Photo of Author Ramesh Singh

Ramesh is a Senior Product Manager Technical (External Services) at AWS in Seattle, Washington, currently with the Amazon SageMaker team. He's passionate about building high-performance ML/AI and analytics products that help enterprise customers achieve their critical goals using cutting-edge technology.

Photo of Author Pradeep Misra

Pradeep is a Principal Analytics and Applied AI Solutions Architect at AWS. He's passionate about solving customer challenges using data, analytics, and Applied AI. Outside of work, he likes exploring new places and playing badminton with his family. He also likes doing science experiments, building LEGOs, and watching anime with his daughters.

Photo of Author Rohith Kayathi

Rohith is a Senior Software Engineer at Amazon Web Services (AWS) working with the Amazon SageMaker team. He leads enterprise data catalog, generative AI–powered metadata curation, and lineage features. He's passionate about building large-scale distributed systems, solving complex problems, and setting the bar for engineering excellence for his team.

Photo of Author Steve Phillips

Steve is a Principal Technical Account Manager and Analytics specialist at AWS in the North America region. Steve currently focuses on data warehouse architectural design, data lakes, data ingestion pipelines, and cloud distributed architectures.
