This post is co-written with Srinivasa Are, Principal Cloud Architect, and Karthick Shanmugam, Head of Architecture, Verisk EES (Extreme Event Solutions).
Verisk, a catastrophe modeling SaaS provider serving insurance and reinsurance companies worldwide, cut aggregation processing time from hours to minutes while reducing storage costs by implementing a lakehouse architecture with Amazon Redshift and Apache Iceberg. If you’re managing billions of catastrophe modeling records across hurricanes, earthquakes, and wildfires, this approach eliminates the traditional compute-versus-cost trade-off by separating storage from processing power.
In this post, we examine Verisk’s lakehouse implementation, focusing on four architectural decisions that delivered measurable improvements:
- Execution performance: Sub-hour aggregations across billions of records replaced long batch processes
- Storage efficiency: Columnar Parquet compression reduced costs without sacrificing response time
- Multi-tenant security: Schema-level isolation enforced complete data separation between insurance clients
- Schema flexibility: Apache Iceberg supports column additions and historical data access without downtime
The architecture separates compute (Amazon Redshift) from storage (Amazon S3), demonstrating how to scale from billions to trillions of records without proportional cost increases.
Current state and challenges
In Verisk’s world of risk analytics, data volumes grow at exponential rates. Every day, risk modeling systems generate billions of rows of structured and semi-structured data. Each record captures a micro-slice of exposure, event probability, or loss correlation. To convert this raw information into actionable insights at scale, experts need a data engine designed for high-volume analytical workloads.
Each Verisk model run produces detailed, high-granularity outputs that include billions of simulated risk elements and event-level outcomes, multi-year loss projections across thousands of perils, and deep relational joins across exposure, policy, and claims datasets.
Running meaningful aggregations (such as loss by region, peril, or occupancy type) over such high volumes created performance challenges.
Verisk needed to build a SQL service that could aggregate at scale in the fastest time possible and integrate into their broader AWS solutions, requiring a serverless, open, and performant SQL engine capable of handling billions of records efficiently.
Prior to this cloud-based launch, Verisk’s risk analytics infrastructure ran on an on-premises architecture centered around relational database clusters. Processing nodes shared access to centralized storage volumes through dedicated interconnect networks. This architecture required capital investment in server hardware, storage arrays, and networking equipment. The deployment model required manual capacity planning and provisioning cycles, limiting the organization’s ability to respond to fluctuating workload demands. Database operations relied on batch-oriented processing windows, with analytical queries competing for shared compute resources.
Amazon Redshift and lakehouse architecture
Lakehouse architecture on AWS combines data lake storage scalability with data warehouse analytical performance in a unified design. It stores vast amounts of structured and semi-structured data in cost-effective Amazon S3 storage while retaining Amazon Redshift’s massively parallel SQL analytics.
Amazon Redshift is a fully managed, petabyte-scale cloud data warehouse service that delivers fast query performance using massively parallel processing (MPP) and columnar storage. Amazon Redshift eliminates the complexity of provisioning hardware, installing software, and managing infrastructure, keeping teams focused on deriving insights from their data rather than maintaining systems.
To meet this challenge, Verisk designed a hybrid data lakehouse architecture that combines the storage scalability of Amazon S3 with the compute power of Amazon Redshift. The following diagram shows the foundational compute and storage architecture that powers Verisk’s analytical solution.
Architecture overview
The architecture processes risk and loss data through three distinct stages within the lakehouse, with comprehensive multi-tenant delivery capabilities to maintain isolation between insurance clients.
Amazon Redshift enables retrieving data directly from S3 using standard SQL for background processing. The solution collects detailed result outputs, joins them with internal reference data, and executes aggregations over billions of rows. Concurrency scaling ensures that hundreds of background analyses using multiple serverless clusters can run simultaneous aggregation queries.
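As a rough illustration of this pattern, the following sketch uses the Amazon Redshift Data API to register an external schema over the AWS Glue Data Catalog and run an aggregation directly against Parquet files in S3. The workgroup, database, IAM role, and table names are hypothetical, not Verisk’s actual resources.

```python
import boto3

# Minimal sketch: query S3-resident Parquet with standard SQL through Amazon
# Redshift Serverless. All resource names below are illustrative.
rsd = boto3.client("redshift-data")

# One-time setup: expose a Glue Data Catalog database as an external schema.
setup_sql = """
CREATE EXTERNAL SCHEMA IF NOT EXISTS risk_lake
FROM DATA CATALOG DATABASE 'risk_model_outputs'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole';
"""

# Aggregate billions of rows in place, without loading them into the warehouse.
aggregation_sql = """
SELECT region, peril, SUM(ground_up_loss) AS total_loss
FROM risk_lake.location_losses
GROUP BY region, peril;
"""

for sql in (setup_sql, aggregation_sql):
    resp = rsd.execute_statement(
        WorkgroupName="risk-analytics",  # Redshift Serverless workgroup
        Database="dev",
        Sql=sql,
    )
    # Statements run asynchronously; poll describe_statement(Id=...) for status.
    print(resp["Id"])
```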
The following diagram shows the architecture designed by Verisk.
Data ingestion and storage foundation
Verisk stores risk model outputs, location-level losses, exposure tables, and model data in columnar Parquet format within Amazon S3. An AWS Glue crawler extracts metadata from S3 and feeds it into the lakehouse processing pipeline.
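A crawler of this kind could be provisioned as in the hedged sketch below; the crawler, role, database, and bucket names are hypothetical.

```python
import boto3

glue = boto3.client("glue")

# Illustrative setup: crawl Parquet model outputs in S3 and register their
# schemas in the Glue Data Catalog. All names are hypothetical.
glue.create_crawler(
    Name="risk-model-outputs-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="risk_model_outputs",
    Targets={"S3Targets": [{"Path": "s3://example-risk-bucket/model-outputs/"}]},
)

# Run the crawler; discovered tables become queryable from Redshift and Athena.
glue.start_crawler(Name="risk-model-outputs-crawler")
```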
For versioned datasets like exposure tables, Verisk adopted Apache Iceberg, an open table format that addresses schema evolution and historical versioning requirements:
- Transactional consistency: ACID-compliant (atomicity, consistency, isolation, durability) operations maintain consistent snapshots across concurrent updates
- Time travel: Snapshot-based time travel enables data retrieval at earlier points in time for regulatory compliance, audit trails, and model comparison, with rollback capabilities
- Schema evolution: Columns can be added, dropped, or renamed without downtime or dataset rewrites
- Incremental processing: Metadata tracking allows processing only changed data, reducing refresh times
- Query efficiency: Hidden partitioning and file-level statistics reduce I/O operations, improving aggregation performance
- Engine interoperability: The same tables are accessible across Amazon Redshift, Amazon Athena, Spark, and other engines without data duplication
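To make the schema evolution and time travel capabilities above concrete, here is a minimal sketch using the open source pyiceberg library against the Glue catalog. The database, table, and column names are hypothetical, and the post does not specify which client Verisk uses for these operations.

```python
from pyiceberg.catalog import load_catalog
from pyiceberg.types import DoubleType

# Hedged sketch with pyiceberg and the AWS Glue catalog; table and column
# names are hypothetical.
catalog = load_catalog("glue", **{"type": "glue"})
table = catalog.load_table("risk_db.exposure")

# Schema evolution: add a column with no downtime and no dataset rewrite.
with table.update_schema() as update:
    update.add_column("rebuild_cost", DoubleType())

# Time travel: list historical snapshots, then scan the table as of one of them.
for entry in table.history():
    print(entry.snapshot_id, entry.timestamp_ms)

earliest_snapshot = table.history()[0].snapshot_id
rows = table.scan(snapshot_id=earliest_snapshot).to_arrow()
```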
By adopting Apache Iceberg as the open table format for this solution, Verisk built a foundation that combines S3’s cost-effectiveness with robust data management.
Three-stage processing pipeline
This pipeline orchestrates data flow from raw inputs to analytical outputs through three sequential stages: pre-processing prepares and cleanses data, modeling applies risk calculations and analytics, and post-processing aggregates results for delivery, as sketched in the code example after the following list.
- Stage 1: Pre-processing transforms raw data into structured formats using Iceberg tables and Parquet files, then processes it through Amazon Redshift Serverless for initial data cleansing and transformation.
- Stage 2: Modeling runs as a process built on AWS Batch that takes the pre-processed data and applies advanced analytics and feature engineering. Results are stored in Iceberg tables and Parquet files.
- Stage 3: Post-processing uses Amazon Redshift Serverless to aggregate results and produce the final analytical outputs as Parquet files, ready for consumption by end users.
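The following condensed sketch shows how these stages might be chained with boto3. The job names, workgroup, stored procedure, and S3 paths are hypothetical, and polling for stage completion between steps is omitted for brevity.

```python
import boto3

batch = boto3.client("batch")
rsd = boto3.client("redshift-data")

def run_sql(sql: str) -> str:
    """Submit a statement to a (hypothetical) Redshift Serverless workgroup."""
    resp = rsd.execute_statement(
        WorkgroupName="risk-analytics", Database="dev", Sql=sql
    )
    return resp["Id"]

# Stage 1: pre-processing — cleanse raw inputs into the curated layer.
run_sql("CALL preprocess_exposures();")  # hypothetical stored procedure

# Stage 2: modeling — hand the pre-processed data to an AWS Batch job.
batch.submit_job(
    jobName="catastrophe-model-run",
    jobQueue="modeling-queue",
    jobDefinition="risk-model:1",
)

# Stage 3: post-processing — aggregate model output and unload Parquet results.
run_sql("""
UNLOAD ('SELECT region, peril, SUM(loss) FROM model_output GROUP BY 1, 2')
TO 's3://example-results-bucket/aggregates/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS PARQUET;
""")
```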
Multi-tenant delivery system
The architecture delivers results to multiple insurance clients (tenants) through a secure, isolated delivery system that includes:
- Amazon QuickSight dashboards for visualization and business intelligence
- Amazon Redshift as the data warehouse for querying aggregated results
- AWS Batch for modeling processing
- AWS Secrets Manager to manage tenant-specific credentials
- Tenant roles implementing role-based access control to provide data isolation between clients
Summarized results are exposed to underwriting teams through Amazon QuickSight dashboards or downstream APIs.
Multi-tenant security architecture
A critical requirement for Verisk’s SaaS solution was supporting complete data and compute isolation between different insurance and reinsurance clients. Verisk implemented a comprehensive multi-tenant security model that provides isolation while maintaining operational efficiency.
The solution implements an isolation strategy in two layers, combining logical and physical separation. At the logical layer, each client’s data resides in dedicated schemas with access controls that prevent cross-tenant operations. Amazon Redshift metadata security restricts tenants from discovering or accessing other clients’ schemas, tables, or database objects through system catalogs. At the physical layer, for larger deployments, dedicated Amazon Redshift clusters provide workload separation at the compute level, preventing one tenant’s analytical operations from impacting another’s performance. This dual approach meets regulatory requirements for data isolation in the insurance industry through schema-level isolation within clusters for standard deployments and full compute separation across dedicated clusters for larger-scale implementations.
The implementation uses stored procedures to automate security configuration, maintaining consistent application of access controls across tenants. This defense-in-depth approach combines schema-level isolation, system catalog lockdown, and selective permission grants to create a layered security model.
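As an illustration of this automation, the following hedged sketch defines a tenant-onboarding stored procedure that creates a dedicated schema and role per tenant and grants access only within that schema. The procedure, role, and workgroup names are hypothetical, not Verisk’s actual implementation.

```python
import boto3

rsd = boto3.client("redshift-data")

# Hypothetical onboarding procedure: one schema and one role per tenant, with
# grants scoped so a tenant role can only see its own schema.
create_proc = """
CREATE OR REPLACE PROCEDURE onboard_tenant(tenant VARCHAR)
AS $$
BEGIN
  EXECUTE 'CREATE SCHEMA IF NOT EXISTS ' || tenant;
  EXECUTE 'CREATE ROLE ' || tenant || '_role';
  EXECUTE 'GRANT USAGE ON SCHEMA ' || tenant || ' TO ROLE ' || tenant || '_role';
  EXECUTE 'GRANT SELECT ON ALL TABLES IN SCHEMA ' || tenant
       || ' TO ROLE ' || tenant || '_role';
END;
$$ LANGUAGE plpgsql;
"""

rsd.execute_statement(WorkgroupName="risk-analytics", Database="dev", Sql=create_proc)
rsd.execute_statement(
    WorkgroupName="risk-analytics",
    Database="dev",
    Sql="CALL onboard_tenant('tenant_a');",
)
```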
For data architects interested in implementing similar multi-tenant architectures, see Implementing Metadata Security for a Multi-Tenant Amazon Redshift Environment.
Implementation considerations
Verisk’s architecture highlights three decision points for companies building similar systems.
When to adopt open table formats
Apache Iceberg proved essential for datasets requiring schema evolution and historical versioning. Data engineers should evaluate open table formats when analytical workloads span multiple engines (Amazon Redshift, Amazon Athena, Spark) or when regulatory requirements demand point-in-time data reconstruction.
Multi-tenant isolation strategy
Schema-level separation combined with metadata security prevented cross-tenant data discovery without performance overhead. This approach scales more efficiently than database-per-tenant architectures while meeting insurance industry compliance requirements. Security experts should implement isolation controls during initial deployment rather than retrofitting them later.
Stored procedures or application logic
Redshift stored procedures standardized aggregation calculations across teams and built dynamic SQL queries. This approach works best when business logic changes frequently or when multiple teams need different aggregation dimensions on the same datasets.
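Here is a hedged sketch of that pattern: a stored procedure that assembles dynamic SQL so callers can pick the aggregation dimension at runtime. The procedure and table names are hypothetical, and production code would also validate the dimension argument against an allowlist to avoid SQL injection.

```python
import boto3

rsd = boto3.client("redshift-data")

# Hypothetical procedure that aggregates losses by a caller-chosen dimension
# (for example region, peril, or occupancy type) using dynamic SQL and returns
# the rows through a cursor.
create_proc = """
CREATE OR REPLACE PROCEDURE aggregate_losses(dimension IN VARCHAR, result INOUT refcursor)
AS $$
BEGIN
  OPEN result FOR EXECUTE
    'SELECT ' || dimension || ', SUM(loss) AS total_loss '
    || 'FROM model_output GROUP BY ' || dimension;
END;
$$ LANGUAGE plpgsql;
"""

rsd.execute_statement(WorkgroupName="risk-analytics", Database="dev", Sql=create_proc)

# Callers open the cursor inside a transaction, for example:
#   BEGIN; CALL aggregate_losses('region', 'agg_cur'); FETCH ALL FROM agg_cur; COMMIT;
```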
Conclusion
Verisk’s implementation of Amazon Redshift Serverless with Apache Iceberg and a lakehouse architecture shows how separating compute from storage addresses enterprise analytics challenges at billion-record scale. By combining cost-effective Amazon S3 storage with Redshift’s massively parallel SQL compute, Verisk achieved fast aggregations across billions of catastrophe modeling records, reduced storage costs through efficient Parquet compression, and eliminated ingestion delays. Underwriting teams can now run ad hoc analyses during business hours rather than waiting for long-running batch jobs. The combination of open standards like Apache Iceberg, serverless compute with Amazon Redshift, and multi-tenant security provides the scalability, performance, and cost efficiency needed for modern analytics workloads.
Verisk’s journey has positioned them to scale confidently into the future, processing not just billions but potentially trillions of records as their model resolution increases.
About the authors

