Technical Approaches and Practical Tradeoffs


(Chuysang/Shutterstock)

In the world of monitoring software, how you process telemetry data can significantly impact your ability to derive insights, troubleshoot issues, and manage costs.

There are two primary use cases for how telemetry data is leveraged:

  • Radar (monitoring of systems) usually falls into the bucket of known knowns and known unknowns. This leads to scenarios where some data is almost 'pre-determined' to be collected and plotted in a certain way, because we know what we are looking for.
  • Blackbox (debugging, RCA, etc.), on the other hand, is more about unknown unknowns: the things we don't yet know and may have to hunt for in order to build an understanding of the system.

Understanding Telemetry Data Challenges

Before diving into processing approaches, it's important to understand the unique challenges of telemetry data:

  • Volume: Modern systems generate massive amounts of telemetry data
  • Velocity: Data arrives in continuous, high-throughput streams
  • Variety: Multiple formats across metrics, logs, traces, profiles, and events
  • Time-sensitivity: Value often decreases with age
  • Correlation needs: Data from different sources must be linked together

These characteristics create specific considerations when choosing between ETL and ELT approaches.

 

ETL for Telemetry: Transform-First Architecture

Technical Architecture

In an ETL approach, telemetry data undergoes transformation before reaching its final destination:

Fig. 1 — ETL for Telemetry

A typical implementation stack might include:

  • Collection: OpenTelemetry, Prometheus, Fluent Bit
  • Transport: Kafka, Kinesis, or an in-memory buffer as the buffering layer
  • Transformation: Stream processing
  • Storage: Time-series databases (Prometheus), specialized indices, or object storage (S3)

Key Technical Components

  1. Aggregation Strategies

Pre-aggregation significantly reduces data volume and query complexity. A typical pre-aggregation flow looks like this:

Fig. 2 — Aggregation Strategies

This transformation condenses raw data into 5-minute summaries, dramatically reducing storage requirements and improving query performance.

Example: For a gaming application handling millions of requests per day, raw request latency metrics (potentially billions of data points) can be grouped by service and endpoint, then aggregated into 5-minute (or 1-minute) windows. A single API call that generates 100 latency data points per second (8.64 million per day) is reduced to just 288 aggregated entries per day (one per 5-minute window), while still preserving the critical p50/p90/p99 percentiles needed for SLA monitoring.
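To make the flow concrete, here is a minimal pre-aggregation sketch in Python using pandas; the column names (service, endpoint, latency_ms) and the synthetic input are illustrative assumptions, not a specific pipeline's schema:

```python
# Minimal 5-minute pre-aggregation sketch (pandas). Column names and the
# synthetic input data are assumptions for illustration.
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "timestamp": pd.date_range("2023-11-07 14:00", periods=6000, freq="100ms"),
    "service": "payments",
    "endpoint": "/checkout",
    "latency_ms": np.random.lognormal(mean=4.0, sigma=0.5, size=6000),
})

# Condense raw points into one row per service/endpoint/5-minute window,
# keeping the percentiles needed for SLA monitoring.
agg = (
    raw.set_index("timestamp")
       .groupby(["service", "endpoint", pd.Grouper(freq="5min")])["latency_ms"]
       .agg(count="count",
            p50=lambda s: s.quantile(0.50),
            p90=lambda s: s.quantile(0.90),
            p99=lambda s: s.quantile(0.99))
       .reset_index()
)
print(agg)
```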

  2. Cardinality Management

High-cardinality metrics can break time-series databases. The cardinality management process follows this pattern:

Fig. 3 — Cardinality Management

Effective strategies include:

  • Label filtering and normalization
  • Strategic aggregation of specific dimensions
  • Hashing techniques for high-cardinality values while preserving query patterns

Example: A microservice monitoring HTTP requests includes user IDs and request paths in its metrics. With 50,000 daily active users and thousands of unique URL paths, this creates millions of unique label combinations. The cardinality management system filters out user IDs entirely (configurable; too high cardinality), normalizes URL paths by replacing dynamic segments with placeholders (e.g., /users/123/profile becomes /users/{id}/profile), and applies consistent categorization to errors. This reduces unique time series from millions to hundreds, allowing the time-series database to function efficiently.
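A sketch of that normalization step in Python; the drop list and path rules below are hypothetical configuration, not a particular product's API:

```python
# Label normalization sketch: drop configured high-cardinality labels and
# replace dynamic URL segments with placeholders. Rules are illustrative.
import re

DROP_LABELS = {"user_id"}  # assumed configurable drop list

PATH_RULES = [
    (re.compile(r"/users/\d+"), "/users/{id}"),
    (re.compile(r"/orders/[0-9a-f-]{36}"), "/orders/{uuid}"),
]

def normalize_labels(labels: dict) -> dict:
    out = {k: v for k, v in labels.items() if k not in DROP_LABELS}
    if "path" in out:
        for pattern, placeholder in PATH_RULES:
            out["path"] = pattern.sub(placeholder, out["path"])
    return out

print(normalize_labels({"path": "/users/123/profile", "user_id": "123", "status": "500"}))
# -> {'path': '/users/{id}/profile', 'status': '500'}
```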

  3. Real-time Enrichment

Adding context to metrics during the transformation phase involves integrating external data sources:

Fig. 4 — Real-time Enrichment

This process adds critical business and operational context to raw telemetry data, enabling more meaningful analysis and alerting based on service importance, customer impact, and other factors beyond purely technical metrics.

Example: A payment processing service emits basic metrics like request counts, latencies, and error rates. The enrichment pipeline joins this telemetry with service registry data to add metadata about the service tier (critical), SLO targets (99.99% availability), and team ownership (payments-team). It then incorporates business context to tag transactions with their type (subscription renewal, one-time purchase, refund) and estimated revenue impact. When an incident occurs, alerts are automatically prioritized based on business impact rather than just technical severity, and routed to the right team with rich context.
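A minimal enrichment sketch along those lines; the registry contents and the paging rule are invented for illustration:

```python
# Enrichment sketch: join raw metric events against an in-memory service
# registry and derive an alert priority from business context.
SERVICE_REGISTRY = {
    "payments": {"tier": "critical", "slo_availability": 0.9999, "owner": "payments-team"},
}

def enrich(event: dict, registry: dict) -> dict:
    enriched = {**event, **registry.get(event["service"], {})}
    # Prioritize on business impact, not just technical severity (assumed rule).
    if enriched.get("tier") == "critical" and event.get("error_rate", 0) > 0.01:
        enriched["alert_priority"] = "page"
    else:
        enriched["alert_priority"] = "ticket"
    return enriched

print(enrich({"service": "payments", "error_rate": 0.02}, SERVICE_REGISTRY))
```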

Technical Advantages

  • Query performance: Pre-calculated aggregates eliminate computation at query time
  • Predictable resource usage: Both storage and query compute are controlled
  • Schema enforcement: Data conformity is guaranteed before storage
  • Optimized storage formats: Data can be stored in formats optimized for specific access patterns

Technical Limitations

  • Loss of granularity: Some detail is permanently lost
  • Schema rigidity: Adapting to new requirements requires pipeline changes
  • Processing overhead: Real-time transformation adds complexity and resource demands
  • Transformation-time decisions: Analysis paths must be known in advance

ELT for Telemetry: Raw Storage with Flexible Transformation

Technical Architecture

ELT architecture prioritizes getting raw data into storage, with transformations performed at query time:

Fig. 5 — ELT for Telemetry

A typical implementation might include:

  • Collection: OpenTelemetry, Prometheus, Fluent Bit
  • Transport: Direct ingestion without complex processing
  • Storage: Object storage (S3, GCS) or data lakes in Parquet format
  • Transformation: SQL engines (Presto, Athena), Spark jobs, or specialized OLAP systems

Key Technical Components

  1. Efficient Raw Storage

Optimizing for long-term storage of raw telemetry requires careful consideration of file formats and storage organization:

Fig. 6 — Efficient Raw Storage

This approach leverages columnar storage formats like Parquet with appropriate compression (ZSTD for traces, Snappy for metrics), dictionary encoding, and optimized column indexing based on common query patterns (trace_id, service, time ranges).

Example: A cloud-native application generates 10TB of trace data daily across its distributed services. Instead of discarding or heavily sampling this data, complete trace information is captured using OpenTelemetry collectors and converted to Parquet format with ZSTD compression. Key fields like trace_id, service name, and timestamp are indexed for efficient querying. This approach reduces the storage footprint by 85% compared to raw JSON while maintaining query performance. When a critical customer-impacting issue occurred, engineers were able to access complete trace data from three months prior, identifying a subtle pattern of intermittent failures that would have been lost with traditional sampling.
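As a sketch of the storage step, here is how span records might be written to Parquet with ZSTD compression and dictionary encoding using pyarrow; the schema is an assumption based on the fields mentioned above:

```python
# Write raw spans to Parquet with ZSTD compression (pyarrow sketch).
import pyarrow as pa
import pyarrow.parquet as pq

spans = pa.table({
    "trace_id": ["a1b2", "a1b2", "c3d4"],
    "service": ["checkout", "payments", "checkout"],
    "start_time_ms": pa.array([1699365600000, 1699365600120, 1699365601000], pa.int64()),
    "duration_ms": [120.0, 45.5, 300.2],
})

pq.write_table(
    spans,
    "traces.parquet",
    compression="zstd",          # high ratio for verbose trace payloads
    use_dictionary=["service"],  # dictionary-encode low-cardinality columns
)
```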

  2. Partitioning Strategies

Effective partitioning is critical for query performance against raw telemetry. A well-designed partitioning strategy follows this hierarchy:

Fig. 7 — Partitioning Strategies

This partitioning approach enables efficient time-range queries while also allowing filtering by service and tenant, which are common query dimensions. The partitioning strategy is designed to:

  • Optimize for time-based retrieval (the most common query pattern)
  • Enable efficient tenant isolation in multi-tenant systems
  • Allow service-specific queries without scanning all data
  • Separate telemetry types so each can use an optimized storage format

Example: A SaaS platform with 200+ enterprise customers uses this partitioning strategy for its observability data lake. When a high-priority customer reports an issue that occurred last Tuesday between 2-4pm, engineers can immediately query just those specific partitions: /year=2023/month=11/day=07/hour=1[4-5]/tenant=enterprise-x/*. This reduces the scan size from potentially petabytes to just a few gigabytes, enabling responses in seconds rather than hours. When comparing current performance against historical baselines, the time-based partitioning enables efficient month-over-month comparisons by scanning only the relevant time partitions.
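A sketch of writing data into that layout with pyarrow; the local root path stands in for an object-store bucket, and the schema is assumed:

```python
# Partitioned write sketch mirroring the /year=/month=/day=/hour=/tenant= layout.
import pyarrow as pa
import pyarrow.parquet as pq

events = pa.table({
    "year": [2023, 2023], "month": [11, 11], "day": [7, 7], "hour": [14, 15],
    "tenant": ["enterprise-x", "enterprise-y"],
    "trace_id": ["a1b2", "c3d4"],
    "duration_ms": [120.0, 45.5],
})

pq.write_to_dataset(
    events,
    root_path="telemetry-lake/traces",  # an S3/GCS URI also works here
    partition_cols=["year", "month", "day", "hour", "tenant"],
)
```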

  3. Query-time Transformations

SQL and analytical engines provide powerful query-time transformations. The query processing flow for on-the-fly analysis looks like this (see Fig. 8).

This query flow demonstrates how complex analysis, like calculating service latency percentiles, error rates, and usage patterns, can be performed entirely at query time with no pre-computation. The analytical engine applies optimizations like predicate pushdown, parallel execution, and columnar processing to achieve reasonable performance even against large raw datasets.

Fig. 8 — Query-time Transformations

Example: A DevOps team investigating a performance regression discovered it only affected premium customers using a specific feature. Using query-time transformations against the ELT data lake, they wrote a single query that first filtered to the affected time period, joined customer tier information, extracted relevant attributes about feature usage, calculated percentile response times grouped by customer segment, and identified that premium customers with high transaction volumes were experiencing degraded performance only when a specific optional feature flag was enabled. This analysis would have been impossible with pre-aggregated data, since the customer segment + feature flag dimension hadn't previously been identified as important to monitor.
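As an illustration of query-time transformation, here is a sketch using DuckDB as a stand-in analytical engine over a partitioned Parquet layout like the one above; the columns (duration_ms, status_code) are assumptions:

```python
# Query-time percentiles and error rates computed straight from raw Parquet.
import duckdb

result = duckdb.sql("""
    SELECT
        service,
        approx_quantile(duration_ms, 0.99)                  AS p99_ms,
        avg(CASE WHEN status_code >= 500 THEN 1 ELSE 0 END) AS error_rate
    FROM read_parquet('telemetry-lake/traces/**/*.parquet', hive_partitioning = true)
    WHERE year = 2023 AND month = 11 AND day = 7
      AND tenant = 'enterprise-x'
    GROUP BY service
    ORDER BY p99_ms DESC
""").df()
print(result)
```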

Technical Advantages

  • Schema flexibility: New dimensions can be analyzed without pipeline changes
  • Cost-effective storage: Object storage is significantly cheaper than specialized databases
  • Retroactive analysis: Historical data can be examined from new perspectives

Technical Limitations

  • Query performance challenges: Interactive analysis may be slow on large datasets
  • Resource-intensive analysis: Compute costs can be high for complex queries
  • Implementation complexity: Requires more sophisticated query tooling
  • Storage overhead: Raw data consumes significantly more space

Technical Implementation: The Hybrid Approach

Core Architecture Components

Implementation Strategy

  1. Dual-path processing

    Fig. 10 — Dual-path Processing

Example: A global ride-sharing platform implemented a dual-path telemetry system that routes service health metrics and customer experience indicators (ride wait times, ETA accuracy) through the ETL path for real-time dashboards and alerting. Meanwhile, all raw data, including detailed user journeys, driver activities, and application logs, flows through the ELT path to cost-effective storage. When a regional outage occurred, operations teams used the real-time dashboards to quickly identify and mitigate the immediate issue. Later, data scientists used the preserved raw data to perform a comprehensive root cause analysis, correlating multiple factors that wouldn't have been visible in pre-aggregated data alone.
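In code, dual-path processing can be as simple as fanning every event out to the raw store while a configurable subset also feeds the real-time path; the sinks and metric names below are stand-ins for Kafka topics or storage writers:

```python
# Dual-path fan-out sketch: ELT path always, ETL path for selected metrics.
REALTIME_METRICS = {"ride_wait_time", "eta_accuracy", "service_health"}

def process(event: dict, raw_sink: list, realtime_sink: list) -> None:
    raw_sink.append(event)            # ELT path: raw data is always preserved
    if event.get("metric") in REALTIME_METRICS:
        realtime_sink.append(event)   # ETL path: dashboards and alerting

raw_store, realtime_store = [], []
process({"metric": "ride_wait_time", "value": 4.2}, raw_store, realtime_store)
process({"metric": "driver_app_log", "value": "..."}, raw_store, realtime_store)
print(len(raw_store), len(realtime_store))  # 2 1
```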

  2. Smart data routing

Fig. 11 — Smart Data Routing

Example: A financial services company deployed a smart routing system for their telemetry data. All data is preserved in the data lake, but critical metrics like transaction success rates, fraud detection signals, and authentication service health metrics are immediately routed to the real-time processing pipeline. Additionally, any security-related events, such as failed login attempts, permission changes, or unusual access patterns, are immediately sent to a dedicated security analysis pipeline. During a recent security incident, this routing enabled the security team to detect and respond to an unusual pattern of authentication attempts within minutes, while the complete context of user journeys and application behavior was preserved in the data lake for subsequent forensic analysis.
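A rule-based routing sketch along those lines; the event fields, rule sets, and destination names are assumptions:

```python
# Smart routing sketch: everything goes to the lake, plus dedicated
# pipelines for critical metrics and security events.
SECURITY_EVENTS = {"failed_login", "permission_change", "unusual_access"}
CRITICAL_METRICS = {"transaction_success_rate", "fraud_alert", "auth_health"}

def route(event: dict) -> list:
    destinations = ["data_lake"]  # all data is preserved in the data lake
    if event.get("type") in SECURITY_EVENTS:
        destinations.append("security_pipeline")
    if event.get("metric") in CRITICAL_METRICS:
        destinations.append("realtime_pipeline")
    return destinations

print(route({"type": "failed_login", "user": "alice"}))
# -> ['data_lake', 'security_pipeline']
```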

  3. Unified query interface

Real-world Implementation Example

A specific engineering implementation at last9.io demonstrates how this hybrid approach works in practice:

For a large-scale Kubernetes platform with hundreds of clusters and thousands of services, we implemented a hybrid telemetry pipeline with:

  • Critical-path metrics processed through a pipeline that:

    Fig. 12 — Unified query interface

    • Performs dimensional reduction (limiting label combinations)
    • Pre-calculates service-level aggregations
    • Computes derived metrics like success rates and latency percentiles
  • Raw telemetry stored in a cost-effective data lake:
    • Partitioned by time, data type, and tenant
    • Optimized for typical query patterns
    • Compressed with appropriate codecs (ZSTD for traces, Snappy for metrics)
  • A unified query layer (sketched below) that:
    • Routes dashboard and alerting queries to pre-aggregated storage
    • Redirects exploratory and ad-hoc analysis to the data lake
    • Manages correlation queries across both systems
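A sketch of how such a query router might choose a backend; the heuristics (retention window, dimension check) and store names are invented for illustration:

```python
# Unified query routing sketch: dashboards hit the pre-aggregated store,
# exploratory or historical queries fall through to the data lake.
from datetime import datetime, timedelta

def choose_backend(query: dict) -> str:
    age = datetime.utcnow() - query["start_time"]
    if query.get("exploratory") or query.get("needs_raw_detail"):
        return "data_lake"            # ad-hoc analysis over raw data
    if age <= timedelta(days=14) and query.get("pre_aggregated_dims"):
        return "timeseries_store"     # recent data on well-known dimensions
    return "data_lake"                # older data or unanticipated dimensions

print(choose_backend({"start_time": datetime.utcnow() - timedelta(hours=1),
                      "pre_aggregated_dims": ["service", "endpoint"]}))
```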

This approach delivered both the query performance needed for real-time operations and the analytical depth required for complex troubleshooting.

Decision Framework

When architecting telemetry pipelines, these technical considerations should guide your approach:

Decision Factor               Use ETL        Use ELT
Query latency requirements    Sub-second     Can wait minutes
Data retention needs          Days/Weeks     Months/Years
Cardinality                   Low/Medium     Very high
Analysis patterns             Well-defined   Exploratory
Budget priority               Compute        Storage

Conclusion

The technical realities of telemetry data processing demand thinking beyond simple ETL vs. ELT paradigms. Engineering teams should architect tiered systems that leverage the strengths of both approaches:

  • ETL-processed data for operational use cases requiring immediate insights
  • ELT-processed data for deeper analysis, troubleshooting, and historical patterns
  • Metadata-driven routing to intelligently direct queries to the appropriate tier

This engineering-centric approach balances performance requirements with cost considerations while maintaining the flexibility required in modern observability systems.

About the author: Nishant Modak is the founder and CEO of Last9, a high cardinality observability platform company backed by Sequoia India (now PeakXV). He has been an entrepreneur and worked with large-scale companies for nearly two decades.

