Problem Statement
We are developing a metric store builder for our FP&A tool, Mantys, to enhance our customers’ data analysis capabilities. While we have successfully implemented simple metric aggregations, such as sums and averages, using cube.dev as our semantic layer, we face challenges when scaling to more complex formula metrics.
- Simple Metrics
- Formula Metrics
Example Formula Metric:
CAC Metric = OFFSET((S&M Expenses – Customer Marketing Expenses), -1) / New Customers
Solution
Our simple metrics involve straightforward aggregations from column values, which work well with cube.dev. However, for more advanced requirements, we need formula metrics that are derived from other metrics rather than directly from tables or columns. These formula metrics incorporate various analytical functions such as Offset, If, Equals, Min, Max, Sum, BoMonth, and EoMonth.
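For context, one way such a formula metric could be represented is as a definition that references other metric names instead of physical columns. The sketch below is purely illustrative; the metric names, tables, and fields are hypothetical and not our actual schema:

```python
# Hypothetical metric registry: base metrics map to a table/column aggregation,
# while formula metrics reference other metrics and analytical functions
# such as OFFSET. Names and filters are illustrative only.
metric_registry = {
    "sm_expenses": {
        "type": "simple",
        "table": "expenses",
        "column": "amount",
        "agg": "sum",
        "filter": "category = 'S&M'",
    },
    "customer_marketing_expenses": {
        "type": "simple",
        "table": "expenses",
        "column": "amount",
        "agg": "sum",
        "filter": "category = 'Customer Marketing'",
    },
    "new_customers": {
        "type": "simple",
        "table": "customers",
        "column": "customer_id",
        "agg": "count",
    },
    "cac": {
        "type": "formula",
        # OFFSET(x, -1) shifts the referenced metric by one period (previous month)
        "expression": "OFFSET(sm_expenses - customer_marketing_expenses, -1) / new_customers",
    },
}
```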
Challenges
- Scalability: Cube.dev handles simple metrics efficiently but struggles with the complexity and performance requirements of formula metrics.
- Complexity: Formula metrics require combining multiple metrics and applying advanced analytical functions, which is beyond the current capabilities of our cube.dev setup.
- Performance: Ensuring that these complex calculations perform well and return results in a timely manner is a significant challenge.
- Flexibility: We need a solution that allows for the dynamic creation and adjustment of formula metrics without extensive rework or customization.
Approaches:
- Compose SQL and apply database UDFs
- Compose dataframes and apply pandas/Spark UDFs
- Low Level Metric Store implementation using Apache Druid/Apache Pinot
Approach 1: Compose SQL and Apply Database UDFs
Pros:
- This can be run directly against the databases, since the final output is plain SQL
- No additional deployment or CI/CD pipeline automation is needed
Limitations:
- The UDFs and UDAFs natively supported by databases can be limited
- Functions like EOMonth are typically not available as built-ins (see the sketch after this list)
- In this approach we still depend on the Cube.dev semantic model for translating JSON into SQL
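On the EOMonth limitation: many databases do not ship it as a built-in, but in several dialects it can be emulated with standard date functions. The sketch below is a rough, dialect-aware renderer; the per-dialect function names are assumptions that should be verified against the target database version:

```python
def eomonth_sql(date_expr: str, dialect: str) -> str:
    """Render an end-of-month expression for a given SQL dialect.

    This is a sketch; the exact built-ins available depend on the
    database and its version, and should be verified before use.
    """
    if dialect in ("postgres", "redshift"):
        # Truncate to the first of the month, jump one month ahead, subtract one day.
        return (
            f"(date_trunc('month', {date_expr}) "
            f"+ interval '1 month' - interval '1 day')::date"
        )
    if dialect in ("mysql", "snowflake", "bigquery"):
        # These dialects expose LAST_DAY directly.
        return f"LAST_DAY({date_expr})"
    raise ValueError(f"Unsupported dialect: {dialect}")

print(eomonth_sql("order_date", "postgres"))
```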
Tools used:
- Cube (Simple Metrics)
- SQL (Complex Metrics)
Where are we in experimentation?
- We are checking whether we can store the existing metric SQL queries against the stored metric definitions and compose the CTEs dynamically (see the sketch after this list)
- We are proceeding by trial and error for now; an agency might be able to give us the right sequence
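A minimal sketch of the CTE-composition idea, assuming each stored base-metric query returns a (month, value) pair. The metric names and SQL are illustrative, and the formula is a simplified version of the CAC example above (the Customer Marketing Expenses term is omitted for brevity):

```python
# Hypothetical stored SQL for base metrics, e.g. as generated by Cube.dev.
# Each query is assumed to return (month, value).
stored_metric_sql = {
    "sm_expenses": (
        "SELECT month, SUM(amount) AS value "
        "FROM expenses WHERE category = 'S&M' GROUP BY month"
    ),
    "new_customers": (
        "SELECT month, COUNT(*) AS value FROM customers GROUP BY month"
    ),
}

def compose_formula_query(base_metrics, formula_select):
    """Wrap each stored base-metric query in a CTE and append the formula SELECT."""
    ctes = ",\n".join(
        f"{name} AS (\n  {stored_metric_sql[name]}\n)" for name in base_metrics
    )
    return f"WITH {ctes}\n{formula_select}"

# OFFSET(x, -1) expressed as a window function over the monthly series.
cac_select = (
    "SELECT n.month,\n"
    "       LAG(s.value, 1) OVER (ORDER BY s.month) / n.value AS cac\n"
    "FROM sm_expenses s\n"
    "JOIN new_customers n ON n.month = s.month"
)

print(compose_formula_query(["sm_expenses", "new_customers"], cac_select))
```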
Approach 2: Compose Dataframes and Apply Pandas/Spark UDFs
Pros:
- Since we are using programming languages like Python, we can create all the necessary UDFs and UDAFs for the metric builder.
Limitations:
- This can’t be run against the databases; instead it needs to be executed on Spark/AWS EMR clusters for large datasets, which is an infrastructure challenge
- Deployment and CI/CD pipeline automation can be complex
- Data is delayed; results are not real time
Tools used:
- Cube (Simple Metrics)
- Spark/Pandas
- GCP DataProc/AWS EMR
Where are we in experimentation?
- We are checking whether we can store the existing metric SQL queries against the stored metric definitions and compose them dynamically for this approach as well
- How do we store these effectively against the clusters? How do we store and run them against dataframes? (see the sketch after this list)
- GitHub POC link: https://github.com/mohitPanwar1996/metric-store-builder
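For the dataframe route, a minimal pandas sketch of the CAC-style formula metric, with illustrative numbers: OFFSET maps to a shift over the monthly index, and an EOMonth-style function maps to pandas month-end offsets.

```python
import pandas as pd

# Illustrative monthly base metrics, indexed by month start (values are made up).
months = pd.period_range("2024-01", periods=4, freq="M").to_timestamp()
base = pd.DataFrame(
    {
        "sm_expenses": [120_000, 130_000, 125_000, 140_000],
        "customer_marketing_expenses": [20_000, 25_000, 22_000, 30_000],
        "new_customers": [40, 50, 45, 55],
    },
    index=months,
)

# OFFSET(x, -1) == shift the series by one period (previous month's value).
net_sm = base["sm_expenses"] - base["customer_marketing_expenses"]
base["cac"] = net_sm.shift(1) / base["new_customers"]

# EOMonth-style helper: last calendar day of each period.
base["eomonth"] = base.index + pd.offsets.MonthEnd(0)

print(base)
```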
Approach 3: Low Level Metric Store Implementation using Apache Pinot/Druid
In this approach we explored Apache Pinot (StarTree) and Apache Druid (Imply) for building metrics in real time during streaming/batch ingestion, but we were not able to make progress for the following reasons.
Pros:
- Metrics are built in real time (values are reflected in real time)
Limitations:
- Streaming ingestion (most of these services are built on Kafka)
- Complex cloud setup
- Table transformations are not supported
- An OBT (One Big Table) approach is mostly recommended; joining tables is not supported
- Only a SQL REST API is supported, unlike cube.dev (check the screenshot below for reference)
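For reference on that last point: with Pinot, queries are issued through the broker’s SQL REST endpoint rather than a Cube-style JSON metrics API. A minimal sketch, assuming the standard broker endpoint and a hypothetical expenses table (broker URL and table name are placeholders):

```python
import requests

# Placeholder broker URL; the standard Pinot broker SQL endpoint is
# POST /query/sql with a JSON body containing the SQL string.
BROKER_URL = "http://localhost:8099/query/sql"

query = """
SELECT month, SUM(amount) AS sm_expenses
FROM expenses
WHERE category = 'S&M'
GROUP BY month
"""

resp = requests.post(BROKER_URL, json={"sql": query}, timeout=30)
resp.raise_for_status()
rows = resp.json()["resultTable"]["rows"]
print(rows)
```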
Tools Used:
- StarTree Cloud/Imply Cloud – Processing
- Kafka/AWS MSK/Confluent – Streaming
- Apache Presto/Trino – Querying
Where are we in experimentation?
- We connected with the Imply and StarTree cloud teams to understand their offerings
- We tried creating measures, dimensions, and metrics, but we were not able to create derived or formula-based metrics on the fly. This needs further exploration.
Request for Advice
We are seeking advice, solutions, or recommendations from other consultants, forums, and consultancies on how to effectively implement and scale formula metrics in our system. Specifically, we are interested in best practices, tools, or strategies that could help overcome the limitations we are experiencing with cube.dev in handling these advanced use cases.
Questions where we need help:
- How do we effectively implement formula-based metrics?
  a. We are checking whether we can store the existing metric SQL queries against the stored metric definitions and compose the CTEs dynamically (Approach 1)
  b. How do we store these effectively against the clusters? How do we store and run them against dataframes? (Approach 2)
- How do we implement the analytical functions within the formulas?
- How do we scale this across our multi-tenant architecture?
References:
In our R&D we have gone through various formula builders that other companies are using:
- Mosaic: https://www.mosaic.tech/metric-builder
- Drivetrain: https://docs.drivetrain.ai/formulas/formulas-in-drivetrain
- Causal: https://www.causal.app/
- IBM Formula Builder: https://www.ibm.com/docs/en/mas-cd/maximo-monitor/continuous-delivery?topic=calculations-using-simple-functions
Screenshots: