- Data Export enabled and sending data to your internal warehouse or lakehouse
- Access to a business intelligence tool (e.g., Looker, Power BI, Hex) with the ability to query Metronome tables
- At least one customer with an active commit
- Prevent customer churn from unexpected overspend
- Identify expansion opportunities from usage patterns
- Enable proactive customer success engagement
Before You Begin
Required Access & Permissions
Role needed: Admin or Analyst role in Metronome
Permissions:
- View customer contracts and commits
- Access Data Export configurations
- Create and schedule data exports
Understanding Your Data
Metronome’s Data Export feature provides daily exports of Metronome data to your warehouse of choice. The data models include customer information, granular usage and commit data, contract details, and more.
Step-by-Step Implementation
Step 1: Configure Data Export
PAID FEATURE: Data Export is a paid Metronome feature. Contact your Metronome representative to enable Data Export in your account.
- Navigate to Connections > Data Export in the Metronome App
- Click “Add Destination”
- Configure your warehouse or lakehouse destination
- Enable the following tables (note: clients may enable more than the following tables for additional use cases)
  - `customer` - All customers
  - `contracts` - All Contracts-related data models (ask your Metronome rep to enable)
  - `invoice` & `line_item` - All finalized invoices and line items
  - `draft_invoice` & `draft_line_item` - All draft invoices and line items
Step 2: Build Customer-level burn curve data models
Objective: Build a per-customer commit burn curve with actual and expected burn over time.
Instructions: To create a commit burn rate curve, we need to combine two concepts: how much of a commit should be burned, and how much of a commit has actually been burned. The following queries provide a step-by-step guide to defining both concepts using Metronome Data Export data.
GENERALIZED SQL: The SQL used here is generalized SQL and will most likely not run in your query engine as-is. Python and SQL can both be used to create these dashboards.
- Create the pacing rate from `access_schedule` in the `contracts_balances` table: The following query parses the access schedule JSON object from the `contracts_balances` table to generate the expected burn curve, i.e. how much of a commit should be burned at any given time.
- Calculate historical consumption using the invoice `line_item` table: The following query parses invoice line item data to determine how much of the commit was actually burned in a given month. This query only parses finalized invoices and will later need to be joined against the draft invoice data to build an up-to-date view of the remaining commit.
- Calculate current-month consumption using the draft invoice `draft_line_item` table: The previous query provides a historical month-over-month view of commit burn. The following query can be combined with the above to show how the commit has been burned as of the current day in the billing period. The query logic is similar to the finalized `line_item` pattern above, with the only difference being that it pulls intra-month commit data from the `draft_invoice` table.
- Combine the `access_schedule`, `line_item`, and `draft_line_item` query results and create a date spine: There are many ways to achieve this in both SQL and Python, and there are also many ways to group this data (e.g., by account or industry type), but the important things to flag are:
  - `line_item.commit_id` should be joined to `access_schedule.balance_id`
  - the date spine should be created using `schedule_start_date` and `schedule_end_date`, inclusive of the `schedule_start_date`, so that consumption is only measured against the appropriate access schedule service period
  - when `line_item.commit_id` is null, this signifies on-demand usage, which can’t be tied to a specific commit, so this usage won’t show up in the burn curve
  - cumulative sums on `line_item_total_amount_usd` within an `access_schedule` service period will generate the total burn against a commit by the end of that service period
  - `forecast_burn_curve` can be modeled based on the heuristics of your choosing
  - the below is an example of what a commit burn rate dashboard could look like
- Flag over- or under-consuming customers: Using the pacing curve and the actual burn curve, customers that are burning through their commits faster or slower than anticipated can be flagged. Many Metronome clients use this data to trigger early renewal conversations when a client is overconsuming, or product activation conversations when a client is underconsuming. A common pattern we’ve seen is to provide each Account Manager or CSM with a dashboard showing their assigned customers and the current state of consumption (over, under, on track).
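To make the combine-and-flag logic concrete, here is a minimal Python sketch of the steps above for a single commit. The field names (`schedule_start_date`, `schedule_end_date`, `line_item_total_amount_usd`) mirror the generalized SQL, the input shapes are illustrative, and the linear expected-burn heuristic is an assumption — your `forecast_burn_curve` may differ.

```python
from datetime import date

def month_spine(start: date, end: date) -> list[date]:
    """First-of-month dates from schedule_start_date (inclusive) to schedule_end_date."""
    spine, y, m = [], start.year, start.month
    while (y, m) <= (end.year, end.month):
        spine.append(date(y, m, 1))
        m += 1
        if m == 13:
            y, m = y + 1, 1
    return spine

def burn_curve(schedule, monthly_burn):
    """schedule: dict with schedule_start_date, schedule_end_date, and the commit
    amount in USD. monthly_burn: {month_start: line_item_total_amount_usd}, summed
    from finalized line_item rows plus draft_line_item rows for the current month
    (on-demand usage with a null commit_id is excluded upstream).
    Returns (month, expected_burn, cumulative_actual_burn, pacing_pct) rows."""
    spine = month_spine(schedule["schedule_start_date"], schedule["schedule_end_date"])
    total, n = schedule["amount"], len(spine)
    rows, cumulative = [], 0.0
    for i, month in enumerate(spine, start=1):
        cumulative += monthly_burn.get(month, 0.0)
        expected = total * i / n  # linear pacing heuristic (assumption)
        rows.append((month, expected, cumulative, round(100 * cumulative / expected, 1)))
    return rows

def pacing_status(pacing_pct: float) -> str:
    """Flag customers against the 80-110% on-track band used in this guide."""
    if pacing_pct > 110:
        return "over-consuming"
    if pacing_pct < 80:
        return "under-consuming"
    return "on track"
```

For example, a $60,000 commit running January through June 2024 that burns $15,000 in January and $14,000 in February is pacing at 150% and then 145% of expected, so `pacing_status` flags it as over-consuming — a candidate for an early renewal conversation.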
Understanding Your Results
Interpreting the Data
What to look for:
- Steady, predictable usage patterns (healthy adoption)
- Gradual usage increases (growing value realization)
- Burn rates between 80-110% (on-track consumption)
Warning signs:
- Sudden usage spikes in the first 30 days (potential misconfiguration)
- Burn rate greater than 150% before month 3 (unsustainable consumption)
- Repeated spike-and-drop patterns (unstable implementation)
Example patterns and actions:
- Pattern: 300% burn rate in week 1 → Action: Immediate check-in call to review implementation
- Pattern: Less than 50% burn rate after 60 days → Action: Success team to identify adoption blockers
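These pattern-to-action rules can be encoded as a simple triage function. The sketch below uses the exact cutoffs from this guide; the parameter names (`burn_rate_pct`, `account_age_days`) and the rule ordering are illustrative assumptions.

```python
def triage(burn_rate_pct: float, account_age_days: int) -> str:
    """Map a customer's burn rate and account age to a suggested action,
    using the heuristic cutoffs described in this guide. Rules are checked
    most-urgent first, so extreme early spikes win over other matches."""
    if burn_rate_pct >= 300 and account_age_days <= 7:
        return "immediate check-in call to review implementation"
    if burn_rate_pct < 50 and account_age_days >= 60:
        return "success team to identify adoption blockers"
    if burn_rate_pct > 150 and account_age_days < 90:
        return "flag unsustainable consumption"
    if 80 <= burn_rate_pct <= 110:
        return "on track"
    return "monitor"
```

A rule table like this is easy to review with CS leadership and keeps alerting behavior consistent across dashboards and Slack notifications.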
Best Practices & Tips
Optimization Recommendations
- Frequency: Review high-risk accounts daily, all accounts weekly
- Timing: Metronome exports draft invoice data daily - it’s common for teams producing these dashboards to set SLAs by which their Account Management teams can expect to see fresh consumption data
- Audience: CSM team primary, Sales team for expansion and net new opportunities
Advanced Features
- Predictive modeling using historical seasonal patterns
- Multi-dimensional health scoring incorporating support tickets
- Automated playbook triggers based on usage patterns
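As a minimal sketch of the first idea, a seasonal-naive baseline forecasts next months' burn by repeating the same month from the prior season. This is an assumption about method, not a Metronome feature, and the function name is hypothetical; real predictive models would add trend and customer-level features.

```python
def seasonal_naive_forecast(monthly_burn: list[float], horizon: int, season: int = 12) -> list[float]:
    """Forecast future monthly burn by repeating the value from the same month
    one season ago; fall back to the trailing mean when history is shorter
    than a full season."""
    if len(monthly_burn) >= season:
        n = len(monthly_burn)
        return [monthly_burn[n - season + h % season] for h in range(horizon)]
    mean = sum(monthly_burn) / len(monthly_burn)
    return [mean] * horizon
```

Even a baseline this simple makes "projected commit exhaustion date" a computable column on the burn curve dashboard.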
Data Quality Considerations
- Backdated usage can cause temporary spikes—wait 24 hours before escalating
- Price changes mid-period may distort burn calculations
- Always verify anomalies against recent configuration changes
Integration & Next Steps
Sharing & Distribution
After the above metrics have been defined, there are many ways to operationalize the data. Included below are a handful of ideas we’ve seen Metronome clients deploy to help GTM teams prioritize customer outreach and engagement.
Internal teams:
- Automated Slack posts for critical alerts
- Weekly email digest to leadership with at-risk accounts
- Real-time dashboard access for all customer-facing teams
Customer-facing:
- Monthly usage reports to customer admins
- Quarterly business reviews with trend analysis
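For the automated Slack posts, teams typically send a JSON payload to a Slack incoming webhook. The sketch below only builds the message body (the webhook URL and the message wording are hypothetical) so the alert format can be tested without sending anything.

```python
import json

def build_slack_alert(customer: str, burn_rate_pct: float, status: str) -> str:
    """Build the JSON body for a Slack incoming-webhook consumption alert.
    POST this string to your webhook URL (not shown here) to deliver it."""
    text = (
        f":rotating_light: {customer} is {status} "
        f"at {burn_rate_pct:.0f}% of expected commit burn"
    )
    return json.dumps({"text": text})
```

Keeping payload construction separate from delivery makes the alert text unit-testable and easy to reuse for the weekly email digest.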
Taking Action on Insights
- Overconsumption: Schedule immediate call to review usage patterns and optimization opportunities
- Underconsumption: Identify adoption barriers and offer implementation support
- Stable high usage: Proactive expansion conversation before commit exhaustion
Troubleshooting & FAQ
Common Issues
Q: Why does my customer show a 500% burn rate on day 2?
A: New customers often have initial configuration spikes as they’re learning to use your platform. We recommend monitoring their usage in this critical period and providing onboarding/implementation guidance to avoid these early costly mistakes.
Q: The dashboard shows a negative commit balance but the customer hasn’t been notified?
A: Metronome is a highly flexible system that allows for configurations which can result in negative usage. It’s important to ask your billing team for context if you see abnormal behavior in the commit burn curve visualizations.
Data Discrepancy Checklist
When report data doesn’t match expectations:
- ✓ Verify date ranges align with customer’s billing period
- ✓ Review for any backdated usage in past 48 hours
- ✓ Validate commit terms match contract configuration in Metronome (start date, amount, duration)