Hi Community,
I’m facing a puzzling issue where an alert consistently triggers on a stale value for its first run of the day, while the visualization included in the same alert notification correctly shows the fresh data.
The Scenario:
- Alert: Triggers when a calculated measure, percentage_change, is greater than 5%.
- Calculation: The percentage_change measure is defined in a PDT and uses a LAG() window function to compare the latest data load to the previous one (a simplified sketch of this setup follows the list).
- Timing:
  - 00:00 CEST: Raw data finishes loading into a BigQuery table.
  - 00:15 CEST: The sql_trigger inside the view fires on its scheduled check and initiates a rebuild of our PDT.
  - 00:45 CEST: The alert is scheduled to run.
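For context, here is a minimal, hypothetical sketch of how the PDT and its trigger are set up. The view, table, and field names are placeholders rather than our real model, and the LAG() calculation is simplified:

```
# Hypothetical, simplified version of our PDT (names are placeholders).
view: daily_metrics_pdt {
  derived_table: {
    # Rebuild the PDT whenever the trigger query's result changes,
    # i.e. when a new load lands in the raw BigQuery table.
    sql_trigger_value: SELECT MAX(loaded_at) FROM raw_dataset.daily_metrics ;;
    sql:
      SELECT
        load_ts,
        metric_value,
        -- Compare each load to the previous one, expressed as a
        -- percentage (e.g. 11 or -51).
        SAFE_DIVIDE(
          metric_value - LAG(metric_value) OVER (ORDER BY load_ts),
          LAG(metric_value) OVER (ORDER BY load_ts)
        ) * 100 AS percentage_change
      FROM raw_dataset.daily_metrics ;;
  }

  dimension_group: load {
    type: time
    timeframes: [raw, time, date]
    sql: ${TABLE}.load_ts ;;
  }

  measure: percentage_change {
    # In the real model, the alert and the dashboard tile filter to the
    # most recent load, so this measure returns the latest comparison.
    type: max
    sql: ${TABLE}.percentage_change ;;
  }
}
```

The alert itself is configured on a dashboard tile built from this view, with the condition "percentage_change is greater than 5".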
The Discrepancy:
At 00:45, the alert fires and the history shows it observed a stale value (e.g., 11%). However, the chart embedded in the resulting email notification correctly displays the true, fresh value (e.g., -51%).
Key Evidence:
- The Alert History for the trigger shows Source: query result, meaning the alert ran a live query against the database rather than reading from cache.
- The Query History shows a record with source = ‘alert’ AND result source = ‘query’ at 00:45:08 CEST, which indicates the alert engine triggered a live query and received fresh data from the database.
- The next entry in the Query History has source = ‘api4’ AND result source = ‘cache’ at 00:45:13 CEST, which indicates the dashboard tile in the email was rendered from the latest cached data.
My Core Question:
Why does the alert get triggered on the older, stale value (11%) from the previous load, while the chart in the same email shows the recent value (-51%)? Ideally, the trigger condition should not have been met and I should not have received the email alert at all.
Looking for insights into this mechanism and best practices for making the alerting process more robust. Thanks!