Cross-Granularity Calculations in Looker

Purpose: Implementing cross-granularity calculations in Looker is similar to working with “level of detail” calculations in Tableau, leveraging DAX filter functions in Microsoft Power BI, or using metric dimensionality in MicroStrategy. This guide highlights three approaches for building fields that escape the Explore context (e.g. a breakdown by an additional dimension) while maintaining the filter context, which allows you to compute, for example, ratios based on subtotals. The guide also elaborates on the pros and cons of each approach.

The standard example scenario for this post will be to calculate the % share of Sale Price by Order Status for each Country in a single analysis.
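To make the target concrete, here is a minimal Python sketch of the calculation each approach should produce. The sample rows are made up for illustration:

```python
# Hypothetical sample rows: (country, status, sale_price).
rows = [
    ("DE", "Complete", 100.0),
    ("DE", "Returned", 50.0),
    ("US", "Complete", 200.0),
    ("US", "Returned", 200.0),
]

# Sum Sale Price per (country, status) and per country.
by_country_status = {}
by_country = {}
for country, status, price in rows:
    key = (country, status)
    by_country_status[key] = by_country_status.get(key, 0.0) + price
    by_country[country] = by_country.get(country, 0.0) + price

# % share of Sale Price by Status within each Country.
share = {
    key: total / by_country[key[0]]
    for key, total in by_country_status.items()
}
```

The numerator lives at the (country, status) grain, while the denominator is a subtotal at the country grain only — that grain mismatch is exactly what the three approaches below solve inside Looker.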

Below, you can see a templated base view, which we will use as a starting point for implementing the three approaches.

view: base_view {
  sql_table_name: `project_id.dataset_id.table_id` ;;

  # Dimensions ---------------------------------------------

  # ID (not referenced in example)
  dimension: id {
    primary_key: yes
    type: number
    sql: ${TABLE}.id ;;
  }

  # Number (here: Sale Price)
  dimension: some_number {
    type: number
    sql: ${TABLE}.some_number ;;
  }

  # Base Dimension (here: Country)
  dimension: base_dimension {
    type: string
    sql: ${TABLE}.base_dimension ;;
  }

  # Breakdown Dimension (here: Status)
  dimension: breakdown_dimension {
    type: string
    sql: ${TABLE}.breakdown_dimension ;;
  }

  # Measures -----------------------------------------------  

  # Measure (here: ∑ Sale Price)
  measure: some_measure {
    type: sum
    sql: ${some_number} ;;
  }
}

Approach 1: Table Calculations:

Step-by-Step Guide:

This approach leverages Looker’s rich table-calculation function library as a quick way to create ad hoc metrics without implementing extensive LookML code. Because the solution is not grounded in the Looker semantic layer, it needs to be implemented in each analysis separately. While the solution may look a bit complex, you can consult this Medium post as a resource for further, more detailed explanations.

  • Add your base dimension to the analysis (here: Country). This will be used as reference for calculating the subtotal used in the denominator.

  • Add your breakdown dimension to the analysis (here: Status). This will be used to break down the measure used in the numerator.

  • Add your measure to the analysis (here: Sale Price).

  • Important: make sure to sort your analysis by your base dimension.

  • Add a custom field / table calculation which supports identifying partitions (here: Partition). You may hide this field from your final analysis.

if(match(${base_dimension}, ${base_dimension}) = offset(match(${base_dimension}, ${base_dimension}), -1),
  1 + row() - match(${base_dimension}, ${base_dimension}),
  1)
  • Add a custom field / table calculation which indicates the total by base dimension, escaping breakdown dimension context (here: ∑ Sale Price by Country). You may hide this field from your final analysis.
if(NOT(${base_dimension} = offset(${base_dimension}, 1)),
  sum(offset_list(${some_measure}, -(${partition} - 1), ${partition})),
  null)
  • Add a custom field / table calculation which repeats the total by base dimension, escaping breakdown dimension context (here: ∑ Sale Price by Country (repeated)). You may hide this field from your final analysis.
index(${some_measure_by_base_dimension}, min(offset_list(if(${some_measure_by_base_dimension}>0, row(), null), 0, 5000)))
  • Add a custom field / table calculation to compute the final ratio (here: % Sale Price by Country)
${some_measure} / ${some_measure_by_base_dimension_repeated}
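The steps above can be simulated outside Looker. The following Python sketch (with made-up sample data) mirrors the partition counter, the subtotal placed on the last row of each group, the fill-forward repeat, and the final ratio:

```python
# Rows must be sorted by the base dimension, exactly as the guide requires:
# (country, status, sale_price).
rows = [
    ("DE", "Complete", 100.0),
    ("DE", "Returned", 50.0),
    ("US", "Complete", 200.0),
    ("US", "Returned", 200.0),
]

# "Partition": running row count within each base-dimension group.
partition = []
for i, (base, _, _) in enumerate(rows):
    if i > 0 and rows[i - 1][0] == base:
        partition.append(partition[-1] + 1)
    else:
        partition.append(1)

# Subtotal on the last row of each group (like the offset_list sum) ...
subtotal_by_row = [None] * len(rows)
for i, (base, _, _) in enumerate(rows):
    if i == len(rows) - 1 or rows[i + 1][0] != base:
        start = i - (partition[i] - 1)
        subtotal_by_row[i] = sum(r[2] for r in rows[start:i + 1])

# ... then repeated to every row of the group (like the index/min lookup).
for i in range(len(rows)):
    j = i
    while subtotal_by_row[j] is None:
        j += 1
    subtotal_by_row[i] = subtotal_by_row[j]

# Final ratio per row: measure / subtotal of its base-dimension group.
ratios = [r[2] / s for r, s in zip(rows, subtotal_by_row)]
```

Note how the logic depends entirely on row order — which is why the sorting requirement above is strict, and why exceeding the row limit breaks the result.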

Screenshot:

Pros & cons of using table calculations:

+ No LookML implementation required
+ Highly adaptable / flexible
- Not reusable across analyses (not part of the semantic layer)
- No governance (not part of the semantic layer)
- Risk of incorrect calculation if the returned data exceeds the row limit
- Strictly requires the sorting pattern described above


Approach 2: Window Functions:

Step-by-Step Guide:

This approach uses LookML in conjunction with SQL window functions to establish a measure that escapes additional dimensions (while still respecting filters). Since the implementation lives in LookML, these measures can be reused across analyses in a governed way.

The example below uses BigQuery’s PARTITION BY (see documentation); however, you can tailor this approach to your database’s SQL dialect.

  • Add your base dimension to the analysis (here: Country). This will be used as reference for calculating the subtotal used in the denominator. Sorting is not required for this approach.

  • Add your breakdown dimension to the analysis (here: Status). This will be used to break down the measure used in the numerator.

  • Add your measure to the analysis (here: Sale Price).

  • Create a LookML measure within your semantic layer computing the KPI on a fixed level of detail.
    Note that the required_fields parameter ensures Looker includes the base dimension in the computed query, even if the end user does not explicitly add it to the analysis.

measure: some_measure_by_base_dimension {
  required_fields: [base_dimension]
  type: number
  sql: SUM(${some_measure}) OVER (PARTITION BY ${base_dimension}) ;;
}
  • Optionally, add your newly created measure to the analysis to display the subtotal by base dimension (here: ∑ Sale Price by Country).

  • Create a LookML measure within your semantic layer computing the ratio of your general measure divided by the measure with the fixed level of detail.

measure: percentage_some_measure_by_base_dimension {
  required_fields: [base_dimension]
  type: number
  sql: 1.0 * ${some_measure} / NULLIF(${some_measure_by_base_dimension}, 0) ;;
}
  • Add your newly created measure to the analysis to compute the final ratio (here: % Sale Price by Country)
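The two measures above can be sketched in Python (sample data assumed). After grouping by (country, status), each grouped row picks up its country subtotal via the partition — independent of row order, which is why no sorting is needed here:

```python
# Grouped results as Looker would return them: (country, status) -> SUM(sale_price).
# Deliberately unordered, to show order does not matter.
grouped = {
    ("US", "Returned"): 200.0,
    ("DE", "Complete"): 100.0,
    ("US", "Complete"): 200.0,
    ("DE", "Returned"): 50.0,
}

# Equivalent of SUM(sale_price) OVER (PARTITION BY country).
country_total = {}
for (country, _), value in grouped.items():
    country_total[country] = country_total.get(country, 0.0) + value

# Equivalent of 1.0 * some_measure / NULLIF(some_measure_by_base_dimension, 0).
pct = {
    key: (value / country_total[key[0]] if country_total[key[0]] else None)
    for key, value in grouped.items()
}
```

Because the subtotal is computed in the database over the full (filtered) result set, the ratio stays correct even when the Explore’s row limit truncates what is displayed.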

Screenshot:

Pros & cons of using window functions:

+ Reusable across analyses (part of the semantic layer)
+ Secured governance (part of the semantic layer)
+ Correct calculation even if results exceed the row limit
± Lean LookML implementation (SQL is slightly more complex)
- LookML implementation required
- Less adaptable / flexible
- Potentially unexpected behaviour when the base dimension is not added to the analysis (required field)


Approach 3: Derived Tables:

Step-by-Step Guide:

This approach creates a fact table at the required level of detail using Looker’s native derived tables. After joining it back into the Explore, measures on the fact table escape additional dimensions (while still respecting filters when using the parameters bind_filters or bind_all_filters).

  • Add your base dimension to the analysis (here: Country). This will be used as reference for calculating the subtotal used in the denominator. Sorting is not required for this approach.

  • Add your breakdown dimension to the analysis (here: Status). This will be used to break down the measure used in the numerator.

  • Add your measure to the analysis (here: Sale Price).

  • Create a LookML view within your semantic layer leveraging a derived table. This will compute the KPI at a fixed level of detail. Join the derived table back into your Explore using base_dimension as the key.

view: base_dimension_facts {

  derived_table: {
    explore_source: base_view {
      column: base_dimension { field: base_dimension }
      column: some_dimensionalized_measure { field: some_measure }
      # optional
      bind_all_filters: yes
    }
  }

  # Dimensions -----------------------------------------------

  dimension: base_dimension {
    type: string
    primary_key: yes
    hidden: yes
  }

  dimension: some_dimensionalized_measure {
    type: number
    hidden: yes
  }

  # Measures -----------------------------------------------

  measure: some_measure_by_base_dimension {
    type: sum
    sql: ${some_dimensionalized_measure} ;;
  }
}
  • Optionally, add your newly created measure to the analysis to display the subtotal by base dimension (here: ∑ Sale Price by Country).

  • Create a LookML measure within your semantic layer computing the ratio of your general measure divided by the measure with the fixed level of detail.

measure: percentage_some_measure_by_base_dimension {
  required_fields: [base_dimension]
  type: number
  sql: 1.0 * ${some_measure} / NULLIF(${base_dimension_facts.some_measure_by_base_dimension}, 0) ;;
}
  • Add your newly created measure to the analysis to compute the final ratio (here: % Sale Price by Country)
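The derived-table approach can be sketched in Python (sample data assumed): pre-aggregate a facts table at country grain, then join it back to the grouped results on the base dimension:

```python
# Grouped results at (country, status) grain: (country, status, sale_price).
grouped = [
    ("DE", "Complete", 100.0),
    ("DE", "Returned", 50.0),
    ("US", "Complete", 200.0),
    ("US", "Returned", 200.0),
]

# base_dimension_facts: one row per country carrying the dimensionalized measure.
facts = {}
for country, _, value in grouped:
    facts[country] = facts.get(country, 0.0) + value

# Join back on base_dimension and compute the ratio per grouped row
# (guarding against a zero subtotal, like NULLIF in the LookML measure).
result = [
    (country, status, value / facts[country] if facts[country] else None)
    for country, status, value in grouped
]
```

The design mirrors what Looker does: the derived table is a separate query at the coarser grain, so the subtotal is immune to the breakdown dimension and to the displayed row limit.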

Screenshot:

Pros & cons of using derived tables:

+ Reusable across analyses (part of the semantic layer)
+ Secured governance (part of the semantic layer)
+ Correct calculation even if results exceed the row limit
± Simple LookML implementation (setup is slightly more extensive)
- LookML implementation required
- Less adaptable / flexible


Conclusion:

Mastering cross-granularity calculations ensures that Looker remains a versatile powerhouse for sophisticated, multilevel analysis. Whether you prioritize the speed and flexibility of Table Calculations, the efficiency of SQL Window Functions, or the robust governance of Native Derived Tables, each method offers a unique pathway to achieving granular insights.

Topic              Table Calculations      Window Functions      Native Derived Tables
Implementation     Ad hoc (no LookML)      SQL-based LookML      Model-based LookML
Reusability        Low (single analysis)   High (governed)       High (governed)
Complexity         Moderate logic          Lean & direct         Extensive setup
Data Volume        Best for < 5k rows      High performance      High performance
Filter Awareness   Limited to view         Deep integration      Deep (via bind_filters)

By selecting the approach that best fits your technical requirements, you can build dynamic, cross-granularity analyses that maintain precision regardless of the Explore context. Ultimately, these techniques empower your users to unlock deeper comparative insights while leveraging the full strength of a centralized semantic layer.


This is great thank you Michael!

If we could only define table calculations in LookML... otherwise it’s not great to copy them into every single query we build
