Expensive queries in BigQuery

I have a dashboard that includes a Raw Data Table with column statistics enabled, a pivot table, and several other charts. When the dashboard loads, it executes very expensive queries because the underlying tables contain billions of records (approximately 77 billion in total).

I was wondering if ChartFactor provides any features or best practices to reduce query cost or improve overall query performance in this kind of scenario.

I understand. While I don’t have access to your dashboard configuration, here are a few things you can do to help reduce expensive queries:

  • Use approximate metrics when possible
    If you are using KPIs or the unique metric operation, consider switching to unique_approx, which is much more efficient for large datasets:

    const metric = cf.Metric("my_attribute", "unique_approx"); // approximate distinct count; far cheaper than exact unique at this scale
    
  • Enable query caching
    If you are testing, have a consistent initial view, or regularly apply the same filters, you can take advantage of the provider’s cache feature. This can be enabled when creating the provider:

    // Define provider
    const provider = {
        name: 'Google BQ',
        provider: 'google-bigquery',
        projectId: 'my-project-id',
        clientId: 'oauthClientId', // Browser environment
        scope: [],                // Optional
        otherProjects: [],        // Optional
        enableCache: true,        // Optional, defaults to false
        cacheTimeout: '15m 30s',  // Optional, defaults to "1d"
    };
    

    With caching enabled, query results are stored and reused: subsequent executions of the same query configuration return the cached result instead of re-running the query in BigQuery, significantly reducing cost and load.
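To make the caching behavior concrete, here is a minimal sketch in plain JavaScript of result caching keyed by the query configuration, with a timeout analogous to `cacheTimeout`. This is an illustration of the concept only, not ChartFactor's actual implementation; the function and variable names are hypothetical.

```javascript
// Illustrative sketch of query-result caching (not ChartFactor internals).
const cache = new Map();

function runQuery(config, execute, timeoutMs = 86400000 /* "1d" default */) {
  const key = JSON.stringify(config);       // identical config => same cache entry
  const hit = cache.get(key);
  if (hit && Date.now() - hit.time < timeoutMs) {
    return hit.result;                      // reuse cached result, no BigQuery cost
  }
  const result = execute(config);           // expensive: runs against BigQuery
  cache.set(key, { result, time: Date.now() });
  return result;
}

// Usage: the second identical call is served from the cache.
let executions = 0;
const expensive = (cfg) => { executions++; return `rows for ${cfg.table}`; };
runQuery({ table: 'events' }, expensive);
runQuery({ table: 'events' }, expensive);
console.log(executions); // 1
```

The key point is that the cache key is derived from the full query configuration, so any change to filters, metrics, or groupings produces a fresh query, while repeated identical views are free.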


Thanks, this was very helpful and the docs are very clear. I’ll apply these suggestions and I’ll let you know if I need more help.