Normalized tables are excellent for storage and transaction safety, but they are a terrible fit for dashboards.
If you have ever tried to build a dashboard directly from a normalized schema, you know how painful it is. Every visualization needs multiple joins, which leads to a drastic drop in performance and an unpleasant user experience.
Perhaps that is why ArcGIS Dashboards does not even support dataset joins.
But you may ask: what if my data is stored in a well-designed SQL database?
You're still not supposed to plug your dashboard's backend into your production database.
Your normalized SQL schema is designed for efficient write (insert and update) operations while maintaining perfect data integrity. A dashboard, on the other hand, requires fast reads. Those two needs rarely align, so feeding your normalized SQL database to your dashboard backend will result in a poor dashboard experience and will slow down your production app.
Why you should not connect dashboards to production databases:
1. Joins across multiple tables slow everything down.
2. Dashboards fire repeated queries, which can overwhelm your main database.
3. Production databases should not be exposed directly for security reasons.
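To make the first point concrete, here is a rough sketch of the kind of query a single dashboard widget ends up firing against a normalized schema, compared with the one-table read it could do against a flattened dataset (all table and column names here are hypothetical):
import pandas as pd
import sqlalchemy
engine = sqlalchemy.create_engine("postgresql://user:pass@localhost/mydb")
# Normalized schema: one widget already needs three joins plus an aggregation
normalized_query = """
    SELECT r.region_name, p.category, SUM(oi.quantity * oi.unit_price) AS revenue
    FROM order_items oi
    JOIN orders o ON o.order_id = oi.order_id
    JOIN products p ON p.product_id = oi.product_id
    JOIN regions r ON r.region_id = o.region_id
    GROUP BY r.region_name, p.category
"""
# Flattened reporting table: the same widget becomes a single cheap read
flat_query = "SELECT region_name, category, revenue FROM reporting.sales_by_region"
slow = pd.read_sql(normalized_query, engine)
fast = pd.read_sql(flat_query, engine)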
So what's the right way to feed your data to your dashboard?
The solution is to flatten, or denormalize, your data before feeding it to your dashboard. There are several approaches:
1. Materialized views: Write SQL queries that pre-join and pre-aggregate your data, then save them as materialized views in a reporting schema.
Pro Tip: you can set up a pg_cron or regular Linux cron job that automatically refreshes the materialized views during low-traffic hours or days, as sketched below.
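For example, here is a minimal sketch on PostgreSQL using SQLAlchemy. The reporting.sales_summary view and the orders/customers columns are hypothetical; adapt the SQL to your own schema:
import sqlalchemy
engine = sqlalchemy.create_engine("postgresql://user:pass@localhost/mydb")
# Pre-join and pre-aggregate into a reporting schema (run once)
create_view_sql = """
    CREATE MATERIALIZED VIEW IF NOT EXISTS reporting.sales_summary AS
    SELECT c.customer_id, c.region,
           COUNT(o.order_id) AS order_count,
           SUM(o.total_amount) AS total_revenue
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    GROUP BY c.customer_id, c.region
"""
# Refresh it every night at 03:00 with pg_cron (requires the pg_cron extension)
schedule_sql = """
    SELECT cron.schedule('refresh-sales-summary', '0 3 * * *',
                         'REFRESH MATERIALIZED VIEW reporting.sales_summary')
"""
with engine.begin() as conn:
    conn.execute(sqlalchemy.text(create_view_sql))
    conn.execute(sqlalchemy.text(schedule_sql))
Your dashboard then reads from reporting.sales_summary instead of hitting the raw normalized tables.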
2. ETL pipelines: Use an ETL process to extract data from the normalized schema, transform it into a flat structure, and load it into a separate reporting database. You can achieve this with a simple Python script that reads from your database and outputs a flat table.
Example with Python and Pandas:
import pandas as pd
import sqlalchemy
engine = sqlalchemy.create_engine("postgresql://user:pass@localhost/mydb")
# Read from normalized tables
orders = pd.read_sql("SELECT * FROM orders", engine)
customers = pd.read_sql("SELECT * FROM customers", engine)
# Flatten into a dashboard-ready table
dashboard_data = orders.merge(customers, on="customer_id")
# Save to a CSV or push to a reporting database
dashboard_data.to_csv("dashboard_ready.csv", index=False)
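If you would rather push the flat table into a separate reporting database than write a CSV, pandas can do that directly (the reporting connection string and table name below are placeholders):
reporting_engine = sqlalchemy.create_engine("postgresql://user:pass@localhost/reporting")
dashboard_data.to_sql("dashboard_data", reporting_engine, if_exists="replace", index=False)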
This approach gives you the best of both worlds. Your production database stays normalized and efficient, while your dashboard consumes a flat, query-ready dataset.
If you want a smooth dashboard experience, stop connecting it directly to normalized tables. Flatten first, then serve.