Scaling and Security Practices for Modern Applications
Common practices for structuring secure data, choosing rendering strategies and scaling SSR for large datasets
Last updated: 8/15/2025
This guide covers common practices for structuring and securing your data, choosing between server-side and client-side rendering, securing client-side data fetching and scaling Server-Side Rendering (SSR) for large datasets.
1. Separating Sensitive Data into Secure Tables
One of the most effective ways to protect sensitive information is to physically separate it into its own database table with stricter security rules.
Why Split Data Into Separate Tables?
If sensitive columns live in the same table as public ones, a single overly-permissive query or view could leak private data.
By moving those fields into a dedicated private table with Row-Level Security (RLS) enabled (default-deny), you create a strong separation between what's public and what's private.
How It Works
+-----------------+        +------------------+
|  Public Table   |        |  Private Table   |
|-----------------|        |------------------|
| id (PK)         |<-------| listing_id (FK)  |
| title           |        | seller_phone     |
| price_display   |        | seller_email     |
| status          |        | valuation_notes  |
+-----------------+        +------------------+
        ^                          ^
    Broad RLS                 Strict RLS
        |                          |
Accessible to many       Accessible only to
 (or even public)         authorised roles
Hardening Best Practices
- Default-deny and scope access (enable RLS and scope policies by tenant, user, or role)
- Avoid accidental leaks via views (exclude private columns from public views; keep internal views in restricted schemas)
- Use RPC or server functions for combined access (validate permissions server-side and return only required fields)
- Separate highly sensitive data further (place extremely sensitive fields like passwords and tokens in their own stricter table)
- Control media access (store public media in public buckets; private media in private buckets with signed URLs and short expiry)
- Audit access (log queries to private tables and monitor for anomalies)
- Test like an attacker (attempt queries as unauthenticated or unauthorised users to verify no sensitive rows are returned)
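The "RPC or server functions for combined access" practice can be sketched as a server-only accessor that checks permissions before ever touching the private table, and returns only the fields the caller needs. All names here (the Db interface, getListingForUser, the demo data) are hypothetical stand-ins for your own data layer, not a specific library's API:

```typescript
// Public and private rows mirror the two-table split in the diagram above.
type PublicListing = { id: number; title: string; priceDisplay: string };
type PrivateListing = { listingId: number; sellerPhone: string; sellerEmail: string };

interface Db {
  getListing(id: number): PublicListing | undefined;
  getPrivateListing(id: number): PrivateListing | undefined;
  isAuthorised(userId: number, listingId: number): boolean;
}

// Runs on the server only. The private table is never queried on behalf
// of a caller who fails the permission check, and even authorised callers
// get a hand-picked field, never the whole private row.
function getListingForUser(db: Db, userId: number, listingId: number) {
  const pub = db.getListing(listingId);
  if (!pub) return null;
  if (!db.isAuthorised(userId, listingId)) {
    return pub; // public fields only
  }
  const priv = db.getPrivateListing(listingId);
  return { ...pub, sellerEmail: priv?.sellerEmail ?? null };
}

// Tiny in-memory Db so the sketch is self-contained.
const demoDb: Db = {
  getListing: (id) =>
    id === 1 ? { id: 1, title: "Flat", priceDisplay: "£250k" } : undefined,
  getPrivateListing: (id) =>
    id === 1
      ? { listingId: 1, sellerPhone: "07000 000000", sellerEmail: "seller@example.com" }
      : undefined,
  isAuthorised: (userId) => userId === 42,
};
```

Because the permission check and the field selection both live server-side, a leaked or replayed client request cannot widen what comes back.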
2. Server-Side vs Client-Side Components
Server-Side Components
- Run on the server before sending HTML to the browser
- Ideal for SEO, secure data fetching and fast initial load times
Client-Side Components
- Run in the browser after the page loads
- Handle interactivity and dynamic updates without a full page reload
Hybrid Approach
Modern frameworks (e.g., Next.js, Remix, Nuxt) combine both:
- Server-side for initial rendering and SEO
- Client-side for smooth, interactive experiences
Rendering Flow
[Browser] --Request--> [Server: Fetch + Render HTML]
                                 |
                               HTML
                                 v
                            [Browser]
                                 |
                          Hydration (JS)
                                 v
                   [Client Fetch / Update UI]
                  (AJAX, GraphQL, WebSockets)
3. Using Both in One Application
A common hybrid workflow:
- Server renders HTML with initial data
- Browser hydrates interactive components with JavaScript
- Client fetches additional data on demand
Example:
- Server-side: Fetches product details from a database and renders HTML
- Client-side: Handles "Add to Cart" clicks and updates the UI without reloading
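A framework-agnostic sketch of that split, with the server step rendering HTML from initial data and the client step updating cart state without a reload. The renderProductHtml and addToCart names are illustrative, not part of any particular framework:

```typescript
type Product = { id: number; name: string; price: number };

// Server side: the fetch-and-render step, with HTML as a plain string here.
function renderProductHtml(product: Product): string {
  return (
    `<h1>${product.name}</h1><p>${product.price.toFixed(2)}</p>` +
    `<button id="add">Add to Cart</button>`
  );
}

// Client side: runs after hydration; returns new local state, no page reload.
type Cart = { items: number[] };
function addToCart(cart: Cart, productId: number): Cart {
  return { items: [...cart.items, productId] };
}
```

In a real framework the first function would be a server component or loader and the second a client-side event handler, but the division of labour is the same.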
4. Secure Client-Side Fetching with RLS
Client-side fetching can be secure if API-level restrictions (such as RLS) strictly control data access.
Risks
- Requests are visible in browser DevTools
- Secret API keys must never be embedded in frontend code (only publishable keys scoped by RLS belong in the browser)
Best Practices
- Require authentication for sensitive data
- Apply RLS to scope queries to the authenticated user or tenant
- Never rely on client-side filtering alone
- Use short-lived tokens
- Rate-limit and log all sensitive queries
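The short-lived-token practice can be sketched as a small helper that treats tokens as expired slightly early (to absorb clock skew) and refreshes before each sensitive request. The Token shape and the refresh callback are placeholders for whatever your auth provider exposes:

```typescript
type Token = { value: string; expiresAt: number }; // expiry as epoch ms

// Consider a token expired `skewMs` before its real expiry so a request
// never leaves the browser with a token about to lapse mid-flight.
function isExpired(token: Token, now: number, skewMs = 30_000): boolean {
  return now >= token.expiresAt - skewMs;
}

// Refresh lazily: reuse the current token while it is valid, otherwise
// ask the auth provider for a new one.
async function getFreshToken(
  current: Token,
  refresh: () => Promise<Token>,
  now = Date.now(),
): Promise<Token> {
  return isExpired(current, now) ? refresh() : current;
}
```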
5. SSR with Large Databases
When scaling SSR for large datasets:
- Query only what you need (avoid `SELECT *`)
- Index to match `WHERE` + `ORDER BY`
- Use keyset/cursor pagination instead of `OFFSET`
- Cache at multiple layers (Redis for query results, CDN/edge for public HTML)
- Use read replicas for heavy reads; connection pooling to prevent overload
- Partition large tables by date, tenant, or logical grouping
- Stream SSR output to deliver above-the-fold content faster
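Keyset pagination is the least obvious item on this list, so here is a minimal sketch of the idea over an in-memory array, standing in for a SQL query of the form `WHERE (created_at, id) < (:cursor_ts, :cursor_id) ORDER BY created_at DESC, id DESC LIMIT n`. The Row and Cursor types are illustrative:

```typescript
type Row = { id: number; createdAt: number };
type Cursor = { createdAt: number; id: number } | null;

// Each page returns the rows after the cursor plus the cursor for the
// next page, so cost stays constant no matter how deep the caller goes
// (unlike OFFSET, which re-scans every skipped row).
function keysetPage(rows: Row[], cursor: Cursor, limit: number) {
  const sorted = [...rows].sort(
    (a, b) => b.createdAt - a.createdAt || b.id - a.id, // newest first, id tiebreak
  );
  const after = cursor
    ? sorted.filter(
        (r) =>
          r.createdAt < cursor.createdAt ||
          (r.createdAt === cursor.createdAt && r.id < cursor.id),
      )
    : sorted;
  const page = after.slice(0, limit);
  const last = page[page.length - 1];
  return {
    page,
    nextCursor: last ? { createdAt: last.createdAt, id: last.id } : null,
  };
}
```

The (createdAt, id) pair matters: the id tiebreak makes the ordering total, so rows sharing a timestamp are never skipped or repeated across pages.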
6. Three Common SSR Page Types
Diagram – SSR Data Flow for a Large List
[Browser Request] --> [App Server: SSR List Page]
        |                        |
        |                        +--> Query DB (keyset pagination)
        |                        +--> Cache layer (Redis)
        |                        +--> Render HTML
        v
[Browser Receives HTML + Hydration Scripts]
1. List Pages (large collections)
- Composite index on filter/sort columns
- Keyset paginate 20–50 items per request
- Cache for 30–60s in Redis; edge-cache HTML if public
2. Detail Pages
- Fetch by primary key plus a small related set
- Cache for 2–5 minutes with tag-based invalidation
3. Personal Dashboards
- Multiple small queries in parallel via connection pool
- Short-TTL Redis cache per widget; no public edge cache for private data
7. Operational Playbook
- Connection pooling (e.g., pgBouncer, pgpool) to reuse DB connections
- Read replicas for distributing read queries
- Backpressure with query timeouts and circuit breakers
- Online migrations for large tables
- Partitioning for very large tables
- Data TTL policies to archive old rows
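The circuit-breaker half of the backpressure bullet can be sketched as a small state machine: after a threshold of consecutive failures the breaker opens and rejects calls immediately, then half-opens after a cooldown to let a trial request through. Thresholds and the class shape here are illustrative, not a specific library's API:

```typescript
class CircuitBreaker {
  private failures = 0;
  private openedAt: number | null = null;

  constructor(private threshold: number, private cooldownMs: number) {}

  // Closed breaker: allow everything. Open breaker: reject until the
  // cooldown has elapsed, then half-open and permit a trial request.
  canRequest(now = Date.now()): boolean {
    if (this.openedAt === null) return true;
    return now - this.openedAt >= this.cooldownMs;
  }

  recordSuccess(): void {
    this.failures = 0;
    this.openedAt = null; // close the breaker again
  }

  recordFailure(now = Date.now()): void {
    this.failures++;
    if (this.failures >= this.threshold) this.openedAt = now; // trip open
  }
}
```

Wrapping DB calls in this check (alongside a query timeout) means an overloaded database sheds load quickly instead of queueing SSR requests behind it.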
8. Pitfalls to Avoid
- Offset pagination on huge tables
- Rendering massive lists without pagination
- Using `SELECT *` in SSR
- Missing covering indexes for frequent queries
- Publicly caching private or user-specific pages
- Blocking SSR while waiting on slow assets (stream instead)
9. Quick Checklist
- Queries return only the fields required for rendering
- Keyset pagination for large lists
- Indexes match `WHERE` and `ORDER BY` patterns
- Read replicas and pooling in place
- Redis cache with short TTL and stampede protection
- SSR streams above-the-fold first
- Least-privilege DB roles; secrets stored server-side only
- Tenant scoping and optional RLS enforced
- Monitoring for slow queries, cache hit rates and replica lag
Related Topics
Learn more about security and scaling:
- Security Concepts - Core security principles and practices
- Best-Practice Hardening - Separating sensitive data into secure tables
- Row Level Security (RLS) Fundamentals - Understanding PostgreSQL's built-in security
- Multi-tenant Security Patterns - Implementing team and organisation-based data isolation