# Dashboard Architecture
Two flows handle video stats for dashboard-tracked videos: an immediate "start tracking" fetch triggered by the user, and a scheduled nightly refresh for all active tracked videos.
## Flow A: Start Tracking
When a user clicks "Start Tracking" on a video, the system immediately enqueues a high-priority stats fetch.
```mermaid
sequenceDiagram
    participant UI as Frontend
    participant DW as dashboard-worker
    participant DB as Supabase DB
    participant HQ as HIGH_PRIORITY_QUEUE
    participant QP as queue-processor
    participant API as RapidAPI

    UI->>DW: POST /dashboard/videos/start-tracking { video_id }
    DW->>DB: UPDATE videos SET video_status = 'tracking'
    DW->>DB: SELECT external_video_id, platform, video_url
    DB-->>DW: video data
    DW->>HQ: send { type: 'single_video', video_id, video: {...}, priority: 'high' }
    DW-->>UI: { success: true }
    Note over HQ,QP: Async - processed immediately
    HQ->>QP: single_video message
    QP->>API: Fetch stats for video (platform-specific RapidAPI endpoint)
    API-->>QP: { views, likes, comments, reposts, saves, ... }
    QP->>DB: INSERT video_statistics
    QP->>DB: UPDATE videos (thumbnail_url, published_at, blogger_id, music_id)
```
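The dashboard-worker's enqueue step can be sketched as a pure message builder. This is an illustrative sketch: `VideoRow` and `buildSingleVideoMessage` are assumed names, not taken from the actual codebase; only the message fields come from the documented shape.

```typescript
// Illustrative sketch of the "start tracking" enqueue step.
// VideoRow and buildSingleVideoMessage are assumed names.

interface VideoRow {
  id: string;                 // internal video record UUID
  organization_id: string;
  video_url: string;
  initial_url: string;
  platform: string;           // 'TikTok' | 'Instagram' | 'YouTube' | ...
  external_video_id: string;
  user_id: string;
}

// Shape the HIGH_PRIORITY_QUEUE message from the selected video row.
function buildSingleVideoMessage(row: VideoRow) {
  return {
    type: 'single_video' as const,
    video_id: row.id,
    video: {
      organization_id: row.organization_id,
      video_url: row.video_url,
      initial_url: row.initial_url,
      platform: row.platform,
      external_video_id: row.external_video_id,
      user_id: row.user_id,
    },
    source: 'dashboard-start-tracking' as const,
    priority: 'high' as const,
  };
}
```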
### Message Shape
```ts
{
  type: 'single_video',
  video_id: string,            // UUID - internal video record ID
  video: {
    organization_id: string,   // UUID
    video_url: string,
    initial_url: string,
    platform: string,          // 'TikTok' | 'Instagram' | 'YouTube' | ...
    external_video_id: string,
    user_id: string
  },
  source: 'dashboard-start-tracking',
  priority: 'high'
}
```
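On the consumer side, queue-processor turns the RapidAPI response into a `video_statistics` insert. A minimal sketch of that mapping, assuming the stats fields shown in the diagram; `RawStats` and `toStatisticsRow` are hypothetical names, and real payloads vary by platform:

```typescript
// Illustrative mapping from a fetched stats payload to a video_statistics row.
// RawStats and toStatisticsRow are assumed names, not the real code.

interface RawStats {
  views: number;
  likes: number;
  comments: number;
  reposts: number;
  saves: number;
}

function toStatisticsRow(videoId: string, stats: RawStats, recordedAt: Date) {
  return {
    video_id: videoId,
    views: stats.views,
    likes: stats.likes,
    comments: stats.comments,
    reposts: stats.reposts,
    saves: stats.saves,
    recorded_at: recordedAt.toISOString(),
  };
}
```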
### DB Tables Written

- `video_statistics`: INSERT of a new stats snapshot (views, likes, comments, reposts, saves, ...)
- `videos`: UPDATE of metadata (thumbnail_url, published_at, blogger_id, music_id)
## Flow B: Nightly Video Refresh
A cron job fires at 00:00 UTC and 01:00 UTC daily. It reads all tracked videos that need stats refreshed and enqueues them as low-priority batches.
```mermaid
sequenceDiagram
    participant CRON as Cloudflare Cron
    participant QP2 as queue-producer
    participant DB as Supabase DB
    participant LQ as LOW_PRIORITY_QUEUE
    participant QP as queue-processor
    participant API as RapidAPI

    CRON->>QP2: scheduled() event (00:00 or 01:00 UTC)
    QP2->>DB: SELECT * FROM videos_needing_update (paginated, 1000/page)
    Note over DB: Criteria: video_status='tracking' AND last stats > 3h ago
    DB-->>QP2: Videos grouped by platform
    loop Per platform batch
        QP2->>QP2: Split into platform-sized batches
        QP2->>LQ: sendBatch(video_batch messages)
    end
    LQ->>QP: video_batch message (5-15 videos)
    QP->>API: Batch fetch stats (platform-specific)
    API-->>QP: Stats array
    QP->>DB: INSERT video_statistics (per video)
    QP->>DB: UPDATE videos (thumbnail, published_at, blogger_id)
```
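The "split into platform-sized batches" step amounts to grouping by platform and chunking each group. A minimal sketch with generic helpers (`chunk` and `groupByPlatform` are illustrative names; the real producer also handles pagination and error cases):

```typescript
// Split one platform's videos into queue-sized batches (5-15 per the diagram).
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Group fetched rows by lowercase platform name before batching.
function groupByPlatform<T extends { platform: string }>(videos: T[]): Map<string, T[]> {
  const groups = new Map<string, T[]>();
  for (const v of videos) {
    const key = v.platform.toLowerCase();
    const list = groups.get(key) ?? [];
    list.push(v);
    groups.set(key, list);
  }
  return groups;
}
```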
### Platform Batch Sizes

Batches contain 5–15 videos; the exact batch size is platform-specific.
### Message Shape
```ts
{
  type: 'video_batch',
  platform: 'tiktok',          // lowercase platform name
  videos: [
    {
      videoUrl: string,
      userId: string,
      organizationId: string,
      externalVideoId: string
    }
  ],
  priority: 'low',
  batchNumber: number,
  totalBatches: number,
  batchSize: number,
  enqueuedAt: string,          // ISO8601
  requestId: string            // e.g. 'midnight-batch-tiktok-1'
}
```
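Building these messages from pre-chunked batches can be sketched as follows. `buildBatchMessages` and `BatchVideo` are assumed names; the `requestId` pattern follows the documented example (`midnight-batch-tiktok-1`):

```typescript
// Illustrative construction of video_batch messages for one platform.
// Names are assumptions; field layout follows the documented message shape.

interface BatchVideo {
  videoUrl: string;
  userId: string;
  organizationId: string;
  externalVideoId: string;
}

function buildBatchMessages(platform: string, batches: BatchVideo[][]) {
  const enqueuedAt = new Date().toISOString();
  return batches.map((videos, i) => ({
    type: 'video_batch' as const,
    platform,                              // already lowercase here
    videos,
    priority: 'low' as const,
    batchNumber: i + 1,                    // 1-based
    totalBatches: batches.length,
    batchSize: videos.length,
    enqueuedAt,
    requestId: `midnight-batch-${platform}-${i + 1}`,
  }));
}
```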
### videos_needing_update View
The view returns videos matching:
- `video_status = 'tracking'` AND the latest statistics record is more than 3 hours old, OR
- `video_status = 'deleted'` AND the video was deleted within the last 7 days (for status tracking)

Excludes videos with `platform = 'Other'` and videos belonging to the test organization.
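The view's criteria can be expressed as a row-level predicate. This is an illustrative TypeScript equivalent, not the actual SQL; the treatment of videos with no statistics yet (treated as needing an update) is an assumption:

```typescript
// Illustrative predicate mirroring the videos_needing_update criteria.
// The real logic lives in a SQL view; names here are assumptions.

const THREE_HOURS_MS = 3 * 60 * 60 * 1000;
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

interface TrackedVideo {
  video_status: string;
  platform: string;
  last_stats_at: Date | null; // newest video_statistics timestamp, if any
  deleted_at: Date | null;
}

function needsUpdate(v: TrackedVideo, now: Date, isTestOrg: boolean): boolean {
  // Exclusions apply first.
  if (v.platform === 'Other' || isTestOrg) return false;
  if (v.video_status === 'tracking') {
    // Assumption: videos with no stats yet also qualify.
    return v.last_stats_at === null ||
      now.getTime() - v.last_stats_at.getTime() > THREE_HOURS_MS;
  }
  if (v.video_status === 'deleted') {
    return v.deleted_at !== null &&
      now.getTime() - v.deleted_at.getTime() <= SEVEN_DAYS_MS;
  }
  return false;
}
```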