Social Media Listening Tools: Monitor 25+ Platforms in Real-Time
Your brand gets mentioned on Reddit. By the time your $3,000/month social listening tool picks it up, the thread has 2,000 upvotes and is trending.
Your competitor launches on Product Hunt. Your monitoring tool doesn't support Product Hunt, so you miss it entirely.
A customer posts a TikTok complaining about your product. It goes viral. Your tool only monitors Twitter and Instagram.
This is the reality of traditional social listening tools in 2025. They're expensive, limited in platform coverage, and miss the conversations that matter most.
This guide shows you how to build a more powerful alternative. You'll see working code examples, real-world architectures, and cost breakdowns from developers who built their own social listening systems.
What Is Social Listening?
Before we build anything, let's be clear about terms people confuse.
Social Listening vs Social Monitoring vs Social Analytics
Social Monitoring: Tracking specific mentions, hashtags, or keywords
Social Listening: Analyzing broader conversations, trends, and sentiment
Social Analytics: Measuring performance metrics (engagement, reach, growth)
They overlap, but listening is the strategic layer. You're not just counting mentions—you're understanding what people actually care about.
What You Can Learn from Social Listening
Brand Health:
- What are people saying about your brand when you're not in the room?
- How does sentiment change over time?
- Which product features get praise vs complaints?
Competitive Intelligence:
- How are competitors positioning themselves?
- What are their customers complaining about?
- What opportunities are they missing?
Market Research:
- What problems are people trying to solve?
- What language do they use to describe pain points?
- What influences their buying decisions?
Crisis Detection:
- Negative sentiment spikes before they become crises
- Identify misinformation early
- Track how issues spread across platforms
Content Strategy:
- What topics resonate with your audience?
- Which formats perform best?
- What questions keep coming up?
The Business Case
HubSpot's data: Companies using social listening see 2.4x higher customer retention
Sprout Social's research: 68% of consumers expect brands to respond to social media complaints within 3 hours
The ROI: Early crisis detection alone can save brands millions in reputation damage
Social listening isn't optional anymore. It's business intelligence.
Why Existing Tools Fall Short
Let's be honest about what you're dealing with.
The Enterprise Options
Brandwatch: $800-5,000/month
- Strong analytics and sentiment analysis
- Limited platform coverage (doesn't properly cover Reddit, Discord, or TikTok)
- Steep learning curve
- Built for enterprise marketing teams, not developers
Sprout Social: $249-499/month per user
- Good for social media management
- Basic listening features in higher tiers
- Focuses on major platforms only
- Not built for custom integrations
Hootsuite: $99-739/month
- Basic keyword tracking
- Shallow data (surface-level mentions)
- Limited historical data
- No API access on lower tiers
Meltwater: $6,000-12,000/year
- Comprehensive media monitoring
- Includes traditional media (which you might not need)
- Expensive for small teams
- Complex setup and training
The Common Problems
Limited Platform Coverage
Most tools focus on: Twitter, Facebook, Instagram, YouTube, LinkedIn
They miss or poorly support: TikTok, Reddit, Discord, Telegram, Product Hunt, Hacker News, niche forums
Your customers are everywhere. Your listening tool isn't.
Shallow Data Extraction
Tools give you surface-level data:
- Mention text
- Basic sentiment (positive/negative/neutral)
- Engagement counts
- Author name
They miss:
- Full comment threads and context
- Detailed user profiles and influence metrics
- Historical patterns and trends
- Cross-platform user behavior
Inflexible Query Language
Most tools use boolean operators: (brand OR "product name") AND NOT spam
Sounds powerful until you need things boolean syntax can't express (a code sketch follows this list):
- Regex pattern matching
- Fuzzy matching for misspellings
- Context-aware filtering
- Custom scoring algorithms
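None of this fits in a query box, but when the collection pipeline is your own code (as built later in this guide), filters are just functions. A rough sketch, assuming mentions shaped like { text: string }; the patterns and thresholds are only examples:
// filters/custom-match.js - illustrative filtering that boolean query syntax can't express
// Regex pattern matching: catch "v2.0", "v2.13", etc.
const versionPattern = /\bv2\.\d+\b/i;

// Fuzzy matching for misspellings (Levenshtein distance, no dependencies)
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i, ...Array(b.length).fill(0)]);
  for (let j = 0; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,
        dp[i][j - 1] + 1,
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)
      );
    }
  }
  return dp[a.length][b.length];
}

function fuzzyIncludes(text, term, maxDistance = 2) {
  return text.toLowerCase().split(/\s+/).some(word => levenshtein(word, term.toLowerCase()) <= maxDistance);
}

// Custom scoring instead of a yes/no boolean match
function scoreMention(mention) {
  let score = 0;
  if (versionPattern.test(mention.text)) score += 2;
  if (fuzzyIncludes(mention.text, 'sociavault')) score += 3; // also catches "sociavult", "socciavault"
  if (/\b(refund|broken|scam)\b/i.test(mention.text)) score += 5; // urgency boost
  return score;
}

module.exports = { scoreMention, fuzzyIncludes };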
No Custom Integrations
You can't:
- Send alerts to your Slack channels
- Sync data to your CRM
- Trigger automated workflows
- Build custom dashboards
- Export raw data for ML models
The Costs Add Up Fast
For a team monitoring:
- 10 keywords across 5 platforms
- 3 team members needing access
- Historical data for trend analysis
- API access for custom dashboards
Expected cost: $2,000-5,000/month
And that's before you hit usage limits or need more platforms.
What You Can Build with SociaVault
Instead of paying for a limited tool, build exactly what you need.
Supported Platforms
SociaVault provides APIs for 25+ platforms:
Social Networks:
- TikTok (profiles, videos, comments, hashtags, search)
- Instagram (posts, reels, stories, comments, search)
- Twitter/X (tweets, profiles, communities, search)
- Facebook (posts, profiles, groups)
- LinkedIn (profiles, companies, posts)
- Threads (posts, profiles, search)
- Reddit (subreddits, posts, comments, search)
Video Platforms:
- YouTube (channels, videos, comments, transcripts, search)
- TikTok (video and TikTok Shop endpoints, in addition to the social endpoints above)
Ad Intelligence:
- Facebook Ad Library
- Google Ad Library
- LinkedIn Ad Library
Search & Discovery:
- Google Search (SERP data)
- TikTok Shop (product search)
This is broader platform coverage than most traditional listening tools offer.
What Data You Can Extract
For each platform, you get comprehensive data:
Content Data:
- Full text/caption
- Media (images, videos)
- Engagement metrics (likes, comments, shares)
- Timestamps
- URLs
Author Data:
- Profile information
- Follower/subscriber counts
- Verification status
- Bio and links
Context Data:
- Comment threads
- Hashtags used
- Mentions
- Location (when available)
Advanced Data:
- Video transcripts (TikTok, YouTube, Instagram Reels)
- Sentiment indicators
- Engagement patterns
- Historical metrics
Use Cases You Can Build
1. Brand Reputation Monitoring: Track mentions of your brand, products, and executives across all platforms in real-time.
2. Competitive Intelligence: Monitor competitors' social presence, content strategy, and customer feedback.
3. Crisis Detection: Detect sentiment shifts and volume spikes before they become full-blown crises.
4. Market Research: Analyze customer conversations to identify needs, pain points, and opportunities.
5. Influencer Discovery: Find creators talking about your industry or products.
6. Content Inspiration: See what topics and formats resonate with your target audience.
7. Customer Support: Track product issues and complaints across platforms.
8. Trend Analysis: Identify emerging trends in your industry before they go mainstream.
Architecture: Building a Social Listening System
Let's build a practical system step by step.
System Overview
┌─────────────────┐
│ Keyword Config  │  ← Define what to monitor
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Data Collector  │  ← Fetch from platforms
│   (Scheduled)   │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│    Database     │  ← Store mentions
│  (PostgreSQL)   │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Alert Engine   │  ← Detect important events
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Notifications  │  ← Slack, Email, etc.
└─────────────────┘
Step 1: Define What to Monitor
Create a configuration for your monitoring targets:
// monitoring-config.js
const monitoringConfig = {
keywords: [
{
term: "SociaVault",
platforms: ["twitter", "reddit", "tiktok", "youtube"],
alerts: {
enabled: true,
channels: ["slack", "email"],
threshold: 10 // Alert if 10+ mentions in 1 hour
}
},
{
term: "social media api",
platforms: ["twitter", "reddit"],
sentiment: true, // Track sentiment
alerts: {
enabled: false
}
},
{
term: "competitor_brand",
platforms: ["all"],
competitive: true,
alerts: {
enabled: true,
channels: ["slack"]
}
}
],
hashtags: [
"#webdevelopment",
"#socialmediamarketing",
"#saas"
],
accounts: {
competitors: [
{ platform: "twitter", username: "competitor1" },
{ platform: "instagram", username: "competitor2" }
],
influencers: [
{ platform: "tiktok", username: "creator1" }
]
}
};
module.exports = monitoringConfig;
Step 2: Multi-Platform Search Implementation
Here's how to search across platforms for your keywords:
// collectors/keyword-collector.js
const axios = require('axios');
const SOCIAVAULT_API_KEY = process.env.SOCIAVAULT_API_KEY;
const BASE_URL = 'https://api.sociavault.com';
class KeywordCollector {
constructor() {
this.results = [];
}
// Search TikTok
async searchTikTok(keyword) {
try {
const response = await axios.get(`${BASE_URL}/api/scrape/tiktok/search/keyword`, {
params: { query: keyword, count: 50 },
headers: { 'X-API-Key': SOCIAVAULT_API_KEY }
});
return response.data.videos.map(video => ({
platform: 'tiktok',
type: 'video',
id: video.id,
text: video.description,
author: video.author.username,
authorFollowers: video.author.followerCount,
engagement: {
likes: video.stats.likeCount,
comments: video.stats.commentCount,
shares: video.stats.shareCount,
views: video.stats.playCount
},
url: `https://www.tiktok.com/@${video.author.username}/video/${video.id}`,
timestamp: new Date(video.createTime * 1000),
keyword: keyword
}));
} catch (error) {
console.error('TikTok search error:', error.message);
return [];
}
}
// Search Twitter
async searchTwitter(keyword) {
try {
// Note: this endpoint returns tweets from a specific account. For broad keyword
// coverage, iterate over the accounts or communities you care about and filter locally.
// 'target_user' below is a placeholder; this is a simplified example.
const response = await axios.get(`${BASE_URL}/api/scrape/twitter/user-tweets`, {
params: { username: 'target_user', count: 50 },
headers: { 'X-API-Key': SOCIAVAULT_API_KEY }
});
const tweets = response.data.tweets.filter(tweet =>
tweet.text.toLowerCase().includes(keyword.toLowerCase())
);
return tweets.map(tweet => ({
platform: 'twitter',
type: 'tweet',
id: tweet.id,
text: tweet.text,
author: tweet.user.username,
authorFollowers: tweet.user.followersCount,
engagement: {
likes: tweet.likeCount,
retweets: tweet.retweetCount,
replies: tweet.replyCount,
views: tweet.viewCount
},
url: `https://twitter.com/${tweet.user.username}/status/${tweet.id}`,
timestamp: new Date(tweet.createdAt),
keyword: keyword
}));
} catch (error) {
console.error('Twitter search error:', error.message);
return [];
}
}
// Search Reddit
async searchReddit(keyword) {
try {
const response = await axios.get(`${BASE_URL}/api/scrape/reddit/search`, {
params: { query: keyword, limit: 50 },
headers: { 'X-API-Key': SOCIAVAULT_API_KEY }
});
return response.data.posts.map(post => ({
platform: 'reddit',
type: 'post',
id: post.id,
text: `${post.title}\n\n${post.selftext || ''}`,
author: post.author,
subreddit: post.subreddit,
engagement: {
upvotes: post.score,
comments: post.numComments,
upvoteRatio: post.upvoteRatio
},
url: `https://reddit.com${post.permalink}`,
timestamp: new Date(post.createdUtc * 1000),
keyword: keyword
}));
} catch (error) {
console.error('Reddit search error:', error.message);
return [];
}
}
// Search YouTube
async searchYouTube(keyword) {
try {
const response = await axios.get(`${BASE_URL}/api/scrape/youtube/search`, {
params: { query: keyword },
headers: { 'X-API-Key': SOCIAVAULT_API_KEY }
});
return response.data.results
.filter(item => item.type === 'video')
.map(video => ({
platform: 'youtube',
type: 'video',
id: video.videoId,
text: `${video.title}\n\n${video.description}`,
author: video.channelTitle,
channelId: video.channelId,
engagement: {
views: video.viewCount,
likes: video.likeCount,
comments: video.commentCount
},
url: `https://www.youtube.com/watch?v=${video.videoId}`,
timestamp: new Date(video.publishedAt),
keyword: keyword
}));
} catch (error) {
console.error('YouTube search error:', error.message);
return [];
}
}
// Search Threads
async searchThreads(keyword) {
try {
const response = await axios.get(`${BASE_URL}/api/scrape/threads/search`, {
params: { query: keyword },
headers: { 'X-API-Key': SOCIAVAULT_API_KEY }
});
return response.data.posts.map(post => ({
platform: 'threads',
type: 'post',
id: post.id,
text: post.text,
author: post.user.username,
authorFollowers: post.user.followerCount,
engagement: {
likes: post.likeCount,
replies: post.replyCount,
reposts: post.repostCount
},
url: post.url,
timestamp: new Date(post.takenAt * 1000),
keyword: keyword
}));
} catch (error) {
console.error('Threads search error:', error.message);
return [];
}
}
// Collect from all platforms
async collectKeyword(keyword, platforms = ['all']) {
const results = [];
if (platforms.includes('all') || platforms.includes('tiktok')) {
const tiktokResults = await this.searchTikTok(keyword);
results.push(...tiktokResults);
}
if (platforms.includes('all') || platforms.includes('twitter')) {
const twitterResults = await this.searchTwitter(keyword);
results.push(...twitterResults);
}
if (platforms.includes('all') || platforms.includes('reddit')) {
const redditResults = await this.searchReddit(keyword);
results.push(...redditResults);
}
if (platforms.includes('all') || platforms.includes('youtube')) {
const youtubeResults = await this.searchYouTube(keyword);
results.push(...youtubeResults);
}
if (platforms.includes('all') || platforms.includes('threads')) {
const threadsResults = await this.searchThreads(keyword);
results.push(...threadsResults);
}
return results;
}
}
module.exports = KeywordCollector;
Step 3: Database Schema
Store collected mentions efficiently:
-- mentions table
CREATE TABLE mentions (
id SERIAL PRIMARY KEY,
platform VARCHAR(50) NOT NULL,
type VARCHAR(50) NOT NULL,
external_id VARCHAR(255) NOT NULL,
text TEXT,
author VARCHAR(255),
author_followers INTEGER,
likes INTEGER DEFAULT 0,
comments INTEGER DEFAULT 0,
shares INTEGER DEFAULT 0,
views INTEGER DEFAULT 0,
url TEXT,
keyword VARCHAR(255),
sentiment VARCHAR(20), -- positive, negative, neutral
sentiment_score DECIMAL(3,2), -- -1 to 1
published_at TIMESTAMP,
collected_at TIMESTAMP DEFAULT NOW(),
processed BOOLEAN DEFAULT FALSE,
UNIQUE(platform, external_id)
);
-- Create indexes for fast queries
CREATE INDEX idx_mentions_platform ON mentions(platform);
CREATE INDEX idx_mentions_keyword ON mentions(keyword);
CREATE INDEX idx_mentions_published_at ON mentions(published_at);
CREATE INDEX idx_mentions_sentiment ON mentions(sentiment);
-- keywords table
CREATE TABLE keywords (
id SERIAL PRIMARY KEY,
term VARCHAR(255) NOT NULL UNIQUE,
platforms TEXT[], -- array of platforms
active BOOLEAN DEFAULT TRUE,
created_at TIMESTAMP DEFAULT NOW()
);
-- alerts table
CREATE TABLE alerts (
id SERIAL PRIMARY KEY,
mention_id INTEGER REFERENCES mentions(id),
alert_type VARCHAR(50), -- spike, negative_sentiment, high_engagement
severity VARCHAR(20), -- low, medium, high, critical
sent BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP DEFAULT NOW()
);
Step 4: Scheduled Collection
Run collection on a schedule (every 15 minutes, hourly, etc.):
// scheduler.js
const cron = require('node-cron');
const { Pool } = require('pg');
const KeywordCollector = require('./collectors/keyword-collector');
const pool = new Pool({
connectionString: process.env.DATABASE_URL
});
async function collectAndStore() {
console.log('Starting collection run:', new Date().toISOString());
// Get active keywords from database
const { rows: keywords } = await pool.query(
'SELECT * FROM keywords WHERE active = true'
);
const collector = new KeywordCollector();
for (const keyword of keywords) {
console.log(`Collecting: ${keyword.term}`);
try {
const mentions = await collector.collectKeyword(
keyword.term,
keyword.platforms || ['all']
);
console.log(`Found ${mentions.length} mentions for "${keyword.term}"`);
// Store in database
for (const mention of mentions) {
try {
await pool.query(`
INSERT INTO mentions (
platform, type, external_id, text, author, author_followers,
likes, comments, shares, views, url, keyword, published_at
) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13)
ON CONFLICT (platform, external_id) DO UPDATE SET
likes = EXCLUDED.likes,
comments = EXCLUDED.comments,
shares = EXCLUDED.shares,
views = EXCLUDED.views
`, [
mention.platform,
mention.type,
mention.id,
mention.text,
mention.author,
mention.authorFollowers,
mention.engagement.likes || 0,
mention.engagement.comments || mention.engagement.replies || 0,
mention.engagement.shares || mention.engagement.retweets || 0,
mention.engagement.views || 0,
mention.url,
mention.keyword,
mention.timestamp
]);
} catch (error) {
console.error('Error storing mention:', error.message);
}
}
// Rate limiting - wait 2 seconds between keywords
await new Promise(resolve => setTimeout(resolve, 2000));
} catch (error) {
console.error(`Error collecting "${keyword.term}":`, error.message);
}
}
console.log('Collection run complete');
}
// Run every 15 minutes
cron.schedule('*/15 * * * *', collectAndStore);
// Run immediately on start
collectAndStore();
console.log('Scheduler started. Running every 15 minutes.');
Step 5: Alert Engine
Detect important events and send notifications:
// alert-engine.js
const { Pool } = require('pg');
const { WebClient } = require('@slack/web-api');
const pool = new Pool({
connectionString: process.env.DATABASE_URL
});
const slack = new WebClient(process.env.SLACK_BOT_TOKEN);
class AlertEngine {
// Detect mention volume spikes
async detectVolumeSpike(keyword, threshold = 10) {
const result = await pool.query(`
SELECT COUNT(*) as count
FROM mentions
WHERE keyword = $1
AND published_at > NOW() - INTERVAL '1 hour'
`, [keyword]);
const count = parseInt(result.rows[0].count);
if (count >= threshold) {
return {
type: 'volume_spike',
severity: count > threshold * 3 ? 'critical' : 'high',
message: `${count} mentions of "${keyword}" in the last hour (threshold: ${threshold})`,
data: { keyword, count, threshold }
};
}
return null;
}
// Detect negative sentiment surge
async detectNegativeSentiment(keyword) {
const result = await pool.query(`
SELECT
COUNT(*) as total,
SUM(CASE WHEN sentiment = 'negative' THEN 1 ELSE 0 END) as negative
FROM mentions
WHERE keyword = $1
AND published_at > NOW() - INTERVAL '6 hours'
AND sentiment IS NOT NULL
`, [keyword]);
const { total, negative } = result.rows[0];
const negativeRatio = parseInt(negative) / parseInt(total);
if (negativeRatio > 0.4 && total > 10) { // 40% negative
return {
type: 'negative_sentiment',
severity: negativeRatio > 0.6 ? 'critical' : 'high',
message: `${Math.round(negativeRatio * 100)}% of "${keyword}" mentions are negative (${negative}/${total})`,
data: { keyword, negativeRatio, negative, total }
};
}
return null;
}
// Detect high-engagement mentions
async detectViralMentions(keyword) {
const result = await pool.query(`
SELECT *
FROM mentions
WHERE keyword = $1
AND published_at > NOW() - INTERVAL '24 hours'
AND (likes > 1000 OR views > 100000 OR comments > 100)
AND processed = false
ORDER BY published_at DESC
`, [keyword]);
if (result.rows.length > 0) {
return result.rows.map(mention => ({
type: 'viral_mention',
severity: 'high',
message: `High-engagement mention of "${keyword}" on ${mention.platform}`,
data: {
keyword,
platform: mention.platform,
author: mention.author,
engagement: {
likes: mention.likes,
views: mention.views,
comments: mention.comments
},
url: mention.url,
text: mention.text.substring(0, 200)
}
}));
}
return [];
}
// Send alert to Slack
async sendSlackAlert(alert) {
const emoji = {
critical: '🚨',
high: '⚠️',
medium: '📢',
low: 'ℹ️'
};
const color = {
critical: '#FF0000',
high: '#FFA500',
medium: '#FFFF00',
low: '#00FF00'
};
try {
await slack.chat.postMessage({
channel: process.env.SLACK_ALERT_CHANNEL,
text: `${emoji[alert.severity]} ${alert.message}`,
attachments: [{
color: color[alert.severity],
fields: [
{
title: 'Alert Type',
value: alert.type,
short: true
},
{
title: 'Severity',
value: alert.severity.toUpperCase(),
short: true
},
{
title: 'Details',
value: JSON.stringify(alert.data, null, 2)
}
],
footer: 'SociaVault Listening',
ts: Math.floor(Date.now() / 1000)
}]
});
console.log(`Slack alert sent: ${alert.type}`);
} catch (error) {
console.error('Error sending Slack alert:', error.message);
}
}
// Run all detection checks
async runChecks() {
console.log('Running alert checks:', new Date().toISOString());
// Get active keywords
const { rows: keywords } = await pool.query(
'SELECT term FROM keywords WHERE active = true'
);
for (const { term } of keywords) {
// Check for volume spikes
const volumeAlert = await this.detectVolumeSpike(term);
if (volumeAlert) {
await this.sendSlackAlert(volumeAlert);
await this.storeAlert(volumeAlert);
}
// Check for negative sentiment
const sentimentAlert = await this.detectNegativeSentiment(term);
if (sentimentAlert) {
await this.sendSlackAlert(sentimentAlert);
await this.storeAlert(sentimentAlert);
}
// Check for viral mentions
const viralAlerts = await this.detectViralMentions(term);
for (const alert of viralAlerts) {
await this.sendSlackAlert(alert);
await this.storeAlert(alert);
}
}
console.log('Alert checks complete');
}
async storeAlert(alert) {
await pool.query(`
INSERT INTO alerts (alert_type, severity, sent, created_at)
VALUES ($1, $2, true, NOW())
`, [alert.type, alert.severity]);
}
}
module.exports = AlertEngine;
Step 6: Dashboard Queries
Query your data for insights:
// analytics.js
const { Pool } = require('pg');
const pool = new Pool({
connectionString: process.env.DATABASE_URL
});
class Analytics {
// Get mention volume over time
async getMentionVolume(keyword, days = 7) {
const result = await pool.query(`
SELECT
DATE(published_at) as date,
COUNT(*) as count,
platform
FROM mentions
WHERE keyword = $1
AND published_at > NOW() - INTERVAL '${days} days'
GROUP BY DATE(published_at), platform
ORDER BY date DESC, platform
`, [keyword]);
return result.rows;
}
// Get sentiment breakdown
async getSentimentBreakdown(keyword, days = 7) {
const result = await pool.query(`
SELECT
sentiment,
COUNT(*) as count,
ROUND(AVG(sentiment_score), 2) as avg_score
FROM mentions
WHERE keyword = $1
AND published_at > NOW() - INTERVAL '${days} days'
AND sentiment IS NOT NULL
GROUP BY sentiment
`, [keyword]);
return result.rows;
}
// Get top platforms
async getTopPlatforms(keyword, days = 7) {
const result = await pool.query(`
SELECT
platform,
COUNT(*) as mentions,
SUM(likes) as total_likes,
SUM(views) as total_views,
SUM(comments) as total_comments
FROM mentions
WHERE keyword = $1
AND published_at > NOW() - INTERVAL '${days} days'
GROUP BY platform
ORDER BY mentions DESC
`, [keyword]);
return result.rows;
}
// Get top mentions (by engagement)
async getTopMentions(keyword, limit = 10, days = 7) {
const result = await pool.query(`
SELECT
platform,
text,
author,
likes,
views,
comments,
url,
published_at
FROM mentions
WHERE keyword = $1
AND published_at > NOW() - INTERVAL '${days} days'
ORDER BY (likes + views / 100 + comments * 2) DESC
LIMIT $2
`, [keyword, limit]);
return result.rows;
}
// Get trending topics (most mentioned)
async getTrendingKeywords(days = 1) {
const result = await pool.query(`
SELECT
keyword,
COUNT(*) as mention_count,
COUNT(DISTINCT platform) as platforms,
MAX(published_at) as latest_mention
FROM mentions
WHERE published_at > NOW() - INTERVAL '${days} days'
GROUP BY keyword
ORDER BY mention_count DESC
LIMIT 20
`);
return result.rows;
}
// Competitive analysis
async compareKeywords(keywords, days = 7) {
const result = await pool.query(`
SELECT
keyword,
COUNT(*) as mentions,
COUNT(DISTINCT platform) as platforms,
AVG(CASE WHEN sentiment = 'positive' THEN 1
WHEN sentiment = 'negative' THEN -1
ELSE 0 END) as avg_sentiment,
SUM(likes) as total_engagement
FROM mentions
WHERE keyword = ANY($1)
AND published_at > NOW() - INTERVAL '${days} days'
GROUP BY keyword
ORDER BY mentions DESC
`, [keywords]);
return result.rows;
}
}
module.exports = Analytics;
Real-World Use Cases
Let's see practical applications.
Use Case 1: Brand Reputation Monitoring
Track your brand across all platforms:
// examples/brand-monitoring.js
const Analytics = require('../analytics');
async function monitorBrand() {
const brandKeywords = [
'YourBrand',
'@yourbrand',
'#yourbrand',
'yourbrand.com'
];
const analytics = new Analytics();
// Get weekly brand health report
for (const keyword of brandKeywords) {
console.log(`\n=== ${keyword} ===`);
const volume = await analytics.getMentionVolume(keyword, 7);
const sentiment = await analytics.getSentimentBreakdown(keyword, 7);
const platforms = await analytics.getTopPlatforms(keyword, 7);
const topMentions = await analytics.getTopMentions(keyword, 5, 7);
console.log('\nMention Volume (last 7 days):');
volume.forEach(v => {
console.log(`${v.date} [${v.platform}]: ${v.count} mentions`);
});
console.log('\nSentiment Breakdown:');
sentiment.forEach(s => {
console.log(`${s.sentiment}: ${s.count} (avg score: ${s.avg_score})`);
});
console.log('\nTop Platforms:');
platforms.forEach(p => {
console.log(`${p.platform}: ${p.mentions} mentions, ${p.total_likes} likes`);
});
console.log('\nTop Mentions:');
topMentions.forEach((m, i) => {
console.log(`${i + 1}. [${m.platform}] ${m.author}: ${m.text.substring(0, 100)}...`);
console.log(` Engagement: ${m.likes} likes, ${m.views} views`);
console.log(` ${m.url}\n`);
});
}
}
monitorBrand();
Value: Comprehensive brand health tracking across 25+ platforms
Use Case 2: Competitive Intelligence
Monitor what competitors are doing:
// examples/competitive-intel.js
const Analytics = require('../analytics');
async function trackCompetitors() {
const competitors = [
'Competitor1',
'Competitor2',
'Competitor3'
];
const analytics = new Analytics();
// Compare competitor mentions
const comparison = await analytics.compareKeywords(competitors, 30);
console.log('=== Competitive Landscape (Last 30 Days) ===\n');
console.log('Competitor | Mentions | Platforms | Sentiment | Engagement');
console.log('-----------|----------|-----------|-----------|------------');
comparison.forEach(c => {
console.log(
`${c.keyword.padEnd(12)} | ${c.mentions.toString().padEnd(8)} | ` +
`${c.platforms.toString().padEnd(9)} | ${Number(c.avg_sentiment).toFixed(2).padEnd(9)} | ` +
`${c.total_engagement.toString()}`
);
});
// Find competitor content strategies
for (const competitor of competitors) {
const topMentions = await analytics.getTopMentions(competitor, 10, 30);
console.log(`\n=== ${competitor} Top Content ===`);
topMentions.forEach((m, i) => {
console.log(`${i + 1}. [${m.platform}] ${m.text.substring(0, 80)}...`);
console.log(` ${m.url}`);
});
}
}
trackCompetitors();
Value: Understand competitor strategies and market positioning
Use Case 3: Crisis Detection
Detect and respond to negative sentiment spikes:
// examples/crisis-detection.js
const AlertEngine = require('../alert-engine');
async function monitorForCrises() {
const alertEngine = new AlertEngine();
// Run checks every 5 minutes
setInterval(async () => {
await alertEngine.runChecks();
}, 5 * 60 * 1000);
console.log('Crisis monitoring active. Checking every 5 minutes...');
}
monitorForCrises();
When a crisis is detected, you get immediate Slack alerts:
🚨 65% of "YourBrand" mentions are negative (26/40)
Alert Type: negative_sentiment
Severity: CRITICAL
Details:
{
"keyword": "YourBrand",
"negativeRatio": 0.65,
"negative": 26,
"total": 40
}
Value: Early warning system prevents small issues from becoming disasters
Cost Comparison
Let's break down the real economics.
Traditional Tools
Brandwatch (Professional):
- Cost: $2,000-3,000/month
- Platforms: 6-8 major platforms
- Historical data: 13 months
- Users: 3-5
- API access: Limited
Sprout Social (Advanced):
- Cost: $299/user/month
- Platforms: 5 platforms
- Historical data: Limited
- Users: Pay per seat
- API access: Available
Annual cost for 3 users:
- Brandwatch: $24,000-36,000
- Sprout Social: $10,764
DIY with SociaVault
Infrastructure:
- Server (2GB RAM, Node.js): $10-20/month (DigitalOcean, Railway, Render)
- PostgreSQL database: $0-15/month (free tier or small paid)
- Slack: Free
SociaVault API:
Let's calculate for realistic usage:
Daily Monitoring:
- 10 keywords
- 5 platforms per keyword
- Collected every 15 minutes (96 times/day)
- Average 20 results per search
Credits per day:
- 10 keywords × 5 platforms × 96 collections = 4,800 searches/day
- At 1 credit per search = 4,800 credits/day
- Monthly: 144,000 credits
Cost: $7,200/month at standard pricing ($0.05/credit)
Wait, that's more expensive than Brandwatch!
But here's the optimization:
1. Smart Scheduling: Don't check every 15 minutes for all keywords
   - Critical keywords: Every 15 min
   - Standard keywords: Every 1 hour
   - Low priority: Every 6 hours
   Reduced to: ~30,000 credits/month = $1,500/month
2. Cache results: Don't re-fetch unchanged content
   Final cost: ~20,000 credits/month = $1,000/month
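As a sanity check, here's a rough sketch of that credit math under a tiered schedule. The keyword counts per tier and the $0.05/credit price are assumptions; plug in your own plan's numbers:
// credit-estimate.js - back-of-the-envelope credit math for tiered scheduling (illustrative numbers)
const CREDIT_PRICE = 0.05; // assumed price per credit

const tiers = [
  { name: 'critical', keywords: 1, platforms: 5, runsPerDay: 96 }, // every 15 minutes
  { name: 'standard', keywords: 3, platforms: 5, runsPerDay: 24 }, // hourly
  { name: 'low',      keywords: 6, platforms: 5, runsPerDay: 4 }   // every 6 hours
];

const creditsPerDay = tiers.reduce((sum, t) => sum + t.keywords * t.platforms * t.runsPerDay, 0);
const creditsPerMonth = creditsPerDay * 30; // ~28,800 for these numbers, close to the ~30,000 above

console.log(`${creditsPerDay} credits/day, ${creditsPerMonth} credits/month`);
console.log(`~$${(creditsPerMonth * CREDIT_PRICE).toFixed(0)}/month before caching`);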
Total monthly cost:
- Infrastructure: $30
- SociaVault credits: $1,000
- Total: $1,030/month
Annual savings vs Brandwatch: $24,000 - $12,360 = $11,640/year
Plus you get:
- 25+ platforms (vs 6-8)
- Full customization
- Raw data access
- No user seat limits
- Custom integrations
The ROI
Time savings:
- No manual social media checking: 10 hours/week saved
- Automated alerts: Instant crisis detection
- Custom dashboards: Better insights, faster
Value at $100/hour: 10 hours × 4 weeks × $100 = $4,000/month in time savings
Crisis prevention:
- Catch one PR crisis early: Priceless (easily worth $50,000+ in damage prevention)
The system pays for itself immediately.
Best Practices
1. Smart Keyword Selection
Don't monitor everything. Be strategic:
Good keywords:
- Your brand name (exact match)
- Common misspellings
- Product names
- Executive names (for B2B)
- Key hashtags
Bad keywords:
- Generic terms ("social media")
- Single letters or numbers
- Overly broad hashtags ("#marketing")
2. Optimize Collection Frequency
Not all keywords need real-time monitoring. Tier them by priority (a scheduling sketch follows this list):
- Critical (brand name): Every 15 minutes
- Important (products): Every hour
- Competitive intel: Every 6 hours
- Industry trends: Daily
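Here's one way to wire those tiers into the Step 4 scheduler. This is a sketch: it assumes a priority column added to the keywords table, which is not part of the Step 3 schema:
// tiered-scheduler.js - sketch: run collection at different frequencies per priority tier.
// Assumes a `priority` column added to the keywords table (not in the Step 3 schema).
const cron = require('node-cron');
const { Pool } = require('pg');
const KeywordCollector = require('./collectors/keyword-collector');

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const collector = new KeywordCollector();

const TIERS = {
  critical: '*/15 * * * *', // every 15 minutes
  standard: '0 * * * *',    // hourly
  low:      '0 */6 * * *',  // every 6 hours
  daily:    '0 8 * * *'     // once a day
};

async function collectByPriority(priority) {
  const { rows: keywords } = await pool.query(
    'SELECT * FROM keywords WHERE active = true AND priority = $1',
    [priority]
  );
  for (const keyword of keywords) {
    const mentions = await collector.collectKeyword(keyword.term, keyword.platforms || ['all']);
    console.log(`[${priority}] ${keyword.term}: ${mentions.length} mentions`);
    // upsert into the mentions table exactly as in Step 4
  }
}

for (const [priority, expression] of Object.entries(TIERS)) {
  cron.schedule(expression, () => collectByPriority(priority));
}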
3. Implement Deduplication
Social content gets cross-posted. Deduplicate incoming mentions using signals like these (a sketch follows the list):
- Text similarity (>90% match = duplicate)
- Same author + similar timestamp
- Same URL across platforms
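A minimal sketch of the first two signals, using Jaccard word-set overlap as a cheap stand-in for text similarity (the 0.9 threshold mirrors the 90% guideline above; field names match the collector and the mentions table):
// dedupe.js - sketch: flag near-duplicate mentions before inserting them.
function jaccardSimilarity(a, b) {
  const setA = new Set(a.toLowerCase().split(/\s+/).filter(Boolean));
  const setB = new Set(b.toLowerCase().split(/\s+/).filter(Boolean));
  const intersection = [...setA].filter(word => setB.has(word)).length;
  const union = new Set([...setA, ...setB]).size;
  return union === 0 ? 0 : intersection / union;
}

// newMention comes from the collector, recentMentions from the mentions table
function isDuplicate(newMention, recentMentions, threshold = 0.9) {
  return recentMentions.some(existing =>
    jaccardSimilarity(newMention.text || '', existing.text || '') >= threshold ||
    (existing.author === newMention.author &&
      Math.abs(new Date(existing.published_at) - new Date(newMention.timestamp)) < 10 * 60 * 1000)
  );
}

module.exports = { isDuplicate };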
4. Add Sentiment Analysis
Use a simple sentiment library or AI:
// Basic sentiment with sentiment library
const Sentiment = require('sentiment');
const sentiment = new Sentiment();
function analyzeSentiment(text) {
const result = sentiment.analyze(text);
return {
score: result.score / text.length, // rough per-character normalization; 'comparative' below is the library's per-word score
sentiment: result.score > 0 ? 'positive' :
result.score < 0 ? 'negative' : 'neutral',
comparative: result.comparative
};
}
Or use OpenAI for better accuracy:
// Advanced sentiment with OpenAI
const OpenAI = require('openai');
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
async function analyzeSentimentAI(text) {
const response = await openai.chat.completions.create({
model: "gpt-3.5-turbo",
messages: [{
role: "user",
content: `Analyze the sentiment of this social media post. Respond with only: positive, negative, or neutral.\n\n"${text}"`
}],
temperature: 0
});
return response.choices[0].message.content.trim().toLowerCase();
}
5. Archive Old Data
Keep your database lean:
-- Move old mentions to archive table
CREATE TABLE mentions_archive (LIKE mentions INCLUDING ALL);
-- Archive mentions older than 90 days
INSERT INTO mentions_archive
SELECT * FROM mentions
WHERE published_at < NOW() - INTERVAL '90 days';
DELETE FROM mentions
WHERE published_at < NOW() - INTERVAL '90 days';
6. Monitor Your Monitoring
Track system health:
// health-check.js
async function checkSystemHealth() {
const checks = {
database: await checkDatabase(),
lastCollection: await checkLastCollection(),
alertEngine: await checkAlertEngine(),
credits: await checkCredits()
};
console.log('System Health:', checks);
if (!checks.database || !checks.lastCollection) {
// Send alert to team
await sendTeamAlert('System health check failed!');
}
}
// Run every hour
setInterval(checkSystemHealth, 60 * 60 * 1000);
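The check functions referenced above are left undefined; here's a rough sketch of two of them, assuming the same PostgreSQL pool used elsewhere in this guide (the 30-minute staleness window is an arbitrary choice for a 15-minute schedule):
// health-checks.js - sketches of two of the helper checks used above.
const { Pool } = require('pg');
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Is the database reachable?
async function checkDatabase() {
  try {
    await pool.query('SELECT 1');
    return true;
  } catch {
    return false;
  }
}

// Has a collection run stored anything recently? (30 minutes assumed for a 15-minute schedule)
async function checkLastCollection() {
  const { rows } = await pool.query('SELECT MAX(collected_at) AS last FROM mentions');
  if (!rows[0].last) return false;
  return Date.now() - new Date(rows[0].last).getTime() < 30 * 60 * 1000;
}

module.exports = { checkDatabase, checkLastCollection };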
Frequently Asked Questions
Q: How real-time is this compared to paid tools?
As real-time as you configure it. Running collection every 15 minutes gives you near-real-time monitoring. Paid tools typically update every 15-30 minutes anyway.
Q: Can I track private accounts?
No. SociaVault extracts publicly accessible data only. This is the same limitation as most listening tools.
Q: What about sentiment analysis accuracy?
Basic sentiment libraries are 60-70% accurate. OpenAI/Claude can get you to 85-90%. Paid tools claim 80-85%, but they're often using similar NLP models.
Q: How do I handle rate limits?
SociaVault handles rate limiting automatically. On your end, spread out collection times and cache aggressively.
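For example, a small in-memory cache in front of the collector avoids re-fetching the same keyword within a window. A sketch; the 10-minute TTL is arbitrary:
// cache.js - sketch: skip re-fetching a keyword/platform pair fetched recently.
const cache = new Map();
const TTL_MS = 10 * 60 * 1000; // 10 minutes; adjust to your collection schedule

async function cachedSearch(platform, keyword, fetchFn) {
  const key = `${platform}:${keyword.toLowerCase()}`;
  const hit = cache.get(key);
  if (hit && Date.now() - hit.time < TTL_MS) return hit.results;

  const results = await fetchFn(keyword); // e.g. collector.searchReddit(keyword)
  cache.set(key, { results, time: Date.now() });
  return results;
}

module.exports = { cachedSearch };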
Q: Can I get historical data?
You can only collect data going forward. For historical analysis, you'd need to run the system over time to build your dataset.
Q: What if a platform changes?
SociaVault maintains the scrapers. Your code doesn't need to change. This is a key advantage over DIY scraping.
Q: Is this legal?
Generally, yes, for publicly available data. The hiQ Labs v. LinkedIn rulings (Ninth Circuit, 2022) found that scraping public data is not unauthorized access under the CFAA, but platform terms of service and privacy regulations still apply, so review your specific use case.
Q: How do I scale this?
Start with a single server. As you grow:
- Use a queue system (Bull, RabbitMQ) for collection jobs (see the sketch below)
- Add worker servers for parallel processing
- Use a managed database (RDS, Supabase)
- Add caching (Redis)
- Implement horizontal scaling
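As a starting point, here's a minimal Bull sketch that turns each keyword into a queued job. It assumes a Redis instance and reuses the collector from Step 2; queue names and option values are illustrative:
// queue.js - sketch: fan keyword collection out to workers with Bull + Redis.
const Queue = require('bull');
const KeywordCollector = require('./collectors/keyword-collector');

const collectionQueue = new Queue('keyword-collection', process.env.REDIS_URL || 'redis://127.0.0.1:6379');

// Producer: enqueue one job per keyword (call this from the cron scheduler)
async function enqueueKeywords(keywords) {
  for (const kw of keywords) {
    await collectionQueue.add({ term: kw.term, platforms: kw.platforms }, { attempts: 3, backoff: 60000 });
  }
}

// Worker: process up to 3 jobs in parallel
collectionQueue.process(3, async (job) => {
  const collector = new KeywordCollector();
  const mentions = await collector.collectKeyword(job.data.term, job.data.platforms || ['all']);
  // store mentions as in Step 4
  return { count: mentions.length };
});

module.exports = { enqueueKeywords };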
Conclusion
Traditional social listening tools are expensive and limited. They lock you into their platform coverage, their features, and their pricing.
Building your own gives you:
- Complete control: Monitor exactly what you need
- Better economics: Pay for what you use, not per seat
- Broader coverage: 25+ platforms vs 6-8
- Custom integrations: Connect to your existing tools
- Full data access: Export, analyze, train ML models
Whether you're a startup tracking brand mentions, an agency monitoring clients, or an enterprise doing competitive intelligence, a custom solution scales with your needs.
Ready to start? Get your SociaVault API key at sociavault.com and build your first monitoring system today.
The code examples in this guide are working starting points. Clone, customize, deploy.
Your social listening infrastructure, your way.
Found this helpful?
Share it with others who might benefit
Ready to Try SociaVault?
Start extracting social media data with our powerful API