Tutorial

How to Scrape Instagram Followers: Export Any Follower List (2026)

February 10, 2026
8 min read
By SociaVault Team
Instagram Scraping, Followers, Python, JavaScript, API, Data Extraction, Export Followers, Instagram API

How to Scrape Instagram Followers: Complete Guide with Code

Need to export an Instagram account's followers? Whether you're doing influencer research, competitor analysis, or building a CRM, getting follower data programmatically is the way to go.

This guide shows you exactly how to scrape Instagram followers using code—with working examples in both JavaScript and Python.

New to scraping? Check our social media scraping overview first.

What Follower Data Can You Get?

When you scrape Instagram followers, you can extract:

  • Username - The follower's @handle
  • Full name - Display name
  • Profile picture - URL to avatar
  • Bio - Profile description
  • Follower count - How many follow them
  • Following count - How many they follow
  • Is verified - Blue checkmark status
  • Is private - Whether their account is private

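If you plan to store these fields, it helps to normalize each raw record into a fixed shape up front so missing values don't leak into your downstream code. A minimal sketch, assuming the snake_case keys shown in the response format later in this guide; the fallback defaults are my own choice, not part of any API contract:

```javascript
// Normalize one raw follower record into a fixed shape.
// Field names assume the snake_case keys from the API response;
// missing fields fall back to neutral defaults.
function normalizeFollower(raw) {
  return {
    id: String(raw.id ?? ''),
    username: raw.username ?? '',
    fullName: raw.full_name ?? '',
    profilePicUrl: raw.profile_pic_url ?? null,
    followerCount: Number(raw.follower_count ?? 0),
    followingCount: Number(raw.following_count ?? 0),
    isVerified: Boolean(raw.is_verified),
    isPrivate: Boolean(raw.is_private)
  };
}
```

Running every record through a normalizer like this also makes the CSV-export and database examples further down less fragile, since every field is guaranteed to exist.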
Method 1: Use a Scraping API (Recommended)

This is the fastest and most reliable method: an API handles rate limits, proxies, and anti-bot detection for you.

JavaScript Example

const API_KEY = 'your_api_key_here';

async function getInstagramFollowers(username, limit = 100) {
  const response = await fetch(
    `https://api.sociavault.com/v1/scrape/instagram/followers?username=${username}&limit=${limit}`,
    {
      headers: {
        'Authorization': `Bearer ${API_KEY}`,
        'Content-Type': 'application/json'
      }
    }
  );
  
  if (!response.ok) {
    throw new Error(`API error: ${response.status}`);
  }
  
  return response.json();
}

// Usage
async function main() {
  const result = await getInstagramFollowers('nike', 500);
  
  console.log(`Found ${result.data.length} followers`);
  
  result.data.forEach(follower => {
    console.log({
      username: follower.username,
      fullName: follower.full_name,
      followers: follower.follower_count,
      isVerified: follower.is_verified
    });
  });
}

main();

Python Example

import requests
import json

API_KEY = 'your_api_key_here'

def get_instagram_followers(username, limit=100):
    """Scrape Instagram followers for a given username."""
    
    response = requests.get(
        'https://api.sociavault.com/v1/scrape/instagram/followers',
        params={
            'username': username,
            'limit': limit
        },
        headers={
            'Authorization': f'Bearer {API_KEY}'
        }
    )
    
    response.raise_for_status()
    return response.json()

# Usage
if __name__ == '__main__':
    result = get_instagram_followers('nike', 500)
    
    print(f"Found {len(result['data'])} followers")
    
    for follower in result['data']:
        print(f"@{follower['username']} - {follower['full_name']} ({follower['follower_count']} followers)")

Response Format

{
  "success": true,
  "data": [
    {
      "id": "12345678",
      "username": "john_doe",
      "full_name": "John Doe",
      "profile_pic_url": "https://...",
      "is_verified": false,
      "is_private": false,
      "follower_count": 1542,
      "following_count": 892
    },
    {
      "id": "87654321",
      "username": "jane_smith",
      "full_name": "Jane Smith",
      "profile_pic_url": "https://...",
      "is_verified": true,
      "is_private": false,
      "follower_count": 25000,
      "following_count": 445
    }
  ],
  "pagination": {
    "has_next": true,
    "cursor": "abc123..."
  }
}
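Before iterating over `result.data`, a small defensive check avoids crashes when the API returns an error payload instead of a follower list. A sketch under the assumption that error responses simply lack `success: true` and a `data` array; check the actual error shape for your plan:

```javascript
// Return the follower array from a parsed response, or throw with a
// readable message if the payload doesn't match the expected shape.
function extractFollowers(result) {
  if (!result || result.success !== true || !Array.isArray(result.data)) {
    throw new Error(`Unexpected response: ${JSON.stringify(result).slice(0, 200)}`);
  }
  return result.data;
}
```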

Method 2: Scraping with Puppeteer (DIY)

If you want full control, you can scrape directly with browser automation. Be warned: this requires proxy management and breaks frequently.

const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

puppeteer.use(StealthPlugin());

async function scrapeFollowers(username, maxFollowers = 100) {
  const browser = await puppeteer.launch({ 
    headless: false, // Instagram detects headless browsers
    args: ['--no-sandbox']
  });
  
  const page = await browser.newPage();
  
  // You'll need to be logged in - Instagram requires auth for follower lists
  // This is where DIY scraping gets complicated...
  
  try {
    // Navigate to profile
    await page.goto(`https://www.instagram.com/${username}/`, {
      waitUntil: 'networkidle2'
    });
    
    // Click followers button
    await page.click('a[href$="/followers/"]');
    await page.waitForSelector('div[role="dialog"]');
    
    const followers = [];
    let previousHeight = 0;
    
    // Scroll through follower list
    while (followers.length < maxFollowers) {
      // Extract visible followers
      const newFollowers = await page.evaluate(() => {
        const items = document.querySelectorAll('div[role="dialog"] a[role="link"]');
        return Array.from(items).map(item => ({
          username: item.getAttribute('href')?.replace(/\//g, ''),
          fullName: item.querySelector('span')?.innerText
        }));
      });
      
      // Add unique followers
      newFollowers.forEach(f => {
        if (f.username && !followers.find(e => e.username === f.username)) {
          followers.push(f);
        }
      });
      
      // Scroll down
      await page.evaluate(() => {
        const dialog = document.querySelector('div[role="dialog"] div[style*="overflow"]');
        if (dialog) dialog.scrollTop = dialog.scrollHeight;
      });
      
      // page.waitForTimeout was removed in newer Puppeteer versions
      await new Promise(r => setTimeout(r, 2000));
      
      // Check if we've reached the end
      const currentHeight = await page.evaluate(() => {
        const dialog = document.querySelector('div[role="dialog"] div[style*="overflow"]');
        return dialog?.scrollHeight || 0;
      });
      
      if (currentHeight === previousHeight) break;
      previousHeight = currentHeight;
    }
    
    return followers.slice(0, maxFollowers);
    
  } finally {
    await browser.close();
  }
}
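One note on the dedupe step inside the scroll loop above: `followers.find(...)` scans the whole list on every new entry, making each pass O(n²). Tracking seen usernames in a Set keeps it linear. As a standalone sketch:

```javascript
// Merge newly scraped entries into an accumulator, skipping usernames
// already seen. `seen` is a Set of username strings, reused across passes.
function mergeUnique(accumulated, seen, newEntries) {
  for (const entry of newEntries) {
    if (entry.username && !seen.has(entry.username)) {
      seen.add(entry.username);
      accumulated.push(entry);
    }
  }
  return accumulated;
}
```

For a few hundred followers the difference is negligible, but for large lists the Set version avoids a noticeable slowdown.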

Problems with DIY Scraping

  1. Login required - Instagram requires authentication to view follower lists
  2. Rate limits - You'll get blocked after scraping a few hundred followers
  3. CAPTCHAs - Frequent challenges that stop your script
  4. Account bans - Risk of your Instagram account being disabled
  5. Constant updates - Instagram changes their DOM frequently

Scraping the Following List

The process is identical—just change the endpoint:

// Get accounts a user is following
async function getInstagramFollowing(username, limit = 100) {
  const response = await fetch(
    `https://api.sociavault.com/v1/scrape/instagram/following?username=${username}&limit=${limit}`,
    {
      headers: {
        'Authorization': `Bearer ${API_KEY}`
      }
    }
  );
  
  return response.json();
}

// Compare followers vs following
async function findNotFollowingBack(username) {
  const [followers, following] = await Promise.all([
    getInstagramFollowers(username, 1000),
    getInstagramFollowing(username, 1000)
  ]);
  
  const followerUsernames = new Set(followers.data.map(f => f.username));
  
  const notFollowingBack = following.data.filter(
    f => !followerUsernames.has(f.username)
  );
  
  console.log(`${notFollowingBack.length} accounts don't follow back:`);
  notFollowingBack.forEach(f => console.log(`  @${f.username}`));
  
  return notFollowingBack;
}
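The same set comparison works in the other direction to find "fans": accounts that follow the user but aren't followed back. A sketch that takes the two username arrays directly, so it composes with either fetch function above:

```javascript
// Given arrays of follower and following usernames, return the
// followers that the user does not follow back.
function findFans(followerUsernames, followingUsernames) {
  const following = new Set(followingUsernames);
  return followerUsernames.filter(u => !following.has(u));
}
```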

Practical Use Cases

1. Export Followers to CSV

const fs = require('fs');

async function exportFollowersToCSV(username, filename) {
  const result = await getInstagramFollowers(username, 1000);
  
  const headers = ['username', 'full_name', 'follower_count', 'following_count', 'is_verified'];
  const rows = result.data.map(f => [
    f.username,
    f.full_name || '',
    f.follower_count || 0,
    f.following_count || 0,
    f.is_verified
  ]);
  
  const csv = [
    headers.join(','),
    // Escape embedded double quotes so names with quotes or commas
    // don't break the CSV
    ...rows.map(r => r.map(v => `"${String(v).replace(/"/g, '""')}"`).join(','))
  ].join('\n');
  
  fs.writeFileSync(filename, csv);
  console.log(`Exported ${rows.length} followers to ${filename}`);
}

exportFollowersToCSV('nike', 'nike-followers.csv');

2. Find Influencers Among Followers

async function findInfluencerFollowers(username, minFollowers = 10000) {
  const result = await getInstagramFollowers(username, 500);
  
  const influencers = result.data
    .filter(f => f.follower_count >= minFollowers)
    .sort((a, b) => b.follower_count - a.follower_count);
  
  console.log(`Found ${influencers.length} influencers following @${username}:`);
  
  influencers.forEach(inf => {
    console.log(`  @${inf.username} - ${inf.follower_count.toLocaleString()} followers`);
  });
  
  return influencers;
}

// Find influencers who follow a brand
findInfluencerFollowers('glossier', 10000);

3. Analyze Competitor Audience Overlap

import requests
from collections import Counter

API_KEY = 'your_api_key'

def get_followers(username, limit=1000):
    response = requests.get(
        'https://api.sociavault.com/v1/scrape/instagram/followers',
        params={'username': username, 'limit': limit},
        headers={'Authorization': f'Bearer {API_KEY}'}
    )
    return [f['username'] for f in response.json()['data']]

def analyze_audience_overlap(accounts):
    """Find followers who follow multiple accounts."""
    
    all_followers = []
    
    for account in accounts:
        print(f"Getting followers for @{account}...")
        followers = get_followers(account, 500)
        all_followers.extend([(f, account) for f in followers])
    
    # Count how many accounts each follower follows
    follower_counts = Counter(f[0] for f in all_followers)
    
    # Find users who follow multiple accounts
    overlap = {user: count for user, count in follower_counts.items() if count > 1}
    
    print(f"\nAudience overlap analysis:")
    print(f"Total unique followers: {len(follower_counts)}")
    print(f"Followers of 2+ accounts: {len(overlap)}")
    
    # Show top shared followers
    top_shared = sorted(overlap.items(), key=lambda x: x[1], reverse=True)[:20]
    print(f"\nTop shared followers:")
    for user, count in top_shared:
        print(f"  @{user} follows {count} of these accounts")
    
    return overlap

# Compare fitness brand audiences
competitors = ['gymshark', 'alphalete', 'youngla']
analyze_audience_overlap(competitors)

4. Build a Follower Database

const Database = require('better-sqlite3');

const db = new Database('instagram_followers.db');

// Create table
db.exec(`
  CREATE TABLE IF NOT EXISTS followers (
    id TEXT,
    target_account TEXT,
    username TEXT,
    full_name TEXT,
    follower_count INTEGER,
    following_count INTEGER,
    is_verified INTEGER,
    is_private INTEGER,
    scraped_at TEXT,
    PRIMARY KEY (target_account, username)
  )
`);

async function saveFollowersToDatabase(targetAccount) {
  const result = await getInstagramFollowers(targetAccount, 1000);
  
  const stmt = db.prepare(`
    INSERT OR REPLACE INTO followers 
    VALUES (?, ?, ?, ?, ?, ?, ?, ?, datetime('now'))
  `);
  
  for (const f of result.data) {
    stmt.run(
      f.id,
      targetAccount,
      f.username,
      f.full_name,
      f.follower_count,
      f.following_count,
      f.is_verified ? 1 : 0,
      f.is_private ? 1 : 0
    );
  }
  
  console.log(`Saved ${result.data.length} followers of @${targetAccount}`);
}

// Track followers over time
async function trackFollowerGrowth(accounts) {
  for (const account of accounts) {
    await saveFollowersToDatabase(account);
  }
}

Handling Large Accounts

For accounts with millions of followers, use pagination:

async function getAllFollowers(username) {
  const allFollowers = [];
  let cursor = null;
  
  do {
    const url = new URL('https://api.sociavault.com/v1/scrape/instagram/followers');
    url.searchParams.set('username', username);
    url.searchParams.set('limit', '100');
    if (cursor) url.searchParams.set('cursor', cursor);
    
    const response = await fetch(url, {
      headers: { 'Authorization': `Bearer ${API_KEY}` }
    });
    
    const result = await response.json();
    
    allFollowers.push(...result.data);
    cursor = result.pagination?.cursor;
    
    console.log(`Fetched ${allFollowers.length} followers...`);
    
    // Respect rate limits
    await new Promise(r => setTimeout(r, 500));
    
  } while (cursor);
  
  return allFollowers;
}
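If you don't need every follower, an unbiased sample of what you've already fetched is often enough for analysis. A minimal sketch using a partial Fisher-Yates shuffle, which works on any array the pagination loop above returns:

```javascript
// Return a uniform random sample of up to `size` items using a partial
// Fisher-Yates shuffle. Does not mutate the input array.
function sampleFollowers(followers, size) {
  const copy = followers.slice();
  const n = Math.min(size, copy.length);
  for (let i = 0; i < n; i++) {
    const j = i + Math.floor(Math.random() * (copy.length - i));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy.slice(0, n);
}
```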

Rate Limits and Best Practices

  1. Don't scrape too fast - Add delays between requests (500ms minimum)
  2. Cache results - Store data locally to avoid re-scraping
  3. Use pagination - Fetch in batches of 100-500
  4. Handle errors gracefully - Implement retry logic
  5. Respect privacy - Only collect publicly available data

A simple retry wrapper with exponential backoff for rate-limited (429) responses:

async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      const response = await fetch(url, options);
      
      if (response.status === 429) {
        // Rate limited - wait and retry
        const waitTime = Math.pow(2, i) * 1000;
        console.log(`Rate limited. Waiting ${waitTime}ms...`);
        await new Promise(r => setTimeout(r, waitTime));
        continue;
      }
      
      return response;
    } catch (error) {
      if (i === maxRetries - 1) throw error;
    }
  }
}
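The backoff above waits 1s, then 2s, then 4s. Pulling the schedule into a pure helper makes it easy to test and to cap; the cap and the optional jitter here are my additions, not part of the snippet above:

```javascript
// Exponential backoff delay: 1s, 2s, 4s, ... capped at maxMs.
// Pass a jitter fraction (0-1) to randomize waits and avoid
// many clients retrying in lockstep.
function backoffDelay(attempt, { baseMs = 1000, maxMs = 30000, jitter = 0 } = {}) {
  const delay = Math.min(Math.pow(2, attempt) * baseMs, maxMs);
  return delay + Math.random() * delay * jitter;
}
```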

Getting Started

  1. Get an API key at sociavault.com/auth/sign-up
  2. Start with 50 free credits
  3. Test with small accounts first
  4. Scale up once your code works

Frequently Asked Questions

Is scraping Instagram followers legal?

Yes, scraping publicly available follower lists is generally legal. U.S. courts (notably in hiQ v. LinkedIn) have held that scraping public data does not violate the Computer Fraud and Abuse Act. However, never scrape private accounts or use the data for spam. See our Instagram scraping legal guide.

Can I scrape followers of private accounts?

No. Private accounts require login to view followers, and scraping them violates Instagram's terms. Only public account followers are accessible.

How many followers can I scrape?

With an API, you can scrape millions. For very large accounts (1M+), sampling 50-100K followers is usually sufficient for analysis.

Will my account get banned for scraping?

If you use an API like SociaVault, no—the scraping happens on API infrastructure, not your account. DIY scraping with your own account risks bans.



Ready to Try SociaVault?

Start extracting social media data with our powerful API. No credit card required.