Research Methodology

How we conduct credible research

Credibility is earned through transparency. This document explains how SociaVault Labs conducts research, our standards for statistical validity, and how we ensure our findings can be trusted.

Core Research Principles

Transparency

We publish our methodology, sample sizes, and limitations with every report. You can see exactly how we reached our conclusions.

Reproducibility

Our research is designed so others can replicate our studies using the same methods and data sources.

Objectivity

We report what the data shows, even when findings don't favor our business interests. Credibility comes from honesty.

Ethics

We only analyze publicly available data, respect privacy, and never identify individuals without consent.

The Research Process

Every study follows a rigorous 6-step process designed to ensure validity, reproducibility, and transparency.

Step 1: Define the Research Question

Every study starts with a specific, measurable question: 'What is the average engagement rate by follower tier?' rather than 'What's happening on TikTok?'

  • Question must be specific and measurable
  • Hypothesis defined before data collection
  • Research question documented publicly

Step 2: Design the Study

We define our population, sampling method, variables, and time period before collecting any data.

  • Population and sample clearly defined
  • Sampling method prevents selection bias
  • Variables and metrics explicitly documented

Step 3: Collect Data

Data is collected from the SociaVault API with rigorous logging, error handling, and quality checks throughout; a minimal sketch follows the checklist below.

  • Automated collection with logging
  • Checkpoints and error handling
  • Rate limiting respected
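
To illustrate, here is a minimal Python sketch of such a collection loop. The endpoint URL, parameters, and response shape are hypothetical placeholders, not the documented SociaVault API; adapt them to the real client.

    # Hypothetical sketch: API_URL and its parameters are placeholders,
    # not the documented SociaVault API.
    import json
    import logging
    import time

    import requests

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("collector")

    API_URL = "https://api.example.com/v1/profiles"  # placeholder endpoint
    RATE_LIMIT_SECONDS = 1.0  # fixed delay between requests

    def collect(usernames, checkpoint_path="checkpoint.json"):
        results = []
        for i, username in enumerate(usernames, start=1):
            try:
                resp = requests.get(API_URL, params={"username": username}, timeout=30)
                resp.raise_for_status()
                results.append(resp.json())
                log.info("collected %s (%d/%d)", username, i, len(usernames))
            except requests.RequestException as exc:
                log.warning("failed %s: %s", username, exc)  # log, never hide
            if i % 100 == 0:  # periodic checkpoint so a crash loses little work
                with open(checkpoint_path, "w") as f:
                    json.dump(results, f)
            time.sleep(RATE_LIMIT_SECONDS)  # respect the platform's rate limit
        return results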

Step 4: Clean & Validate

Raw data is cleaned, validated, and documented. We never silently drop records or hide data quality issues; a cleaning sketch follows the checklist below.

  • Duplicates removed and logged
  • Missing values documented
  • Outliers investigated, not auto-removed
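
A minimal pandas sketch of this cleaning pass, assuming a column named engagement_rate. Every drop is counted and logged, and outliers are flagged for manual review rather than removed.

    # Cleaning sketch: column names are assumptions about the dataset.
    import logging

    import pandas as pd

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("cleaning")

    def clean(df: pd.DataFrame) -> pd.DataFrame:
        # Remove exact duplicates, logging how many were dropped.
        dupes = int(df.duplicated().sum())
        df = df.drop_duplicates()
        log.info("removed %d duplicate rows", dupes)

        # Document missing values instead of silently imputing or dropping.
        for col in df.columns:
            missing = int(df[col].isna().sum())
            if missing:
                log.info("column %s: %d missing values", col, missing)

        # Flag outliers with the 1.5x IQR rule; investigate, don't auto-remove.
        q1, q3 = df["engagement_rate"].quantile([0.25, 0.75])
        iqr = q3 - q1
        df["is_outlier"] = (df["engagement_rate"] < q1 - 1.5 * iqr) | (
            df["engagement_rate"] > q3 + 1.5 * iqr
        )
        log.info("flagged %d outliers for review", int(df["is_outlier"].sum()))
        return df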

Step 5: Analyze

Statistical analysis follows pre-defined plans. We use appropriate tests and always report confidence intervals; a sketch follows the checklist below.

  • Pre-registered analysis plan
  • 95% confidence level standard
  • Effect sizes reported, not just p-values
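
As an illustration of the reporting standard, a numpy-only sketch of a 95% normal-approximation confidence interval for a mean, plus Cohen's d as a standardized effect size. This is one common approach, not a description of our full analysis pipeline.

    # Sketch: 95% CI for a mean and a standardized effect size.
    import numpy as np

    def mean_ci(values, z=1.96):
        """Mean with a 95% normal-approximation confidence interval."""
        values = np.asarray(values, dtype=float)
        mean = values.mean()
        sem = values.std(ddof=1) / np.sqrt(len(values))
        return mean, mean - z * sem, mean + z * sem

    def cohens_d(a, b):
        """Effect size between two groups, using the pooled standard deviation."""
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        pooled = np.sqrt(
            ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
            / (len(a) + len(b) - 2)
        )
        return (a.mean() - b.mean()) / pooled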

Step 6: Report Findings

Reports include methodology, limitations, and conflict of interest disclosures. Conclusions are supported by data.

  • Clear methodology section
  • Limitations acknowledged
  • Conflict of interest disclosed

Data Standards

Minimum requirements for publishable research

Sample Sizes

Metric                      Minimum              Ideal
Platform benchmark          10,000+              50,000+
Cross-platform comparison   5,000 per platform   20,000+
Niche analysis              1,000 per niche      5,000+
Segment reporting           100 per segment      500+

Statistical Standards

Metric              Minimum            Ideal
Confidence level    95%                95-99%
Data freshness      90 days            30 days
Platform coverage   Same time period   Identical criteria

Standard Metric Definitions

How we calculate key metrics across all studies

Engagement Rate (Standard)

Engagement Rate = (Likes + Comments + Shares) / Followers × 100

Used for profile-level engagement analysis. Measures how actively an audience interacts relative to total followers.

Engagement Rate (Video)

Engagement Rate = (Likes + Comments + Shares) / Views × 100

Used for video content analysis. Measures engagement relative to actual reach rather than follower count.

View Rate

View Rate = Views / Followers × 100

Measures content reach relative to audience size. A view rate above 100% suggests algorithmic distribution beyond the follower base.

Follower Growth Rate

Growth Rate = (New Followers - Lost Followers) / Starting Followers × 100

Measures net audience growth over a specified period. Useful for identifying suspicious growth patterns.
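
For concreteness, here are the four definitions above expressed as plain Python functions; the argument names are assumptions about the raw counts available.

    # The four standard metric definitions, as documented above.
    def engagement_rate_standard(likes, comments, shares, followers):
        return (likes + comments + shares) / followers * 100

    def engagement_rate_video(likes, comments, shares, views):
        return (likes + comments + shares) / views * 100

    def view_rate(views, followers):
        return views / followers * 100

    def follower_growth_rate(new_followers, lost_followers, starting_followers):
        return (new_followers - lost_followers) / starting_followers * 100

    # Example: 5,000 likes + 300 comments + 200 shares on 100,000 followers
    # gives (5,500 / 100,000) * 100 = 5.5% standard engagement.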

Conflict of Interest Disclosure

SociaVault Labs is the research division of SociaVault, a social media data API company. We acknowledge that findings demonstrating the value of social media data could indirectly benefit our business.

We commit to publishing findings regardless of whether they favor our business interests. Our credibility depends on objectivity, and we will never suppress unfavorable results.

Language Standards

We use careful language to avoid overstating conclusions

We Don't Say                      We Say Instead
"proves that"                     "suggests that"
"always"                          "typically" or "in most cases"
"all accounts"                    "accounts in our sample"
"the engagement rate is"          "the average engagement rate was"
"this means"                      "this may indicate"
"significant" (without context)   "statistically significant (p<0.05)"

Corrections Policy

If we discover an error in a published report, we will:

  1. Acknowledge the error publicly on the report page
  2. Explain what was incorrect and why
  3. Publish corrected data and analysis
  4. Add a prominent correction notice to the original report
  5. Notify subscribers who downloaded the original

Questions About Our Methodology?

We welcome scrutiny of our research methods. If you have questions or feedback, please reach out.