Research Methodology

This page documents how ViciStack Research produces the numbers, benchmarks, and ranges that appear in Call Center KPI Dashboard articles. It exists because the single biggest failure mode in contact-center content is the "a study showed" citation chain that eventually resolves to a vendor blog post from 2013. Readers — especially operators about to commit six figures to a predictive-dialer contract — deserve to see the data sources, the analysis method, the known limitations, and the history of changes.

Data sources fall into four buckets:

1. Primary regulatory filings: FCC robocall and STIR/SHAKEN documents, FTC Telemarketing Sales Rule enforcement data, and state-level TCPA statutes. These are cited by URL to the authoritative source (fcc.gov, ftc.gov, or the state legislature's site), never through an intermediate summary.

2. Public benchmarks: U.S. Bureau of Labor Statistics wage data, PACE industry guidelines, SIP Forum trunking best practices, and public vendor filings where a vendor has disclosed a metric in an earnings call or S-1.

3. Internal engagement data: campaigns that Jason Shouldice and the ViciStack Research team have personally tuned, with identifying details (client name, exact list size, exact caller-ID prefix) stripped and with agent counts published as bands rather than point values.

4. Reproducible experiments: where feasible, we describe the VICIdial campaign configuration and the SQL query used to compute a metric so a reader can reproduce the number on their own install (a sketch follows this list).
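To make the fourth bucket concrete, here is a minimal sketch of what a reproducible metric looks like: per-campaign abandonment (DROP) percentage computed from vicidial_log. The table and column names follow a stock VICIdial schema; the HUMAN_ANSWERED status list, the driver choice (pymysql), and the connection details are assumptions to replace with your own.

```python
# Minimal sketch: per-campaign DROP (abandon) percentage on a stock
# VICIdial install. vicidial_log and its columns are the default schema;
# everything else here is a placeholder to adapt.
import pymysql  # assumption: any MySQL DB-API driver with pyformat works

# Statuses treated as "answered by a live person". DROP is included
# because a dropped call was answered but never reached an agent.
# This list is an assumption -- use your campaign's own status map.
HUMAN_ANSWERED = ("SALE", "NI", "DNC", "CALLBK", "DROP")

SQL = """
    SELECT campaign_id,
           SUM(status = 'DROP') AS drops,
           SUM(status IN %(answered)s) AS human_answered,
           ROUND(100 * SUM(status = 'DROP')
                     / NULLIF(SUM(status IN %(answered)s), 0), 2) AS drop_pct
    FROM vicidial_log
    WHERE call_date >= %(since)s
    GROUP BY campaign_id
"""

def drop_rates(conn, since):
    """Return (campaign_id, drops, human_answered, drop_pct) per campaign."""
    with conn.cursor() as cur:
        cur.execute(SQL, {"answered": HUMAN_ANSWERED, "since": since})
        return cur.fetchall()

if __name__ == "__main__":
    # "asterisk" is the default VICIdial database name; the credentials
    # are placeholders for a read-only account on your install.
    conn = pymysql.connect(host="localhost", user="readonly",
                           password="changeme", database="asterisk")
    for row in drop_rates(conn, since="2025-01-01"):
        print(row)
```

Publishing the query alongside the number is the point: anyone with read access to the database can check the arithmetic.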

Analysis method is deliberately boring. We prefer medians over means when a distribution is known to be long-tailed (cost per contact, list-decay curves, abandonment spikes). We publish a range — typically the 25th-to-75th-percentile band — rather than a single point whenever the sample allows it. Where a number is computed from a formula rather than measured (for example, break-even seat-count math for a licensing-fee comparison), the formula is shown inline so the reader can plug in their own inputs. We do not publish an unqualified lift claim without a control group, and any number labeled "lift" that comes from a before/after comparison without a concurrent control is flagged "before-after, no control" so the reader can price the claim accordingly.
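A minimal sketch of both habits, using Python's statistics module; the cost-per-contact sample and the fee inputs are illustrative, not published benchmarks:

```python
# Minimal sketch of both habits. The cost-per-contact sample and the
# fee inputs are illustrative, not published ViciStack benchmarks.
from statistics import mean, median, quantiles

# A long-tailed cost-per-contact sample, in dollars.
sample = [0.41, 0.44, 0.47, 0.52, 0.58, 0.63, 0.71, 0.94, 1.80]

q1, q2, q3 = quantiles(sample, n=4)  # quartile cut points
print(f"mean   ${mean(sample):.2f}  <- dragged upward by the tail")
print(f"median ${median(sample):.2f}, published band ${q1:.2f}-${q3:.2f}")

# Break-even seat count for a licensing-fee comparison: the smallest
# integer n where n * per_seat_monthly exceeds a flat platform fee.
def break_even_seats(flat_monthly: float, per_seat_monthly: float) -> int:
    return int(flat_monthly // per_seat_monthly) + 1

print(break_even_seats(flat_monthly=1500.00, per_seat_monthly=45.00))  # 34
```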

Known limitations are load-bearing, and we try to name them on every page. Answer-rate numbers from caller-ID reputation testing are highly sensitive to the carrier, the day of the week, and the state being dialed — a number that holds on a Tuesday in Texas will not hold on a Friday in California. TCPA-adjacent numbers (abandonment ceilings, consent-capture rates) are sensitive to the exact regulatory text and enforcement posture at the time of writing; the FCC has been actively revising its robocall framework, and numbers that held in 2023 may not hold today. Cost-per-contact numbers assume a fully loaded agent cost including payroll tax and benefits, not a base wage; when we cite a competitor that uses base wage only, we mark the comparison as "base-wage, not loaded" so the reader can normalize.
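The loaded-cost normalization itself is one line of arithmetic. A minimal sketch, assuming a 30% burden rate for payroll tax and benefits (an illustration, not a ViciStack benchmark):

```python
# Sketch of the "base-wage, not loaded" normalization. The 0.30 burden
# rate is an assumption -- substitute your own payroll-tax and benefits
# load before comparing vendors.
def loaded_hourly(base_wage: float, burden_rate: float = 0.30) -> float:
    """Fully loaded hourly agent cost: base wage plus payroll tax and benefits."""
    return base_wage * (1 + burden_rate)

def cost_per_contact(hourly_cost: float, contacts_per_hour: float) -> float:
    return hourly_cost / contacts_per_hour

base = 16.00  # a competitor's base-wage figure, illustrative
print(f"base-wage, not loaded:    ${cost_per_contact(base, 12):.2f}")
print(f"normalized, fully loaded: ${cost_per_contact(loaded_hourly(base), 12):.2f}")
```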

Version history is maintained on this page and on the individual articles it affects. Every meaningful methodology change — a new data source added, a benchmark band revised because the underlying sample grew, a regulatory citation updated because the FCC published a new order — creates a new version entry with the date, the change, and (when relevant) the affected article slugs. Older versions of a page are retrievable by emailing jason@ccdocs.com; we do not currently publish a full diff viewer, but the corrections log captures every meaningful content change.

Reviewer accountability: every article published on this site is reviewed by Jason Shouldice before publication. There is no "our editors" abstraction. If a number is wrong, it is wrong because Jason Shouldice missed it, and the correction carries his name on the diff. This is intentional — E-E-A-T signals only matter if the E's stand for actual people who can be held accountable, and diffusing authorship across a byline pool tends to erode exactly the signal we are trying to build.