Maverix Studio is engaged to design and build an AI-powered agent workforce for Brabys to maintain their business directory at scale — starting with automated verification of business contact data and expanding to outbound sales.
Brabys maintains one of Southern Africa's most established business directories across three product lines: Digital, Paper Products, and Promotional Gifts. The accuracy of their subscriber database is a direct driver of product value — stale contact data erodes advertiser confidence and downstream revenue.
Today, data verification is a manual, labour-intensive process. This project replaces that process with a continuously operating AI agent workforce capable of calling hundreds of businesses per day, verifying contact details in real-time conversation, and writing clean, audited records back to the production database — with zero direct AI access to live data.
Replace manual data verification with an always-on AI workforce. Reduce cost-per-verified-record by 80%+. Expand to outbound sales in Phase 2 — using the same infrastructure, with a separate agent persona and script to protect the verification brand.
Business directory data degrades at approximately 20–30% per year. For a directory the size of Brabys, that represents tens of thousands of stale records annually — each one a risk to advertiser trust and renewal revenue.
Traditional outbound verification relies on human agents: expensive, inconsistent, limited in hours, and unable to scale cost-effectively during peak periods. A human agent can complete 20–30 calls per day. An AI agent workforce can complete hundreds — simultaneously, at any time of day, with perfect script consistency and full call logging.
Data verification is a structured, repetitive conversation: confirm five fields, capture corrections, handle a small set of objections, end the call. It is precisely the type of task where AI voice agents outperform humans — consistent tone, no fatigue, no forgotten fields, no off-script improvisation.
Key insight: The Brabys verification script is designed to be non-threatening — "this is not a sales call" is stated in the opening line. This makes it ideal for AI delivery. Businesses are more patient with a calm, professional automated voice than with a rushed human agent.
The same infrastructure supports outbound sales — but with a critical architectural decision: the sales agent must be a separate persona from the verification agent. The trust that Claire builds ("this is not a sales call") must not be compromised. Phase 2 introduces a dedicated sales agent with a distinct voice, name, and script — triggered by warm signals captured during verification calls.
The Brabys AI workforce consists of three purpose-built agents, each with a defined role, database access level, and operational boundary. Complexity is deliberately minimised — three agents replace what could be five, by collapsing post-call functions into a single processor.
General orchestrates the entire verification operation. It reads the target queue, assigns records to Claire instances, monitors progress, retries failures, and generates daily reports.
DB Access: Read/write to records. Read-only to call log and field updates.
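One pass of the General supervisor loop can be sketched as a check of queue state against a concurrency cap. This is an illustrative sketch, not production code; `MAX_CONCURRENT` and `spawnClaire` are assumptions:

```javascript
// Illustrative sketch of one pass of the General supervisor loop.
// MAX_CONCURRENT and spawnClaire are assumptions, not production values.
const MAX_CONCURRENT = 5;

function superviseOnce(queueState, spawnClaire) {
  const { queued, calling } = queueState;
  // Spawn another Claire instance only while there is queued work
  // and the concurrency cap has not been reached.
  if (queued > 0 && calling < MAX_CONCURRENT) {
    spawnClaire();
    return true;
  }
  return false;
}
```

In practice this pass would run on a timer, with retry and reporting handled in the same loop.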
Claire makes outbound calls via 3CX + ElevenLabs. She conducts the verification conversation, captures field corrections, and flags DNC requests. Spawned dynamically as needed.
Voice: South African female (ElevenLabs Claire)
Scrub processes completed call transcripts. It validates all field updates (email, URL, address, phone), scores confidence, approves clean records, and flags low-confidence records for human review.
DB Access: Write to field updates, status updates to records. Read-only to call log.
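Scrub's field checks can be sketched as simple per-field validators. A minimal sketch with deliberately simplified patterns; the real rules would be stricter and the field names are assumptions:

```javascript
// Illustrative per-field validators for Scrub (patterns are simplified).
const validators = {
  email: (v) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(v),
  url: (v) => /^(https?:\/\/)?[a-z0-9.-]+\.[a-z]{2,}(\/\S*)?$/i.test(v),
  phone: (v) => /^\+?[0-9 ()-]{7,15}$/.test(v.trim()),
};

// Returns true when a proposed field update passes its validator.
function validateField(field, value) {
  const check = validators[field];
  return check ? check(value) : false;
}
```

Updates that fail validation would feed the low-confidence path to human review rather than being silently dropped.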
We recommend replacing the CSV-based working dataset with a dedicated PostgreSQL staging database. This removes file-based bottlenecks, enables true concurrency across multiple Claire instances, and provides a complete audit trail from initial import through to production merge.
A CSV file cannot support concurrent writes from multiple Claire instances, has no transaction safety, provides no query capability, and requires manual hand-off at every stage. At 100+ calls/day with 2–5 simultaneous agents, a database is not optional — it is the correct tool for the job.
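Concurrency-safe work claiming is exactly what the database buys us. A sketch of a claim query, assuming a PostgreSQL `records` table with a `status` column; column names are illustrative:

```javascript
// PostgreSQL claim query letting multiple Claire instances pull work
// without double-claiming a record. FOR UPDATE SKIP LOCKED makes the
// claim safe under concurrency; column names are illustrative.
const CLAIM_SQL = `
  UPDATE records
     SET status = 'CALLING', claimed_at = now()
   WHERE id = (
     SELECT id
       FROM records
      WHERE status = 'QUEUED'
      ORDER BY id
      FOR UPDATE SKIP LOCKED
      LIMIT 1
   )
   RETURNING id`;
```

With `SKIP LOCKED`, a record claimed by one Claire instance is invisible to the others, which is not achievable with a shared CSV file.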
The staging database consists of three primary tables, each serving a distinct function in the verification lifecycle:
| Table | Purpose | Owner Agent |
|---|---|---|
| records | Working copy of Informix subscriber records. Tracks status through the full lifecycle. | General (R/W), Claire (R), Scrub (W) |
| call_log | One row per call attempt. Captures outcome, transcript, duration, ElevenLabs session ID, 3CX call reference. | Claire (W), Scrub (R), General (R) |
| field_updates | Field-level delta layer. Old value vs new value, confidence score, QA status. Source for merge process. | Scrub (W), General (R) |
Every record flows through a defined set of statuses, enforced by the database and respected by each agent:
NEW → QUEUED → CALLING → QA_PENDING → APPROVED → MERGED
Exception paths: DNC (Do Not Call) · FAILED · HUMAN_REVIEW
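The lifecycle can be enforced with a small transition map. A sketch using the statuses above; the exception branch points (DNC/FAILED from CALLING, HUMAN_REVIEW from QA_PENDING) are assumptions inferred from the agent roles:

```javascript
// Allowed status transitions for a staging record. Exception branch
// points are assumptions, not a confirmed state machine.
const TRANSITIONS = {
  NEW: ["QUEUED"],
  QUEUED: ["CALLING"],
  CALLING: ["QA_PENDING", "FAILED", "DNC"],
  QA_PENDING: ["APPROVED", "HUMAN_REVIEW"],
  APPROVED: ["MERGED"],
};

// Reject any status change the lifecycle does not allow.
function canTransition(from, to) {
  return (TRANSITIONS[from] ?? []).includes(to);
}
```

Enforcing this check in the database layer (or a trigger) is what makes the lifecycle "respected by each agent" rather than merely documented.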
| Agent | records | call_log | field_updates | Informix |
|---|---|---|---|---|
| General | Read / Write | Read | Read | No access |
| Claire | Read only | Write | No access | No access |
| Scrub | Status update only | Read | Write | No access |
| Merge Process | Read APPROVED | Read | Read APPROVED | Write (controlled) |
Security principle: No AI agent has direct access to the IBM Informix production database at any time. Informix is only touched by the controlled Merge Process, triggered by a human-approved action or scheduled job — never autonomously by an agent.
The staging database is populated from Informix via a controlled extract process. Three modes are supported:
| Mode | Description | When to Use |
|---|---|---|
| Manual kickoff | Operator uploads a CSV export from Informix and runs the import script. Records inserted as NEW. | Initial pilot, ad-hoc campaigns, sector-specific batches |
| Scheduled cron | Nightly job extracts records from Informix matching defined criteria (geography, sector, last-verified date) and populates the queue automatically. | Ongoing operations at full scale |
| Rules-based filter | Import is filtered by configurable definitions: last verified > 180 days, missing email, region, product line, etc. | Priority campaigns — e.g. email append, renewal cohorts |
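The rules-based mode reduces to a predicate over extracted records. A sketch using the 180-day rule from the table; the record field names are assumptions about the extract shape:

```javascript
// Rules-based import filter: stale (> 180 days) or missing email.
// Field names are assumptions about the Informix extract shape.
const STALE_DAYS = 180;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function needsVerification(record, now = new Date()) {
  const lastVerified = record.lastVerified ? new Date(record.lastVerified) : null;
  const stale = !lastVerified || (now - lastVerified) / MS_PER_DAY > STALE_DAYS;
  return stale || !record.email;
}
```

Geography, sector, and product-line filters would be additional predicates composed in the same way.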
Once Scrub has approved a set of records, the Merge Process writes verified field updates back to Informix. This is the only point of contact between the AI system and the production database.
Every merge is logged at field level in the field_updates table — any merge can be reversed within 30 days. If a record has been updated in Informix since the staging copy was extracted, the Merge Process flags a conflict and routes it to human review. It never overwrites a newer Informix record with older staging data.
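The conflict rule is a timestamp comparison. A sketch where `extractedAt` and `lastModified` are assumed field names:

```javascript
// Merge Process conflict guard: never overwrite a newer Informix
// record with older staging data. Field names are assumptions.
function mergeDecision(staging, informix) {
  if (new Date(informix.lastModified) > new Date(staging.extractedAt)) {
    return "HUMAN_REVIEW"; // Informix changed since extract: conflict
  }
  return "MERGE";
}
```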
As provided by Brabys. Reproduced for reference. The agent names, architecture, and database recommendations in this document supersede the CSV-based approach described below.
| Step | Component | Description |
|---|---|---|
| 1 | IBM Informix DB | Production subscriber database — source of truth |
| 2 | CSV Extract | Working dataset: primary key, business name, address, phone, cell, email, website |
| 3 | AI Calling Agent | Dials business, verifies details, captures corrections. → Now: Claire |
| 4 | Updated CSV | Delta/revision layer with corrections. → Now: Staging DB field_updates table |
| 5 | Vetting Process | Human review or AI QA — address, spelling, email, website. → Now: Scrub |
| 6 | Update Engine | Compares primary key, applies field updates, logs changes. → Now: Merge Process |
| 7 | IBM Informix DB | Production update — clean, verified data |
All three principles are preserved and strengthened in the Maverix architecture. The CSV working dataset is replaced by a proper PostgreSQL staging database — a direct upgrade that preserves the air-gap principle while adding concurrency, auditability, and query capability.
The following is the complete system prompt deployed to the ElevenLabs Conversational AI agent (Agent ID: agent_6101kjdakj41fhq8ptkxffcxcwrz).
| Parameter | Value |
|---|---|
| ElevenLabs Agent ID | agent_6101kjdakj41fhq8ptkxffcxcwrz |
| Voice | Claire — South African Female (ElevenLabs) |
| Voice ID | gsm4lUH9bnZ3pjR1Pw7w |
| Model | eleven_turbo_v2 |
| LLM | Gemini 2.5 Flash |
| Max call duration | 10 minutes |
| Language | English (en) |
| Component | Technology | Notes |
|---|---|---|
| Voice Agent | ElevenLabs Conversational AI | Claire voice, eleven_turbo_v2 |
| VOIP / Dialling | 3CX | Outbound SIP, existing Brabys instance TBC |
| Staging Database | PostgreSQL 15+ | Hosted on Maverix infrastructure |
| Production Database | IBM Informix | Read-only extract; write via Merge only |
| Orchestration | Node.js | General supervisor loop |
| Call Processing | Node.js | Scrub post-call processor |
| Dashboard | HTML/JS (Express) | localhost:3500, deployable to Cloudflare Pages |
| Target Import | CSV → PostgreSQL | Manual or scheduled cron |
| Merge Engine | Node.js | Field-level diff writer to Informix |
The system is designed to scale horizontally by spawning additional Claire instances. General monitors queue depth and spawns agents dynamically:
| Daily Call Volume | Claire Instances | Infrastructure |
|---|---|---|
| 10–50 calls/day | 1–2 instances | Single server, current setup |
| 50–200 calls/day | 2–5 instances | Single server with concurrency |
| 200–500 calls/day | 5–10 instances | Load-balanced, dedicated DB |
| 500+ calls/day | 10+ instances | Cloud-hosted, auto-scaling |
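The tiers above read as a step function from daily volume to instance count. A sketch of that mapping; the growth rate beyond 500 calls/day is an illustrative assumption:

```javascript
// Map daily call volume to Claire instance count per the scaling tiers.
// The 500+ tier growth rate is an illustrative assumption.
function claireInstances(dailyCalls) {
  if (dailyCalls <= 50) return 2;
  if (dailyCalls <= 200) return 5;
  if (dailyCalls <= 500) return 10;
  return 10 + Math.ceil((dailyCalls - 500) / 50);
}
```

General would evaluate this against actual queue depth rather than a fixed daily target, scaling down as well as up.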