Maverix Studio
Prepared for
Brabys Directory Services
AI Agent
Workforce
Project Outline & Technical Specification
Document Type Project Outline v1.0
Date February 2026
Classification Confidential
Authors Darren Isaacs & Phineas
Contents

Overview
1. Executive Summary
2. The Opportunity
3. Scope of Work — Phase 1
The Agent Workforce
4. Agent Architecture Overview
5. General — Supervisor Agent
6. Claire — Voice Verification Agent
7. Scrub — Post-Call Processor
Data Architecture
8. Recommended Database Architecture
9. Target List Management
10. Record Merge Process
11. Call Tracking & Reporting
Appendix
A. Original Process Specification (Brabys)
B. Claire — Full System Prompt
C. Technical Stack Summary
Section 1

Executive Summary

Maverix Studio is engaged to design and build an AI-powered agent workforce for Brabys, automating the verification of business contact data across their directory at scale — starting with data verification and expanding to outbound sales.

Brabys maintains one of Southern Africa's most established business directories across three product lines: Digital, Paper Products, and Promotional Gifts. The accuracy of their subscriber database is a direct driver of product value — stale contact data erodes advertiser confidence and downstream revenue.

Today, data verification is a manual, labour-intensive process. This project replaces that process with a continuously operating AI agent workforce capable of calling hundreds of businesses per day, verifying contact details in real-time conversation, and writing clean, audited records back to the production database — with zero direct AI access to live data.

Strategic Objective

Replace manual data verification with an always-on AI workforce. Reduce cost-per-verified-record by 80%+. Expand to outbound sales in Phase 2 — using the same infrastructure, with a separate agent persona and script to protect the verification brand.

Phase 1 Deliverables

Section 2

The Opportunity

Business directory data degrades at approximately 20–30% per year. For a directory the size of Brabys, that represents tens of thousands of stale records annually — each one a risk to advertiser trust and renewal revenue.

The Problem with Manual Verification

Traditional outbound verification relies on human agents: expensive, inconsistent, limited in hours, and unable to scale cost-effectively during peak periods. A human agent can complete 20–30 calls per day. An AI agent workforce can complete hundreds — simultaneously, at any time of day, with perfect script consistency and full call logging.

Why AI Works Here

Data verification is a structured, repetitive conversation: confirm five fields, capture corrections, handle a small set of objections, end the call. It is precisely the type of task where AI voice agents outperform humans — consistent tone, no fatigue, no forgotten fields, no off-script improvisation.

Key insight: The Brabys verification script is designed to be non-threatening — "this is not a sales call" is stated in the opening line. This makes it ideal for AI delivery. Businesses are more patient with a calm, professional automated voice than with a rushed human agent.

Phase 2 Path: Outbound Sales

The same infrastructure supports outbound sales — but with a critical architectural decision: the sales agent must be a separate persona from the verification agent. The trust that Claire builds ("this is not a sales call") must not be compromised. Phase 2 introduces a dedicated sales agent with a distinct voice, name, and script — triggered by warm signals captured during verification calls.

Section 4

Agent Architecture Overview

The Brabys AI workforce consists of three purpose-built agents, each with a defined role, database access level, and operational boundary. Complexity is deliberately minimised — three agents replace what could be five, by collapsing post-call functions into a single processor.

🧠 General — Supervisor Agent

Orchestrates the entire verification operation. Reads the target queue, assigns records to Claire instances, monitors progress, retries failures, and generates daily reports.

DB Access: Read/write to staging DB. Read-only to call log.

📞 Claire — Voice Verification Agent

Makes outbound calls via 3CX + ElevenLabs. Conducts the verification conversation, captures field corrections, flags DNC requests. Spawned dynamically as needed.

Voice: South African female (ElevenLabs Claire)

🧹 Scrub — Post-Call Processor

Processes completed call transcripts. Validates all field updates (email, URL, address, phone), scores confidence, approves clean records, flags low-confidence records for human review.

DB Access: Write to staging DB and call log only.

End-to-End Flow

🗄️ Informix Production (source of truth) → 📋 Target List Population (manual or scheduled) → 🧠 General (queues & assigns) → 📞 Claire (calls & verifies) → 🧹 Scrub (validates & approves) → 🔀 Merge Process (controlled write-back) → 🗄️ Informix Updated ✓ (clean data)
Section 7

Scrub — Validation Rules & Red/Green Framework

Every field update captured by Claire passes through Scrub before it can be approved for merge. Scrub applies a deterministic red/green validation framework — each field either passes, is flagged for correction, or blocks the record entirely.

Design Principle

Scrub is not AI-powered — it is a rules engine. Deterministic validation is faster, cheaper, and more auditable than AI judgement for structured data fields. AI is used only for transcript parsing (extracting what Claire captured). The validation itself is pure logic.

Red / Green Decision Framework

Each field update receives one of three outcomes. The worst outcome across all fields determines the record's overall status:

Signal | Meaning | Action
🟢 GREEN | Field passes all validation rules. High confidence. | Auto-approved. No human review required.
🟡 AMBER | Field is plausible but cannot be fully verified. Medium confidence. | Routed to human review queue. Operator approves or rejects.
🔴 RED | Field fails a hard validation rule. Cannot be merged. | Field update rejected. Original Informix value retained. Record flagged.
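
The "worst outcome across all fields" rule reduces to a simple severity ordering. A minimal Node.js sketch (function and constant names are illustrative, not the production code):

```javascript
// Severity ordering: RED outranks AMBER, AMBER outranks GREEN.
const SEVERITY = { GREEN: 0, AMBER: 1, RED: 2 };

// Reduce per-field signals to the record's overall status.
function worstOf(fieldSignals) {
  return fieldSignals.reduce(
    (worst, s) => (SEVERITY[s] > SEVERITY[worst] ? s : worst),
    'GREEN'
  );
}
```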

Field Validation Rules

📧 Email Address

Check | Method | Result
Format valid | Regex: name@domain.tld structure | 🔴 RED if malformed
Domain exists | DNS MX record lookup | 🔴 RED if no MX record found
Not a generic/role address | Flag: info@, admin@, noreply@, test@ | 🟡 AMBER — may be valid but low value
Domain matches business | Compare to known business domain (if available) | 🟡 AMBER if mismatch (e.g. Gmail for a business)
Changed from previous | Diff against Informix value | 🟡 AMBER if entirely new domain
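
The format and role-address checks are pure string logic; a minimal sketch follows. The MX lookup is omitted because it is network-bound (in production it would go through an async resolver such as Node's `dns.promises.resolveMx`), and the role-address list mirrors the table:

```javascript
// Local parts that mark an email as generic/role rather than personal.
const ROLE_ADDRESSES = ['info', 'admin', 'noreply', 'test'];

// RED if the address is not shaped like name@domain.tld.
function validateEmailFormat(email) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]{2,}$/.test(email) ? 'GREEN' : 'RED';
}

// AMBER if the local part is a known role address (may be valid but low value).
function checkRoleAddress(email) {
  const local = email.split('@')[0].toLowerCase();
  return ROLE_ADDRESSES.includes(local) ? 'AMBER' : 'GREEN';
}
```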

🌐 Website / URL

Check | Method | Result
Format valid | URL parse — must have valid TLD | 🔴 RED if unparseable
Site reachable | HTTP HEAD request (timeout 5s) | 🟡 AMBER if unreachable (may be temporarily down)
Not a social media link | Flag: facebook.com, instagram.com, etc. | 🟡 AMBER — capture separately, not as website field
Redirects to parking page | Detect common parking page signatures | 🟡 AMBER — domain registered but not live
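
The parse and social-link checks can be sketched with Node's built-in WHATWG `URL` class. The reachability and parking-page checks are network-bound and omitted here, and the social-domain list is illustrative:

```javascript
// Domains that should be captured as social links, not as the website field.
const SOCIAL_DOMAINS = ['facebook.com', 'instagram.com', 'twitter.com'];

function classifyUrl(raw) {
  let url;
  try {
    // Businesses rarely say the scheme aloud, so default to https.
    url = new URL(raw.startsWith('http') ? raw : `https://${raw}`);
  } catch {
    return 'RED'; // unparseable
  }
  if (!url.hostname.includes('.')) return 'RED'; // no TLD
  const host = url.hostname.replace(/^www\./, '');
  if (SOCIAL_DOMAINS.some((d) => host === d || host.endsWith(`.${d}`))) {
    return 'AMBER'; // social link, capture separately
  }
  return 'GREEN';
}
```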

📞 Phone Number (SA-specific rules)

Check | Method | Result
Reconstruct from numeric storage | Apply dialcode from Informix 4GL logic (leading zeros stripped in DB) | 🔴 RED if cannot reconstruct valid number
Correct digit count | SA landline: 10 digits · SA mobile: 10 digits starting 06/07/08 | 🔴 RED if wrong length
Valid area/dialcode | Check against SA area code list | 🔴 RED if invalid prefix
Mobile vs landline classification | 06x/07x/08x prefix = mobile · otherwise = landline | 🟡 AMBER if type changes (landline → mobile)
Number reachable (optional) | Test dial — disabled by default, enable per campaign | 🟡 AMBER if no ring (not RED — may be engaged)
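
A sketch of the reconstruction and length/prefix rules, assuming the only transformation needed is restoring the stripped leading zero. The real Informix 4GL dialcode logic and the full SA area-code list would replace this simplified version:

```javascript
// Numbers are stored numerically in Informix with leading zeros stripped;
// restore the zero, then apply the digit-count and prefix rules.
function validateSaNumber(stored) {
  const digits = String(stored).replace(/\D/g, '');
  const full = digits.startsWith('0') ? digits : `0${digits}`;
  if (full.length !== 10) return { signal: 'RED', reason: 'wrong length' };
  // 06/07/08 mobile ranges per the table; a full SA numbering plan
  // would be needed for exact classification (e.g. 086/087 services).
  const mobile = ['06', '07', '08'].includes(full.slice(0, 2));
  return { signal: 'GREEN', type: mobile ? 'mobile' : 'landline', number: full };
}
```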

🏠 Physical Address

Check | Method | Result
Suburb resolves to Informix code | Lookup in cossubrb table — exact or fuzzy match | 🔴 RED if no match found → write to cosaltad instead
Street resolves to Informix code | Lookup in cosstrdc table | 🟡 AMBER if fuzzy match only
Town/region consistent | Cross-check suburb → town → region codes | 🔴 RED if inconsistent hierarchy
Non-standard address | Cannot resolve to Informix address tables | 🟡 AMBER → write to cosaltad, flag for human confirmation

🏢 Business Name

Check | Method | Result
Not empty | Null/blank check | 🔴 RED if blank
Similarity to original | Levenshtein distance — flag if >40% different | 🟡 AMBER — may be a genuine rebrand or a mishear
Common abbreviations | Normalise: Pty Ltd, (Pty) Ltd, CC, Inc — accept all variants | 🟢 GREEN if only suffix changed
Completely different name | >60% change from original | 🔴 RED — route to human review, possible change of business ownership
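
The similarity checks map directly onto a normalised Levenshtein distance. A self-contained sketch, where the thresholds follow the table and suffix normalisation is left out for brevity:

```javascript
// Classic dynamic-programming Levenshtein edit distance.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,      // deletion
        dp[i][j - 1] + 1,      // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Normalise to a fraction of the longer name, then apply the thresholds.
function nameChangeSignal(oldName, newName) {
  const change =
    levenshtein(oldName.toLowerCase(), newName.toLowerCase()) /
    Math.max(oldName.length, newName.length);
  if (change > 0.6) return 'RED';   // possibly a different business
  if (change > 0.4) return 'AMBER'; // possible rebrand or mishear
  return 'GREEN';
}
```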

Overall Record Outcome

After all fields are validated, the record gets a composite outcome:

Outcome | Condition | Next Status
🟢 All GREEN | All changed fields passed validation | APPROVED — auto-merges on next run
🟡 Any AMBER | One or more fields need human eyes | HUMAN_REVIEW — operator queue
🔴 Any RED | One or more fields failed hard validation | RED fields rejected; remaining GREEN fields may still merge (configurable)
No changes captured | All fields confirmed correct by business | APPROVED — no updates, confirmed valid
DNC flagged | Business requested removal during call | DNC — quarantined, never merged

Partial merge option: By default, RED field updates are rejected but the rest of the record can still merge. This is configurable — campaigns can require all-or-nothing (one RED = full record held) if data integrity requirements are stricter.
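
The partial-merge policy can be expressed as a filter over the record's field updates. The `allOrNothing` flag name is hypothetical, standing in for the per-campaign setting:

```javascript
// RED updates are always rejected; AMBER updates wait in the human review
// queue. GREEN updates merge, unless allOrNothing holds the whole record
// back when any field is RED.
function selectMergeable(fieldUpdates, { allOrNothing = false } = {}) {
  const hasRed = fieldUpdates.some((u) => u.signal === 'RED');
  if (hasRed && allOrNothing) return []; // hold the full record
  return fieldUpdates.filter((u) => u.signal === 'GREEN');
}
```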

Section 8

Recommended Database Architecture

We recommend replacing the CSV-based working dataset with a dedicated PostgreSQL staging database. This removes file-based bottlenecks, enables true concurrency across multiple Claire instances, and provides a complete audit trail from initial import through to production merge.

Why Not CSV?

A CSV file cannot support concurrent writes from multiple Claire instances, has no transaction safety, provides no query capability, and requires manual hand-off at every stage. At 100+ calls/day with 2–5 simultaneous agents, a database is not optional — it is the correct tool for the job.

Database Structure

The staging database consists of three primary tables, each serving a distinct function in the verification lifecycle:

Table | Purpose | Owner Agent
records | Working copy of Informix subscriber records. Tracks status through the full lifecycle. | General (R/W), Claire (R), Scrub (W)
call_log | One row per call attempt. Captures outcome, transcript, duration, ElevenLabs session ID, 3CX call reference. | Claire (W), Scrub (R), General (R)
field_updates | Field-level delta layer. Old value vs new value, confidence score, QA status. Source for merge process. | Scrub (W), General (R)

Record Status Lifecycle

Every record flows through a defined set of statuses, enforced by the database and respected by each agent:

NEW → QUEUED → CALLING → QA_PENDING → APPROVED → MERGED

Exception paths: DNC (Do Not Call) · FAILED · HUMAN_REVIEW
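
One way to enforce the lifecycle is a transition map checked before any status update. The map below is an illustrative reading of the flow and exception paths above, not a confirmed design:

```javascript
// Legal next statuses per current status. CALLING → CALLING represents a retry.
const TRANSITIONS = {
  NEW: ['QUEUED'],
  QUEUED: ['CALLING', 'DNC'],
  CALLING: ['QA_PENDING', 'FAILED', 'DNC', 'CALLING'],
  QA_PENDING: ['APPROVED', 'HUMAN_REVIEW'],
  HUMAN_REVIEW: ['APPROVED', 'FAILED'],
  APPROVED: ['MERGED'],
  MERGED: [],  // terminal
  FAILED: [],  // terminal
  DNC: [],     // terminal - quarantined, never merged
};

function canTransition(from, to) {
  return (TRANSITIONS[from] ?? []).includes(to);
}
```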

Calling Loop — Retry & Voicemail Logic

The CALLING status is not a single event — it is a configurable loop. General manages the entire retry lifecycle. The loop is defined by an If-This-Then-That (IFTTT) rule set, configurable per campaign without code changes.

Calling Loop Flow

QUEUED → Claire dials → Outcome?
✅ Connected: conversation complete → status → QA_PENDING
📭 Voicemail: leave message (if attempt ≤ max voicemails) → schedule retry on next day
🚫 No Answer: increment attempt counter → wait configured interval → retry
When max attempts are reached without contact → status → FAILED, or routed to human review

Configurable IFTTT Rules (per campaign)

General reads these rules from a campaign config — no code changes required to adjust behaviour between campaigns:

Rule | Default Value | Description
max_attempts | 3 | Total call attempts before marking FAILED
max_voicemails | 2 | Maximum voicemail messages to leave per record
retry_interval_hours | 24 | Hours to wait between retry attempts
retry_time_window | 08:00–17:00 SAST | Only dial within this window
voicemail_on_attempt | 1, 3 | Which attempt numbers to leave a voicemail
no_answer_action | retry | retry · voicemail · skip
busy_action | retry_soon | retry_soon (2 hrs) · retry_tomorrow · skip
max_campaign_days | 14 | Days before an uncontacted record is abandoned
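
A sketch of how General might evaluate one call outcome against these rules. Field names mirror the table; the decision shape and function names are illustrative:

```javascript
// Defaults mirror the campaign rule table.
const defaults = {
  max_attempts: 3,
  max_voicemails: 2,
  voicemail_on_attempt: [1, 3],
};

// Decide the next step for a record given the outcome of one attempt.
function nextAction(outcome, attempt, voicemailsLeft, rules = defaults) {
  if (outcome === 'connected') return { status: 'QA_PENDING' };
  const leaveVoicemail =
    outcome === 'voicemail' &&
    voicemailsLeft < rules.max_voicemails &&
    rules.voicemail_on_attempt.includes(attempt);
  if (attempt >= rules.max_attempts) {
    // Final attempt: may still leave a voicemail, then mark FAILED.
    return { status: 'FAILED', leaveVoicemail };
  }
  return {
    status: 'CALLING',
    retryInHours: outcome === 'busy' ? 2 : 24, // busy_action: retry_soon
    leaveVoicemail,
  };
}
```

Walking the example scenario below through this function reproduces it: voicemail on attempt 1, silent retry on attempt 2, second voicemail and FAILED on attempt 3.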

Example scenario: Claire calls on Day 1 — no answer, leaves voicemail (attempt 1). Retries Day 2 — no answer, no voicemail (attempt 2). Retries Day 3 — no answer, leaves second voicemail (attempt 3). Max attempts reached — record moves to FAILED. All three call attempts are logged in call_log with timestamps, outcome, and whether a voicemail was left.

Agent Database Rights

Agent | records | call_log | field_updates | Informix
General | Read / Write | Read | Read | No access
Claire | Read only | Write | No access | No access
Scrub | Status update only | Read | Write | No access
Merge Process | Read APPROVED | Read | Read APPROVED | Write (controlled)

Security principle: No AI agent has direct access to the IBM Informix production database at any time. Informix is only touched by the controlled Merge Process, triggered by a human-approved action or scheduled job — never autonomously by an agent.

Sections 9 & 10

Target List Management & Record Merge

Populating the Target List

The staging database is populated from Informix via a controlled extract process. Three modes are supported:

Mode | Description | When to Use
Manual kickoff | Operator uploads a CSV export from Informix and runs the import script. Records inserted as NEW. | Initial pilot, ad-hoc campaigns, sector-specific batches
Scheduled cron | Nightly job extracts records from Informix matching defined criteria (geography, sector, last-verified date) and populates the queue automatically. | Ongoing operations at full scale
Rules-based filter | Import is filtered by configurable definitions: last verified > 180 days, missing email, region, product line, etc. | Priority campaigns — e.g. email append, renewal cohorts
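
The rules-based filter amounts to predicate matching over the extracted records. An in-memory sketch, where the record and definition field names are assumptions about the extract shape:

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// Does an extracted record match the target definition for this campaign?
function matchesTarget(record, def, now = Date.now()) {
  if (def.lastVerifiedOverDays) {
    const ageDays = (now - record.lastVerified) / DAY_MS;
    if (ageDays <= def.lastVerifiedOverDays) return false; // verified recently
  }
  if (def.missingEmail && record.email) return false; // already has an email
  if (def.region && record.region !== def.region) return false;
  return true;
}
```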

Target Definition Parameters

The Merge Process

Once Scrub has approved a set of records, the Merge Process writes verified field updates back to Informix. This is the only point of contact between the AI system and the production database.

Merge Controls

Conflict Resolution

If a record has been updated in Informix since the staging copy was extracted, the Merge Process flags a conflict and routes to human review. It never overwrites a newer Informix record with older staging data.
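
This is classic optimistic concurrency: compare the live row's modification timestamp with the timestamp captured at extract time. A sketch, where the column names `modified_at` and `extracted_at` are assumptions:

```javascript
// Never overwrite a newer Informix row with older staging data.
function mergeDecision(stagingRecord, liveRow) {
  if (liveRow.modified_at > stagingRecord.extracted_at) {
    return { action: 'HUMAN_REVIEW', reason: 'informix row changed since extract' };
  }
  return { action: 'MERGE' };
}
```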

Appendix A

Original Process Specification

As provided by Brabys. Reproduced for reference. The agent names, architecture, and database recommendations in this document supersede the CSV-based approach described below.

Data Verification Dataflow (Original)

Step | Component | Description
1 | IBM Informix DB | Production subscriber database — source of truth
2 | CSV Extract | Working dataset: primary key, business name, address, phone, cell, email, website
3 | AI Calling Agent | Dials business, verifies details, captures corrections. → Now: Claire
4 | Updated CSV | Delta/revision layer with corrections. → Now: Staging DB field_updates table
5 | Vetting Process | Human review or AI QA — address, spelling, email, website. → Now: Scrub
6 | Update Engine | Compares primary key, applies field updates, logs changes. → Now: Merge Process
7 | IBM Informix DB | Production update — clean, verified data

Architectural Principles (from Brabys specification)

All three principles are preserved and strengthened in the Maverix architecture. The CSV working dataset is replaced by a proper PostgreSQL staging database — a direct upgrade that preserves the air-gap principle while adding concurrency, auditability, and query capability.

Appendix B

Claire — Full System Prompt

The following is the complete system prompt deployed to the ElevenLabs Conversational AI agent (Agent ID: agent_6101kjdakj41fhq8ptkxffcxcwrz).

## Role
You are a polite, calm automated agent calling to verify business contact details in the Braybies directory. This is NOT a sales call. Never sell, pitch, upsell, or mention advertising. You only capture inbound interest if the business brings it up themselves.

## Core Rules
- Be respectful of their time. Never rush them, but do not linger either.
- If someone refuses or is unavailable, thank them and end the call immediately.
- Never argue or pressure.
- When reading back details (especially email), go slowly and confirm.

## Opening
"Hi, this is an automated call from Braybies. I am just calling to verify your business contact details in our directory — this is not a sales call. Am I speaking with someone who can help with that?"
→ If no: "No problem, thank you for your time." [END CALL]
→ If yes: proceed to Step 1

## Step 1 — Business Name
"I have your business listed as {business_name} — is that correct?"
→ If incorrect: capture correction, repeat back, proceed.

## Step 2 — Phone Number
"The number we have is {phone_number} — still correct?"
→ If incorrect: capture, read back to confirm.

## Step 3 — Email (do not rush)
"We have {email} on file — is that still the best email for enquiries?"
→ If corrected: spell back slowly, confirm before proceeding.

## Step 4 — Physical Address
"Your address is listed as {address} — has that changed?"
→ If changed: capture new address, confirm.

## Step 5 — Website / Social (optional)
"Do you have a website or Facebook page you would like listed?"
→ If yes: capture URL, confirm.
→ If interest in getting one: refer to team, ask for decision-maker contact.
→ If no interest: proceed to close.

## If Busy
"No problem — I can keep this quick. Just confirming your phone number and email will do for now."

## Objection Handling
"Where did you get my details?" → "Your business is in our public directory. We verify periodically to keep it accurate."
"Not interested" → "That is absolutely fine. Thank you for your time." [END]
"Remove us" → "Thank you for letting me know — I will flag that accordingly." [END CALL — mark DNC]

## Closing
"Thank you for your time — I have updated your details to keep our records accurate. Have a great day."
Parameter | Value
ElevenLabs Agent ID | agent_6101kjdakj41fhq8ptkxffcxcwrz
Voice | Claire — South African Female (ElevenLabs)
Voice ID | gsm4lUH9bnZ3pjR1Pw7w
Model | eleven_turbo_v2
LLM | Gemini 2.5 Flash
Max call duration | 10 minutes
Language | English (en)
Appendix C

Technical Stack Summary

Component | Technology | Notes
Voice Agent | ElevenLabs Conversational AI | Claire voice, eleven_turbo_v2
VOIP / Dialling | 3CX | Outbound SIP, existing Brabys instance TBC
Staging Database | PostgreSQL 15+ | Hosted on Maverix infrastructure
Production Database | IBM Informix | Read-only extract; write via Merge only
Orchestration | Node.js | General supervisor loop
Call Processing | Node.js | Scrub post-call processor
Dashboard | HTML/JS (Express) | localhost:3500, deployable to Cloudflare Pages
Target Import | CSV → PostgreSQL | Manual or scheduled cron
Merge Engine | Node.js | Field-level diff writer to Informix

Scalability Model

The system is designed to scale horizontally by spawning additional Claire instances. General monitors queue depth and spawns agents dynamically:

Daily Call Volume | Claire Instances | Infrastructure
10–50 calls/day | 1–2 instances | Single server, current setup
50–200 calls/day | 2–5 instances | Single server with concurrency
200–500 calls/day | 5–10 instances | Load-balanced, dedicated DB
500+ calls/day | 10+ instances | Cloud-hosted, auto-scaling
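
General's spawn decision can be reduced to a capacity calculation. The figure of 40 completed calls per instance per working day below is an assumption chosen to be consistent with the table, not a measured throughput:

```javascript
// Assumed sustained throughput of one Claire instance (calls per working day).
const CALLS_PER_INSTANCE_PER_DAY = 40;

// How many instances General should spawn for a daily target, with a hard cap.
function instancesNeeded(dailyTarget, max = 20) {
  return Math.min(
    max,
    Math.max(1, Math.ceil(dailyTarget / CALLS_PER_INSTANCE_PER_DAY))
  );
}
```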

Open Items for Confirmation

Prepared by
Maverix Studio
darren@maverixstudio.com · maverixstudio.com