1. Overview
ChatPilot uses artificial intelligence to power automated conversations between businesses and their customers on WhatsApp. This policy explains which AI systems we use, how they work, what data they process, what they are permitted to do, what limits apply, and what responsibilities businesses hold when they use ChatPilot to run AI conversations with their customers.
We believe AI systems used in commerce — particularly those that handle customer inquiries, process payments, and communicate with people on behalf of a business — should be transparent, accountable, and bounded by clear rules. This policy is part of that commitment.
2. AI Systems Used in ChatPilot
2.1 Conversation Classification — Claude (Anthropic)
What it does: When a customer sends a WhatsApp message, ChatPilot uses Claude (developed by Anthropic) to classify the intent of the message. This includes determining whether the message is a product inquiry, a complaint, a payment query, a booking request, a personal message (not business-related), or a message requiring human intervention.
This classification determines what happens next — whether the AI responds, retrieves information from tenant content, triggers a payment flow, or escalates to a human.
What data it receives:
- The text content of the incoming WhatsApp message
- Limited context from the recent conversation history (to understand follow-up messages correctly)
- No customer personal identifiers are sent to Anthropic beyond what appears in the message text itself
How long it is retained by Anthropic: ChatPilot uses Anthropic's API under terms that prohibit Anthropic from using API-submitted data to train their models. Message data transmitted to Anthropic is processed in real time and is subject to Anthropic's API data retention terms (typically 30 days for safety monitoring, then deleted). ChatPilot does not share customer names, phone numbers, payment data, or any data beyond message text with Anthropic.
Anthropic's policy: anthropic.com/privacy
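For illustration, a classification step of this kind might look like the following sketch using Anthropic's Python SDK. The intent labels, prompt wording, and model name are assumptions for this example, not ChatPilot's production configuration:

```python
# Illustrative sketch only -- intent labels, prompt, and model name are
# assumptions, not ChatPilot's production configuration.
import anthropic

INTENTS = [
    "product_inquiry", "complaint", "payment_query",
    "booking_request", "personal_message", "needs_human",
]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def classify_intent(message_text: str, recent_context: list[str]) -> str:
    """Classify one WhatsApp message. Only message text and limited recent
    context are sent -- no names, phone numbers, or payment data."""
    context = "\n".join(recent_context[-5:])  # limited conversation history
    response = client.messages.create(
        model="claude-3-5-haiku-latest",  # assumed model choice
        max_tokens=20,
        messages=[{
            "role": "user",
            "content": (
                f"Recent conversation:\n{context}\n\n"
                f"New customer message: {message_text}\n\n"
                f"Classify the new message as exactly one of: {', '.join(INTENTS)}. "
                "Reply with the label only."
            ),
        }],
    )
    label = response.content[0].text.strip()
    return label if label in INTENTS else "needs_human"  # fail safe: escalate
```

Note that a sketch like this fails safe: an unrecognised label is treated as requiring human intervention.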
2.2 Content Embeddings — Cohere Multilingual Model
What it does: When a Tenant uploads content to their content library — product descriptions, FAQs, pricing, policies — ChatPilot converts that text into vector embeddings using Cohere's multilingual embedding model. These embeddings allow the AI to perform semantic search: when a customer asks a question, the system retrieves the most relevant tenant content and uses it to construct a response.
The Cohere multilingual model is specifically chosen for its performance on Swahili, Sheng, and East African language patterns — standard English-only models perform significantly worse on these languages.
What data it receives:
- Text content from the Tenant's content library (product descriptions, FAQs, policies)
- This is business content, not personal data belonging to customers
- Customer message text is not sent to Cohere — only Tenant content is embedded
How often it runs: Embeddings are generated when Tenant content is first added or updated. They are not generated per customer conversation. Customer messages are compared against pre-computed embeddings locally — the customer message text is never sent to Cohere.
Cohere's policy: cohere.com/privacy
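As an illustration, embedding and local retrieval might look like the sketch below using Cohere's Python SDK. The model name and in-memory storage are assumptions; and because, per this policy, customer messages are never sent to Cohere, the query embedding is assumed to be computed locally (for example, by a locally hosted compatible model) and passed in:

```python
# Illustrative sketch only -- model name and in-memory storage are assumptions.
import math
import cohere

co = cohere.Client()  # reads CO_API_KEY from the environment

def embed_tenant_content(entries: list[str]) -> list[list[float]]:
    """Runs when tenant content is added or updated -- not per conversation."""
    return co.embed(
        texts=entries,                    # business content only, never customer messages
        model="embed-multilingual-v3.0",  # multilingual model, per this policy
        input_type="search_document",
    ).embeddings

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_relevant(query_embedding: list[float],
                  entries: list[str],
                  entry_embeddings: list[list[float]]) -> str:
    """Compare a locally computed query embedding against pre-computed
    tenant-content embeddings; nothing leaves the server in this step."""
    scores = [cosine(query_embedding, e) for e in entry_embeddings]
    return entries[scores.index(max(scores))]
```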
2.3 Response Generation — ChatPilot AI Layer
What it does: ChatPilot's AI layer combines the intent classification (from Claude), the retrieved tenant content (from vector search), and the conversation context to construct or select an appropriate response. For most interactions, responses are grounded in the Tenant's configured content — the AI does not invent information not provided by the Tenant.
Boundaries (a minimal sketch of this flow follows the list):
- The AI responds only from content the Tenant has explicitly provided or approved
- If a customer asks a question the tenant content does not cover, the AI states that it does not have the information, then escalates to a human or invites the customer to provide more context
- The AI does not make financial promises, legal representations, or medical claims on behalf of Tenants
- The AI does not engage in deceptive practices — it will not deny being an automated system if a customer sincerely asks
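How this grounding boundary might be enforced, as a minimal sketch; the relevance threshold, fallback wording, and return structure are illustrative assumptions only:

```python
# Illustrative sketch only -- threshold, wording, and structure are assumptions.
RELEVANCE_THRESHOLD = 0.75  # assumed value; would be configurable in practice

def build_reply(intent: str, retrieved_content: str | None,
                relevance: float) -> dict:
    """Respond only from tenant-provided content; otherwise state the
    limitation and hand the conversation to a human."""
    if (intent == "needs_human"
            or retrieved_content is None
            or relevance < RELEVANCE_THRESHOLD):
        return {
            "text": ("I don't have that information yet. Let me connect you "
                     "with the team, or feel free to share more details."),
            "escalate": True,
        }
    # Grounded response: drawn from the Tenant's content, never invented.
    return {"text": retrieved_content, "escalate": False}
```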
3. What ChatPilot AI Will and Will Not Do
The AI Will:
- Answer customer questions accurately based on the Tenant's configured content
- Collect order details (product, quantity, delivery information) from customers conversationally
- Initiate M-Pesa payment requests when a customer is ready to pay
- Confirm payment receipt and generate order references
- Send follow-up messages in automated sequences configured by the Tenant
- Escalate conversations to a human when it detects: complexity beyond its configured content, a complaint, a request for a human, or high frustration signals in the message
- Respond in the language the customer uses — English, Swahili, Sheng, or a mix
- Classify incoming messages to determine whether they are business-related or personal (for WhatsApp Coexistence)
- Respect opt-out signals — if a customer messages "stop", "ondoa" (Swahili for "remove"), or equivalent, the AI will flag the contact for opt-out and cease automated outreach (see the sketch after these lists)
The AI Will Not:
- Deny being an automated system when sincerely asked. If a customer asks "Am I talking to a bot?" or "Is this a real person?", the AI will state that it is an automated assistant operating on behalf of the business
- Fabricate product information, prices, or availability not present in tenant content
- Make guarantees, warranties, or legally binding representations on behalf of the Tenant that the Tenant has not explicitly configured
- Process or store credit card data — payment processing is handled through M-Pesa's Daraja API directly
- Engage with content that is abusive, harassing, or violates ChatPilot's Terms of Service, even if instructed to by a Tenant
- Access or respond to messages from a customer's personal contacts — WhatsApp Coexistence ensures only business-relevant messages are handled by the AI
- Send unsolicited messages to customers who have not initiated contact or opted in to outreach
- Discriminate against customers on the basis of protected characteristics
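As an example of the opt-out commitment above, a minimal sketch of the check; the keyword list and the `store` persistence interface are hypothetical:

```python
# Illustrative sketch only -- keywords and the `store` interface are hypothetical.
OPT_OUT_SIGNALS = {"stop", "unsubscribe", "ondoa"}  # "ondoa" = Swahili "remove"

def handle_opt_out(message_text: str, contact_id: str, store) -> bool:
    """Flag the contact and cease automated outreach on an opt-out signal."""
    if message_text.strip().lower() in OPT_OUT_SIGNALS:
        store.flag_opted_out(contact_id)    # hypothetical persistence call
        store.cancel_sequences(contact_id)  # stop scheduled follow-up messages
        return True
    return False
```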
4. Human Oversight and Escalation
ChatPilot's AI is designed to work under human oversight, not to replace it. Every ChatPilot deployment includes:
Mandatory Escalation Triggers
The AI automatically escalates a conversation to a human in the following situations (a minimal sketch of this check follows the list):
- The customer explicitly requests a human: "I want to talk to a person", "niongee na mtu" (Swahili for "let me speak to a person"), or similar
- The AI's confidence in its response falls below configured thresholds
- The conversation involves a payment dispute, refund request, or formal complaint
- The customer expresses significant distress or uses language indicating urgency
- The conversation reaches a defined step count without resolution
- The message content is outside the scope of the Tenant's configured content entirely
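A minimal sketch of how these mandatory triggers might combine, assuming hypothetical field names and threshold values:

```python
# Illustrative sketch only -- field names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class TurnState:
    customer_requested_human: bool  # "I want to talk to a person", "niongee na mtu"
    confidence: float               # AI's confidence in its drafted response
    is_dispute_or_complaint: bool   # payment dispute, refund, or formal complaint
    distress_detected: bool         # significant distress or urgency signals
    turn_count: int                 # steps taken without resolution
    out_of_scope: bool              # outside the Tenant's configured content

def must_escalate(state: TurnState, confidence_floor: float = 0.6,
                  max_turns: int = 12) -> bool:
    """Mandatory escalation: any single trigger hands the conversation to a human."""
    return (state.customer_requested_human
            or state.confidence < confidence_floor
            or state.is_dispute_or_complaint
            or state.distress_detected
            or state.turn_count >= max_turns
            or state.out_of_scope)
```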
Human Takeover
Tenants and their staff can take over any conversation at any time from the ChatPilot dashboard. When a human takes over, the AI pauses on that conversation. The AI does not resume until explicitly reactivated by the Tenant or after a configured timeout.
Tenants receive notifications of escalated conversations via their dedicated notification WhatsApp number — so high-priority conversations reach them immediately, even when they are not in the dashboard.
Audit Trail
Every AI response sent on behalf of a Tenant is logged with:
- The message content sent
- The tenant content retrieved and used
- The intent classification and its confidence score
- The timestamp
- Whether a human subsequently reviewed or overrode the response
This log is available to Tenants in their dashboard for review and accountability.
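For illustration, an audit record of this kind might be shaped as follows; the field names are assumptions based on the items listed above, not ChatPilot's actual schema:

```python
# Illustrative sketch only -- field names are assumptions, not the actual schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class AIResponseAuditRecord:
    message_sent: str           # the message content sent
    content_used: list[str]     # tenant content retrieved and used
    intent: str                 # classified intent
    intent_confidence: float    # classification confidence score
    sent_at: datetime           # timestamp
    human_reviewed: bool        # whether a human subsequently reviewed it
    human_overrode: bool        # whether a human overrode the response
```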
5. Tenant Responsibilities When Using ChatPilot AI
Businesses deploying ChatPilot AI to communicate with their customers have the following obligations:
5.1 Content Accuracy
Tenants are responsible for the accuracy of the content they upload to their content library. The AI can only be as accurate as the information it is given. Tenants must:
- Keep product prices, availability, and policies current in tenant content
- Remove outdated offers, discontinued products, or changed policies promptly
- Not upload content designed to mislead or deceive customers
5.2 Informed Customer Consent
Tenants must ensure their customers are reasonably aware that they may be communicating with an automated AI system. Acceptable methods include:
- A disclosure in the business's WhatsApp profile description
- An AI disclosure in the initial greeting message (e.g. "Hi, I'm [BotName], [BusinessName]'s virtual assistant")
- A general notice in the business's terms, website, or point-of-sale communications
ChatPilot provides default greeting templates that include an AI disclosure. Tenants who remove or obscure this disclosure do so in breach of this policy.
5.3 Opt-In for Broadcast Campaigns
Outbound broadcast campaigns sent via ChatPilot (new stock announcements, flash sales, follow-up sequences) must only be sent to contacts who have opted in to receive communications from the business. Under Meta's WhatsApp Business Messaging Policy, businesses must have received a message from the contact first, or obtained express opt-in via another channel.
Tenants must not upload purchased contact lists, scraped phone numbers, or any contacts who have not explicitly agreed to receive WhatsApp messages from the business.
5.4 Prohibited Use Cases
The following uses of ChatPilot AI are prohibited:
- Impersonating a government authority, financial institution, or regulated entity without appropriate authorisation
- Using the AI to collect or transmit sensitive personal data — health information, identity document numbers, financial credentials — without explicit customer consent and appropriate security measures
- Deploying the AI to communicate with individuals who have previously opted out or requested to be removed from contact
- Using the AI in connection with any activity that is illegal under Kenyan law or the laws of the jurisdiction in which the Tenant operates
- Attempting to circumvent or manipulate the AI's escalation logic to avoid human review of complaints or disputes
- Using the AI to generate, distribute, or engage with content that is abusive, discriminatory, or harassing
Breach of these prohibitions may result in suspension or termination of the Tenant's ChatPilot account.
6. AI Limitations — What Tenants and Customers Should Know
ChatPilot's AI is capable but not infallible. Tenants and their customers should be aware of the following inherent limitations:
Content boundaries: The AI uses only what the Tenant has configured in content entries and connected sources. It cannot access the Tenant's private systems, stock management tools, or other data sources unless explicitly integrated.
Language nuance: While the AI performs well on Swahili, Sheng, and English, highly idiomatic, dialectal, or ambiguous language may be misclassified. The escalation system is designed to catch these cases.
Context window limits: Very long conversations may cause earlier context to be deprioritised. For complex, multi-session negotiations or support cases, human involvement is recommended.
Novel situations: The AI is trained on patterns from the Tenant's configured content. Genuinely new situations — unusual product requests, edge-case payment scenarios, uncommon complaints — are best handled by a human. The escalation system is designed to route these appropriately.
No guarantee of outcome: ChatPilot's AI facilitates conversations and collects orders. Commercial outcomes — whether a sale completes, whether a product satisfies the customer, whether a dispute is resolved — depend on the Tenant's business operations, not the AI.
7. Continuous Improvement and Model Updates
ChatPilot may update the AI models, prompts, and retrieval systems used in the platform. Updates may improve accuracy, language support, escalation logic, or response quality. Tenants will be notified by email and in-app notification of any material change to AI behaviour that could affect their customer experience.
We do not use individual Tenant conversation data to train shared AI models. Improvements are made through:
- Updates to foundation models (Claude, Cohere) by their respective providers
- Improvements to ChatPilot's own retrieval and prompt systems
- Aggregate anonymised signal analysis (e.g. escalation rates, resolution rates) that does not expose individual message content
8. Reporting AI Errors and Concerns
If a ChatPilot AI response causes harm to a customer, communicates inaccurate information, or behaves in a way that is inconsistent with this policy, we want to know.
Tenants can report AI issues via the dashboard (flag a conversation) or by emailing ai-feedback@chatpilot.biz.
End users (customers of ChatPilot-powered businesses) who have concerns about AI interactions can contact privacy@chatpilot.biz. We will investigate and work with the relevant Tenant to address the concern.
We review AI error reports weekly and use them to improve classification accuracy, escalation logic, and content guidance.
9. Changes to This Policy
As AI capabilities evolve and as we add new AI-powered features to ChatPilot, this policy will be updated to reflect those changes. Material changes will be communicated to Tenants with at least 14 days' notice before taking effect.
10. Contact
For questions about this AI Usage Policy:
ChatPilot Ltd
Email: ai-feedback@chatpilot.biz
General enquiries: hello@chatpilot.biz
Address: Nairobi, Kenya