Acceptable Use Policy (AUP)
Version: 0.1 (DRAFT — pending lawyer review)
Effective from: TBD (post lawyer review and ToS binding)
Last updated: 2026-05-02
Operator: Stefan Stešević, sole proprietor, trading as meandai, registered in Montenegro (EU candidate state).
This Acceptable Use Policy ("AUP") governs your use of the meandai platform (the "Platform"), operated by Stefan Stešević ("meandai", "we", "us", "our"). It is part of, and incorporated by reference into, the meandai Terms of Service. Capitalised terms have the meaning given in the Terms of Service.
1. Who this applies to
This AUP applies to:
- Each Customer (the legal entity that signs up and pays).
- Each Authorised User of a Customer (employees, contractors, or other persons given access to the Customer's tenant).
- Each AI agent operated within a Customer's tenant. Customer is responsible for the actions of all agents configured within its tenant.
2. Prohibited use
You agree that you will not use, and will not permit any Authorised User or configured AI agent to use, the Platform to:
2.1 Illegal activity
(a) violate any applicable law, regulation, court order, or third-party right, including data protection law (GDPR, EU AI Act, ePrivacy), consumer protection law, intellectual-property law, export-control law, or anti-money-laundering rules;
(b) facilitate, plan, or carry out criminal activity, including fraud, tax evasion, money laundering, sanctions evasion, terrorism financing, or trafficking;
(c) infringe, misappropriate, or otherwise violate any patent, copyright, trademark, trade secret, moral right, publicity right, or other intellectual-property right of any person;
(d) defame, libel, or harass any natural or legal person.
2.2 Abuse of communication channels
(e) send unsolicited bulk commercial communications (spam) by email, WhatsApp, SMS, voice, or any other channel, including but not limited to communications that violate the EU ePrivacy Directive (Directive 2002/58/EC), the German UWG, the Italian Consumer Code, the UK PECR, the US CAN-SPAM Act or TCPA, or equivalent rules in any other jurisdiction;
(f) impersonate any person or entity, including by spoofing the "From" header of an email, the display name of a WhatsApp number, or the caller-ID of a voice call;
(g) send any communication that is deceptive about the fact that it was generated, in whole or in part, by an AI system, when such disclosure is required by Article 50 of the EU AI Act or by analogous transparency rules;
(h) operate the Platform in a manner that exceeds the fair-use volume limits in §3, including the mass-CRM cap of 10 records per autonomous batch without a Human-in-the-Loop (HITL) approval step;
(i) direct unsolicited cold outreach to any natural person who has previously requested not to receive such outreach, or to any address found on the Robinson List, the Bundesnetzagentur Robinsonliste, or an analogous opt-out registry.
2.3 Harmful content
(j) generate, store, or transmit content that is sexually explicit involving minors, that depicts non-consensual sexual conduct, or that constitutes child sexual abuse material under any applicable law;
(k) generate, store, or transmit content that incites violence, hatred, or discrimination against any person or group on the basis of race, ethnicity, national origin, religion, sex, gender identity, sexual orientation, age, disability, or any other protected characteristic;
(l) generate or distribute disinformation, including manipulated audio or video ("deepfakes") of identifiable persons, in violation of Article 50(4) of the EU AI Act;
(m) generate biometric categorisation outputs based on race, political opinion, religion, sexual orientation, or other categories prohibited by Article 5 of the EU AI Act;
(n) operate the Platform in any manner that constitutes a "prohibited AI practice" under Article 5 of the EU AI Act, including subliminal manipulation, exploitation of vulnerabilities, or social scoring.
2.4 Security and platform integrity
(o) probe, scan, or test the vulnerability of the Platform, except via the meandai responsible-disclosure programme published at https://meandai.com/security (when live);
(p) circumvent, disable, or otherwise interfere with security or authentication features of the Platform, including the multi-tenant isolation guarantees, the capability-token system, the kill switch, the rate limiter, the prompt-injection filter, or the audit log;
(q) attempt to extract, reverse-engineer, or train a model on the Platform's source code, model weights, or proprietary prompts;
(r) introduce malware, ransomware, viruses, trojans, worms, time bombs, cancelbots, or other malicious code into the Platform;
(s) use automated means (other than the public API) to access the Platform, including scraping, harvesting, or crawling beyond the rate limits published in the developer documentation;
(t) attempt to identify any other Customer or Authorised User of the Platform, or to access another tenant's data, including via prompt injection, side-channel attack, or social engineering of meandai personnel.
2.5 Data hygiene
(u) upload to, or process via, the Platform any of the following categories of data without first obtaining meandai's prior written consent:
- protected health information governed by HIPAA (United States) or the equivalent national rules in EU member states;
- payment card data subject to PCI-DSS;
- classified government information;
- biometric data within the meaning of GDPR Article 9(1) used for the purpose of uniquely identifying a natural person;
- credentials of any kind belonging to third parties (passwords, private keys, OAuth refresh tokens of unrelated services);
(v) upload, transmit, or store any personal data without a lawful basis under GDPR Article 6 (and, where applicable, Article 9) that you can document on request;
(w) configure an AI agent to take an action with legal or similarly significant effect on a natural person without an effective human review step (Article 22 GDPR), unless the Customer has obtained explicit consent under Article 22(2)(c).
2.6 Misuse of AI agents
(x) instruct an AI agent to make, on its own and without HITL review, a final decision that creates legal effects for a natural person — for example, hiring, firing, denying credit, denying insurance, denying access to a public service;
(y) operate an AI agent in Full Autonomy mode for any tenant where the data subjects have not been given the transparency notice required by Article 50 of the EU AI Act and Articles 13 and 14 of the GDPR;
(z) use the AI agents to generate text, images, audio, or video that you then publish or sell as your original human work without disclosing that it was generated, in whole or in part, by an AI system, where such disclosure is required by law, contract, professional code of conduct, or by an editorial standard of the publishing venue.
3. Fair-use and rate limits
The Platform is provided on a fair-use basis. Concrete numerical limits are published in the API documentation (when live) and may include:
- Mass-CRM cap: an AI agent may not, in a single autonomous batch, write to more than 10 records in any external CRM, marketing automation, or messaging system without an explicit HITL approval step. Batches above this cap require either explicit human approval per batch, or explicit prior contractual authorisation in the Customer's order form.
- Outbound email cap: an AI agent may not, without HITL approval, send more than 50 outbound emails per tenant per hour, or 500 per tenant per day, regardless of the recipient address.
- Outbound WhatsApp / SMS cap: an AI agent may not, without HITL approval, send to more than 20 unique recipients per tenant per hour.
- MCP tool-call burst: an AI agent may not exceed the per-tool burst limits published per tool in the documentation.
These limits are enforced both client-side (rate limiter) and server-side (capability token). Limit increases require a written change to the order form and are evaluated against the AUP risk profile of the requested use case.
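As an illustration of the mass-CRM cap above, a pre-flight check could look like the following sketch. The names (`requires_hitl_approval`, `MASS_CRM_CAP`) and the function shape are hypothetical, not the Platform's actual implementation:

```python
# Illustrative sketch only: constant and function names are hypothetical,
# not the Platform's actual enforcement code.

MASS_CRM_CAP = 10  # max CRM records per autonomous batch without HITL approval


def requires_hitl_approval(batch_size: int, hitl_approved: bool,
                           contractually_authorised: bool) -> bool:
    """Return True if the batch must be blocked pending human approval."""
    if batch_size <= MASS_CRM_CAP:
        return False  # within the autonomous fair-use cap
    # Above the cap: allow only with per-batch human approval or prior
    # contractual authorisation in the order form.
    return not (hitl_approved or contractually_authorised)


# Examples: a batch of 10 passes autonomously; 11 needs approval;
# an approved batch of 50 passes.
assert requires_hitl_approval(10, False, False) is False
assert requires_hitl_approval(11, False, False) is True
assert requires_hitl_approval(50, True, False) is False
```

Because the same check runs both client-side and server-side, a misbehaving client cannot bypass the cap by skipping its local rate limiter.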
4. Operator's enforcement rights
To preserve Platform integrity, protect other Customers, and comply with our own legal obligations, meandai may, without prior notice, take any of the following actions where it has a good-faith belief that this AUP has been violated or that immediate action is necessary:
(a) Throttle outbound traffic from a Customer's tenant.
(b) Suspend a specific AI agent within a Customer's tenant via the kill switch (PostHog feature flag agent_kill_<slug>), pending investigation.
(c) Suspend an entire Customer tenant via the global kill switch (agent_kill_all_<tenant>).
(d) Quarantine specific Authorised User accounts.
(e) Preserve evidence relating to the suspected violation, including logs, prompts, generated outputs, and capability-token audit trails.
(f) Notify affected third parties or competent supervisory authorities where required by law (including GDPR Article 33 personal-data-breach notifications).
(g) Terminate the Customer's account in accordance with the Termination clause in the Terms of Service.
Where action is taken, meandai will provide written notice to the Customer's primary contact within 24 hours of the action and will, where possible and not legally restricted, provide the Customer with a reasonable opportunity to cure the violation before termination.
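The per-agent and per-tenant kill switches named in (b) and (c) imply a check like the following before each agent action. This is a minimal sketch: `flag_enabled` stands in for a feature-flag lookup (e.g. against PostHog), and the helper names are illustrative, not the Platform's real code:

```python
# Minimal sketch of honouring the kill-switch feature flags
# (agent_kill_<slug> and agent_kill_all_<tenant>). The lookup here is a
# stub; a real deployment would query its feature-flag provider.

def flag_enabled(name: str, active_flags: set[str]) -> bool:
    """Stand-in for a feature-flag lookup."""
    return name in active_flags


def agent_may_run(agent_slug: str, tenant: str, active_flags: set[str]) -> bool:
    """An agent is suspended if either its own kill flag or the
    tenant-wide kill flag is set."""
    if flag_enabled(f"agent_kill_{agent_slug}", active_flags):
        return False  # this specific agent is suspended
    if flag_enabled(f"agent_kill_all_{tenant}", active_flags):
        return False  # the whole tenant is suspended
    return True


# Examples with a hypothetical tenant "acme"
assert agent_may_run("crm_writer", "acme", set()) is True
assert agent_may_run("crm_writer", "acme", {"agent_kill_crm_writer"}) is False
assert agent_may_run("mailer", "acme", {"agent_kill_all_acme"}) is False
```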
5. Reporting violations
To report a suspected AUP violation, contact abuse@meandai.io (or the legacy alias abuse@meandai.com, monitored as a fallback). Provide as much detail as possible, including timestamps, tenant identifiers (where known), and the nature of the suspected violation. We will investigate every report received in good faith.
For violations of the EU Digital Services Act (Regulation (EU) 2022/2065) — including illegal content, deceptive design, or systemic risks created by AI-generated outputs distributed via the Platform — you may also use the dedicated dsa@meandai.io alias. Reports flagged as DSA notices are processed in line with Articles 16, 17, and 22 of the DSA, and the notifier is informed of the action taken.
5.1 Mechanical AUP enforcement
In addition to these human-reporting channels, meandai operates a daily automated AUP-monitoring cron (inngest/functions/aup_monitor.py) that consumes per-tenant abuse signals from third-party providers (Resend bounce / spam-complaint rates, Buffer / Meta account-warning flags when connected, PostHog usage anomalies) and the Platform's own outbound-velocity counters. When the configured per-plan thresholds are crossed, the cron triggers the kill switch via the Telegram bot (/kill <agent> or /kill_all <tenant>) and emits a Slack DM to the operator with the supporting evidence.
The kill switch is reversible by Stefan via /unkill <agent> if the trigger turns out to be a false positive; the trigger event and the manual override are both written to the audit log.
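The threshold check at the heart of that cron can be sketched as follows. The signal fields and the threshold values are illustrative assumptions, not the published per-plan limits, and the real `aup_monitor.py` may structure this differently:

```python
# Hypothetical sketch of the daily AUP-monitoring threshold check.
# Field names and threshold values are illustrative, not the real
# per-plan configuration in inngest/functions/aup_monitor.py.

from dataclasses import dataclass


@dataclass
class TenantSignals:
    spam_complaint_rate: float  # e.g. from Resend: complaints / delivered
    bounce_rate: float          # hard bounces / delivered
    outbound_per_hour: int      # Platform's own outbound-velocity counter


# Example thresholds for one plan tier (illustrative values)
THRESHOLDS = {
    "spam_complaint_rate": 0.003,
    "bounce_rate": 0.05,
    "outbound_per_hour": 50,
}


def breached(signals: TenantSignals) -> list[str]:
    """Return the names of every threshold this tenant has crossed;
    a non-empty result would trigger the kill switch plus an operator
    notification carrying these names as evidence."""
    hits = []
    for name, limit in THRESHOLDS.items():
        if getattr(signals, name) > limit:
            hits.append(name)
    return hits


# A tenant over the spam-complaint threshold trips one signal;
# a healthy tenant trips none.
assert breached(TenantSignals(0.01, 0.02, 30)) == ["spam_complaint_rate"]
assert breached(TenantSignals(0.001, 0.02, 30)) == []
```

Keeping the breached-signal names in the result is what lets the operator notification include the supporting evidence alongside the kill-switch trigger.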
6. Changes to this AUP
We may update this AUP from time to time. Material changes will be communicated to the Customer's primary contact at least 30 days before they take effect. Continued use of the Platform after the effective date of a change constitutes acceptance of the change.
7. Relationship to other documents
This AUP is incorporated by reference into the Terms of Service. To the extent of any conflict between this AUP and the Terms of Service, the Terms of Service prevail. The Privacy Policy and the DPA govern the processing of personal data and prevail over this AUP on data-protection matters.
Acknowledgement: by clicking "I agree" during signup, by accessing the Platform after the effective date, or by allowing any Authorised User to do so, you accept this AUP on behalf of the Customer.