
Data Protection Impact Assessment (DPIA)

Version: 0.1 (DRAFT — pending lawyer + DPO sign-off)
Effective from: TBD (post lawyer review and first paying customer signup)
Last updated: 2026-05-04
Owner: Stefan Stešević (interim DPO under GDPR Art. 37(1) — re-evaluate at €5M ARR or 25 staff)
Review cadence: quarterly + on every material change to data flow, model, or subprocessor
Statutory basis: GDPR Article 35; EU AI Act 2024/1689 Article 27 (high-risk system FRIA — Fundamental Rights Impact Assessment)

This DPIA documents the meandai platform's processing of personal data, with the methodology required by Article 35 GDPR and the EDPB Guidelines on Data Protection Impact Assessment (WP248 rev.01). It is a living document — it must be updated when (a) a new subprocessor is added, (b) a new high-risk processing activity is introduced, (c) the operating mode default changes, or (d) the AI model family changes.

1. Why a DPIA is required

The processing meets at least three of the EDPB high-risk criteria (WP248 rev.01); meeting any two is sufficient to trigger a mandatory DPIA:

  1. Innovative use of new technology — autonomous AI agents that act on the customer's behalf via connected accounts (Gmail, Calendar, CRM, Slack, WhatsApp).
  2. Systematic monitoring — continuous capture of customer-side communication and internal-tool activity within the tenant (audit log, capability-token issuance log, prompt log).
  3. Data on a large scale — projected processing of communications and contact data for 100+ tenants, each tenant containing on average 10 000+ external contacts.
  4. Vulnerable data subjects — incidental processing of children's data (Italian / German hospitality customer pipelines may include family bookings).
  5. Matching or combining datasets — knowledge-graph layer combines connected-account exhaust into a per-tenant graph that did not exist before.
  6. Automated decision-making with legal or similarly significant effect — not engaged by default (Concierge mode requires HITL); it becomes engaged only if the customer enables Full Autonomy mode for an agent that takes such a decision (which AUP §2.6(x) prohibits without HITL, and against which Article 22 GDPR provides safeguards).

The processing is also a high-risk AI system under the EU AI Act 2024/1689 Annex III §1(d) (use in critical infrastructure — not engaged) and §3 (employment / worker management — engaged when a customer uses the HR-coordinator agent). For those tenants, this document also serves as the FRIA required by Article 27 of the AI Act.

2. Description of the processing

2.1 Nature

meandai operates a multi-tenant SaaS platform that lets customers configure AI agents which:

  • read incoming email and draft replies (Gmail / Outlook);
  • read calendar invites and propose responses (Google / Microsoft Calendar);
  • summarise documents stored in the customer's Drive / OneDrive / SharePoint;
  • write to external CRM systems (Airtable, HubSpot, Salesforce — when connected);
  • post to messaging surfaces (Slack, Telegram, WhatsApp Business, Buffer for social);
  • transcribe and summarise meetings (Zoom, Google Meet, Microsoft Teams).

All actions are subject to per-tenant operating mode (Concierge / Standard / Full Autonomy — see ToS §6) and per-tool capability tokens.

2.2 Scope

| Dimension | Detail |
| --- | --- |
| Data categories | Identification (name, email, phone); professional (title, employer); communication content (email body, message body, meeting transcripts); calendar metadata; document content; AI-generated outputs that may contain personal data; OAuth tokens (encrypted at rest) |
| Special-category data (Art. 9) | Not designed for. AUP §2.5 prohibits it without prior written authorisation. Where present incidentally (e.g., in a meeting transcript), the prompt-injection filter and data-minimisation policy apply but cannot prevent inclusion in plaintext input |
| Data subjects | (a) Authorised Users of the customer; (b) the customer's external contacts whose data flows through the connected accounts |
| Volume | At launch: ~10 tenants × ~10 000 contacts × ~50 communications / contact / year ≈ 100 000 data subjects, ~5M records / year |
| Geographic scope | Primary: EU customer base. Limited US, UK, MENA inflow via customer relationships |
| Retention | Customer-controlled; default 7 years for audit, 30 days for raw logs, 1 year for aggregated metrics |
| Recipients | (a) AI model subprocessors per inference call; (b) infrastructure subprocessors per docs/legal/SUBPROCESSORS.md |
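The launch-volume estimate in the Volume row is straightforward multiplication; as a quick arithmetic check (planning figures only, no data beyond the multiplicands stated above):

```python
# Launch-volume arithmetic from the Volume row (approximate planning figures).
tenants = 10
contacts_per_tenant = 10_000            # external contacts per tenant
comms_per_contact_per_year = 50         # communications per contact per year

# Distinct data subjects scale with tenants x contacts.
data_subjects = tenants * contacts_per_tenant           # 100,000

# Record volume scales with communications on top of that.
records_per_year = data_subjects * comms_per_contact_per_year  # 5,000,000
```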

2.3 Context

  • Customers are EU SMBs and enterprises. Deal pipeline focused on hospitality (MLA pilot, Italian / German / Austrian luxury) before broadening.
  • Data subjects do not have a direct relationship with meandai; they have a relationship with the customer, who is the GDPR controller.
  • meandai operates as a GDPR processor under Article 28, evidenced by a per-customer DPA.

2.4 Purposes

  • Provide AI-assisted automation of office work for the customer.
  • Detect, contain, and remediate abuse of the platform (AUP enforcement).
  • Comply with legal and regulatory obligations.

The platform does not train any AI model on customer data; this is a contractual commitment with each AI model subprocessor.

3. Necessity and proportionality

3.1 Lawful basis

Processing of personal data within a tenant happens on the customer's documented instructions (DPA §4). The customer is responsible for identifying and documenting the lawful basis under GDPR Art. 6 (and where applicable Art. 9) for each processing activity it instructs.

For data meandai controls in its own right (visitor data, Authorised User account data, billing data), the lawful bases are documented in the Privacy Policy §1.1.

3.2 Data minimisation

  • The platform reads only the connected accounts the customer explicitly authorises.
  • Per-agent rubrics scope what an agent may request (e.g., a CRM-write agent does not need read access to the document store).
  • Capability tokens are scoped per (tool, tenant, agent, params hash) and TTL-limited to 60 seconds.
  • Meeting transcripts are processed in memory; only the structured summary is persisted unless the customer opts into full transcript retention.
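As an illustration of the capability-token scoping above, a minimal sketch. All names here (`mint_capability_token`, `verify`, `SIGNING_KEY`) are hypothetical, and a real deployment would sign with KMS-managed keys rather than an in-process secret:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"dev-only-secret"  # hypothetical; production keys live in KMS


def _params_hash(params: dict) -> str:
    """Canonical hash of the tool parameters the token authorises."""
    return hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()


def mint_capability_token(tool: str, tenant_id: str, agent_id: str,
                          params: dict, ttl_seconds: int = 60) -> dict:
    """Mint a token scoped to one (tool, tenant, agent, params-hash) tuple,
    expiring after the default 60-second TTL described above."""
    claims = {
        "tool": tool,
        "tenant_id": tenant_id,
        "agent_id": agent_id,
        "params_hash": _params_hash(params),
        "exp": time.time() + ttl_seconds,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}


def verify(token: dict, tool: str, tenant_id: str, agent_id: str,
           params: dict) -> bool:
    """Accept the token only for the exact scope it was minted for."""
    claims = token["claims"]
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    return (claims["tool"] == tool
            and claims["tenant_id"] == tenant_id
            and claims["agent_id"] == agent_id
            and claims["params_hash"] == _params_hash(params)
            and time.time() < claims["exp"])
```

A token minted for one tenant or one parameter set fails verification for any other, which is the property the mitigation table relies on for R1 and R6.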

3.3 Purpose limitation

Customer data is used solely to provide the requested AI-assisted action. Aggregated, de-identified usage statistics may be used to improve the platform; they cannot be linked back to a customer or data subject.

3.4 Storage limitation

Default retention is the lesser of (a) the customer's instruction and (b) the period necessary for the purpose. Default deletion at termination is 90 days; encrypted backups roll off in 30 days (daily) / 12 months (monthly).
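The interaction of the 90-day post-termination deletion with backup roll-off can be made concrete. A sketch of the worst case, assuming the last monthly backup is taken just before termination (function name illustrative):

```python
from datetime import date, timedelta


def worst_case_erasure_horizon(termination: date) -> date:
    """Latest date any copy of tenant data can persist after termination,
    under the defaults above: 90-day primary deletion, 30-day daily backup
    roll-off, and a 12-month monthly backup cycle. The monthly cycle
    dominates, so the horizon is termination + ~12 months."""
    primary_deleted = termination + timedelta(days=90)
    monthly_backups_gone = termination + timedelta(days=365)
    return max(primary_deleted, monthly_backups_gone)
```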

3.5 Accuracy

AI-generated outputs are described to the customer as drafts subject to human review (Concierge mode), within the AUP rate limits (Standard mode), or accepted as the customer's own outputs (Full Autonomy mode). The platform provides an audit log of every agent action.

4. Risks to data subjects

| # | Risk | Likelihood | Severity | Inherent rating |
| --- | --- | --- | --- | --- |
| R1 | Cross-tenant data leak (one customer reads another customer's contacts) | Low | Critical | High |
| R2 | Prompt injection from inbound email exfiltrates tenant data to attacker | Low–Medium | High | High |
| R3 | AI agent sends communication to wrong recipient (typo, hallucinated address) | Medium | Medium | Medium |
| R4 | AI agent generates defamatory or discriminatory content about a data subject | Low | High | Medium |
| R5 | Subprocessor breach exposes encrypted-at-rest tenant data | Low | High | Medium |
| R6 | OAuth token compromise allows unauthorised reading of connected account | Low | Critical | High |
| R7 | Audit log tampering hides agent misbehaviour | Very low | High | Low |
| R8 | Data subject not informed an AI is acting (Art. 50 EU AI Act, Art. 13/14 GDPR) | Medium | Medium | Medium |
| R9 | Data subject's Art. 15–22 rights cannot be fulfilled because data is unfindable | Low | Medium | Low |
| R10 | Backup restore exposes data the data subject has since requested erased | Low | Medium | Low |
| R11 | Sub-processor moves data outside EEA without SCC update | Low | Medium | Low |
| R12 | Children's data processed without consent | Medium | Medium | Medium |

5. Mitigations

| # | Risk | Mitigation |
| --- | --- | --- |
| R1 | Cross-tenant leak | Postgres FORCE RLS per-tenant policies on every tenant-scoped table; capability tokens bound to (tenant_id, agent_id, params_hash); tenant-isolation audit (scripts/audit_tenant_isolation.py) gates every PR to main; per-tenant Aura graph databases |
| R2 | Prompt injection | Anthropic Haiku-based ingress filter on all untrusted input (incoming email, document upload, MCP tool responses); block at confidence > 0.85; alert at confidence > 0.6; quarterly red-team pass scheduled post-launch |
| R3 | Wrong-recipient send | Concierge mode = HITL on every external action; Standard mode = HITL on irreversible classes (external email, CRM write, social post); Full Autonomy = customer accepts liability per ToS §6.1(c) |
| R4 | Harmful content | AUP §2.3 prohibits; PromptArmor profanity / hate-speech detector; per-tenant brand-pack forbidden_phrases; judge L2 rubric blocks below threshold; kill switch (Telegram /kill <agent>) for runaway agent |
| R5 | Subprocessor breach | AWS KMS envelope encryption (per-tenant DEK) for OAuth tokens and connected-account credentials; daily backups to R2 also envelope-encrypted; subprocessors listed in SUBPROCESSORS.md, each contractually committed to encryption in transit and at rest |
| R6 | OAuth token compromise | Tokens never returned via API; KMS-decrypted only inside the agent execution context; quarterly rotation; immediate revocation flow on customer-reported compromise |
| R7 | Audit log tampering | Audit log written to Postgres and tail-shipped to Logfire (immutable from the app side); separate reader credential for Stefan's manual review |
| R8 | Lack of AI transparency | AUP §2.6(y) requires Article 50 transparency notice before Full Autonomy; default footer on AI-drafted email pending customer brand-pack override; Privacy Policy §3 describes the role of AI |
| R9 | Right-of-access fulfilment | Per-tenant data export endpoint planned (Q3 launch); meanwhile manual export procedure documented in DR runbook |
| R10 | Erasure-vs-backup tension | Erasure requests honoured in primary DB within 90 days; backups roll off in a 30-day (daily) / 12-month (monthly) cycle, so erasure is fully effected within 12 months at most; documented in DPA §10 |
| R11 | Cross-border drift | Subprocessor list updated on every change; SCC re-execution required before any new non-EEA processor is added; 30 days' customer notice per DPA §7.2 |
| R12 | Children's data | AUP §2.6 + Privacy Policy §11: service not directed at under-16s; Italian / German hospitality customers warned in onboarding to redact minors' data |

After mitigations, all residual risks are rated Low under the EDPB matrix. Where residual risk would have been Medium or High, an additional mitigation is added to bring it to Low; this is the EDPB-recommended threshold for processing without prior consultation under Art. 36.
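The R2 ingress-filter thresholds reduce to a small decision rule. A sketch of the triage step only, under the thresholds stated in the mitigation table (the classifier itself is out of scope here, and the function name is illustrative):

```python
def triage_untrusted_input(confidence: float) -> str:
    """Map a prompt-injection classifier confidence score to an action:
    block above 0.85, alert above 0.6, otherwise pass through."""
    if confidence > 0.85:
        return "block"
    if confidence > 0.6:
        return "alert"
    return "pass"
```

Scores at exactly 0.85 alert rather than block, matching the strict "> 0.85" wording; if block-at-threshold is intended, the comparison would be `>=`.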

6. Stakeholder consultation

| Stakeholder | Mechanism | Status |
| --- | --- | --- |
| Founder / DPO interim | Author of this DPIA | Done |
| Legal counsel (external EU-SaaS specialist) | Review of full legal pack | Pending Track 0.C |
| Cyber-insurance carrier (Coalition) | Underwriting review | Pending policy bind |
| Pilot customer (MLA) | Privacy walkthrough during onboarding | Done verbally; written acknowledgement at GA |
| Data subject representatives | Not separately consulted (data subjects are the customer's contacts; the customer is the controller and represents them) | N/A |
| EU AI Act FRIA reviewer | TBD — required only for tenants enabling HR-coordinator (Annex III §3) | Triggered on first such tenant |

Per Art. 35(9) GDPR, where appropriate, the controller must seek the views of data subjects or their representatives. Because data subjects on this platform are the customer's contacts (not meandai's), the customer assumes that obligation in its own controller capacity. The DPA records the customer's acknowledgement.

7. Approval and review

| Role | Name | Date | Signature |
| --- | --- | --- | --- |
| Interim DPO | Stefan Stešević | 2026-05-04 | (pending) |
| Legal counsel | (TBD) | (TBD) | (TBD) |
| Cyber-insurance underwriter | (TBD) | (TBD) | (TBD) |

Next review: 2026-08-04 (quarterly) or sooner if any material change.

8. Decision: prior consultation under Art. 36?

After mitigations, no residual risk is rated High. No prior consultation with the supervisory authority is required. This decision is recorded in the DPIA register.

If any future change pushes a residual risk back to High and mitigation cannot bring it down, prior consultation with the Agency for Personal Data Protection of Montenegro (Agencija za zaštitu ličnih podataka, https://azlp.me) is mandatory before processing begins.

9. Linked documents

10. Change log

| Date | Change | Author |
| --- | --- | --- |
| 2026-05-04 | Initial draft | Track 9 / Claude Code |