Commitment 01
We do not train any model on your transcripts or audio.
Not for retrieval, not for ranking, not for "general improvement." Your conversations are not training data — not for the AI Summary engine, not for Promise Tracker, not for the ECAPA voice-ID model.
Operationalised by: the inference endpoints we use are configured for no retention; we run periodic audits against the upstream providers' retention controls; and our one-person architecture team has a standing written instruction never to enable any opt-in training feature on any account.
Commitment 02
We do not run cross-account voice search, ever.
There is no global voice database. There is no "find this voice across all Bonfiyah users" pathway. Your speaker library is scoped to your iCloud cohort and stops there.
Operationalised by: voice signatures are stored in your private iCloud, not in Bonfiyah-controlled storage. The matching API is structurally incapable of querying across users — there is no shared index, by architecture.
Commitment 03
Audio stays on your iPhone unless you opt into iCloud sync.
Recording capture and transcription happen on-device. Audio leaves your iPhone for one reason and only one: you turned on iCloud sync to your other devices. Even then, it goes to your iCloud, not to ours.
Operationalised by: we have no audio-storage backend. We physically cannot keep your audio on our servers because there is no server with that role.
Commitment 04
Compatibility Analysis refuses to run without explicit consent.
Not "shows a warning." Not "requires a checkbox." Refuses to run. The function that produces the scorecard takes the consent state as a typed argument and crashes if it isn't .bothConsented or .internalOnly.
Operationalised by: the consent gate in code, with no override branch. Read the gate spec.
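The gate can be sketched like this. A minimal Swift sketch only — the names (ConsentState, runCompatibilityAnalysis, Scorecard) are illustrative, not Bonfiyah's actual API; the gate spec is authoritative.

```swift
enum ConsentState {
    case bothConsented   // every participant has affirmatively consented
    case internalOnly    // analysis restricted to your own speech
    case pending
    case declined
}

struct Scorecard { /* compatibility results */ }

func runCompatibilityAnalysis(consent: ConsentState) -> Scorecard {
    // Consent is a typed argument, and there is no override branch:
    // any state other than the two permitted ones traps immediately
    // instead of degrading into a warning or a skippable checkbox.
    switch consent {
    case .bothConsented, .internalOnly:
        return Scorecard()
    case .pending, .declined:
        fatalError("Compatibility Analysis invoked without consent")
    }
}
```

Because the state is an argument rather than something the function looks up, a caller cannot reach the scorecard path without first producing a consent value.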
Commitment 05
Two-party consent management is in every tier, including Free.
Twelve U.S. states require all-party consent. Most recording apps treat that as your problem. We don't make consent management a paid feature, because making it paid would be tasteless: recording someone without their consent is a category of harm we don't want our free users committing.
Operationalised by: consent capture, state-by-state law guidance, the verbal-consent detection feature, and the exportable consent log are all available to every Free user from first launch.
Commitment 06
Consent revocation actually deletes things.
If a participant revokes consent on a recording, the recording is deleted. Any analyses derived from it (Compatibility, Team Dynamics, AI Summaries) are flagged for re-run or invalidated. Any voice signature that was bound from that recording is purged.
Operationalised by: a foreign-key deletion cascade on the on-device database, plus a background pass that purges iCloud copies on the next sync. Logged in the audit trail. We do not quietly retain a "just in case" copy.
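The cascade described above can be sketched as a schema. The table and column names below are assumptions for illustration, not Bonfiyah's actual schema; the point is that the derived rows reference the recording with ON DELETE CASCADE, so revocation is one delete, not a scavenger hunt.

```swift
// SQLite DDL, embedded as it might appear in the on-device data layer.
let schema = """
PRAGMA foreign_keys = ON;

CREATE TABLE recording (
    id INTEGER PRIMARY KEY
);

CREATE TABLE analysis (
    id INTEGER PRIMARY KEY,
    recording_id INTEGER NOT NULL
        REFERENCES recording(id) ON DELETE CASCADE
);

CREATE TABLE voice_signature (
    id INTEGER PRIMARY KEY,
    source_recording_id INTEGER NOT NULL
        REFERENCES recording(id) ON DELETE CASCADE
);
"""

// Revocation is then a single statement; the analyses and signatures
// derived from that recording are deleted in the same transaction.
let revoke = "DELETE FROM recording WHERE id = ?;"
```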
Commitment 07
No telemetry on your transcripts.
Bonfiyah's analytics know that you opened the app, that an AI feature was used, and that an export ran. They do not know what was in the transcript, who the speakers were, or what the summary said. The content of your conversations is structurally outside the analytics pipeline.
Operationalised by: the analytics SDK has no read access to the recordings table; the privacy nutrition labels in App Store Connect document this explicitly.
Commitment 08
No advertising. No ads on you. No ads from anyone.
Bonfiyah is funded by subscriptions, not by advertising. Your data does not feed an ad model — there is no ad model — and we will not introduce one. If we ever change this, it would be a major-version migration with affirmative re-consent.
Operationalised by: the pricing covers the engineering and inference costs at sustainable scale. We don't need ads, and we don't want them.
Commitment 09
Subpoena response policy is published.
If we receive a valid legal request for your data, we will notify you (where legally permitted), give you the opportunity to respond to the request yourself first, and produce only the minimum data legally required. We will not over-comply.
Operationalised by: our published privacy policy includes the response procedure. Most of your data is on your device or your iCloud, where we don't have access to it; the architectural choice is the most material protection.
Commitment 10
Voice signatures are biometric data; we treat them that way.
Under GDPR, BIPA, and similar regimes, voice embeddings are biometric identifiers. Ours are stored encrypted in your iCloud, never sent to a Bonfiyah-controlled backend, and inference runs on-device via Core ML. Delete a speaker, the embedding goes.
Operationalised by: the cohort-aware identity layer in /features/voice-id, including the BIPA-style disclosure in the consent flow for jurisdictions that require it.
Commitment 11
Notifications are local. We do not use APNs.
Proactive Notifications are computed on Bonfiyah's backend from your own cohort's data and delivered to your iPhone as a list of candidates; the app then schedules each one as a local notification. We never use the Apple Push Notification service for proactive pings, which means your notification content is never visible to APNs servers, never logged in our infrastructure, and never handled by a third-party push provider. The pings live entirely between your iPhone and the lock screen.
Operationalised by: the candidate-feed endpoint returns a list, not a payload. The iOS app schedules each candidate via UNUserNotificationCenter.add(_:); the body of the notification — the quote, speaker name, deadline — is constructed and stored on-device. Most apps with "smart notifications" pump body content through APNs; ours doesn't. Read the architecture →
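The on-device scheduling step looks roughly like this. A hedged Swift sketch: `Candidate` and its fields are assumptions standing in for whatever the candidate feed actually returns; the UNUserNotificationCenter calls are the real framework API.

```swift
import UserNotifications

struct Candidate {
    let id: String
    let quote: String
    let speaker: String
    let fireDate: Date
}

func schedule(_ candidate: Candidate) {
    // The body — quote, speaker, deadline — is assembled here,
    // on-device. Nothing in this path transits APNs.
    let content = UNMutableNotificationContent()
    content.title = candidate.speaker
    content.body = candidate.quote

    let trigger = UNTimeIntervalNotificationTrigger(
        timeInterval: max(1, candidate.fireDate.timeIntervalSinceNow),
        repeats: false
    )
    let request = UNNotificationRequest(
        identifier: candidate.id,
        content: content,
        trigger: trigger
    )
    UNUserNotificationCenter.current().add(request)
}
```

The design choice is that the server only ever sees a list of candidates it computed; the human-readable notification body exists nowhere but on the device that displays it.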
Commitment 12
If we ever change any of these, we tell you affirmatively.
Privacy policies usually change quietly, with a footer date and a "we updated our policy" email. We will not do it that way. A material change to any commitment on this page is a major-version event: an in-app re-consent dialog, a public changelog entry, and the old policy preserved alongside the new one.
Operationalised by: a versioned commitments file in the codebase that the app reads at launch; a mismatch surfaces a re-consent flow and refuses to run features behind the changed clauses until you've affirmatively accepted.
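The launch-time check reduces to a comparison like the one below. A minimal Swift sketch under assumed names — the file format, keys, and gating hook are illustrative, not the shipped implementation.

```swift
struct Commitments: Codable {
    let version: Int   // version bundled with this build of the app
}

/// Returns true if gated features may run, false if the re-consent
/// flow must be surfaced first.
func featuresPermitted(bundled: Commitments, acceptedVersion: Int?) -> Bool {
    // If the commitments file shipped with the app is newer than the
    // version the user last affirmatively accepted, the mismatch holds
    // the affected features back until acceptance.
    guard let accepted = acceptedVersion, accepted >= bundled.version else {
        return false
    }
    return true
}
```

A first launch (no accepted version recorded) fails the guard too, so the initial consent flow and later re-consent flows go through the same gate.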