Bonfiyah

Pro AI feature · v3.0

Team Dynamics.

Pick the people. The recordings are already there. The map appears.

A 9-box matrix placing the speakers you select, a team cohesion score, swap recommendations, coverage-gap analysis. What McKinsey charges $400K to do over six weeks, you can do in 90 seconds across the people you've already recorded.

The standard team-diagnostic costs more than it should.

You've sat through this exercise. The consultancy ships in. Six weeks of interviews, surveys, 360s, off-sites. A deck arrives. The deck has a 9-box. The deck has a cohesion score. The deck has phrases like "execution muscle" and "psychological safety vector." The bill is six figures.

The interesting part is that the inputs to the deck were conversations. Conversations the consultancy had to schedule, transcribe, code, normalise, and interpret. Most of which you'd already had on your own — in standups, 1:1s, retros, decision reviews — and lost to the air.

If the inputs are recordings, and you already have the recordings, the diagnostic is a 90-second pass over your own library. That is what Team Dynamics is.

Sample output

Six speakers. One matrix. Three concrete recommendations.

Sample team analysis from a six-person product team across 14 recorded standups, retros, and decision reviews. Each placement on the matrix is anchored to source quotes — tap any speaker to see the moments that produced their position.

Team Dynamics — Product Team A

6 speakers · 14 recordings · cohesion 0.71 · saved May 5

Pro AI

9-Box · Drive × Cohesion

[9-box matrix — horizontal axis: low / mid / high drive; vertical axis: ↑ high cohesion, ↓ low cohesion]

Speakers

  • Sarah · sponsor
  • Priya · tech lead
  • Marcus · PM
  • Dani · IC eng
  • Riley · IC eng (new)
  • Jordan · designer

Cohesion · 0.71

Functional with watch items. Sarah/Priya/Marcus form a tight strategic core. Dani drives execution but operates in a high-output, low-cohesion quadrant. Riley (new) is high-drive but disconnected — typical onboarding curve, but worth monitoring at 60 days.

Coverage gaps

  • No high-cohesion, low-drive voice. The team has no one playing the listening / synthesis role; tension repeatedly resolves by Sarah deciding rather than by reaching consensus.
  • Jordan (design) is in the low-low quadrant — not a competence read; design is not invited into the product decisions where placement would shift. Surfaced from 4 specific moments.

Swap recommendations

  • Pair Riley with Priya for the next 30 days — Priya's tenure plus Riley's drive is the highest-EV onboarding pairing.
  • Move Jordan earlier in product decisions. Three of the four cited moments show Jordan raising a real concern after the decision was made.
  • Decide whether the missing high-cohesion, low-drive voice warrants a hire, or whether Marcus can carry the function by expanding scope.

Generated in roughly 12 seconds across 14 recordings. Saveable as a named team analysis. Re-runs against new recordings as they come in.

How Team Dynamics is computed.

  1. You pick the people.

    Open the speaker library, select the people who form the team, save the selection as a named team. Two to twelve speakers is the workable range. The recordings that include those people become the analysis substrate automatically — no separate dataset to assemble.

  2. Each speaker's behavioural signature is derived.

    Across every recording the speaker appears in, Bonfiyah builds a behavioural profile: airtime distribution, decision-shaping vs. information-asking ratio, follow-through reliability, repair patterns, conflict mode. The profile is not a personality test — it is an aggregation of observable conversational behaviour over your specific recordings of that person.
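As a sketch of what "aggregation of observable behaviour" means, here is a minimal version of a per-speaker signature built from per-recording tallies. The field names and the record schema are illustrative assumptions, not Bonfiyah's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Signature:
    airtime_share: float    # fraction of total speaking time across recordings
    decision_ratio: float   # decision-shaping vs. information-asking utterances
    follow_through: float   # promises kept / promises made

def build_signature(recordings, speaker):
    """Aggregate one speaker's tallies over every recording they appear in.

    `recordings` is a list of dicts mapping speaker name to per-recording
    counts — a hypothetical shape for illustration only."""
    airtime = decisions = questions = kept = made = 0
    total_airtime = 0
    for rec in recordings:
        total_airtime += sum(s["airtime"] for s in rec["speakers"].values())
        stats = rec["speakers"].get(speaker)
        if stats is None:
            continue  # speaker not on this recording
        airtime += stats["airtime"]
        decisions += stats["decision_utterances"]
        questions += stats["question_utterances"]
        kept += stats["promises_kept"]
        made += stats["promises_made"]
    return Signature(
        airtime_share=airtime / total_airtime if total_airtime else 0.0,
        decision_ratio=decisions / max(questions, 1),
        follow_through=kept / made if made else 1.0,
    )
```

The point of the shape: every number in the signature traces back to countable events in specific recordings, which is what lets a placement cite its moments.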

  3. Speakers are placed on the 9-box.

    The default axes are Drive × Cohesion — drive measured from initiation rate, decision-leaning utterances, and Promise Tracker follow-through; cohesion measured from repair attempts, bid responsiveness, and pair-interaction smoothness with other team members. Both axes are configurable; alternative axis pairs (e.g. Strategy × Execution) are available.
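The placement itself is just bucketing two continuous scores into a 3×3 grid. A minimal sketch, with illustrative cut points (Bonfiyah's actual thresholds are not published):

```python
def nine_box_cell(drive, cohesion, cut_low=1/3, cut_high=2/3):
    """Map two 0–1 scores to a (drive_band, cohesion_band) cell,
    where each band is 0 (low), 1 (mid), or 2 (high)."""
    def band(x):
        if x < cut_low:
            return 0
        if x < cut_high:
            return 1
        return 2
    return band(drive), band(cohesion)
```

Swapping the axis pair (e.g. Strategy × Execution) changes how the two input scores are computed, not the bucketing.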

  4. Team cohesion is scored.

    A single team-level cohesion score (0.0 to 1.0) summarises the pair-interaction quality across the matrix. Computed from cross-pair repair rates, bid acceptance, contempt absence (Gottman), and decision-resolution patterns. The score is interpretive, not predictive — and we report what produced it, not just the number.
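Since the score summarises pair-interaction quality, the simplest mental model is a mean over every speaker pair. A flat mean is an assumption here — the real blend of repair rates, bid acceptance, and the other inputs is not published:

```python
from itertools import combinations

def team_cohesion(speakers, pair_quality):
    """Mean pair-interaction quality over every unordered speaker pair.

    `pair_quality(a, b)` returns a 0–1 score for one pair (repair rate,
    bid acceptance, contempt absence, already blended upstream)."""
    pairs = list(combinations(sorted(speakers), 2))
    if not pairs:
        return 0.0
    return round(sum(pair_quality(a, b) for a, b in pairs) / len(pairs), 2)
```

A six-speaker team has 15 pairs, which is why the score moves slowly: one strained pair shifts it, but cannot dominate it.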

  5. Coverage-gap analysis.

    Empty quadrants of the matrix are surfaced as gaps with a read on whether they matter. A team with no high-cohesion-low-drive voice will resolve tension by decision-making rather than consensus; a team with no low-cohesion-high-drive voice may lack the friction needed to surface bad ideas early. The output names the gap and what behaviour it tends to produce.
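Mechanically, a coverage gap is an empty cell of the matrix. Using the (drive_band, cohesion_band) cell convention, a sketch:

```python
def coverage_gaps(placements):
    """Return the empty cells of the 3×3 matrix.

    `placements` maps speaker name -> (drive_band, cohesion_band),
    each band 0 (low), 1 (mid), or 2 (high)."""
    occupied = set(placements.values())
    all_cells = {(d, c) for d in range(3) for c in range(3)}
    return sorted(all_cells - occupied)
```

The interpretive layer — deciding which of those empty cells matters for this team — is where the language-model pass sits, on top of this structural check.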

  6. Swap and pairing recommendations.

    Concrete suggestions for who to pair on what work, who to move earlier in a decision flow, where to consider a hire vs. a scope expansion. Each recommendation cites the specific moments in your recordings that produced it. You can dismiss any recommendation; dismissals re-weight future runs against your team.

  7. Save the analysis. Re-run on demand.

    A team analysis is a saved object — keyed to the team selection and the underlying recording set. Add new recordings, re-run; the diff against the prior analysis is shown. Watch a placement migrate over a quarter; watch the cohesion score recover after a hard conversation.
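The diff between two saved analyses reduces to comparing placements run-over-run. A minimal sketch (the saved-object schema is an assumption):

```python
def placement_diff(previous, current):
    """Speakers whose matrix cell changed between two saved analyses.

    Each argument maps speaker name -> (drive_band, cohesion_band).
    Speakers who joined or left show up with None on one side."""
    moved = {}
    for speaker in set(previous) | set(current):
        if previous.get(speaker) != current.get(speaker):
            moved[speaker] = (previous.get(speaker), current.get(speaker))
    return moved
```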

Who Team Dynamics is for.

Anyone who runs a team and records its meetings.

Engineering and product managers

Pre-quarter team scan.

Run before you set the next quarter's team structure. Surface coverage gaps before you assign work, not after the work surfaces them. The 9-box you would have paid an org-design consultant to produce, generated from the recordings you've already made.

Founders

Co-founder and exec-team diagnostic.

Two-to-five-person founding teams are where most cohesion problems live. Recorded board updates, strategy sessions, and 1:1s are the substrate. The cohesion trajectory over time is more useful than any single point read.

Executive coaches

Per-team artefact for engagement scoping.

When a CEO hires you to "fix the exec team," you have to discover what's actually broken in the first three sessions. Team Dynamics gives you a pre-engagement read keyed to actual recordings — assuming the engagement consents to using them.

Consulting partners

Replace the survey-based 9-box.

If your firm currently produces a 9-box from surveys and 360s, and the client has been recording their meetings, this is a faster and more behaviourally grounded substrate. Subject to the client's consent posture, it can be a meaningful part of the diagnostic.

Consent

The recordings must have been made properly.

Team Dynamics is a synthesis on top of recordings you already have. The consent posture inherits from the recordings themselves. A team analysis can only include recordings whose consent state is at least two_party_aware — meaning every speaker on the recording knew the recording was happening.

Recordings whose consent state is unknown or partial are excluded automatically. The analysis tells you which recordings were excluded and why. You are not required to take any action; the gate enforces itself.
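The "at least two_party_aware" gate implies an ordering over consent states. A sketch of the self-enforcing filter — the ranking values and the `split_by_consent` helper are illustrative, with only the three state names taken from the product copy:

```python
CONSENT_RANK = {"unknown": 0, "partial": 1, "two_party_aware": 2}

def split_by_consent(recordings, minimum="two_party_aware"):
    """Partition recordings into (included, excluded-with-reason)
    by whether their consent state meets the minimum rank."""
    included, excluded = [], []
    for rec in recordings:
        state = rec.get("consent_state", "unknown")
        if CONSENT_RANK.get(state, 0) >= CONSENT_RANK[minimum]:
            included.append(rec)
        else:
            excluded.append((rec["id"], state))
    return included, excluded
```

Because the excluded list carries the reason, the analysis can report which recordings were dropped and why without any action from you.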

If a teammate later revokes consent on a recording, the team analyses that included it are flagged as needing a re-run, and the prior analysis is invalidated cleanly.

A note on the McKinsey line.

The headline says "What McKinsey charges $400K to do over six weeks, you can do in 90 seconds." We mean this in a specific way.

A real org-design engagement is not just a 9-box. It is interview design, stakeholder management, executive education, change management, and an actual consultant who is responsible for being right. Team Dynamics is none of those things. It is the artefact at the centre of the deck — the matrix and the read.

What we are claiming is that the artefact, on its own, costs almost nothing to produce when you have the recordings. What you do with it — whether you bring in someone to run the conversation around it, whether you use it as input to a hiring decision, whether you treat it as a season-over-season trend — is up to you. We are explicit about the line between what the tool does and what a person should do.

Privacy

Where the inference runs.

The synthesis runs on a Bonfiyah-managed inference endpoint, processed in-memory, never logged or retained for training. Identical posture to Promise Tracker, Story Mode, and Project Context. Your transcripts do not become a row in someone else's training set.

The team analysis itself is stored in your project layer, encrypted, synced through your private iCloud only. A saved team is a private object — there is no shared team workspace by design. If you want to share an analysis with a team member, export to PDF and decide for yourself who to send it to.

FAQ

How many recordings do I need before this is useful?

Roughly five recordings spanning the team selection, with each member appearing in at least two. Below that, the behavioural-signature pass is too thin to anchor confident placements. Five to fifteen is the sweet spot. Beyond fifteen, the analysis stays useful but the marginal information per recording drops.
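That threshold is concrete enough to state as a check. A sketch, assuming each recording carries its speaker list:

```python
def enough_signal(recordings, team, min_recordings=5, min_per_member=2):
    """Apply the FAQ rule: at least five recordings involving the team,
    with each team member appearing in at least two of them."""
    relevant = [r for r in recordings if set(r["speakers"]) & set(team)]
    if len(relevant) < min_recordings:
        return False
    return all(
        sum(1 for r in relevant if member in r["speakers"]) >= min_per_member
        for member in team
    )
```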

Are the axes really configurable?

Yes. Drive × Cohesion is the default because it's the most useful pair for most teams, but you can pick from a small library of evidence-supported axis pairs (Strategy × Execution, Vision × Reality-test, Initiation × Repair). We resisted "build your own axis" because half-thought-through axes produce confidently wrong placements, which is worse than no analysis.

Can I export the matrix?

PDF export carries the matrix, the cohesion score, the coverage-gap read, and the recommendations — with cited transcript moments per claim. Markdown export gives you the same information as plain text for pasting into Notion or Linear. There is no PowerPoint export and we don't plan to add one.

Can I share an analysis with my team?

You can export to PDF and send it. You cannot share a live team-dynamics view with another Bonfiyah user — there is no shared workspace, by design. The reason is that a 9-box of a team, viewable by every member, becomes a different kind of artefact: it stops being a diagnostic and starts being a performance review. We didn't build the affordance because we didn't want to build the failure mode.

What if a placement is obviously wrong?

Tap the speaker, see the cited moments that produced their placement. Often "obviously wrong" turns out to mean "behaved differently than I expected on five specific occasions" — which is the value of the tool. If the cited moments don't actually support the placement (it happens), flag the placement; the flag re-weights future runs against your team.

Does this replace 360 reviews?

No, and we wouldn't claim it does. A 360 captures perceptions, which are part of the system. Team Dynamics captures observable conversational behaviour, which is another part. The two are complementary — and Team Dynamics is much cheaper to run, which is the point.

See a sample team analysis

The full sample analysis above as a PDF — the matrix, the cohesion score, the coverage-gap read, the swap recommendations, the cited moments. Useful even if you never install.

No spam. We use ConvertKit. See our privacy policy.