Bringing structure, data, and judgement to BTSE's people strategy
Operating philosophy I would bring to this role
In the closing section, after the answers to the case questions, I sketch out a 90-day plan for how I would translate this operating philosophy into the first three months in role.
01 · Organizational design & resource allocation
Two questions on how to size, shape, and reshape teams as BTSE scales — without breaking what is already working.
Q 1.1
How do you approach staffing and org chart design, including assessing team needs and identifying over- or understaffing?
Five-lens diagnostic
Sequenced from context to capacity. Each lens answers a different question; together they form a defensible picture of where each team sits.
- Lens 1: Mandate. Does this team's mandate ladder to a company OKR? What is intentional vs path-dependent (founders, acquisitions, past leaders)? Much of any org chart persists for legacy reasons, not logic.
- Lens 2: Cross-team flow. Output is not only generated inside teams, but also across teams and departments. Map the end-to-end flow for the 3–4 outputs that matter most. Cross-boundary friction is where most "team capacity" problems actually live.
- Lens 3: Workload. Throughput per FTE, cycle-time trend, backlog age, the leader's own approval queue, and a one-week time-use sample (% strategic vs reactive).
- Lens 4: Org shape. Span of control (target 5–9 directs), layers (flag >5 between IC and CEO at BTSE's size), manager-to-IC ratio, single points of failure.
- Lens 5: Capability. Leader and team capability vs forward demand. Same role, different person — materially different output. The diagnosis has to name this honestly or it isn't useful.
The process lens, illustrated
Why Lens 2 earns its place. Often, "team capacity" problems show up at the boundary, not inside the box.
(Flow: leads in → BD → compliance review → customer out.)
If BD is "understaffed," the answer might be one more BD — or it might be that compliance review is the real constraint and BD only looks slow because of it. The lens decides.
Process
- Map. One row per role: function, cost, span, manager, charter, OKR linkage, key cross-team handoffs. Pulled from people / project / finance systems where they exist (see closing on data infrastructure).
- Listen. 1:1 with each department head (45 min): "where would one more person change the curve, where would one fewer not be missed in 90 days, and where in the flow with other teams do you lose the most time?"
- Triangulate. Cross-check the leader's view against workload, shape, and the cross-team flow. Disagreements are the most useful information.
- Co-design moves. Each recommendation has a name, a date, and change management measures attached. A re-org without change management is what damages morale, not the re-org itself.
Output — staffing & structure diagnostic by department
A view I would maintain quarterly with the COO. Each team gets both a staffing recommendation (capacity adds, reductions, redeployments) and, where relevant, a structural recommendation (process redesign, span/layer changes, scope shifts, automation). Sample data only.
Go-to-Market
| Team | Mandate | Workload | Org shape | Move |
|---|---|---|---|---|
| Institutional BD | Build & Scale | Under capacity (coverage 1.8×) | Wide span (1:11) | Hire 3 BDs and 1 regional lead; split book by region. |
| Marketing — Content | Run & Optimize | On plan | Top-heavy (span 1:3) | Flatten one layer; reallocate one role into higher-leverage growth work. |
Engineering
| Team | Mandate | Workload | Org shape | Move |
|---|---|---|---|---|
| Trading Engineering | Build & Scale | Stretched (cycle +18%) | Healthy (1:7) | Add 2 SREs for on-call relief; no additional IC headcount. |
| Infrastructure | Build & Scale | Stretched (incidents +25% YoY) | SPOF on lead architect | Hire 1 senior platform engineer for redundancy; invest in observability tooling and AI-assisted incident triage. |
Operations
| Team | Mandate | Workload | Org shape | Move |
|---|---|---|---|---|
| People Ops | Run & Optimize | Overloaded (SLA 78%) | Gap — no APAC HRBP | Invest in tooling first; add 1 HRBP for APAC. Not raw headcount growth. |
| Customer Support — Tier 1 | Run & Optimize | Overloaded (ticket vol +35% YoY) | Wide span; no specialisation | Managed reduction over 2 quarters as AI agent-assist deploys: −8 FTE on L1, retain 2 senior for escalations. |
Compliance
| Team | Mandate | Workload | Org shape | Move |
|---|---|---|---|---|
| KYC Ops | Build & Scale | Stretched (volume +40% YoY) | Healthy (1:6) | Automate L1 review; hire 1 senior reviewer for escalations. |
Finance
| Team | Mandate | Workload | Org shape | Move |
|---|---|---|---|---|
| AR / Billing | Run & Optimize | On plan | SPOF on lead role | Cross-train backup; hire 1 mid-level for resilience. |
Q 1.2
The company wants to cut 15% labor costs but maintain output. How would you analyze team productivity and propose changes without damaging morale?
The five levers — what each actually does
Only two of these cut labor cost; the others create capacity. Every plan starts from this honest base.
| Lever | Speed | Effect | Morale risk |
|---|---|---|---|
| Managed reduction | Fast | Cuts cost | High if poorly executed |
| Hiring freeze + attrition | Slow | Cuts cost | Low–medium (boiling-frog risk if uncertainty drags) |
| Performance management | Medium | Cuts cost; lifts team | Often morale-positive — top performers resent weak ones being protected |
| Re-allocation | Medium | Creates capacity | Low |
| AI & automation | Slow–medium | Creates capacity | Low if framed as augmentation |
How I would diagnose productivity — two layers
- Team layer. KPI achievement vs goals over 2–4 cycles, plus cost per FTE and per unit of output. Pulled from existing systems; light manual collection only where data is missing. Place each team on the matrix below.
- Individual layer (within affected teams). Performance vs goals + strategic fit. Used to make per-person moves defensible.
Team-level matrix (Build/Run mandate × productivity): what should happen to each team as a whole.
From team to individual
For teams in the "process / leadership fix" or "restructure" quadrants, per-person moves have to be defensible. Two inputs, mirroring the team layer: strategic fit (does the role / skills / seniority match the team's forward direction) on the Y, and performance (KPI vs goals, behaviour, impact) on the X — same axis ordering as the team matrix.
Individual-level matrix (strategic fit × performance): what should happen to each person within the team.
From analysis to execution
No flat 15%. Growth-engine teams may need to grow. Run & Optimize teams may deliver 25%+. The portfolio averages to 15% — the per-team number is set by quadrant placement, not the corporate target.
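A worked sketch of that portfolio arithmetic, with hypothetical team costs and quadrant-driven targets (none of these figures are real BTSE data): the constraint is that the cost-weighted reductions sum to roughly 15%, while individual teams range from net investment to 35%.

```python
# Illustrative only: hypothetical labor costs ($M) and per-team targets.
# Targets are set by quadrant placement; the portfolio, not each team,
# must land on the corporate 15%.
teams = {
    "Institutional BD": (4.0, -0.05),  # growth engine: net investment
    "Trading Eng":      (6.0,  0.10),
    "Customer Support": (3.0,  0.35),  # Run & Optimize + automation
    "People Ops":       (1.5,  0.25),
    "KYC Ops":          (2.5,  0.30),
}

total_cost = sum(cost for cost, _ in teams.values())
saved = sum(cost * cut for cost, cut in teams.values())
print(f"portfolio reduction: {saved / total_cost:.1%}")  # ~15.1%
```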
Surgical (managed reduction + performance management, 1–2 quarters) for teams whose roles are no longer strategic. Gradual (hiring freeze + attrition + automation, 3–6 quarters) for any mature, stable function with predictable workload — back-office, but equally platform engineering, infrastructure ops, established support functions.
At BTSE's scale, attrition alone is unlikely to deliver fast enough. Realistic blend: surgical-heavy, gradual-supporting.
Narrative. "Structural fitness for the next phase," not "we are in trouble." One CEO / COO communication; answers to the questions employees actually have.
Packages. Exit / reallocation for those leaving; upskill / reskill for those staying. Survivor disengagement costs more than generosity.
Monitoring. Leading indicators tracked for 2 quarters (Q3.1). Damage shows months 3–6, not week 1.
What to avoid. Inconsistency — different rules for different teams without explanation. Silence — leadership going dark for 2+ weeks during the process. Multiple rounds — far worse than one decisive move.
02 · Performance tracking & metrics
Three questions on the measurement layer — what to track, by role, and how to monitor without alienating the leaders being measured. Q1.1 and Q1.2 produce one-time diagnostics; the trackers here are the ongoing layer that operationalises them, so the insight doesn't go stale and the COO sees movement against the plan.
Q 2.1
After your conclusion in #1, what steps would you take to track this? What trackers would you build? What are the high-level parameters or metrics you'd benchmark?
The seven trackers
The "Source" column shows which Q1 diagnostic each tracker operationalises. Trackers without a source are independent monitoring layers that don't map back to a one-time Q1 output.
| Tracker | Source | Core question | Headline metrics | Cadence | Audience |
|---|---|---|---|---|---|
| 1. Workforce plan | Q1.1 staffing | Are we hiring the right shape, in the right place, on time, on budget? | HC plan vs actual; vacancy days; cost per FTE; office split | Monthly | COO, CFO, dept heads |
| 2. Org shape | Q1.1 Lens 4 | Is the structure itself healthy? | Span of control; layers; manager-to-IC ratio; SPOF roles | Quarterly | COO, dept heads |
| 3. Goal achievement (OKRs) | — | Did teams deliver against the goals they committed to? | OKR completion %; on-track / at-risk / off-track; missed-by-time vs missed-by-scope | Quarterly + mid-Q checkpoint | CEO, COO, dept heads |
| 4. Productivity & efficiency | Q1.2 productivity | Is each function producing more output per unit of cost or effort, over time? | Output trend per FTE; cost per unit of output; function-specific (Q2.2) | Monthly | COO, dept heads |
| 5. Hiring funnel | — | Is the TA team's pipeline efficient and high quality? | Time-to-hire; pass-through; offer accept; quality-of-hire at 6 mo; source-of-hire | Monthly | COO, head of TA |
| 6. Attrition & engagement | Q1.2 morale | Are we losing the right people for the right reasons? | Regrettable vs total; tenure-at-exit; eNPS; exit-driver categories; manager NPS | Monthly + biannual deep dive | COO, dept heads, HR |
| 7. Compensation fairness | — | Are we paying market, paying fairly, paying for performance? | Comparison vs benchmark; band-placement distribution; pay-rating curve; gender/region pay parity | Biannual | COO, CFO, HR |
Tracker 3 (goal achievement) vs Tracker 4 (productivity): a team can hit all OKRs while being unproductive (over-resourced, soft targets), or be highly productive while missing OKRs (poorly set goals). Both are needed.
Operating cadence
- Weekly 1:1 with COO — exception report only.
- Bi-weekly with department heads — common dashboard view, short Start / Stop / Continue per owner. Layered into existing forums where possible — never a new meeting for a new dashboard.
- Monthly — workforce plan + productivity review.
- Quarterly — org shape + OKR review with the COO.
The data layer — likely a real project, not just plumbing
None of these trackers work without a canonical data view. Three workstreams:
- Connect what exists — HRIS, project tools, finance / ERP, engagement surveys.
- Build what's missing — capability profiles, manager time-use, qualitative project status. Lightweight templates and disciplined manual input where automation isn't realistic.
- Layer AI as the analysis surface — leaders ask natural-language questions of their own data ("show me my at-risk OKRs and cycle-time trend") instead of waiting for a custom report.
I would scope this as an explicit foundational project in the first 90 days — see closing.
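As a minimal sketch of what "one canonical data view" means in practice: the file names, columns, and at-risk filter below are hypothetical stand-ins, and a real natural-language layer would sit on top of a view like this rather than replace it.

```python
import pandas as pd

# Minimal sketch of the canonical view; file names and columns are
# hypothetical stand-ins for HRIS / project / finance exports.
hris    = pd.read_csv("hris_headcount.csv")   # team, fte, attrition_rate
project = pd.read_csv("project_metrics.csv")  # team, okr_status, cycle_time_days
finance = pd.read_csv("finance_costs.csv")    # team, labor_cost_usd

canonical = (
    hris.merge(project, on="team", how="left")
        .merge(finance, on="team", how="left")
)
canonical["cost_per_fte"] = canonical["labor_cost_usd"] / canonical["fte"]

def at_risk_okrs(view: pd.DataFrame) -> pd.DataFrame:
    """Stand-in for the natural-language layer: 'show me my at-risk OKRs
    and cycle-time trend' reduces to a filter over the canonical view."""
    return view.loc[view["okr_status"] == "at-risk",
                    ["team", "okr_status", "cycle_time_days"]]
```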
Q 2.2
Tell me KPI / OKR / productivity metrics for: an engineer, a product manager, an accountant, a business operations associate, a BD/Sales.
Engineer
| Layer | Metric | Why it matters | Anti-pattern |
|---|---|---|---|
| Delivery | PR cycle time; deployment frequency; lead time for changes; code-review turnaround | DORA-aligned — the industry standard for engineering velocity. Sustainable rhythm. | Lines of code; commit count; story points without cycle context |
| Impact | Change-failure rate; MTTR; defect-escape rate; features shipped meeting success criteria; SLO / uptime contribution | Connects engineering work to user / business outcomes — fast delivery of broken software is not impact | "Tickets closed" without quality measure |
| Behaviour | Mentorship & coaching; design-review quality (peer-rated); cross-team unblocking; quality of incident learnings shared | Multipliers — the people who make engineering teams 2× better | Stack-ranking individuals on a single composite score |
Cadence: delivery weekly (auto from project tools); impact quarterly with PM & eng manager; behaviour reviewed half-yearly via 360s.
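A minimal sketch of how the delivery-layer metrics above (PR cycle time, deployment frequency) could be computed automatically; the record format is a hypothetical export, not any specific vendor's API.

```python
from datetime import datetime, timedelta

# Hypothetical PR and deploy records exported from project tooling.
prs = [
    {"opened": datetime(2025, 3, 3, 9), "merged": datetime(2025, 3, 4, 15)},
    {"opened": datetime(2025, 3, 5, 10), "merged": datetime(2025, 3, 5, 17)},
]
deploys = [datetime(2025, 3, 4), datetime(2025, 3, 6), datetime(2025, 3, 7)]

cycle_hours = [(pr["merged"] - pr["opened"]) / timedelta(hours=1) for pr in prs]
avg_cycle = sum(cycle_hours) / len(cycle_hours)   # 18.5h

window_days = max((max(deploys) - min(deploys)).days, 1)
deploy_freq = len(deploys) / window_days          # 1.0 per day over the window

print(f"avg PR cycle time {avg_cycle:.1f}h; deploy frequency {deploy_freq:.2f}/day")
```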
Product manager
| Layer | Metric | Why it matters | Anti-pattern |
|---|---|---|---|
| Delivery | PRDs shipped on time; roadmap commit accuracy; decision turnaround; hypothesis kill-rate | The PM is the team's clock — slow PMs slow the whole product. Killing bad ideas matters as much as shipping good ones. | Feature count (volume ≠ value) |
| Impact | OKR achievement on owned features; adoption / retention / activation lifts; revenue or usage attributable to launches; NPS on owned area | The metric the company actually pays for — measured at the user / business level, not the launch level | Vanity metrics with no segmentation |
| Behaviour | Engineering & design satisfaction (quarterly survey); stakeholder alignment quality; PRD clarity | Bad PMs frustrate strong engineers — direct attrition risk | Asking the PM to self-rate stakeholder satisfaction |
Accountant
| Layer | Metric | Why it matters | Anti-pattern |
|---|---|---|---|
| Delivery | Month-end close days; reconciliation completeness; on-time regulatory filings; AR / AP cycle time | The audit standard; close speed is a known maturity signal | Hours worked |
| Impact | Audit findings (count & severity); cost saved through implemented automations; working-capital improvement (DSO / DPO / CCC) | Strong accountants prevent risk and free cash — this is where they pay for themselves | "Tickets closed" — accounting is not a queue function |
| Behaviour | Cross-team collaboration rating; process documentation quality; quality of risk flags raised | Accountants who say nothing are usually a problem; healthy ones surface risk early | Rewarding "no surprises" — incentivises hiding issues |
Business operations associate
| Layer | Metric | Why it matters | Anti-pattern |
|---|---|---|---|
| Delivery | Project completion rate vs plan; SLA on cross-functional requests; analyses delivered with a recommended decision | Biz Ops's delivery is decision-making capacity, not deck production | Counting decks produced |
| Impact | $ or hours saved through implemented improvements; OKR contribution on cross-team initiatives; decisions taken from their work | The right question is not "did you do the work" but "did the company move because of you" | Self-reported "impact" without an attribution anchor |
| Behaviour | Stakeholder NPS; willingness to take the ambiguous brief; rigour in pushing back on weak hypotheses | The role lives in ambiguity by definition | Punishing the person who pushed back on a bad idea |
BD / Sales
| Layer | Metric | Why it matters | Anti-pattern |
|---|---|---|---|
| Delivery | Qualified meetings within ICP; pipeline coverage (3–4× target); outreach activity within ICP. Split B2B (institutional) vs B2C (retail) — different funnels | Activity is a leading indicator — only if it's the right activity, on the right segment | Raw call/email volume detached from ICP and segment |
| Impact | Quota attainment %; volume / revenue signed; win rate by stage; time-to-first-trade; net retention. By region and by segment | What the business pays for. Quota attainment is the canonical headline; the rest qualifies the quality of attainment. | Overweighting one quarter over trailing 4 |
| Behaviour | Brand-representation score from call samples (Q5.1) — incl. regulatory phrasing accuracy by jurisdiction; deal-review quality; CRM hygiene; cross-region collaboration | For BTSE in crypto, off-script positioning in the wrong jurisdiction is regulatory exposure, not just a brand miss | Top performers exempted from CRM / compliance discipline — corrodes the team and the licence |
Every role gets 3–6 KPIs, not 12. Beyond that, no one optimises for any of them. The tables above are menus per layer, not checklists — each role's actual scorecard picks 3–6 from across the three layers. Each KPI must have an owner, a cadence, a target, and a "so what" decision it triggers when it goes red. KPIs without a decision attached are dashboard art.
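A minimal sketch of that scorecard contract as a data structure. The fields mirror the rule above (owner, cadence, target, decision-when-red); the sample entries are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    owner: str
    cadence: str       # "weekly" / "monthly" / "quarterly"
    target: float
    red_decision: str  # the "so what" triggered when the metric goes red

# A role's scorecard is 3–6 of these, drawn from across the three layers.
pm_scorecard = [
    KPI("OKR achievement on owned features", "PM", "quarterly", 0.80,
        "re-scope the roadmap with the eng lead"),
    KPI("decision turnaround (days)", "PM", "weekly", 3.0,
        "escalate standing blockers to the COO Office"),
    KPI("eng & design satisfaction survey", "PM", "quarterly", 4.0,
        "facilitated working session with the squad"),
]
```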
Q 2.3
How to monitor every leader's KPIs without every leader disliking you?
Four operating rules
- Co-author, don't impose. The first KPI conversation with every leader is a working session. Quarterly office hours after that — where leaders can retire, change, or argue for any metric — keep the dashboard a living tool, not a fixed one. Imposed metrics generate compliance, not improvement.
- Outcome over activity; tiered visibility. Outcome metrics feel like accountability; activity metrics feel like surveillance. Across the leadership tier, share high-level outcomes (department OKR achievement, direction-of-travel on key business metrics) — that breaks data silos, aligns department goals to company strategy, and turns the dashboard from COO oversight into peer accountability. Keep granular and sensitive data private to the leader and COO (individual performance, named cases, comp). The aim is constructive transparency, not a competitive scoreboard.
- Audit yourself first. The COO Office's KPIs (forecast accuracy, time-to-resolution on escalations, leader satisfaction with the office) sit at the top of the page. Reciprocity matters.
- Be the first call when a number goes red. "I noticed your eNPS dropped 8 points; here are three hypotheses — 30 min?" — not "explain yourself in tomorrow's review."
Invest in leaders, not only in the dashboard
A dashboard only works if leaders know how to use it. In parallel:
- Goal-setting — many leaders set inputs, not outcomes. Coaching here changes the data the dashboard sees.
- Coaching skill — KPIs become useful only when they fuel real coaching conversations.
- Being a coachee — leaders modelling openness with their leader is the only way the cultural change travels down.
BTSE has had a KPI cascade since 2024 (CEO/COO → department heads → teams), and HR owns the performance review process. The trackers and rituals proposed here are designed to build on and evolve that foundation — sharpening what already works, partnering with HR on the diagnostic side, and adjusting only where there is a real gap. Leaders and their teams should experience this as added clarity, not as everything being changed at once. Consultant, not auditor: the day a department head sees me on their calendar and feels relief instead of suspicion, the function is working.
03 · Diagnosing performance issues
Three scenarios. Each one tests whether you can resist the instinct to act before you understand.
Q 3.1
You notice high turnover in one department. Walk me through how you would diagnose root causes and propose immediate and long-term solutions.
Five-step diagnosis
- Quantify. Annualised attrition, regrettable share, tenure-at-exit, vs company & industry benchmark. Confirm there is actually a problem and how big it is (a worked sample follows this list).
- Pattern. Cut by sub-team, manager, level, gender, geography, hire vintage, performance rating. Most "department-wide" turnover is concentrated.
- Listen. Exit interviews; stay interviews across a representative cross-section (performance levels, tenure, sub-team) so the signal isn't biased by who you talked to; confidential pulse survey to the whole department.
- Triangulate. Cross-reference with manager 360 scores, comp benchmarking on roles in question, workload signals, recent re-orgs, and — importantly — the cross-department interactions. Many "team" problems are actually friction at the touchpoints with another team.
- Categorise root cause. Manager / Compensation / Career path / Workload & burnout / Mission alignment / Hiring profile mismatch / Cultural friction / Process friction with another team / Scope of work itself. The action plan is different for each.
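A worked sample of step 1 (Quantify), with hypothetical numbers; the 1.5× benchmark threshold is an illustrative rule of thumb, not a fixed standard.

```python
# All inputs hypothetical. Annualise a 6-month window and check the share
# of regrettable exits before treating the number as a problem.
exits_6mo, regrettable, avg_headcount = 9, 6, 40

annualised = exits_6mo / avg_headcount * (12 / 6)   # 45%
regrettable_share = regrettable / exits_6mo         # ~67%

company_bm, industry_bm = 0.18, 0.22                # benchmark rates
flag = annualised > 1.5 * max(company_bm, industry_bm)
print(f"annualised {annualised:.0%}, regrettable {regrettable_share:.0%}, flag={flag}")
```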
Decision tree → action
Action menu — match move to diagnosis
- Manager: coach, redirect, or replace
- Comp & career: benchmark review; ladder clarity (IC vs mgmt)
- Hiring & onboarding: JD, interview loop, 30/60/90 redesign
- Workload: capacity model + targeted capacity adds
- Process: redesign within the team and at touchpoints with peer teams
- Shape: restructure internally; reduce or shed scope where redundant
- Leadership: replace the department head if it's the demonstrated root cause
Sequencing
- Anonymous pulse survey within 5 days
- COO / HR skip-level 1:1s prioritised by flight-risk signal and role criticality (institutional knowledge, hard-to-replace expertise) — not by performance rating alone
- Retention conversations — confirm path, recognise contribution; comp only where the diagnosis says so
- Pause any structural changes pending diagnosis
- Apply the matched move from the menu
- Track for 2 quarters — most morale signal appears months 3–6
- Close the loop — communicate what changed, what didn't, why
Q 3.2
A department head Joe hired 7 extra headcount 9 months ago. However, his output is still similar. What are the possible reasons? How would you explore this? What metrics would you track? How would you improve this?
Hypotheses, grouped
| Type | Hypothesis | What I'd expect in data |
|---|---|---|
| System | The bottleneck isn't headcount. It's somewhere else: Joe's approval queue, an upstream team's delay, unclear priorities, or the team's own process. Or — the new hires are creating coordination overhead (more handoffs, more meetings) instead of relief. | Cycle time flat or rising; work piling up in queues; new hires idle while waiting on Joe; meeting load up |
| Leadership | Goals unclear, weak performance management (the most common cause), wrong hiring profile, or the workflow was never redesigned — bodies just added on top of an unchanged process. | Team members describe the mandate differently; long-tenured weak performers carrying low loads; senior-heavy team on junior work |
| Measurement | Output is actually up, the metric isn't capturing it. | Joe gives examples of new work not on the dashboard |
| Demand | Less real work to do — not a delivery issue. | Backlog age dropping, inbound requests falling |
Investigation plan
- Open conversation with Joe. "Walk me through the original headcount case. What changed?" Joe often knows more than the data shows.
- Process scan. Map the team's workflow end-to-end. Where does work pile up between steps? Where are people waiting? Don't assume where the bottleneck is — let the data show it.
- Performance distribution. Look at how output is spread across the team. Are a few people carrying disproportionately more than others? A long-tail pattern points to uneven capability, regardless of headcount.
- Cross-functional view. Interview the 2–3 peer teams that depend on Joe's. Their experience is valuable input.
- Onboarding audit for the 7 hires — time-to-productivity, 30/60/90 completion.
Metrics on the tracker
- Throughput per FTE; cycle time; WIP age
- Queue length at each workflow step; upstream wait time
- Skill-mix vs work-mix; senior:junior ratio
- Rating distribution; long-tail share (see the sketch below)
- Inbound volume; backlog growth
- Time-to-productivity vs benchmark
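A worked sample of the distribution check, with hypothetical per-person output; the point is the shape, not the units.

```python
# Hypothetical quarterly output per person on Joe's team (units arbitrary).
output = sorted([120, 110, 95, 40, 35, 30, 25, 20, 15, 10], reverse=True)

top3_share = sum(output[:3]) / sum(output)
print(f"top 3 of {len(output)} carry {top3_share:.0%} of output")  # 65%
# A long tail like this points at capability mix or avoided performance
# management rather than headcount; adding people would not move the total.
```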
Improvement levers — match to diagnosis
- Decisions all funnel through one person → delegation coaching; deputy or RACI redesign so decisions don't all queue on the leader.
- Performance management has been avoided → COO-backed coaching for Joe; HR partner for the difficult conversations. Hiring is not a substitute.
- Workflow has bottlenecks (from the process scan) → adjust the process at the points where work piles up, before adding capacity. Adding people upstream of a bottleneck only grows the queue in front of it.
- Skill mix doesn't match the work → redeploy or cross-train; pause further hiring until the mix is corrected.
- Demand fell → repurpose the team. Don't fight the market.
Q 3.3
Three department leaders are all saying Joe is too slow. What do you do next?
- Take it seriously, don't act yet. Three independent signals are meaningful. But "slow" is a subjective word — slow on what, vs what expectation, against what SLA?
- Get specific with each leader. 30-min 1:1 with each of the three. "Tell me the last three concrete examples." Specifics reveal whether this is a process issue, capacity issue, or capability issue.
- Understand the inter-departmental system. The 4 departments (Joe + 3 peers) form a system — the issue may be how they interact (handoffs, dependencies, intake quality), not Joe alone.
- Hear Joe's view — on the same situations. Take the specific examples the three leaders cited and ask Joe about each, without framing them as complaints. "Walk me through how the [specific project / decision / handoff] played out." Joe may describe the same events very differently — unclear briefs, shifting priorities, dependencies he can't control. Comparing both sides on the same facts is more useful than asking Joe to defend himself against a label. Any framing-back to Joe, if needed, comes downstream: after the diagnosis, and from the COO.
- Verify with data. Joe's KPIs vs goals; cycle time on Joe's deliverables to peer teams; SLA breach rate on cross-functional requests; intake quality (are the asks well-specified to begin with?).
- Categorise the root cause and match the measure to it:
- Process within Joe's department → standardise intake, prioritisation, internal workflow
- Process between departments → SLA agreements at the touchpoints; intake-quality requirements from peers
- Capacity → either reduce commitments or add capacity (Q1.1 framework)
- Capability → coaching plan with the COO; if not closeable in 90 days, role change
I would not take the three peers' framing at face value, summon Joe to a meeting, and tell him "you're slow." That is one of the fastest ways to lose a competent leader and learn nothing about the actual problem. The job is to bring evidence and structure to a complaint — not amplify it.
04 · Interpersonal & team dynamics
Q 4.1
Joe and Jane do not get along or cooperate efficiently — but both are good at their job and need to work together. What do you do?
Most "Joe and Jane" problems are stuck between storming (open friction) and norming (no agreed way to work together). The objective is not to skip storming — it's to move through it deliberately. Two strong professionals don't have to like each other; they have to know how to work together.
Five-step intervention
- 1:1 with each, separately. Surface the friction — style, values, credit / territory, a specific past incident. Often one holds the bigger grievance.
- Joint, facilitated session — interface design, not therapy. Outputs: shared definition of the work overlap; RACI on overlapping decisions; rules of engagement (no public disagreements; escalate to me before each other's manager). Make explicit that differences in style are normal — a broken interface for the company is not.
- Structural moves to reduce unnecessary collaboration. Most "Joe vs Jane" friction is over-collaboration on work that should have a clean interface and async cadence.
- Aligned shared OKR. One outcome they both win or lose together. Forces practical cooperation without pretending to like each other.
- Manager coaching for both. Each gets specific feedback on how their behaviour is read by the other. Usually neither knows.
Concrete example — what the RACI looks like
Worked example: Joe leads BD/commercial, Jane leads Customer Onboarding / Ops. They share an interface around enterprise pricing exceptions and integration scope. The most common "Joe vs Jane" failure mode is both arguing on every deal.
| Decision | Responsible | Accountable | Consulted | Informed |
|---|---|---|---|---|
| Standard pricing within band | Joe (BD) | Joe | — | Jane |
| Pricing exception >X% off list | Joe | CFO / COO | Jane (delivery cost) | — |
| Custom integration scope & SLA | Jane (Ops) | Jane | Joe (commercial impact) | — |
| Go-live date commitment to client | Joe + Jane jointly | Jane | — | COO Office |
Once written down, most of the disagreement evaporates — they were arguing about decision rights, not about the deal.
If after 60 days the interface is still failing, the cost is real and growing. With documented evidence, bring a structural recommendation to the COO: re-org so they don't share an interface, or — rarely — accept that one needs to move. Two strong-but-incompatible operators on the same surface is a structural problem, not a people problem.
Anti-patterns
- Group hug. Forced public reconciliation. Adults find it insulting.
- Picking a side, even quietly. The loser becomes a flight risk; the winner becomes harder to manage.
- Ignoring it. "They're both senior, they'll figure it out." They won't — and the team beneath them is already paying the cost.
05 · Global team alignment & brand consistency
Q 5.1
30 BD/Sales are hired around the world, different timezones and languages. How can they all represent the quality of our brand and product offering correctly?
The three-layer system
Layer 1: Knowledge
- Single source-of-truth library — positioning, ICP cards, competitive battle cards, product one-pagers
- Localised content — translated and culturally adapted, not just literally translated
- Compliance overlay per region (critical for crypto)
Layer 2: Process & people
- Onboarding bootcamp + 90-day certification, redone yearly
- Standardised training on company values and product — values are part of brand, not separate from it
- One CRM, one set of pipeline stages, one qualification framework
- Pricing rules + a deal desk for non-standard requests
- Regional pod structure — APAC, EMEA, ME, LATAM — each with a regional lead acting as quality gatekeeper
- Time-zone aware coverage for follow-the-sun on enterprise prospects
Layer 3: Quality monitoring
- Sampled call coaching — every BD has 1–2 calls per month reviewed against a consistent rubric
- AI-assisted call analysis — talk-ratio, ICP fit, key topics covered, objection handling, compliance phrasing
- Win / loss analysis loop — why did we win, why did we lose, fed back to product and marketing
- Customer NPS & CSAT on the BD experience itself
Two often-missed elements
Quality at scale follows the regional lead. Each major region should have a regional team leader with strong personal alignment to the brand and quality bar. They have daily contact with the BD/Sales staff in their region — that is where culture and standards travel.
Globally distributed BDs in non-office locations are at risk of feeling disconnected and drifting off-brand emotionally before they drift off-brand procedurally. Deliberate inclusion activities — regional gatherings, peer pairings, async culture rituals, visible recognition — are not nice-to-haves; they are how distributed staff stay part of the company rather than becoming a detached sales arm.
Off-script positioning in the wrong jurisdiction isn't just a brand miss — it can be a licence issue. Compliance per region needs hard sign-off authority on regional deviations, and the call-quality rubric (Layer 3) must score regulatory phrasing as a first-class metric, not a nice-to-have.
How AI makes this scale
AI is what makes brand consistency at 30-people-across-time-zones achievable rather than aspirational. Two highest-leverage uses:
- Auto-scored call analysis — every call rated on product accuracy, compliance phrasing, on-brand positioning. Coaching focuses on the 10% flagged, not random samples (see the sketch after this list).
- Real-time canonical answers — BD asks "what's our position on stablecoin payouts in Vietnam?" and gets the approved answer with the source. Stops cowboy positioning at source.
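A minimal sketch of the rubric mechanics (talk ratio plus phrase flags); the transcript, phrase lists, and thresholds are all hypothetical, and a production version would sit on a speech-to-text pipeline with a proper per-jurisdiction compliance rubric.

```python
# Hypothetical speaker-tagged transcript and rubric phrases.
transcript = [
    ("bd",     "BTSE offers deep liquidity across major pairs."),
    ("client", "Is staking available for us in my country?"),
    ("bd",     "Guaranteed returns on staking, up to twenty percent."),  # off-script
]

REQUIRED = ["risk disclosure"]      # phrases the rubric expects to hear
FORBIDDEN = ["guaranteed returns"]  # phrases that trigger compliance review

bd_words = sum(len(t.split()) for s, t in transcript if s == "bd")
all_words = sum(len(t.split()) for _, t in transcript)
talk_ratio = bd_words / all_words

flags = [p for p in FORBIDDEN if any(p in t.lower() for _, t in transcript)]
missing = [p for p in REQUIRED if not any(p in t.lower() for _, t in transcript)]

if flags or missing or talk_ratio > 0.7:
    print(f"flag for coaching: talk_ratio={talk_ratio:.0%}, "
          f"forbidden={flags}, missing={missing}")
```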
Governance — who owns what
| Decision area | Owner | Cadence |
|---|---|---|
| Global brand & positioning | Marketing + COO Office | Quarterly; ad-hoc updates |
| Sales playbook + qualification | Head of Sales with regional input | Quarterly |
| Compliance & regulatory phrasing | Legal / Compliance per region | Continuous; hard sign-off on regional deviations |
| Local execution & deal exceptions | Regional pod lead | Daily ops; deal desk weekly |
| Quality / call review | COO Office + Sales enablement | Monthly with Head of Sales |
06 · Closing
Foundations to build
Two cross-cutting projects underpin the rest of this document. Foundation 1 is the basis — the data layer that every tracker, diagnostic, and recommendation here depends on. Foundation 2 is a strong productivity driver — multiplying what existing teams can produce.
Connect what exists. Build what's still required.
Most of what's described above depends on a data layer that doesn't yet exist in fully usable form. The work is two-track: connect existing systems (HRIS, project tools, finance, surveys), and build lightweight collection tools for what isn't captured today (capability profiles, qualitative project status, manager time-use). Where data can only be collected manually for now, do so — but the goal is to move every input toward automated capture over time.
Expand the leverage of AI tools.
The proposal: make AI productivity a workstream this role drives, instead of leaving uptake to each team.
Two levers. Automation removes repetitive, low-value work; augmentation makes existing people more productive: coding assistants for engineering, agent-assist for support, call analysis and prospect research for sales, transaction matching for finance, content and design creation for marketing, interview automation for recruiting, etc.
Operating cadence I would establish
Weekly
- 1:1 with COO — exception report across the seven trackers
- Hiring funnel pulse on critical & open roles (faster than the monthly tracker review)
- Working sessions with department heads on flagged metrics — frequency varies by team
Bi-weekly & monthly
- Bi-weekly Start / Stop / Continue with department heads (layered into existing forums)
- Monthly workforce plan + productivity & efficiency review
- Monthly attrition & engagement review — deep-dive triggered by flags, not scheduled
Quarterly & biannual
- Staffing & structure diagnostic refresh with COO (Q1.1 framework)
- Goal achievement (OKR) review across the company
- KPI office hours — leaders can retire, change, or argue for any metric
- Compensation & benchmarking deep-dive (biannual)
90-day plan — first three months in role
Sequenced so trust and evidence are built before any structural recommendation lands. Listen first, instrument second, co-design third.
| Workstream | Days 1–30 · Listen & baseline | Days 31–60 · Diagnose | Days 61–90 · Co-design & ship |
|---|---|---|---|
| 1. Stakeholder map & trust (Foundation) | 1:1 with COO, CEO, every dept head, HR lead, and key people across the organization. Goal: understand the company's mandate, history, decision-making norms, where the real friction sits, and who actually drives outcomes vs the org chart. | Second-round 1:1s on specific friction points raised. Skip-levels with a representative cross-section. Map the informal influence network alongside the org chart. | Working relationships established with each dept head and key cross-functional partners — recurring cadence on the calendar. Quarterly KPI office hours opened so leaders can retire, change, or argue for any metric. |
| 2. Data infrastructure (Foundation 1) | Inventory existing systems — HRIS, project tools, finance/ERP, engagement surveys. Identify the gaps (capability profiles, manager time-use, qualitative project status). No new tooling yet. | Connect what exists into one canonical view. Build lightweight manual templates for the gap data. Pilot AI natural-language query on the data with 2 dept heads. | v1 dashboard live. Roadmap published for moving every manual input toward automated capture over the following two quarters. |
| 3. Org & staffing diagnostic (Q1.1, Q1.2) | Pull baseline: HC plan vs actual, span of control, layers, cost per FTE, attrition, OKR achievement by team. Read across the 5 lenses on the data already available. | Apply the 5-lens diagnostic per department. Place each team on the Build/Run × productivity matrix. Triangulate leader view against workload, shape, and cross-team flow. | Deliver a staffing & structure diagnostic by department to the COO with named moves (capacity adds, reductions, redeployments, structural changes). Each move has a name, a date, and change-management measures attached. |
| 4. Performance & KPI system (Q2.1, Q2.2, Q2.3) | Audit the existing 2024 KPI cascade with HR. Identify which of the 7 trackers already exist in some form, which need building, which need rewiring. | Co-author KPIs role-by-role with each dept head — 3–6 KPIs per role, each with owner, cadence, target, and the decision it triggers when red. Audit metrics for the COO Office itself first. | Trackers 1–4 in pilot with dept heads (workforce plan, org shape, OKR achievement, productivity); 5–7 (hiring funnel, attrition, comp) scoped for the following quarter. Bi-weekly Start/Stop/Continue rhythm running. |
| 5. AI productivity (Foundation 2) | Map current AI/automation usage by function. Identify 2–3 highest-leverage augmentation use cases (likely: support agent-assist, sales call analysis, recruiting interview automation). | Run pilots on the top 2 use cases with willing department heads. Define success criteria up front (output per FTE, cycle time, quality). Frame as augmentation, not headcount substitution. | Publish an AI productivity roadmap by function, with sequencing and an estimated capacity-creation profile. Embed measurement into Tracker 4 so the lift is visible. |
| 6. Global BD/Sales quality (Q5.1) | Diagnose the current state across the 30-person distributed BD/Sales team — knowledge, process, and quality layers. Sample call review against a draft rubric. Compliance per region in scope from day one. | Stand up the v1 single-source-of-truth library and CRM/pipeline-stage standardisation. Confirm regional pod leads as quality gatekeepers. AI-assisted call analysis tooling selected. | Bootcamp + 90-day certification curriculum drafted. Sampled call coaching cadence piloted with one region against the rubric. Compliance sign-off process for regional deviations agreed with Legal/Compliance. Inclusion rituals piloted in one region. |
| Milestone | End of month 1: Listening complete; baseline pulled; data inventory done; trust banked. | End of month 2: 5-lens diagnostic applied across departments; tracker design signed off; 2 AI pilots running. | End of month 3: Staffing & structure diagnostic delivered to COO; v1 data layer live; trackers 1–4 in pilot; AI roadmap published; BD/Sales quality system v1 piloted in one region. |
A note on this document
This is a working draft of how I would operate, based on the case questions and my current, limited knowledge of BTSE. The charts and visualizations are illustrative and may look different against BTSE's actual data.