
Jih-Ming @ BTSE · Business & People Strategy · Case study
COO Office · Case Study

Bringing structure, data, and judgement to BTSE's people strategy

Operating philosophy I would bring to this role

Principle 01
Diagnose before you decide

Most "people problems" are actually process, design, or measurement problems. Decisions made without a diagnosis are expensive to reverse.

Principle 02
Trackers are conversations

A dashboard's purpose is to start a structured discussion with a department head, not to replace one.

Principle 03
Be a partner, not a cop

The COO Office must earn the right to ask hard questions. I co-author KPIs with leaders and bring solutions, not just findings.

Principle 04
Differentiate, don't flatten

A 15% cost target is not a 15% cut everywhere. Treat each team as its own diagnosis, judged against its purpose within the overall organization.

In the closing section, after the answers to the case questions, I sketch out a 90-day plan for how I would translate this operating philosophy into the first three months in role.

01 · Organizational design & resource allocation

Two questions on how to size, shape, and reshape teams as BTSE scales — without breaking what is already working.

Q 1.1
How do you approach staffing and org chart design, including assessing team needs and identifying over- or understaffing?
Read across five lenses — strategic context, process flow, workload, shape, capability. Often, "people problems" are process or design problems hiding inside the org chart.

Five-lens diagnostic

Sequenced from context to capacity. Each lens answers a different question; together they form a defensible picture of where each team sits.

Lens 1 · Strategic context

Does this team's mandate ladder to a company OKR? What is intentional vs path-dependent (founders, acquisitions, past leaders)? Much of any org chart persists for legacy reasons, not logic.

Lens 2 · Process & value-chain flow

Output is generated not only inside teams but also across teams and departments. Map the end-to-end flow for the 3–4 outputs that matter most. Cross-boundary friction is where most "team capacity" problems actually live.

Lens 3 · Workload & capacity

Throughput per FTE, cycle-time trend, backlog age, the leader's own approval queue, and a one-week time-use sample (% strategic vs reactive).

Lens 4 · Org shape

Span of control (target 5–9 directs), layers (flag >5 between IC and CEO at BTSE's size), manager-to-IC ratio, single points of failure.

Lens 5 · Capability

Leader and team capability vs forward demand. Same role, different person — materially different output. The diagnosis has to name this honestly or it isn't useful.
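
The Lens 4 shape checks are mechanical once reporting lines sit in a system. A minimal sketch of how they could be computed, assuming a simple employee → manager mapping exported from the HRIS (all names and numbers are illustrative, not BTSE data):

```python
from collections import defaultdict

# Hypothetical reporting data: employee -> manager (None = top of chart).
reports_to = {
    "ceo": None,
    "vp_eng": "ceo", "vp_ops": "ceo",
    "em1": "vp_eng", "em2": "vp_eng",
    "eng1": "em1", "eng2": "em1", "eng3": "em2",
    "ops1": "vp_ops",
}

# Span of control: direct reports per manager (target 5-9).
spans = defaultdict(int)
for emp, mgr in reports_to.items():
    if mgr is not None:
        spans[mgr] += 1

# Layers: hops from an IC up to the CEO (flag > 5 at BTSE's size).
def depth(emp):
    d = 0
    while reports_to[emp] is not None:
        emp = reports_to[emp]
        d += 1
    return d

ics = [e for e in reports_to if e not in spans]   # no directs = IC
max_layers = max(depth(e) for e in ics)
manager_to_ic = len(spans) / len(ics)

narrow = [m for m, s in spans.items() if s < 5]   # under-leveraged spans
wide = [m for m, s in spans.items() if s > 9]     # over-stretched spans
```

The same pass can flag SPOF roles (any manager whose team has no second senior) once seniority data is joined in.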

The process lens, illustrated

Why Lens 2 earns its place. Often, "team capacity" problems show up at the boundary, not inside the box.

Marketing (leads in) → BD qualifies → Compliance / KYC → Onboarding · Ops → Live customer

If BD is "understaffed," the answer might be one more BD — or it might be that compliance review is the real constraint and BD only looks slow because of it. The lens decides.

Process

  1. Map. One row per role: function, cost, span, manager, charter, OKR linkage, key cross-team handoffs. Pulled from people / project / finance systems where they exist (see closing on data infrastructure).
  2. Listen. 1:1 with each department head (45 min): "where would one more person change the curve, where would one fewer not be missed in 90 days, and where in the flow with other teams do you lose the most time?"
  3. Triangulate. Cross-check the leader's view against workload, shape, and the cross-team flow. Disagreements are the most useful information.
  4. Co-design moves. Each recommendation has a name, a date, and change management measures attached. A re-org without change management is what damages morale, not the re-org itself.

Output — staffing & structure diagnostic by department

A view I would maintain quarterly with the COO. Each team gets both a staffing recommendation (capacity adds, reductions, redeployments) and, where relevant, a structural recommendation (process redesign, span/layer changes, scope shifts, automation). Sample data only.

BTSE

Go-to-Market

Team | Mandate | Workload | Org shape | Move
Institutional BD | Build & Scale | Under-cap (cov 1.8×) | Wide span (1:11) | Hire 3 BDs and 1 regional lead; split book by region.
Marketing — Content | Run & Optimize | On plan | Top-heavy (span 1:3) | Flatten one layer; reallocate one role into higher-leverage growth work.

Engineering

Team | Mandate | Workload | Org shape | Move
Trading Engineering | Build & Scale | Stretched (cycle +18%) | Healthy (1:7) | Add 2 SREs for on-call relief; no additional IC headcount.
Infrastructure | Build & Scale | Stretched (incidents +25% YoY) | SPOF on lead architect | Hire 1 senior platform engineer for redundancy; invest in observability tooling and AI-assisted incident triage.

Operations

Team | Mandate | Workload | Org shape | Move
People Ops | Run & Optimize | Overloaded (SLA 78%) | Gap — no APAC HRBP | Invest in tooling first; add 1 HRBP for APAC. Not raw headcount growth.
Customer Support — Tier 1 | Run & Optimize | Overloaded (ticket vol +35% YoY) | Wide span; no specialisation | Managed reduction over 2 quarters as AI agent-assist deploys: −8 FTE on L1, retain 2 senior for escalations.

Compliance

Team | Mandate | Workload | Org shape | Move
KYC Ops | Build & Scale | Stretched (volume +40% YoY) | Healthy (1:6) | Automate L1 review; hire 1 senior reviewer for escalations.

Finance

Team | Mandate | Workload | Org shape | Move
AR / Billing | Run & Optimize | On plan | SPOF on lead role | Cross-train backup; hire 1 mid-level for resilience.
Q 1.2
The company wants to cut 15% labor costs but maintain output. How would you analyze team productivity and propose changes without damaging morale?
A 15% target is met by removing labor cost — not by efficiency alone. Diagnose by team, set differentiated targets, choose the right lever per team, execute in a way that protects morale.

The five levers — what each actually does

Only two of these cut labor cost; the others create capacity. Every plan starts from this honest base.

Lever | Speed | Effect | Morale risk
Managed reduction | Fast | Cuts cost | High if poorly executed
Hiring freeze + attrition | Slow | Cuts cost | Low–medium (boiling-frog risk if uncertainty drags)
Performance management | Medium | Cuts cost; lifts team | Often morale-positive — top performers resent weak ones being protected
Re-allocation | Medium | Creates capacity | Low
AI & automation | Slow–medium | Creates capacity | Low if framed as augmentation

How I would diagnose productivity — two layers

  1. Team layer. KPI achievement vs goals over 2–4 cycles, plus cost per FTE and per unit of output. Pulled from existing systems; light manual collection only where data is missing. Place each team on the matrix below.
  2. Individual layer (within affected teams). Performance vs goals + strategic fit. Used to make per-person moves defensible.
Strategic mandate (rows) × productivity verdict (columns):

Mandate | Low productivity | High productivity
Build & Scale | Process or leadership fix before any cuts | Protect — possibly invest more
Run & Optimize | Restructure or automate — efficiency is the mandate | Hold lean; reallocate spare capacity

Team level — what should happen to each team as a whole.

From team to individual

For teams in the "process / leadership fix" or "restructure" quadrants, per-person moves have to be defensible. Two inputs, mirroring the team layer: strategic fit (does the role / skills / seniority match the team's forward direction) on the Y, and performance (KPI vs goals, behaviour, impact) on the X — same axis ordering as the team matrix.

Strategic fit (rows) × performance (columns):

Fit | Low performance | High performance
High fit | Coach with a defined plan and timeline | Protect, develop, retain — these are the engines
Low fit | Performance management — clear plan or exit | Reskill or reallocate to better-fit role — flight risk if ignored

Individual level — what should happen to each person within the team.

From analysis to execution

Differentiate

No flat 15%. Growth-engine teams may need to grow. Run & Optimize teams may deliver 25%+. The portfolio averages to 15% — the per-team number is set by quadrant placement, not the corporate target.
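
The portfolio arithmetic is worth making explicit: the per-team targets are set by quadrant, then checked against the corporate number on a cost-weighted basis. A minimal sketch with sample figures only (team costs and cut percentages are illustrative, loosely mirroring the Q1.1 table):

```python
# Illustrative portfolio: differentiated per-team targets that must
# blend to ~15% weighted by labor cost. All numbers are sample data.
teams = {
    # name: (annual labor cost $M, target cut as a fraction; negative = invest)
    "Institutional BD":      (4.0, -0.05),   # growth engine: net invest
    "Trading Engineering":   (6.0,  0.10),
    "Customer Support L1":   (3.0,  0.45),   # AI agent-assist deployment
    "People Ops":            (1.5,  0.20),
    "AR / Billing":          (1.0,  0.125),
    "KYC Ops":               (2.0,  0.225),
}

total_cost = sum(cost for cost, _ in teams.values())
total_cut = sum(cost * pct for cost, pct in teams.values())
blended = total_cut / total_cost

print(f"blended cut: {blended:.1%}")  # cost-weighted portfolio average
```

The check matters because a headcount-weighted or unweighted average of the same targets gives a different number; the CFO's 15% is a cost number, so the weighting has to be cost.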

Sequence per team

Surgical (managed reduction + performance management, 1–2 quarters) for teams whose roles are no longer strategic. Gradual (hiring freeze + attrition + automation, 3–6 quarters) for any mature, stable function with predictable workload — back-office, but equally platform engineering, infrastructure ops, established support functions.

At BTSE's scale, attrition alone is unlikely to deliver fast enough. Realistic blend: surgical-heavy, gradual-supporting.

Execute & protect morale

Narrative. "Structural fitness for the next phase," not "we are in trouble." One CEO / COO communication; answers to the questions employees actually have.

Packages. Exit / reallocation for those leaving; upskill / reskill for those staying. Survivor disengagement costs more than generosity.

Leading indicators tracked for 2 quarters (Q3.1). Damage shows months 3–6, not week 1.

What breaks morale

Inconsistency — different rules for different teams without explanation. Silence — leadership going dark for 2+ weeks during the process. Multiple rounds — far worse than one decisive move.

02 · Performance tracking & metrics

Three questions on the measurement layer — what to track, by role, and how to monitor without alienating the leaders being measured. Q1.1 and Q1.2 produce one-time diagnostics; the trackers here are the ongoing layer that operationalises them, so the insight doesn't go stale and the COO sees movement against the plan.

Q 2.1
After your conclusion in #1, what steps would you take to track this? What trackers would you build? What are the high-level parameters or metrics you'd benchmark?
Seven trackers, one canonical data layer, three cadences. Built so a department head can self-serve, and the COO can see exceptions in 90 seconds.

The seven trackers

The "Source" column shows which Q1 diagnostic each tracker operationalises. Trackers without a source are independent monitoring layers that don't map back to a one-time Q1 output.

Tracker | Source | Core question | Headline metrics | Cadence | Audience
1. Workforce plan | Q1.1 staffing | Are we hiring the right shape, in the right place, on time, on budget? | HC plan vs actual; vacancy days; cost per FTE; office split | Monthly | COO, CFO, dept heads
2. Org shape | Q1.1 Lens 4 | Is the structure itself healthy? | Span of control; layers; manager-to-IC ratio; SPOF roles | Quarterly | COO, dept heads
3. Goal achievement (OKRs) | — | Did teams deliver against the goals they committed to? | OKR completion %; on-track / at-risk / off-track; missed-by-time vs missed-by-scope | Quarterly + mid-Q checkpoint | CEO, COO, dept heads
4. Productivity & efficiency | Q1.2 productivity | Is each function producing more output per unit of cost or effort, over time? | Output trend per FTE; cost per unit of output; function-specific (Q2.2) | Monthly | COO, dept heads
5. Hiring funnel | — | Is the TA team's pipeline efficient and high quality? | Time-to-hire; pass-through; offer accept; quality-of-hire at 6 mo; source-of-hire | Monthly | COO, head of TA
6. Attrition & engagement | Q1.2 morale | Are we losing the right people for the right reasons? | Regrettable vs total; tenure-at-exit; eNPS; exit-driver categories; manager NPS | Monthly + biannual deep dive | COO, dept heads, HR
7. Compensation fairness | — | Are we paying market, paying fairly, paying for performance? | Comparison vs benchmark; band-placement distribution; pay-rating curve; gender/region pay parity | Biannual | COO, CFO, HR

Tracker 3 (goal achievement) vs Tracker 4 (productivity): a team can hit all OKRs while being unproductive (over-resourced, soft targets), or be highly productive while missing OKRs (poorly set goals). Both are needed.

Operating cadence

  • Weekly 1:1 with COO — exception report only.
  • Bi-weekly with department heads — common dashboard view, short Start / Stop / Continue per owner. Layered into existing forums where possible — never a new meeting for a new dashboard.
  • Monthly — workforce plan + productivity review.
  • Quarterly — org shape + OKR review with the COO.

The data layer — likely a real project, not just plumbing

None of these trackers work without a canonical data view. Three workstreams:

  • Connect what exists — HRIS, project tools, finance / ERP, engagement surveys.
  • Build what's missing — capability profiles, manager time-use, qualitative project status. Lightweight templates and disciplined manual input where automation isn't realistic.
  • Layer AI as the analysis surface — leaders ask natural-language questions of their own data ("show me my at-risk OKRs and cycle-time trend") instead of waiting for a custom report.

I would scope this as an explicit foundational project in the first 90 days — see closing.

Q 2.2
Tell me KPI / OKR / productivity metrics for: an engineer, a product manager, an accountant, a business operations associate, a BD/Sales.
Three layers per role: Delivery (did the work get done, on time, at quality), Impact (did it move the business or customer), Behaviour (does this person make the team better). Splitting delivery from impact stops "I did the work" being mistaken for "it worked."

Engineer

Layer | Metric | Why it matters | Anti-pattern
Delivery | PR cycle time; deployment frequency; lead time for changes; code-review turnaround | DORA-aligned — the industry standard for engineering velocity. Sustainable rhythm. | Lines of code; commit count; story points without cycle context
Impact | Change-failure rate; MTTR; defect-escape rate; features shipped meeting success criteria; SLO / uptime contribution | Connects engineering work to user / business outcomes — fast delivery of broken software is not impact | "Tickets closed" without quality measure
Behaviour | Mentorship & coaching; design-review quality (peer-rated); cross-team unblocking; quality of incident learnings shared | Multipliers — the people who make engineering teams 2× better | Stack-ranking individuals on a single composite score

Cadence: delivery weekly (auto from project tools); impact quarterly with PM & eng manager; behaviour reviewed half-yearly via 360s.

Product manager

Layer | Metric | Why it matters | Anti-pattern
Delivery | PRDs shipped on time; roadmap commit accuracy; decision turnaround; hypothesis kill-rate | The PM is the team's clock — slow PMs slow the whole product. Killing bad ideas matters as much as shipping good ones. | Feature count (volume ≠ value)
Impact | OKR achievement on owned features; adoption / retention / activation lifts; revenue or usage attributable to launches; NPS on owned area | The metric the company actually pays for — measured at the user / business level, not the launch level | Vanity metrics with no segmentation
Behaviour | Engineering & design satisfaction (quarterly survey); stakeholder alignment quality; PRD clarity | Bad PMs frustrate strong engineers — direct attrition risk | Asking the PM to self-rate stakeholder satisfaction

Accountant

Layer | Metric | Why it matters | Anti-pattern
Delivery | Month-end close days; reconciliation completeness; on-time regulatory filings; AR / AP cycle time | The audit standard; close speed is a known maturity signal | Hours worked
Impact | Audit findings (count & severity); cost saved through implemented automations; working-capital improvement (DSO / DPO / CCC) | Strong accountants prevent risk and free cash — this is where they pay for themselves | "Tickets closed" — accounting is not a queue function
Behaviour | Cross-team collaboration rating; process documentation quality; quality of risk flags raised | Accountants who say nothing are usually a problem; healthy ones surface risk early | Rewarding "no surprises" — incentivises hiding issues

Business operations associate

Layer | Metric | Why it matters | Anti-pattern
Delivery | Project completion rate vs plan; SLA on cross-functional requests; analyses delivered with a recommended decision | Biz Ops's delivery is decision-making capacity, not deck production | Counting decks produced
Impact | $ or hours saved through implemented improvements; OKR contribution on cross-team initiatives; decisions taken from their work | The right question is not "did you do the work" but "did the company move because of you" | Self-reported "impact" without an attribution anchor
Behaviour | Stakeholder NPS; willingness to take the ambiguous brief; rigour in pushing back on weak hypotheses | The role lives in ambiguity by definition | Punishing the person who pushed back on a bad idea

BD / Sales

Layer | Metric | Why it matters | Anti-pattern
Delivery | Qualified meetings within ICP; pipeline coverage (3–4× target); outreach activity within ICP. Split B2B (institutional) vs B2C (retail) — different funnels | Activity is a leading indicator — only if it's the right activity, on the right segment | Raw call/email volume detached from ICP and segment
Impact | Quota attainment %; volume / revenue signed; win rate by stage; time-to-first-trade; net retention. By region and by segment | What the business pays for. Quota attainment is the canonical headline; the rest qualifies the quality of attainment. | Overweighting one quarter over trailing 4
Behaviour | Brand-representation score from call samples (Q5.1) — incl. regulatory phrasing accuracy by jurisdiction; deal-review quality; CRM hygiene; cross-region collaboration | For BTSE in crypto, off-script positioning in the wrong jurisdiction is regulatory exposure, not just a brand miss | Top performers exempted from CRM / compliance discipline — corrodes the team and the licence

Cross-cutting design principle

Every role gets 3–6 KPIs, not 12. Beyond that, no one optimises for any of them. The tables above are menus per layer, not checklists — each role's actual scorecard picks 3–6 from across the three layers. Each KPI must have an owner, a cadence, a target, and a "so what" decision it triggers when it goes red. KPIs without a decision attached are dashboard art.
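
One way to make the "KPIs without a decision attached are dashboard art" rule concrete: represent each KPI as a record that cannot exist without an owner, a cadence, a target, and the decision it triggers when red. A sketch with illustrative values (the metric, thresholds, and decision text below are assumptions for the example, not proposed BTSE targets):

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    layer: str               # Delivery / Impact / Behaviour
    owner: str
    cadence: str
    target: float
    red_below: float         # threshold that flips the KPI to red
    decision_when_red: str   # the "so what" the red state triggers

    def status(self, actual: float) -> str:
        # Red, green, or amber relative to the agreed thresholds.
        if actual < self.red_below:
            return "red"
        return "green" if actual >= self.target else "amber"

# Hypothetical engineering-delivery KPI.
pr_cycle = KPI(
    name="share of PRs merged within 24h",
    layer="Delivery",
    owner="Eng manager",
    cadence="weekly",
    target=0.80,
    red_below=0.60,
    decision_when_red="Review reviewer-load distribution; pause feature intake for one sprint",
)

print(pr_cycle.status(0.85))  # green
print(pr_cycle.status(0.70))  # amber
print(pr_cycle.status(0.50))  # red
```

A dashboard built on records like this can refuse to render a metric whose `decision_when_red` is empty, which enforces the principle structurally rather than by review.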

Q 2.3
How to monitor every leader's KPIs without every leader disliking you?
Co-author the metrics. Audit yourself before you audit them. Be the first call when a number goes red — with a hypothesis, not a question. Invest in leaders' ability to use the system, not only in the system itself.

Four operating rules

  1. Co-author, don't impose. The first KPI conversation with every leader is a working session. Quarterly office hours after that — where leaders can retire, change, or argue for any metric — keep the dashboard a living tool, not a fixed one. Imposed metrics generate compliance, not improvement.
  2. Outcome over activity; tiered visibility. Outcome metrics feel like accountability; activity metrics feel like surveillance. Across the leadership tier, share high-level outcomes (department OKR achievement, direction-of-travel on key business metrics) — that breaks data silos, aligns department goals to company strategy, and turns the dashboard from COO oversight into peer accountability. Keep granular and sensitive data private to the leader and COO (individual performance, named cases, comp). The aim is constructive transparency, not a competitive scoreboard.
  3. Audit yourself first. The COO Office's KPIs (forecast accuracy, time-to-resolution on escalations, leader satisfaction with the office) sit at the top of the page. Reciprocity matters.
  4. Be the first call when a number goes red. "I noticed your eNPS dropped 8 points; here are three hypotheses — 30 min?" — not "explain yourself in tomorrow's review."

Invest in leaders, not only in the dashboard

A dashboard only works if leaders know how to use it. In parallel:

  • Goal-setting — many leaders set inputs, not outcomes. Coaching here changes the data the dashboard sees.
  • Coaching skill — KPIs become useful only when they fuel real coaching conversations.
  • Being a coachee — leaders modelling openness with their leader is the only way the cultural change travels down.
Building on what's already there

BTSE has had a KPI cascade since 2024 (CEO/COO → department heads → teams), and HR owns the performance review process. The trackers and rituals proposed here are designed to build on and evolve that foundation — sharpening what already works, partnering with HR on the diagnostic side, and adjusting only where there is a real gap. Leaders and their teams should experience this as added clarity, not as everything being changed at once. Consultant, not auditor: the day a department head sees me on their calendar and feels relief instead of suspicion, the function is working.

03 · Diagnosing performance issues

Three scenarios. Each one tests whether you can resist the instinct to act before you understand.

Q 3.1
You notice high turnover in one department. Walk me through how you would diagnose root causes and propose immediate and long-term solutions.
Quantify, pattern, listen, triangulate, decide. The action menu must include both incremental fixes and structural moves — sometimes the right answer is that the team should not exist as currently scoped.

Five-step diagnosis

  1. Quantify. Annualised attrition, regrettable share, tenure-at-exit, vs company & industry benchmark. Confirm there is actually a problem and how big it is.
  2. Pattern. Cut by sub-team, manager, level, gender, geography, hire vintage, performance rating. Most "department-wide" turnover is concentrated.
  3. Listen. Exit interviews; stay interviews across a representative cross-section (performance levels, tenure, sub-team) so the signal isn't biased by who you talked to; confidential pulse survey to the whole department.
  4. Triangulate. Cross-reference with manager 360 scores, comp benchmarking on roles in question, workload signals, recent re-orgs, and — importantly — the cross-department interactions. Many "team" problems are actually friction at the touchpoints with another team.
  5. Categorise root cause. Manager / Compensation / Career path / Workload & burnout / Mission alignment / Hiring profile mismatch / Cultural friction / Process friction with another team / Scope of work itself. The action plan is different for each.
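
Step 1 (Quantify) reduces to simple arithmetic once exit data is pulled; a sketch with sample numbers only:

```python
# Annualised attrition and regrettable share from a 6-month window.
# All figures are illustrative, not real data.
exits_last_6m = 9
regrettable_exits = 6
avg_headcount = 40          # average department headcount over the window

# Scale the window to a year: exits / avg headcount * (12 / window months).
annualised_attrition = exits_last_6m / avg_headcount * (12 / 6)
regrettable_share = regrettable_exits / exits_last_6m

print(f"annualised attrition: {annualised_attrition:.0%}")  # 45%
print(f"regrettable share:    {regrettable_share:.0%}")     # 67%
```

Both numbers then get compared against the company-wide rate and an industry benchmark; the spike is only "confirmed" if it clears both.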

Decision tree → action

Turnover spike confirmed — branch on where it concentrates:
  • Concentrated under one manager? → Manager coaching + skip-levels; if unfixable, leadership change.
  • Concentrated by tenure (e.g. 0–6 months)? → Hiring profile / onboarding fix; re-write the JD & interview loop.
  • Concentrated by level (e.g. seniors leaving)? → Career-path & comp diagnosis; differentiated retention plan for at-risk seniors.
  • Workload / burnout pattern? → Capacity model & work redesign; hiring + automation lever.
  • Friction with another department? → Process & SLA redesign at the touchpoint; joint OKR with peer leader.

Action menu — match move to diagnosis

Incremental — people & process
  • Manager: coach, redirect, or replace
  • Comp & career: benchmark review; ladder clarity (IC vs mgmt)
  • Hiring & onboarding: JD, interview loop, 30/60/90 redesign
  • Workload: capacity model + targeted capacity adds
Structural — design
  • Process: redesign within the team and at touchpoints with peer teams
  • Shape: restructure internally; reduce or shed scope where redundant
  • Leadership: replace the department head if it's the demonstrated root cause

Sequencing

Immediate (0–30 days) — stop the bleed
  • Anonymous pulse survey within 5 days
  • COO / HR skip-level 1:1s prioritised by flight-risk signal and role criticality (institutional knowledge, hard-to-replace expertise) — not by performance rating alone
  • Retention conversations — confirm path, recognise contribution; comp only where the diagnosis says so
  • Pause any structural changes pending diagnosis
Long-term (3–12 months)
  • Apply the matched move from the menu
  • Track for 2 quarters — most morale signal appears months 3–6
  • Close the loop — communicate what changed, what didn't, why
Q 3.2
A department head Joe hired 7 extra headcount 9 months ago. However, his output is still similar. What are the possible reasons? How would you explore this? What metrics would you track? How would you improve this?
Adding people only lifts output if the bottleneck was the number of people. If the bottleneck is something else — Joe himself, an approval queue, an upstream team, or weak performance management — more headcount won't help. Find the actual bottleneck before any other move.

Hypotheses, grouped

Type | Hypothesis | What I'd expect in data
System | The bottleneck isn't headcount. It's somewhere else: Joe's approval queue, an upstream team's delay, unclear priorities, or the team's own process. Or — the new hires are creating coordination overhead (more handoffs, more meetings) instead of relief. | Cycle time flat or rising; work piling up in queues; new hires idle while waiting on Joe; meeting load up
Leadership | Goals unclear, weak performance management (the most common cause), wrong hiring profile, or the workflow was never redesigned — bodies just added on top of an unchanged process. | Team members describe the mandate differently; long-tenured weak performers carrying low loads; senior-heavy team on junior work
Measurement | Output is actually up, the metric isn't capturing it. | Joe gives examples of new work not on the dashboard
Demand | Less real work to do — not a delivery issue. | Backlog age dropping, inbound requests falling

Investigation plan

  1. Open conversation with Joe. "Walk me through the original headcount case. What changed?" Joe often knows more than the data shows.
  2. Process scan. Map the team's workflow end-to-end. Where does work pile up between steps? Where are people waiting? Don't assume where the bottleneck is — let the data show it.
  3. Performance distribution. Look at how output is spread across the team. Are a few people carrying disproportionately more than others? A long-tail pattern points to uneven capability, regardless of headcount.
  4. Cross-functional view. Interview the 2–3 peer teams that depend on Joe's. Their experience is valuable input.
  5. Onboarding audit for the 7 hires — time-to-productivity, 30/60/90 completion.
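
The performance-distribution check in step 3 is a one-liner once per-person output exists in any comparable form. A sketch with illustrative numbers (the output unit is an assumption; any per-person measure the team already tracks works):

```python
# Concentration check: what share of team output do the top 20% carry?
# Sample data only, e.g. closed items per person per quarter.
output_per_person = [34, 29, 12, 10, 9, 8, 7, 6, 5, 5]

ranked = sorted(output_per_person, reverse=True)
top_n = max(1, len(ranked) // 5)              # top 20% of the team
top_share = sum(ranked[:top_n]) / sum(ranked)

print(f"top 20% carry {top_share:.0%} of output")
```

A heavily concentrated result points at uneven capability or avoided performance management rather than headcount, which changes which improvement lever applies.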

Metrics on the tracker

  • Output — throughput per FTE; cycle time; WIP age
  • Bottlenecks — queue length at each workflow step; upstream wait time
  • Composition — skill-mix vs work-mix; senior:junior ratio
  • Performance spread — rating distribution; long-tail share
  • Demand — inbound volume; backlog growth
  • Onboarding — time-to-productivity vs benchmark

Improvement levers — match to diagnosis

  • Decisions all funnel through one person → delegation coaching; deputy or RACI redesign so decisions don't all queue on the leader.
  • Performance management has been avoided → COO-backed coaching for Joe; HR partner for the difficult conversations. Hiring is not a substitute.
  • Workflow has bottlenecks (from the process scan) → adjust the process at the points where work piles up, before adding capacity. Adding people upstream of a bottleneck only grows the queue in front of it.
  • Skill mix doesn't match the work → redeploy or cross-train; pause further hiring until the mix is corrected.
  • Demand fell → repurpose the team. Don't fight the market.
Q 3.3
Three department leaders are all saying Joe is too slow. What do you do next?
▾
Three independent reports raise the prior, but don't replace verification. Listen, instrument, then decide — fast.
  1. Take it seriously, don't act yet. Three independent signals is meaningful. But "slow" is a subjective word — slow on what, vs what expectation, against what SLA?
  2. Get specific with each leader. 30-min 1:1 with each of the three. "Tell me the last three concrete examples." Specifics reveal whether this is a process issue, capacity issue, or capability issue.
  3. Understand the inter-departmental system. The 4 departments (Joe + 3 peers) form a system — the issue may be how they interact (handoffs, dependencies, intake quality), not Joe alone.
  4. Hear Joe's view — on the same situations. Take the specific examples the three leaders cited and ask Joe about each, without framing them as complaints. "Walk me through how the [specific project / decision / handoff] played out." Joe may describe the same events very differently — unclear briefs, shifting priorities, dependencies he can't control. Comparing both sides on the same facts is more useful than asking Joe to defend himself against a label. The framing-back to Joe (if needed) is downstream — after the diagnosis, by the COO.
  5. Verify with data. Joe's KPIs vs goals; cycle time on Joe's deliverables to peer teams; SLA breach rate on cross-functional requests; intake quality (are the asks well-specified to begin with?).
  6. Categorise the root cause and match the measure to it:
    • Process within Joe's department → standardise intake, prioritisation, internal workflow
    • Process between departments → SLA agreements at the touchpoints; intake-quality requirements from peers
    • Capacity → either reduce commitments or add capacity (Q1.1 framework)
    • Capability → coaching plan with the COO; if not closeable in 90 days, role change
What I would not do

I would not take the three peers' framing at face value, summon Joe to a meeting, and tell him "you're slow." That is one of the fastest ways to lose a competent leader and learn nothing about the actual problem. The job is to bring evidence and structure to a complaint — not amplify it.

04 · Interpersonal & team dynamics

Q 4.1
Joe and Jane do not get along or cooperate efficiently — but both are good at their job and need to work together. What do you do?
Don't try to make them friends. Make them an effective interface. Tuckman's Forming–Storming–Norming–Performing is the diagnostic frame; structural redesign comes first, relational work second.

Most "Joe and Jane" problems are stuck between storming (open friction) and norming (no agreed way to work together). The objective is not to skip storming — it's to move through it deliberately. Two strong professionals don't have to like each other; they have to know how to work together.

Five-step intervention

  1. 1:1 with each, separately. Surface the friction — style, values, credit / territory, a specific past incident. Often one holds the bigger grievance.
  2. Joint, facilitated session — interface design, not therapy. Outputs: shared definition of the work overlap; RACI on overlapping decisions; rules of engagement (no public disagreements; escalate to me before each other's manager). Make explicit that differences in style are normal — a broken interface for the company is not.
  3. Structural moves to reduce unnecessary collaboration. Most "Joe vs Jane" friction is over-collaboration on work that should have a clean interface and async cadence.
  4. Aligned shared OKR. One outcome they both win or lose together. Forces practical cooperation without pretending to like each other.
  5. Manager coaching for both. Each gets specific feedback on how their behaviour is read by the other. Usually neither knows.

Concrete example — what the RACI looks like

Worked example: Joe leads BD/commercial, Jane leads Customer Onboarding / Ops. They share an interface around enterprise pricing exceptions and integration scope. The most common "Joe vs Jane" failure mode is both arguing on every deal.

Decision | Responsible | Accountable | Consulted | Informed
Standard pricing within band | Joe (BD) | Joe | — | Jane
Pricing exception >X% off list | Joe | CFO / COO | Jane (delivery cost) | —
Custom integration scope & SLA | Jane (Ops) | Jane | Joe (commercial impact) | —
Go-live date commitment to client | Joe + Jane jointly | Jane | — | COO Office

Once written down, 80% of the disagreement evaporates — they were arguing about decision rights, not about the deal.
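Decision rights of this kind can even be captured as a small machine-readable config, so that who-decides-what is looked up rather than re-litigated deal by deal. A minimal sketch under that assumption — the dictionary keys and the `decision_rights` helper are hypothetical illustrations, not an existing BTSE tool:

```python
# Hypothetical sketch: the RACI above encoded as data so decision
# routing can be looked up (and later automated). All key names and
# role labels are illustrative, taken from the worked example only.
RACI = {
    "standard_pricing_within_band": {
        "responsible": "Joe (BD)", "accountable": "Joe",
        "consulted": [], "informed": ["Jane"],
    },
    "pricing_exception_off_list": {
        "responsible": "Joe", "accountable": "CFO / COO",
        "consulted": ["Jane (delivery cost)"], "informed": [],
    },
    "custom_integration_scope_sla": {
        "responsible": "Jane (Ops)", "accountable": "Jane",
        "consulted": ["Joe (commercial impact)"], "informed": [],
    },
    "go_live_date_commitment": {
        "responsible": "Joe + Jane jointly", "accountable": "Jane",
        "consulted": [], "informed": ["COO Office"],
    },
}

def decision_rights(decision: str) -> str:
    """Return a one-line summary of who does what for a given decision."""
    entry = RACI[decision]
    return (f"R: {entry['responsible']} | A: {entry['accountable']} | "
            f"C: {', '.join(entry['consulted']) or '-'} | "
            f"I: {', '.join(entry['informed']) or '-'}")

print(decision_rights("go_live_date_commitment"))
```

The point of the data form is not automation for its own sake: once decision rights live in one shared artifact, "who owns this?" becomes a lookup instead of an argument.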

When to escalate

If after 60 days the interface is still failing, the cost is real and growing. With documented evidence, bring a structural recommendation to the COO: re-org so they don't share an interface, or — rarely — accept that one needs to move. Two strong-but-incompatible operators on the same surface is a structural problem, not a people problem.

Anti-patterns

  • Group hug. Forced public reconciliation. Adults find it insulting.
  • Picking a side, even quietly. The loser becomes a flight risk; the winner becomes harder to manage.
  • Ignoring it. "They're both senior, they'll figure it out." They won't — and the team beneath them is already paying the cost.

05 · Global team alignment & brand consistency

Q 5.1
30 BD/Sales are hired around the world, different timezones and languages. How can they all represent the quality of our brand and product offering correctly?
Standardise the playbook, localise the delivery. Build three layers — knowledge (what they say), process (how they sell), quality (how we know it's working) — and invest in regional leadership and inclusion so distributed staff stay culturally close to the brand.

The three-layer system

Layer 1 · Knowledge
  • Single source-of-truth library — positioning, ICP cards, competitive battle cards, product one-pagers
  • Localised content — translated and culturally adapted, not just literally translated
  • Compliance overlay per region (critical for crypto)
  • Onboarding bootcamp + 90-day certification, redone yearly
  • Standardised training on company values and product — values are part of brand, not separate from it
Layer 2 · Process
  • One CRM, one set of pipeline stages, one qualification framework
  • Pricing rules + a deal desk for non-standard requests
  • Regional pod structure — APAC, EMEA, ME, LATAM — each with a regional lead acting as quality gatekeeper
  • Time-zone aware coverage for follow-the-sun on enterprise prospects
Layer 3 · Quality
  • Sampled call coaching — every BD has 1–2 calls per month reviewed against a consistent rubric
  • AI-assisted call analysis — talk-ratio, ICP fit, key topics covered, objection handling, compliance phrasing
  • Win / loss analysis loop — why did we win, why did we lose, fed back to product and marketing
  • Customer NPS & CSAT on the BD experience itself

Two often-missed elements

Strong regional leaders

Quality at scale follows the regional lead. Ideally a regional team leader is staffed in each major region with strong personal alignment to the brand and quality bar. They have daily contact with the BD/Sales staff in their region — that is where culture and standards travel.

Inclusion of non-office staff

Globally distributed BDs in non-office locations are at risk of feeling disconnected and drifting off-brand emotionally before they drift off-brand procedurally. Deliberate inclusion activities — regional gatherings, peer pairings, async culture rituals, visible recognition — are not nice-to-haves; they are how distributed staff stay part of the company rather than becoming a detached sales arm.

In crypto, brand inconsistency is regulatory exposure

Off-script positioning in the wrong jurisdiction isn't just a brand miss — it can be a licence issue. Compliance per region needs hard sign-off authority on regional deviations, and the call-quality rubric (Layer 3) must score regulatory phrasing as a first-class metric, not a nice-to-have.

How AI makes this scale

AI is what makes brand consistency at 30-people-across-time-zones achievable rather than aspirational. Two highest-leverage uses:

  • Auto-scored call analysis — every call rated on product accuracy, compliance phrasing, on-brand positioning. Coaching focuses on the 10% flagged, not random samples.
  • Real-time canonical answers — BD asks "what's our position on stablecoin payouts in Vietnam?" and gets the approved answer with the source. Stops cowboy positioning at source.
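To make the first of these concrete, here is a minimal sketch of how auto-scored call review could flag the minority of calls that need human coaching. The rubric dimensions mirror Layer 3 above; the weights, the 0–5 per-dimension scores, and the flag threshold are all hypothetical, and in practice the scores would come from an LLM or speech-analytics pipeline rather than being hand-entered:

```python
# Illustrative sketch only: weighted rubric scoring for BD calls.
# Weights and threshold are assumptions, not BTSE policy; compliance
# phrasing is weighted highest to reflect its first-class status.
RUBRIC_WEIGHTS = {
    "product_accuracy": 0.3,
    "compliance_phrasing": 0.4,
    "on_brand_positioning": 0.3,
}
FLAG_THRESHOLD = 3.5  # weighted score below this routes the call to coaching

def score_call(scores: dict[str, float]) -> float:
    """Weighted rubric score for one call (each dimension scored 0-5)."""
    return sum(RUBRIC_WEIGHTS[dim] * scores[dim] for dim in RUBRIC_WEIGHTS)

def calls_to_coach(calls: dict[str, dict[str, float]]) -> list[str]:
    """Return IDs of calls whose weighted score falls below the threshold."""
    return [cid for cid, s in calls.items() if score_call(s) < FLAG_THRESHOLD]

# Two toy calls: the second is weak on compliance phrasing and gets flagged.
calls = {
    "call-001": {"product_accuracy": 5, "compliance_phrasing": 4,
                 "on_brand_positioning": 4},
    "call-002": {"product_accuracy": 4, "compliance_phrasing": 2,
                 "on_brand_positioning": 3},
}
print(calls_to_coach(calls))  # only call-002 falls below the threshold
```

The design choice worth noting is that the threshold drives coaching allocation: reviewers spend their hours on the flagged tail, not on random samples.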

Governance — who owns what

Decision area | Owner | Cadence
Global brand & positioning | Marketing + COO Office | Quarterly; ad-hoc updates
Sales playbook + qualification | Head of Sales with regional input | Quarterly
Compliance & regulatory phrasing | Legal / Compliance per region | Continuous; hard sign-off on regional deviations
Local execution & deal exceptions | Regional pod lead | Daily ops; deal desk weekly
Quality / call review | COO Office + Sales enablement | Monthly with Head of Sales

06 · Closing

Foundations to build

Two cross-cutting projects underpin the rest of this document. Foundation 1 is the basis — the data layer that every tracker, diagnostic, and recommendation here depends on. Foundation 2 is a strong productivity driver — multiplying what existing teams can produce.

Foundation 1 · Data infrastructure

Connect what exists. Build what's still required.

Most of what's described above depends on a data layer that doesn't yet exist in fully usable form. The work is two-track: connect existing systems (HRIS, project tools, finance, surveys), and build lightweight collection tools for what isn't captured today (capability profiles, qualitative project status, manager time-use). Where data can only be collected manually for now, do so — but the goal is to move every input toward automated capture over time.

Foundation 2 · AI as a productivity lever

Expand the leverage of AI tools.

The proposal: make AI productivity a workstream this role drives, instead of leaving uptake to each team.

Two levers. Automation removes repetitive, low-value work; augmentation makes existing people more productive: coding assistants for engineering, agent-assist for support, call analysis and prospect research for sales, transaction matching for finance, content and design creation for marketing, interview automation for recruiting, and so on.

Operating cadence I would establish

Weekly

  • 1:1 with COO — exception report across the seven trackers
  • Hiring funnel pulse on critical & open roles (faster than the monthly tracker review)
  • Working sessions with department heads on flagged metrics — frequency varies by team

Bi-weekly & monthly

  • Bi-weekly Start / Stop / Continue with department heads (layered into existing forums)
  • Monthly workforce plan + productivity & efficiency review
  • Monthly attrition & engagement review — deep-dive triggered by flags, not scheduled

Quarterly & biannual

  • Staffing & structure diagnostic refresh with COO (Q1.1 framework)
  • Goal achievement (OKR) review across the company
  • KPI office hours — leaders can retire, change, or argue for any metric
  • Compensation & benchmarking deep-dive (biannual)

90-day plan — first three months in role

Sequenced so trust and evidence are built before any structural recommendation lands. Listen first, instrument second, co-design third.

Each workstream moves through three phases: Days 1–30 (Listen & baseline), Days 31–60 (Diagnose), Days 61–90 (Co-design & ship).

Workstream 1 · Stakeholder map & trust (Foundation)

  • Days 1–30: 1:1 with COO, CEO, every dept head, HR lead, and key people across the organization. Goal: understand the company's mandate, history, decision-making norms, where the real friction sits, and who actually drives outcomes vs the org chart.
  • Days 31–60: Second-round 1:1s on specific friction points raised. Skip-levels with a representative cross-section. Map the informal influence network alongside the org chart.
  • Days 61–90: Working relationships established with each dept head and key cross-functional partners — recurring cadence on the calendar. Quarterly KPI office hours opened so leaders can retire, change, or argue for any metric.

Workstream 2 · Data infrastructure (Foundation 1)

  • Days 1–30: Inventory existing systems — HRIS, project tools, finance/ERP, engagement surveys. Identify the gaps (capability profiles, manager time-use, qualitative project status). No new tooling yet.
  • Days 31–60: Connect what exists into one canonical view. Build lightweight manual templates for the gap data. Pilot AI natural-language query on the data with 2 dept heads.
  • Days 61–90: v1 dashboard live. Roadmap published for moving every manual input toward automated capture over the following two quarters.

Workstream 3 · Org & staffing diagnostic (Q1.1, Q1.2)

  • Days 1–30: Pull baseline: HC plan vs actual, span of control, layers, cost per FTE, attrition, OKR achievement by team. Read across the 5 lenses on the data already available.
  • Days 31–60: Apply the 5-lens diagnostic per department. Place each team on the Build/Run × productivity matrix. Triangulate leader view against workload, shape, and cross-team flow.
  • Days 61–90: Deliver a staffing & structure diagnostic by department to the COO with named moves (capacity adds, reductions, redeployments, structural changes). Each move has a name, a date, and change-management measures attached.

Workstream 4 · Performance & KPI system (Q2.1, Q2.2, Q2.3)

  • Days 1–30: Audit the existing 2024 KPI cascade with HR. Identify which of the 7 trackers already exist in some form, which need building, which need rewiring.
  • Days 31–60: Co-author KPIs role-by-role with each dept head — 3–6 KPIs per role, each with owner, cadence, target, and the decision it triggers when red. Audit metrics for the COO Office itself first.
  • Days 61–90: Trackers 1–4 in pilot with dept heads (workforce plan, org shape, OKR achievement, productivity); 5–7 (hiring funnel, attrition, comp) scoped for the following quarter. Bi-weekly Start/Stop/Continue rhythm running.

Workstream 5 · AI productivity (Foundation 2)

  • Days 1–30: Map current AI/automation usage by function. Identify 2–3 highest-leverage augmentation use cases (likely: support agent-assist, sales call analysis, recruiting interview automation).
  • Days 31–60: Run pilots on the top 2 use cases with willing department heads. Define success criteria up front (output per FTE, cycle time, quality). Frame as augmentation, not headcount substitution.
  • Days 61–90: Publish an AI productivity roadmap by function, with sequencing and an estimated capacity-creation profile. Embed measurement into Tracker 4 so the lift is visible.

Workstream 6 · Global BD/Sales quality (Q5.1)

  • Days 1–30: Diagnose the current state across the 30-person distributed BD/Sales team — knowledge, process, and quality layers. Sample call review against a draft rubric. Compliance per region in scope from day one.
  • Days 31–60: Stand up the v1 single-source-of-truth library and CRM/pipeline-stage standardisation. Confirm regional pod leads as quality gatekeepers. AI-assisted call analysis tooling selected. Bootcamp + 90-day certification curriculum drafted.
  • Days 61–90: Sampled call coaching cadence piloted with one region against the rubric. Compliance sign-off process for regional deviations agreed with Legal/Compliance. Inclusion rituals piloted in one region.

Milestones

  • End of month 1: Listening complete; baseline pulled; data inventory done; trust banked.
  • End of month 2: 5-lens diagnostic applied across departments; tracker design signed off; 2 AI pilots running.
  • End of month 3: Staffing & structure diagnostic delivered to COO; v1 data layer live; trackers 1–4 in pilot; AI roadmap published; BD/Sales quality system v1 piloted in one region.

A note on this document

This is a working draft of how I would operate, based on the case questions and my current, limited knowledge of BTSE. The charts and visualizations are illustrative and may look different in BTSE's actual context.