Domiciliary care provider — bid audit
23-point quality score improvement in 48 hours with evaluator-style audit and targeted fixes.
The starting point
An established domiciliary care provider with 12 years' bidding experience had drafted a major tender response but suspected the quality wasn't competitive. With 5 days to the deadline, they needed expert eyes to identify what would lose marks before submission.
- 15,000-word draft response with 8 quality questions completed
- Self-assessed at 75% but suspected evaluator scoring would be lower
- Competing against 12 providers for 3 contract awards
- Previous tender scored 71% and missed shortlist — couldn't afford repeat
- Limited time for wholesale rewrite — needed surgical fixes only
What we delivered
Comprehensive 48-hour audit identifying 23 scoring gaps, 4 compliance risks, and 6 high-priority rewrites — resulting in 91% final quality score and contract award.
- Evaluator-style scoring simulation: rated each question against criteria
- Compliance risk report: 4 critical gaps including insurance validity
- Priority matrix: 6 questions needing rewrite vs 2 needing polish only
- Scoring uplift guidance: specific changes to add 15-25 points per question
- Evidence gap analysis: 8 missing proof points that evaluators expected
- 48-hour delivery with daily check-ins and final submission checklist
Overview
The story
A short narrative of what the buyer needed, how we structured the response, and what we delivered.
The challenge
This wasn’t an inexperienced bidder. The managing director had written tenders for 12 years and led his in-house team through dozens of submissions. But their last major tender had scored 71% and missed the shortlist. With a £1.5M contract on the line and 5 days to deadline, they couldn’t afford a repeat.
The draft was substantial: 15,000 words across 8 quality questions, spanning six weighted sections:
- Service delivery model (20% of score)
- Workforce and recruitment (15%)
- Quality assurance (15%)
- Safeguarding (15%)
- Mobilisation (20%)
- Social value (15%)
The team thought it was strong. They’d thought that before.
They needed external validation — not a cheerleader, but an evaluator. Someone who would score it honestly and show them what needed fixing in the time available.
Our approach: The evaluator simulation
Hour 0-6: Blind scoring exercise
We approached the draft cold, as an evaluation panel would. Using the published scoring criteria and weightings, we rated each question 1-5 across:
- Understanding: Do they grasp the requirement?
- Approach: Is the solution credible and appropriate?
- Evidence: Is proof provided and convincing?
- Outcome focus: Are benefits measurable?
Initial scores were sobering:
- Service delivery: 3.2/5 (64%)
- Workforce: 3.8/5 (76%)
- Quality: 3.5/5 (70%)
- Safeguarding: 4.1/5 (82%)
- Mobilisation: 2.9/5 (58%) — major concern
- Social value: 3.6/5 (72%)
Projected total: 68% — below the 75% threshold typically needed for shortlisting.
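The mechanics of that projection are simple to sketch. Below is an illustrative calculation using the section weights and 1-5 ratings quoted above; the variable and function names are ours, and the six headline sections alone project to roughly 69%, with the small difference from the 68% headline figure presumably down to question-level detail not reproduced here.

```python
# Illustrative sketch of the evaluator-style weighted scoring used in the
# blind-scoring exercise. Weights and 1-5 ratings are the audit figures above.

SECTIONS = {
    # section name: (tender weighting, panel rating out of 5)
    "Service delivery": (0.20, 3.2),
    "Workforce":        (0.15, 3.8),
    "Quality":          (0.15, 3.5),
    "Safeguarding":     (0.15, 4.1),
    "Mobilisation":     (0.20, 2.9),
    "Social value":     (0.15, 3.6),
}

def weighted_score(sections):
    """Convert each 1-5 rating to a percentage and apply the tender weighting."""
    return sum(weight * (rating / 5) * 100 for weight, rating in sections.values())

if __name__ == "__main__":
    for name, (weight, rating) in SECTIONS.items():
        print(f"{name}: {rating}/5 ({rating / 5:.0%}), weight {weight:.0%}")
    print(f"Projected total: {weighted_score(SECTIONS):.1f}%")
```

The same spreadsheet-style arithmetic also shows why mobilisation was flagged first: a heavily weighted section scoring 2.9/5 drags the total down more than any other single gap.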
Hour 6-18: Gap analysis and root cause
We dug deeper into the scoring gaps. The problems weren’t capability — they were presentation.
Mobilisation (58%): Generic timeline, no TUPE detail, weak risk assessment. Written like an operations plan, not a transition strategy.
Service delivery (64%): Operationally accurate but evaluator-unfriendly. Outcomes were buried in paragraph four. Internal jargon that commissioners wouldn’t recognise.
Workforce (76%): Good retention data but weak on recruitment crisis response. Missing evidence on hard-to-fill roles.
Quality (70%): Processes described but not proven. No comparative data against benchmarks.
The good news: most issues were fixable in 48 hours. We identified 6 questions needing significant rewrite, 2 needing targeted improvements, and none requiring a fresh start.
Hour 18-36: The audit report
We delivered a comprehensive audit pack:
Section 1: Compliance risks (Critical)
- Insurance certificate expiry: 2 weeks before contract start
- Missing DBS policy attachment referenced in narrative
- Financial evidence: accounts filed but not audited — whether unaudited accounts were acceptable needed confirming with the buyer
- Mandatory declaration: signatory not listed on Companies House as director
Section 2: Scoring analysis (By question)
Each of the 8 questions rated with:
- Current projected score
- Target achievable score
- Specific gaps preventing full marks
- Rewriting guidance (word count, structure, evidence needed)
Section 3: Priority matrix
Prioritised by impact × effort:
- High impact, low effort (Fix first): Mobilisation structure, outcome signposting, evidence placement
- High impact, high effort (Consider carefully): Service delivery rewrite (needed but time-intensive)
- Low impact, low effort (Quick wins): Jargon removal, table formatting, cross-references
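The triage logic behind that matrix can be sketched in a few lines. The impact and effort scores and the threshold below are illustrative assumptions, not the audit's actual values; the task names are drawn from the matrix above.

```python
# Minimal sketch of impact × effort triage, as used in the priority matrix.
# Scores (1-5) and the quadrant threshold are illustrative, not audit data.

fixes = [
    {"task": "Mobilisation structure",    "impact": 5, "effort": 2},
    {"task": "Outcome signposting",       "impact": 4, "effort": 1},
    {"task": "Service delivery rewrite",  "impact": 5, "effort": 5},
    {"task": "Jargon removal",            "impact": 2, "effort": 1},
]

def quadrant(fix, threshold=3):
    """Classify a fix into the four impact × effort quadrants."""
    hi_impact = fix["impact"] >= threshold
    hi_effort = fix["effort"] >= threshold
    if hi_impact and not hi_effort:
        return "Fix first"
    if hi_impact and hi_effort:
        return "Consider carefully"
    if not hi_impact and not hi_effort:
        return "Quick win"
    return "Deprioritise"

# Work the list in payoff-per-unit-effort order, as an auditor would.
for fix in sorted(fixes, key=lambda f: f["impact"] / f["effort"], reverse=True):
    print(f'{fix["task"]}: {quadrant(fix)}')
```

With 5 days on the clock, ordering by impact-per-effort is what makes a surgical fix plan possible: the high-impact, low-effort items absorb the first day, and the one high-effort rewrite is scheduled only once those are banked.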
Section 4: Evidence checklist
8 missing proof points that would add an estimated 10-15 points, including:
- Staff retention trend (12 months, not just current figure)
- Complaint resolution times vs target
- Recruitment time-to-hire for hard roles
- Training completion rates with verification
Hour 36-48: Final validation
The client implemented fixes over 36 hours, with daily check-ins. We reviewed the revised mobilisation plan and service delivery response and confirmed scoring improvements. The final submission checklist validated all compliance points.
The results
Score by section:
- Mobilisation: 58% → 94% (+36 points)
- Service delivery: 64% → 88% (+24 points)
- Workforce: 76% → 92% (+16 points)
- Quality: 70% → 89% (+19 points)
- Safeguarding: 82% → 93% (+11 points)
- Social value: 72% → 90% (+18 points)
Contract outcome: Awarded 1 of 3 available contracts. The managing director later learned they placed 2nd overall — the audit moved them from also-ran to winner.
Post-award feedback: The evaluation panel noted the “exceptional mobilisation planning” and “strong evidence base” — exactly the areas the audit had targeted.
What the audit revealed
Three things changed how this provider approaches tenders:
- The curse of knowledge: They knew their service so well they forgot to explain it. What was obvious internally was opaque to evaluators. The audit forced translation from operational language into something an outsider could score.
- Evidence positioning: They had the evidence but buried it. Case studies sat in appendices rather than answers. KPIs appeared in tables rather than narrative. Where evidence lives is as important as having it.
- Scoring criteria blindness: They'd written what they wanted to say, not what evaluators needed to score. Every answer has to map to criteria explicitly. The audit built that discipline into their process.
The ongoing impact
This wasn’t a one-time fix. The audit became their standard approach:
- Template library: Rewrite guidance from this audit now shapes their standard response templates
- Team training: The managing director used the findings to train his team on evaluator perspective
- Regular partnership: They now commission audits for every tender above £500K
The ROI was immediate: contract secured worth £1.5M over 3 years. The audit cost was recovered in the first month of delivery.
Could this work for you?
If you’ve drafted a response but aren’t confident in it, you’re not alone. Even experienced bidders have blind spots. The audit isn’t about finding fault — it’s about finding points.
The pattern is consistent: most drafts score 15-25% below their potential because of fixable presentation issues. An audit catches those before they cost you the contract.
Have a draft response that needs validation?
We’ll confirm timeline, deliver a scorer’s-eye review, and give you a prioritised action plan to maximise your marks.
Client note
Testimonial
"I've written tenders for 12 years and thought I knew the game. The bid audit showed me gaps in our evidence presentation I never spotted. Quick turnaround, practical feedback, and we jumped from 68% to 91% on quality scoring. Now we use them for every major bid."
Send the tender pack
Share the tender pack (or link) and deadline — we’ll confirm fit, timelines, and recommend the most cost-effective scope.