Headway Housing Ltd — £380K supported living tender
Had 11 days, a strong service, and a draft that read like an internal report. Scored 92% on quality.
At a glance
Summary
Where the client started and what we delivered — at a glance.
The Starting Point
Headway Housing Ltd had delivered supported living services for six years with a Good CQC rating but had never scored above 70% on a quality submission. This £380K contract was 11 days from deadline.
- Responses read like internal process documents, not answers to scored questions
- Safeguarding and mobilisation sections didn't match the structure evaluators mark against
- Strong evidence existed but was buried inside paragraphs, not mapped to criteria
- Three previous submissions had scored between 58% and 68% on quality
What We Delivered
We rebuilt the entire response from the evaluation model outward, mapping every answer to what evaluators actually score rather than what the client wanted to say about their service.
- Scoring map built from the 23 quality questions before any drafting started
- Safeguarding section rewritten around real incidents and policy application, not policy existence
- Mobilisation restructured as a week-by-week timeline against the brief's specific requirements
- Two compliance failures in the original draft caught and resolved via our 47-point checklist
Background
The challenge
What the client was dealing with before we got involved.
Headway Housing Ltd had been running supported living services across the Midlands for six years. CQC rating: Good. Staff retention above sector average. Referral pipeline strong. And none of it was showing up in their tender scores. Three previous submissions had scored between 58% and 68% on quality. Close enough to be considered, never close enough to win.
The problem wasn't the service. It was the translation. What Headway was submitting was accurate. Detailed, thorough, genuinely reflective of how they operated. But it was written for someone who already understood their service, not for an evaluator with 50 bids to score in a week. The strongest evidence was buried inside paragraphs that didn't match the questions being asked. A Good CQC rating appeared in the supporting notes. The incident management data that would have scored well was in the appendix. The evaluator would have had to dig for proof that should have been in the first two sentences.
Method
Our approach
How we structured the work to address the brief.
We ignored the draft entirely and started with the evaluation model. Quality was weighted at 60% across 23 scored questions. That's where the marks were, so that's where we started. For each question we mapped what a high-scoring answer contains, in what order, and how it evidences rather than describes. Then we rebuilt every response against that map.
The safeguarding section was the biggest lift. The original answer led with the name of the policy, the date it was last reviewed, and who the designated safeguarding lead was. None of that is what evaluators score. They score application: not whether a policy exists, but whether staff know how to use it when something goes wrong. We restructured the section around three real scenarios. What happened, how the policy guided the response, what changed as a result. The mobilisation answer had a different problem. It was a wall of text describing the process accurately but giving the evaluator nothing to hold against a timeline. We broke it into a week-by-week delivery schedule mapped directly to the contract start date and the three specific mobilisation requirements in the brief.
After drafting, every response went through our 47-point compliance checklist. Word counts, attachment requirements, formatting rules, cross-references. Two responses in the original draft would have triggered automatic compliance failures. One exceeded the word limit. The other referenced a supporting document that wasn't included in the pack. Both caught before submission.
Outcome
The result
What changed after the engagement.
They scored 92% on quality. That's a 24-point jump from their previous best, completed in 11 days from first call to submission. They won the contract. The annual value is £380K.
The evidence library we put together during the engagement, structured and mapped to common evaluation themes, has already been reused across two further bids. That's the part that compounds. The work done for one submission doesn't disappear when the deadline passes.
"We'd spent years delivering a genuinely good service and couldn't understand why our tender scores didn't reflect it. Working with JC Tenders showed us the problem wasn't what we were delivering. It was how we were describing it. We went from mid-60s to 92% on quality and won the contract. We know now exactly what evaluators are looking for, and we're not going back to submitting without that lens."
In a similar position?
If you're delivering a strong service but your tender scores don't reflect it, the problem is almost always structure, not substance. Send us the tender pack and we'll tell you exactly where the marks are being lost.
Send the tender pack
Share the tender pack (or link) and deadline — we’ll confirm fit, timelines, and recommend the most cost-effective scope.