Writing & Scoring

How to write to evaluation criteria: transform average responses into high-scoring bids

Evaluators score against criteria, not effort. Learn the structure that turns your expertise into winning marks.

The scoring-first structure

When a question is scored, evaluators need to find evidence fast.

Use this pattern:

  1. Answer in one line (what you will do)
  2. Method (how you will do it, step-by-step)
  3. Proof (policies, KPIs, audits, case studies)
  4. Controls (governance, QA, escalation)
  5. Outcome (what improves, how you measure it)

Understanding the evaluation criteria

Most bidders miss this: evaluators don’t score your expertise. They score how well your response demonstrates the criteria.

You can be the best provider in your region with 20 years of experience. If your response doesn’t explicitly map to the evaluation criteria, you’ll score 60% while a weaker competitor scores 85% because they wrote to the scoring framework.

This guide teaches you to write for evaluators, not for yourself.


Decoding the scoring framework

Before writing a single word, decode the evaluation criteria:

Where to find criteria

  • ITT/ITQ Section 3: Usually titled “Evaluation Criteria” or “Award Criteria”
  • Attachment B: Sometimes separate scoring matrix document
  • Clarification responses: Buyers sometimes refine criteria post-clarification

What to extract

For each quality question, note:

  1. Total marks available (usually 5, 10, 15, or 20)
  2. Scoring categories (Excellent/Good/Acceptable/Poor/Unacceptable)
  3. Specific requirements the buyer lists
  4. Weighting of question within overall score

Example criteria breakdown

Question: “Describe your approach to service delivery” (20 marks)

Scoring framework:

  • Excellent (16-20): Comprehensive approach with innovative elements, clear outcome measurement, strong evidence base, and demonstrated continuous improvement culture
  • Good (12-15): Detailed approach with clear methodology, some outcome focus, adequate evidence
  • Acceptable (8-11): Basic approach described, limited evidence, generic response
  • Poor (4-7): Incomplete or unclear approach, weak evidence
  • Unacceptable (0-3): Fails to address requirements or no evidence

Your target

Write for “Excellent.” Not “Good.” Not “Acceptable.”

The 4-part response structure

High-scoring answers follow a consistent pattern. Use this structure for every quality question:

Part 1: Understanding (15% of word count)

Demonstrate you understand:

  • The buyer’s objectives
  • The service user group needs
  • Local context and challenges
  • Regulatory/legislative requirements

Example (Supported Living)

“We understand this supported living service must balance independence with safeguarding for adults with complex needs, including learning disabilities and autism. The local authority’s priority is person-centred outcomes, measured through reduced support hours, increased community participation, and progression toward independence markers. We recognise the safeguarding challenges in this cohort, including capacity assessments, MCA compliance, and least-restrictive practice requirements.”

Why this scores: Shows you’ve read the specification and grasp the nuance. Evaluators can tick “understands requirements.”

Part 2: Approach/Methodology (40% of word count)

This is your core response. Structure it:

a) Overview statement. One sentence summarising your distinctive approach.

b) Key components (3-5 elements). Break your approach into digestible sections:

  • Assessment and referral process
  • Service delivery model
  • Staffing and supervision
  • Quality assurance
  • Continuous improvement

c) Specifics, not generics. Replace “We provide high-quality care” with:

Strong example

“Our competency framework requires all support staff to complete 5-day induction, quarterly safeguarding refresher, and annual MCA training. Staff are observed monthly using our competency checklist covering communication, risk assessment, and person-centred planning.”

d) Differentiators. What makes you different from 10 other competent providers? Be specific:

  • “We assign consistent staff teams (average 2-year tenure with individual service users)”
  • “Our digital care planning allows real-time outcome tracking”
  • “We involve service users in staff recruitment through interview panels”

Part 3: Evidence (30% of word count)

Every claim needs proof. Structure evidence as:

Quantitative:

  • “87% staff retention (vs 68% sector average)”
  • “94% service user satisfaction (2024 survey, n=156)”
  • “12 service users reduced support hours in past 18 months”

Qualitative:

  • Case studies (anonymised): “J, 24, autism, moved from 35hrs/week to 15hrs with increased community access”
  • Testimonials: “The consistency of staff has transformed my confidence” — Service user, 2024
  • Inspection feedback: CQC “Outstanding” rating in the “Caring” and “Responsive” domains

Process evidence:

  • “Our incident learning process requires root cause analysis within 48 hours, with quarterly trend review by registered manager”

Part 4: Outcomes and Assurance (15% of word count)

Close with:

  • How you’ll measure success — KPIs, review frequency
  • How you’ll assure quality — audits, feedback mechanisms
  • How you’ll improve — continuous development approach

Example

“Success is measured through monthly KPI review: staff retention, service user outcomes (independence markers), safeguarding incidents, and complaints. Quarterly service user and family feedback informs service adjustments. Annual external audit against CQC standards. Continuous improvement is embedded through monthly team learning sessions and annual policy review.”


Common mistakes

  • Writing a marketing paragraph instead of a method statement
  • Missing the buyer’s terminology (evaluators can’t match it to criteria)
  • No proof points (no KPIs, audits, case studies)
  • The capability dump: Listing everything you can do regardless of relevance. Filter every sentence through: “Does this help score the criteria?” If not, delete it.
  • The jargon trap: “Our multi-disciplinary, outcomes-focused, person-centred provision leverages best-practice frameworks…” Write for intelligent non-specialists instead.
  • The passive voice problem: “Quality assurance is conducted through regular monitoring processes…” Use active voice: “We conduct monthly quality audits using our 25-point checklist.”
  • The unproven claim: “We are the leading provider in the region.” Prove it: “We support 340 service users across 28 settings, the largest provider footprint in [County].”
  • The copy-paste generic: Text recycled from previous bids without localisation. Every bid needs local context.

Word count strategy

Allocation by criteria weighting

If a question is worth 20% of the total quality score, allocate 20% of your writing effort to it. Simple, but often ignored.

Example tender structure:

  • Service delivery: 30% → 900 words (of 3,000 total)
  • Workforce: 20% → 600 words
  • Quality: 20% → 600 words
  • Safeguarding: 15% → 450 words
  • Mobilisation: 15% → 450 words
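
If you want to sanity-check that split, a small script like the sketch below can compute the proportional word budget per question. This is illustrative only; the 3,000-word total and the weightings are the example figures above, not recommended values.

# Illustrative sketch: split a total word budget across questions
# in proportion to their weighting. Figures are the example above.

def allocate_words(total_words, weights):
    """Return a word budget per question, proportional to its weighting."""
    return {question: round(total_words * weight) for question, weight in weights.items()}

example_weights = {
    "Service delivery": 0.30,
    "Workforce": 0.20,
    "Quality": 0.20,
    "Safeguarding": 0.15,
    "Mobilisation": 0.15,
}

for question, words in allocate_words(3000, example_weights).items():
    print(f"{question}: {words} words")
# Prints: Service delivery: 900, Workforce: 600, Quality: 600,
# Safeguarding: 450, Mobilisation: 450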

Maximising marks per word

Low-value words (avoid):

  • “We are committed to…” (prove it, don’t claim it)
  • “We have extensive experience…” (demonstrate with years, contracts, outcomes)
  • Generic adjectives: “high-quality,” “excellent,” “outstanding” (replace with specifics)

High-value words (use liberally):

  • Numbers: percentages, dates, quantities
  • Processes: “We conduct…”, “Our system requires…”
  • Evidence references: “As shown in Appendix 3…”
  • Outcome language: “Resulting in…”, “This achieved…”

Practical exercise: Transform a weak response

Before (would score 9/20)

“We provide high-quality supported living services. Our staff are trained and experienced. We focus on person-centred care and positive outcomes. Safeguarding is a priority. We work closely with families and commissioners.”

Problems: No specifics, no evidence, generic claims, no structure.

After (would score 17/20)

Understanding: We recognise this service supports adults with autism and learning disabilities to live independently, with safeguarding balanced against least-restrictive practice. Success means measurable independence gains and community participation.

Approach: Our model assigns consistent staff teams to build relationships and reduce anxiety. Each service user has a named key worker and consistent team of 3 support staff (average 2.1 years’ tenure). We use visual care plans and structured routines to support communication and predictability.

Evidence: Our approach delivers results: 87% staff retention (vs 68% sector average), 94% service user satisfaction (2024 survey, n=45), and 12 service users reduced support hours in past 18 months. Case study: K, 28, autism, moved from 24/7 support to 8-hour days with independent cooking and travel skills.

Quality assurance: Monthly KPI review tracks outcomes, safeguarding, complaints, and staff competency. Quarterly service user council shapes service development. CQC ‘Good’ rating (2023) with ‘Outstanding’ for caring.


Sector-specific criteria tips

Supported Living

Common criteria: Person-centred outcomes, safeguarding, MCA compliance, community integration

Writing tip: Use specific independence markers: “travel training,” “money management,” “meal preparation with supervision.”

Domiciliary Care

Common criteria: Continuity of care, recruitment resilience, visit punctuality, medication management

Writing tip: Emphasise logistics: rota systems, travel time calculations, backup staff ratios, electronic call monitoring.

Patient Transport

Common criteria: Vehicle compliance, safeguarding vulnerable passengers, punctuality, specialist equipment

Writing tip: Detail fleet specifications, driver training (including dementia awareness), real-time tracking, contingency planning.

Children’s Services

Common criteria: Safeguarding, educational outcomes, placement stability, contact arrangements

Writing tip: Balance care with education/training focus. Show understanding of leaving care transitions.


Care sector scoring: what separates 5/5 from 2/5

In care tenders, evaluators look for evidence that goes beyond generic competence. Here’s what different scores look like for a typical safeguarding method statement:

2/5 — Poor

“We take safeguarding very seriously. All staff receive safeguarding training and we have a comprehensive safeguarding policy. We work closely with local authorities to ensure service users are protected.”

Why it scores 2/5: No evidence, no specifics, no process detail. Every bidder claims to take safeguarding seriously.

5/5 — Excellent

Culture: Safeguarding is embedded through monthly team briefings, quarterly scenario-based training, and a speak-up culture supported by anonymous reporting. 100% staff current on Level 2 safeguarding (last audit: January 2026).

Process: All concerns escalated within 2 hours to our Designated Safeguarding Lead, with CQC notification within 24 hours for notifiable events. Root cause analysis completed within 5 working days.

Evidence: In the past 12 months, 3 safeguarding concerns raised by staff (demonstrating a reporting culture), all investigated and resolved within 10 working days. One led to a revised lone working procedure, reducing risk for 12 service users.

Partnerships: We attend [Local Authority] Safeguarding Adults Board quarterly briefings and have a direct referral pathway to the MASH team.

Why it scores 5/5: Specific numbers, timelines, and processes. Evidence of learning from incidents. Partnership evidence. CQC regulatory awareness throughout.

Care evaluators weight CQC evidence heavily. Reference your inspection findings, PIR data, and quality statements. Show how your service delivery maps to CQC’s five key questions (Safe, Effective, Caring, Responsive, Well-Led) — this alignment demonstrates regulatory awareness that generic bidders lack.


The evaluator’s perspective

Picture yourself as an evaluator with 50 tenders to assess in two weeks. You have a scoring rubric. You want to:

  1. Find the criteria match quickly — clear structure helps
  2. Verify evidence — specific numbers and examples prove claims
  3. Score confidently — no ambiguity, clear demonstration

Write to help the evaluator give you the marks.


Summary: The criteria-writing rules

  1. Decode first — understand exactly what scores marks
  2. Structure consistently — Understanding → Approach → Evidence → Outcomes
  3. Allocate by weight — spend effort proportionally
  4. Evidence every claim — numbers, dates, examples
  5. Write for evaluators — clarity over cleverness
  6. Localise — show you understand this specific opportunity
  7. Proofread for compliance — word counts, formatting, attachments

Follow these rules and you’ll outscore competitors who write what they want to say instead of what evaluators need to score.


Ready to improve your tender writing?

We build responses around evaluation criteria, with compliance and scoring as standard. Every answer maps explicitly to what evaluators need to award maximum marks.

View our tender writing service

Want a fast, practical steer on your next bid?

Send the tender pack (or link) and deadline — we’ll confirm fit, risks, and recommended scope.