
What do points make? Confusion, mostly

One of the most baffling aspects of tendering is the points and weighting system. I am still on my epic search for a clear and simple written guide that explains how weighting and points relate to each other and how they affect the bidder – until then, here’s my own very much condensed overview, which gives some insight but could probably not be considered the ultimate guide!

First of all, there is no definitive way of scoring a tender. There are rules that contracting authorities must follow depending on the type of tender, but even then there are options within those rules that allow for different methods of scoring and evaluation. Just as no tender is exactly the same as another, scoring mechanisms differ to reflect this. Even where tenders seem, on the surface, to be identical, there will inevitably be differences, major or minor. The nature of those differences may then dictate differences in weighting and scoring, even from the same contracting authority. For example:

Lowest cost vs. ‘Most Economically Advantageous Tender’ (MEAT)

Scoring on lowest cost is pretty self-explanatory: if you can do the job in exactly the same way as everyone else and other factors aren’t a consideration, then scoring simply on lowest cost seems like a no-brainer.

That said, even with the simplest supply of products, contracting authorities often want reassurance in areas other than item cost alone.

‘MEAT’ means considering other markers of ‘best value for money’ in addition to cost, such as quality, capacity, responsiveness, health and safety, innovation, ‘added value’, project management and many, many more. What those markers are in any given tender depends on the nature of the product or service and on the importance the tendering organisation gives to each area.

Points vs. weighting

Adding to the confusion is the dual evaluation method of points vs. weighting.

Points:

The majority of tenders that land on my desk give a point-scoring mechanism of 0 to 5 (although some are 0-10, some are 0-100). It’s easy to assume that 0 is rubbish, 5 is good and 3 is probably OK. Don’t do that. It’s essential to understand what each of those scores actually relates to, as the criteria for scoring higher marks are not always the same from bid to bid. Here are two examples from recent bids I’ve worked on:

Table 1

Score | Criteria
5 | Excellent answer which meets all of the requirements and provides all of the required detail.
4 | Good answer which meets all of the requirements but lacks some minor detail.
3 | Satisfactory answer, which meets the requirements in many aspects, but fails to provide sufficient detail in some areas.
2 | Limited answer which satisfies some aspects of the requirements, but fails to meet the specification as a whole.
1 | Poor answer which significantly fails to meet the requirements.
0 | The response is not considered relevant. The response is unconvincing, flawed or otherwise unacceptable. Response fails to demonstrate an understanding of the requirement.

Table 2

Score | Criteria
5 | Superior response. Exceeds the specified requirements and is well substantiated.
4 | Full compliance. Good response to requirements and appropriately substantiated.
3 | Satisfactory. The response is acceptable with reservations.
2 | Marginal compliance. Evidence is deficient in certain areas.
1 | Poor response to question. Answered incorrectly or was inappropriate.
0 | Non-compliant / unacceptable, or no response.

At first glance, there doesn’t seem to be much difference between these two scoring tables. However, when you look more closely at the wording of each of the criteria it becomes clear that, in Table 1, to score a ‘5’ you need only meet the requirements and provide the requested detail. In Table 2, to score a ‘5’ you need to exceed the requirements, e.g. by stating how you will provide added value over and above the minimum obligations.

So, in one tender response, you could state how you will meet the requirements, back that up with some evidence, and score a ‘5’. Give the exact same answer on another tender and you would only score a ‘3’, possibly a ‘4’.

If your competitors have provided that additional information then they will score more highly than you.

Here is my four-step process for scoring the highest possible marks:

  1. Read the scoring criteria carefully before answering any questions
  2. When writing your responses, aim for NO LESS than the highest score
  3. Re-read your responses (or have someone else read them) and honestly and objectively evaluate whether the answers do meet those high scoring criteria
  4. Rewrite your response to include any additional information that will improve your score, against the criteria given

What happens if you and your competitors all score 5s on the questionnaire sections of a MEAT-driven tender? In those circumstances, it usually comes down to price, or, more rarely, to some kind of tie-breaker.

Weighting

Weighting is an entirely different bucket of frogs and adds extra layers of complexity, which I’m sure is as much fun to figure out for the people publishing the tender as it is for the people responding to it, i.e. none at all. Annoying as it is, it does serve a purpose: to focus the tendering exercise on finding suppliers that reflect the contracting authority’s own priorities.

Again, it all depends on the nature of the product or service being procured, the sector, the type of contracting authority, whether the moon is in Aquarius and various other factors.

If you routinely score the highest points across the board, then weighting is not going to be much of an issue. However, it’s rare that any supplier is perfect in every way, and so contracting authorities must decide what’s most important to them for the purposes of each tender.

Two examples:

Table A

Criteria | Weighting
1. Ability to provide a service that is flexible in planning and delivery, with options that can be tailored to specific needs. | 30%
2. Ability to deliver a service within the time frame and to meet the needs of the project. | 30%
3. Ability to be flexible and responsive to changing needs throughout the contract. | 30%
4. Cost and value for money in relation to the above criteria. | 10%

Table B

Main criteria | Weight | Element weight | Scoring elements
Value for Money | 40% | 35% | Guide pricing and actual quotes
| | 5% | Discounts for value/length of contracts, etc.
Quality of Service Provision | 60% | 5% | Service Level Agreement
| | 10% | Human Resources
| | 10% | Delivery capacity, contract support and responsiveness
| | 5% | Customer complaints
| | 5% (pass/fail) | Safeguarding children and vulnerable adults
| | 10% | Implementation plan
| | 15% | Compliance with specification

In Table A, it’s clear that the authority’s focus is on flexibility and responsiveness. Equal importance is given to two elements that deal with tailoring the services provided and to one element that deals with delivery within timescales. Considerably less weight is given to cost and value for money, which shows that, while cost is still important, the authority will probably consider paying more for a service that meets those higher-weighted needs more effectively.

In Table B, it’s more complex. ‘Quality of service provision’ is the clear focus, but even within that headline weighting there are elements considered more vital than others, and these are given their own weightings-within-weightings. Note that the element weights sum to their headline figures: 35% + 5% make up the 40% for value for money, and the seven quality elements together make up the 60%.

How do points and weighting work together?

Assuming that neither you nor your competitors are perfect in every conceivable way, but that each of you has areas of strength reflected in the points you score on your bid, the contracting authority needs to be sure that those areas of strength match up with its areas of highest importance.

So, if you score highly on your responses to how you recruit, induct and manage staff, but the contracting authority is focused on the quality of your widgets, it’s likely that the weighting for staff resources will be low, while the weighting for quality assurance will be high.

I won’t give any specific formulas for how points and weightings are then calculated together, as they too change from tender to tender. But it should be fairly obvious that if you have weaknesses in a highly weighted area, you should concentrate on mitigating those within your answer to gain the highest points possible, as the weighting will then amplify your ultimate score. Likewise, if you are awesome at a low-weighted activity, answer as much as you need to gain those high points, then leave it alone and focus on something more important.
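
To make that concrete, here’s one purely illustrative way the maths often works (a common normalise-and-weight approach, not the formula from any particular tender): each question’s points are divided by the maximum available, multiplied by that question’s weighting, and the results summed. A minimal sketch in Python, using the Table A weightings and some invented scores:

    # Illustrative only: a common normalise-and-weight calculation.
    # Weightings mirror Table A; the points scored are invented.

    MAX_POINTS = 5  # questions scored 0-5, as in Tables 1 and 2

    # (weighting as a fraction, points scored) for each criterion
    responses = [
        (0.30, 5),  # 1. flexible, tailored service
        (0.30, 4),  # 2. delivery within the time frame
        (0.30, 3),  # 3. responsiveness to changing needs
        (0.10, 5),  # 4. cost and value for money
    ]

    total = sum(weight * (points / MAX_POINTS) for weight, points in responses)
    print(f"Weighted score: {total:.0%}")  # prints: Weighted score: 82%

Under this hypothetical formula, scoring a ‘3’ rather than a ‘5’ on the 30%-weighted criterion 3 costs twelve percentage points of the final score; the same slip on the 10%-weighted cost question would cost only four.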

Yet more strangeness

Looking back at Table B, even more intriguing is the fact that while ‘safeguarding’ only attracts a lowish weighting of 5%, that 5% will only apply to those bidders who have met or exceeded the pass/fail points baseline for that question or section. If this score is not met, the bid is rejected entirely. So in this case it’s weighted low not because it’s unimportant, but because if you can’t meet the minimum score you won’t get any further anyway, and weighting it highly would therefore be largely irrelevant. The fact that it’s weighted at all – rather than being a simple pass/fail – indicates that the contracting authority will allow (and probably expects) a basic but compliant response, but will value a compliant and comprehensive response more highly.
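
Purely as a sketch of those mechanics (the pass mark below is a hypothetical value, not taken from the real tender), the gate-then-weight logic might look like this:

    # Illustrative sketch of a pass/fail gate with a residual 5% weighting.
    # PASS_MARK is a hypothetical value invented for this example.

    MAX_POINTS = 5
    PASS_MARK = 3  # minimum points needed on the safeguarding question

    def score_safeguarding(points: int, weight: float = 0.05) -> float:
        """Reject the bid below the pass mark; otherwise apply the weighting."""
        if points < PASS_MARK:
            raise ValueError("Bid rejected: safeguarding below the pass mark")
        return weight * (points / MAX_POINTS)

    print(f"{score_safeguarding(3):.1%}")  # basic but compliant -> 3.0% of the total
    print(f"{score_safeguarding(5):.1%}")  # comprehensive       -> the full 5.0%

A score of 2 here wouldn’t just earn a low number; it would trigger the rejection, which is exactly the ‘you won’t get any further anyway’ behaviour described above.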

In addition, it’s not always the case that all your responses, once the weighting formula is applied, will be translated into an overall aggregated score. Some tenders require you to reach a minimum aggregated pass mark for one or more entire sections (almost a pre-qualification questionnaire (PQQ) within an invitation to tender (ITT)). Some require you to reach a minimum score on pricing before the remainder of the bid is evaluated; others require the exact opposite.
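
For the first of those, here’s a minimal sketch under the same caveats (the 60% threshold is invented for illustration):

    # Illustrative sketch of a minimum aggregated pass mark for a section.
    # The threshold and the scores are invented for the example.

    SECTION_PASS_MARK = 0.60  # hypothetical: 60% of the section's marks required

    def check_section(scores: list[int], max_points: int = 5) -> None:
        """Reject the bid if the section's aggregate falls below the pass mark."""
        aggregate = sum(scores) / (max_points * len(scores))
        if aggregate < SECTION_PASS_MARK:
            raise ValueError(f"Bid rejected: section scored {aggregate:.0%}")

    check_section([4, 3, 5, 2])  # aggregate 70% - passes, so evaluation continues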

Weighty stuff

There are so many different ways to calculate the end score that listing them here just isn’t realistic. Because of this, it’s vital that you read through the tender guidance and understand where the contracting authority’s priorities lie for each individual tender.

That’s not fun (even for me, and I do this for a living), so the best way to stop yourself from waking up in a cold sweat during the night, with visions of axe-wielding giants booming ‘thou shalt not pass’, is to focus on the points above all else, with an underlying understanding of the importance that the contracting authority is giving to certain areas.

Read every question, understand and respond to all of the elements within each question, then read your answer back objectively and see where it falls down and where it can be improved.

Now I’m going to have a lie down.

– Lyndsey

1 Comment

Mark Trotter
Hi Lyndsey,

Nice approach to explaining the evaluation and marking process. As both a bid writer and a buyer, it is important to understand that the scoring criteria are designed to suit two very different needs. Therefore, we have to expect complexity, and many of our clients will struggle to understand.

However, a high-scoring tender is only part of winning the contract. Two suppliers can secure the same score, so how will a buyer decide? The answer is also qualitative, so we have the structure and then we have to allow for the subjective marking of the buyers. This is often offset by weighting certain elements.

Now, the workshop on scoring is often the most enlightening: get ten suppliers in a room and ask them to score two bids for a specific service. That is when they truly understand the system. It is great fun and can be an in-house workshop.

I enjoyed the read. With the development of procurement, few buyers are using a PQQ any longer.

Regards, Mark
