Welcome to my blog for all things related to business quality (processes, systems and ways of working), products and product quality, manufacturing and operations management.

This blog is a mixture of real-world experience, ideas, comments and observations that I hope you'll find interesting.



Control your cosmetics – “because you’re worth it”!

No, I’m not talking about lipstick and blusher despite the tongue-in-cheek introduction. Cosmetic Inspection refers to the quality of the surfaces of products or equipment, especially those surfaces that are visible to the customer.

Nothing is perfect; no material is entirely defect-free (especially if you look hard enough), so what scratches or blemishes or discolouration or badly-formed edges are acceptable to your company and its customers? What defects are unacceptable? How do you decide?

This is something of a minefield for the Quality Inspector and the supplier alike, because you are trying to quantify, and make a go / no-go decision on, something that is highly subjective.

So how do you decide what cosmetic quality is acceptable versus unacceptable? I suggest that you write, then evolve and continuously improve, a Cosmetic Inspection Specification or Standard.

Now, I can’t simply write one for you here and now because (a) I haven’t got the space and (b) I don’t know what products you produce for what markets. What I can do is to give some guidelines about the things that you should think about putting in this document to make it as useful – and as consistently applicable – as possible.


Materials and surface finishes

You will have different requirements for self-coloured plastics as opposed to clear plastic windows or sheet or cast metal. Plated surfaces can have discolouration and can show underlying flaws; paint can have blemishes, runs, different textures and embedded specks.

Different materials and surface finishes can show defects in different ways so it’s unlikely that one simple rule will suffice; be prepared to have different sections of the specification for different materials.


Surface categories

Although no-one likes defects anywhere, surfaces that face the user are usually more critical than bottoms or back panels. You can categorise your surfaces and allow more blemishes on the less important faces whilst imposing higher standards on front panels or display windows or anything frequently seen by the user.

For many types of surface it is common to permit certain specified minor defects so long as the integrity of the coating isn’t breached, e.g. so that you don’t let moisture in which could lead to corrosion.

Time, distance and lighting

Given enough time, light and a magnifying glass you’d be astonished what defects you can find, but that’s hardly a fair test. Will your customers apply the same techniques? (If they will, you should too.)

It is usual to have a standard inspection distance – perhaps 1 metre – specified lighting of a certain illuminance, and a time limit (such as 10 seconds) for finding the blemishes; these can all vary depending on the products, surfaces and materials involved.
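These viewing conditions lend themselves to being written down unambiguously. As a rough sketch, the distance and time figures come from the text above, while the lux level is my own placeholder since lighting levels vary by industry:

```python
# Illustrative inspection-conditions block; the 1 m / 10 s figures come
# from the text, the illuminance value is an assumed placeholder.
INSPECTION_CONDITIONS = {
    "viewing_distance_m": 1.0,   # standard inspection distance
    "illuminance_lux": 1000,     # assumed lighting level - set per industry
    "inspection_time_s": 10,     # time limit for finding blemishes
}

def describe(conditions):
    """Render the conditions as a one-line instruction for the inspector."""
    return (f"Inspect at {conditions['viewing_distance_m']} m under "
            f"{conditions['illuminance_lux']} lux for "
            f"{conditions['inspection_time_s']} s")

print(describe(INSPECTION_CONDITIONS))
```

Writing the conditions down like this means every inspector, and every supplier, applies the same test.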


Defect size and number

Which is worse: a deep, short scratch or a very fine, long one? A patch of noticeably different texture on a surface or a fine scratch across it? You will want to quantify what is acceptable and what is not; how many defects of what size will you permit?
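One way to make such limits operational is a small lookup of per-surface-class rules. The classes, lengths and counts below are invented purely for illustration, not a recommendation:

```python
# Hypothetical go / no-go rule for scratches: the surface classes and
# numeric limits are invented for illustration, not a real standard.

# Maximum scratch length (mm) and defect count allowed per surface class.
LIMITS = {
    "A": {"max_length_mm": 3.0, "max_defects": 1},   # front panels, displays
    "B": {"max_length_mm": 10.0, "max_defects": 3},  # side panels
    "C": {"max_length_mm": 25.0, "max_defects": 5},  # bottoms, back panels
}

def accept(surface_class, scratch_lengths_mm):
    """Return True if the surface passes, given the scratches found on it."""
    limit = LIMITS[surface_class]
    if len(scratch_lengths_mm) > limit["max_defects"]:
        return False
    return all(length <= limit["max_length_mm"]
               for length in scratch_lengths_mm)

print(accept("A", [2.5]))        # one small scratch on a front panel
print(accept("C", [2.5, 8.0]))   # two scratches on a back panel
print(accept("A", [2.5, 1.0]))   # too many defects for a class-A surface
```

The point is not the particular numbers but that, once written down, the decision stops being a matter of individual inspectors' opinion.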

Defect density and orientation

Very fine marks parallel to a panel edge (often caused by tooling) are often less objectionable than scratches at random angles; a group of small scratches concentrated in a limited area can be more noticeable than a few very fine hairline defects distributed over a large panel. You will need to specify what is acceptable for your products, surfaces and customers.

Colour and Texture

These are some of the most difficult parameters to get right because colour and texture often interact and can be greatly affected by lighting conditions. Getting dissimilar materials to match in colour, or plastics to match with metals, can reduce even the best engineers to tears; often the answer is not to try – deliberately use different textures or colours for the different materials. Sometimes, though, you have no choice, and detailed specifications and samples… and tenacity… are required.

Burrs, dents and manufacturing marks

Will you allow weld or solder marks to show? Glue seepage? Dents where spot welding has been done? What about the finish of edges, how much trimming or burrs or grinding marks will you let show and will you allow any sharp edges to be present even on normally hidden surfaces?

What about sink marks, or flow or ejector marks, or voids? What marking or burring of screw or bolt heads, or the surfaces they mate up against, will you allow?

At what point will you decide there have been lapses in workmanship and the defect is unacceptable?


Alignment, gaps and labels

What misalignment will you allow between panels or other mating surfaces? What gaps? How straight must labels be, and will you allow any bubbles under them or curling, overlaps or smudges?

Examples and photographs

In the end, you can follow all these guidelines in great detail and write a really thorough, objective, detailed specification but still end up with ambiguous results. Why not take a leaf out of well-established standards such as the electronics workmanship standard IPC-A-610 and include photographs of what is acceptable and unacceptable to clarify your requirements?

‘Golden samples’, i.e. reference parts that are used to define exactly what you require, can also form a useful part of your standard.

How you define cosmetic defect acceptability depends on your products, your markets and your customers. But, if you haven’t got a written specification already, wouldn’t it be useful to have an agreed cosmetic standard to work to? Of course it will have to change over time, and sometimes you will have to grant concessions or deviations against it, but at least you and your suppliers and customers can all be ‘singing from the same hymn sheet’ and that has to be a good place to start.

The real meaning of MTBF

Ignore some of the more disparaging descriptions of what ‘M.T.B.F.’ means; it actually stands for Mean Time Between Failures (or, for products that can’t be repaired, the term Mean Time To Failure is often used instead). It’s the inverse of the annual failure rate if the failure rate is constant.

And it isn’t quite what you might think.

What is the MTBF of a 25-year-old human? 70 years? 80? No, it’s actually over 800 years, which highlights the difference between lifetime and MTBF. Take a large population of, say, 500,000 and see how many ‘failed’ (died) in a year – e.g. 600 – so the failure rate is 600 per 500,000 ‘people-years’, i.e. 0.12% per year, and the MTBF is the inverse of that, which is about 830 years. An individual won’t last that long, they will wear out long before then (unless they are Doctor Who), but for the population as a whole, in that ‘high reliability’ portion of their lifespan, it holds true – in a typical year you will only have to ‘replace’ 600 of them.

So why measure MTBF? “If you can’t measure it you can’t manage it” – knowing your MTBF allows you to benchmark yourself against competitors and can be a marketing asset; many customers expect you to know and disclose your figures. It also allows you to improve the weak spots in your product range, and is useful feedback for the design process.

There are two main methods for calculating MTBF:

MTBF Prediction is a mathematical model of reliability, based on accumulating the individual MTBFs for the product’s constituent parts and subassemblies, gleaned from manufacturers’ data or libraries of standard figures, and mathematically combining them into an overall figure. MIL-HDBK-217 (often informally called MIL-STD-217) was one of the first methods and is still very well known, although other schemes have since come into common usage such as Telcordia’s SR-332, BT’s HRD5, and others; there are software tools available, from free to megabucks, that help you make the calculations.
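The arithmetic at the heart of these prediction schemes is simple: for a series system the part failure rates add, and the system MTBF is the reciprocal of the total. A sketch with made-up figures (the part names and rates are illustrative, not from any real parts library):

```python
# Parts-count style MTBF prediction sketch: failure rates of series
# parts simply add. All figures below are invented for illustration.
part_failure_rates = {        # failures per million operating hours
    "capacitors": 0.5,
    "resistors": 0.1,
    "connectors": 2.0,
    "ICs": 1.2,
}

total_rate = sum(part_failure_rates.values())   # failures per 1e6 hours
mtbf_hours = 1_000_000 / total_rate

print(f"Total failure rate: {total_rate:.1f} per million hours")
print(f"Predicted MTBF: {mtbf_hours:,.0f} hours")
```

Real schemes such as MIL-HDBK-217 add stress, temperature and environment factors to each part's base rate, but the combination step is essentially this summation.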

These theoretical methods are supposedly based on empirical evidence but have a number of flaws, primarily that (a) the individual parts never actually have the MTBFs you expect of them, and (b) combining them mathematically ignores many of the real-world effects that dominate the MTBF of the whole product. I once designed a large audio mixing desk whose predicted MTBF according to MIL-STD-217 was less than 8 minutes; I’m glad to say that, in practice, it was a great deal longer than that!

MTBF Measurement sounds simple in principle; count how many failures you have in a given period of product usage and some easy maths gives you the MTBF. The Devil is in the detail, though – doing statistically meaningful averages over large volumes and long periods is easy, but what about small populations, and what if you need answers quickly rather than waiting for several years?

In practice you have to make some assumptions, the main one being that your failure rate is constant. Now this may not be true; if we take the classic bathtub reliability curve you may have a long drawn-out leading edge with a high level of infant mortality, or you may have a long trailing edge where products start to fail prematurely after relatively little life in the field, but both of these are problems that you would need to do something about urgently. The norm is to have a fairly long period of constant reliability – bumping along the bottom of the bathtub – and in this zone the failure rate over a short period can be extrapolated to the rate that would be achieved over a much longer period… as long as it is within the published lifetime of the product (the MTBF of an 80 year old human is not 830 years!).

So take the date that you shipped a unit to a customer, add a little time for the customer to put it into service, then open up a ‘sampling window’ in time of, say, 6 months to look for any failures. If the failure rate is constant then the annual number of failures is twice the number of failures in the 6-month window. If the units are used 24/7 the MTBF in years equals the number of units in service divided by the number of failures per year (back to 500,000 25-year-old humans, divided by 600 failures, equals about 830 years MTBF). Periodic use, say 8 hours a day, would require the MTBF to be scaled down accordingly (because each unit clocks up fewer operating hours per failure, hence a lower MTBF when expressed in operating hours).
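These sums fit in a few lines of a spreadsheet or script. This sketch reuses the figures from the text; the 300-failure window count is hypothetical, chosen so the annualised figure matches the 600-per-year example:

```python
# MTBF measurement arithmetic using the figures from the text.
units_in_service = 500_000

# 6-month sampling window: annualise by doubling (constant rate assumed).
failures_in_window = 300            # hypothetical count over 6 months
annual_failures = failures_in_window * 2

annual_failure_rate = annual_failures / units_in_service   # 0.12% per year
mtbf_years = units_in_service / annual_failures            # ~833 years

# Units used only 8 hours a day clock up a third of the operating hours,
# so the MTBF expressed in operating hours scales down accordingly.
mtbf_hours_24_7 = mtbf_years * 365 * 24
mtbf_operating_hours_8h = mtbf_hours_24_7 * (8 / 24)

print(f"Annual failure rate: {annual_failure_rate:.2%}")
print(f"MTBF: {mtbf_years:.0f} years")
```

Note that the duty-cycle scaling only matters if you quote MTBF in operating hours; in calendar years the population figure is unchanged.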

Don’t be too harsh on yourself, by the way; you wouldn’t normally expect to count units returned as faulty but that turned out to be No Fault Found, or units damaged by the customer or in transit, or units that were prototypes and not expected to have the performance and longevity of production units, or units that had not been properly serviced or maintained or had reached their published end of life, so you can normally exclude these from the calculations.

And how do you define a failure – does the malfunction of a single dashboard bulb in a car mean the whole vehicle has failed? You will want to have a sensible, defensible criterion for “fail”.

Now, I plead guilty to dramatically simplifying the subject; what about Mean Time To Repair, what about non-linear failure rates, what about the difference between constant failure rate and constant failure density, what about adding normalising or scaling factors to match different environments? All valid questions and, I’m sorry to say, beyond the scope of this short blog.

However, the key message is that you can calculate MTBF quite easily with a little patience and a simple spreadsheet, and it’s a very useful figure to have.

Assess yourself

No, it isn’t a Madonna song or anything written by New Order for the England football team!

In order to go somewhere that you want to be you need to know where you are now, otherwise how do you know in which direction to travel?

A key element of many Quality Management Systems – for example, ISO 9001 – is the idea of self-assessment, often called ‘internal audit’. This differentiates itself from the external audit where an expert body such as BSI or LRQA or others comes in and assesses you against an official Standard. The internal audit is done by members of an organisation for its own benefit and is seen as more frequent, less formal, and hugely beneficial in that it helps both the auditor and auditee equally – everyone learns something.

Although some purists insist on acting in loco the Assessment and Certification body, making the internal audit indistinguishable from its external cousin, I have always taken a more flexible, friendly and interactive approach to get the best out of the process. (Not to imply that external auditors can’t be friendly too, of course!)

As an internal auditor (whether a member of the organisation’s staff or not) I think you are there not only to assess whether the Standard or the processes are being implemented as it ‘says on the tin’, but also to help the auditee understand the system and to improve the processes or procedures where they fall short of what is required, preferably before they lead to problems in products or services or other areas. It allows people to express their concerns and views about how work is done in the organisation, and it helps you to identify best practice. In other words, this is a key mechanism for ensuring Continuous Improvement.

To start the internal audit you need to agree its scope, i.e. what specific areas of the business and what processes or systems are you covering. You will want to examine documentation or computer records, perhaps looking for specifications, diagrams, standards, procedures, records of processes or tests being done, checklists, analysis, Corrective Action records, and so on. You will certainly want to ask questions of the auditee and please make these open questions (what, why, how, etc) not closed or leading ones (“do you follow the process?”).

Look at the way that information flows and processes interact. How do people know when to start a process or a procedure? How do they know what to do? What are the steps they take and how do they know when and how to take them? Where are the records of them completing their actions; can they find documents and records easily? How do they know they have done the actions correctly and what happens when they have finished? Does everyone always use the processes they have described or are things sometimes done in a different way? How could the processes or procedures or systems work better or be easier to use?

It is important to de-personalise the process and make it objective rather than subjective wherever possible; look for specific evidence of something being done or not done. Use the auditor’s favourite phrase “show me…” to search for objective evidence not merely opinion, although for an internal audit both are important. Make sure you write down the details of documents or facts presented to you, then what else that led you to, then in turn what that led to, and so on, as a record of what you saw (‘audit trails’).

Your audit report can be very short – mine are rarely more than two pages and usually only one – and cover who was audited by whom and when, the scope and the standard being assessed against, the audit trail i.e. reference number and name of any documents, files or other material that you looked at, and a summary of what you found including Non-Conformances (i.e. not doing ‘what it says on the tin’), agreed Corrective Actions, or areas where you both think that improvements might be appropriate. By the way, the words ‘agree’ and ‘both’ are critical here, the findings should be fully and willingly agreed to by all parties as it’s your joint work rather than an examiner’s report!

If you see areas that do need improvement the Corrective Actions (or maybe Preventive ones) should also have the buy-in of the people who are affected and responsible for that area of business as well as the auditee.

Most of all, just to re-iterate, the Internal Audit is not a test. It is a way of helping the organisation and the people in it improve their ways of working and should be seen as a constructive and collaborative act not an assessment; it should contain no element of blame.

In other words, it’s about seeking improvement not criticism.


How to manage risk

Another preventive technique I recently promised to blog about was risk review and analysis. This is an approach used to reduce or manage risk; we aren’t necessarily trying to achieve zero risk (if there’s no risk at all you often get little benefit) although in areas such as safety or security a zero-tolerance approach to risk is necessary.

The risk review can be used in general business management – strategy development, marketing or sales initiatives, new product or service offerings, and so on – through to product development projects that have risk review sessions as part of their project management process. Special, rigorous instances of risk review are used in areas such as Health and Safety management.

I have found the best way to run risk reviews and risk management is as a group activity, i.e. in a workshop meeting, partly to get key people’s buy-in, partly to enhance creativity – bouncing ideas off each other and gaining different perspectives (one person on their own will always miss something) – partly because you will need volunteers, and partly because peer pressure can save you having to continually nag people to do their actions!

I suggest that ‘risk review groups’ meet regularly, e.g. every month. You may only need one group for the whole business, or you may choose to have project or function-specific groups as this can often be a better way of delegating authority and responsibility to those who can really make things happen.

The tool at the heart of the process is the risk register, which is a simple matrix or table. Each row in the table describes a different risk that has been identified. The columns are typically:

1. A reference number (so you can easily refer to that specific risk)

2. Description of the risk

3. Date the risk was first identified (and sometimes the name of the person who first identified the risk)

4. Type of risk, e.g. Technical, Project, Business, Health & Safety; alternatively, this could be changed to what or who is at risk

5. Probability of the risk occurring (e.g. L = <10%, M = 10-30%, H = 30-50%, VH = >50%)

6. Impact on the business or project, etc, if the risk did occur (e.g. L = <1 week delay or <£10k, M = <1 month delay or <£50k, H = <3 month delay or <£100k, VH = >3 month delay or >£100k)

7. Person who is responsible for managing the risk (this is where you need your volunteers)

8. Mitigating actions that are to be taken, i.e. actions that will eliminate or reduce the risk or its impact

9. Status or progress of each mitigating action

10. You may also find it useful to add an owner, or person/s responsible, against each mitigating action.

Some people also combine 4, 5, and 6 into an overall Risk Severity rating.
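As a sketch, the register rows and combined severity score might look like this; the numeric scores for L/M/H/VH, and the example risks, are my own illustration (this version combines only probability and impact, columns 5 and 6):

```python
# Minimal risk register with a combined severity score; the L/M/H/VH
# scoring and the example risks are invented for illustration.
SCORE = {"L": 1, "M": 2, "H": 3, "VH": 4}

risks = [
    # (ref, description, probability, impact)
    (1, "Key supplier fails delivery", "M", "H"),
    (2, "New technology doesn't meet spec", "H", "VH"),
    (3, "Minor documentation delay", "L", "L"),
]

def severity(probability, impact):
    """Combine probability and impact into a single ranking score."""
    return SCORE[probability] * SCORE[impact]

# Review the worst risks first:
for ref, desc, p, i in sorted(risks, key=lambda r: -severity(r[2], r[3])):
    print(f"{ref}: {desc} (P={p}, I={i}, severity={severity(p, i)})")
```

Sorting by severity gives the review group an immediate agenda: the highest-scoring rows get discussed first.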

At the first risk review meeting in any particular area you work out, using structured creativity techniques such as brainstorming, what risks may possibly affect you, then agree on their type, probability and impact. For each of them, especially the medium/high impact, medium/high probability ones, you again use structured creativity techniques to decide what mitigating actions could be taken then choose the most suitable ones to implement. You may want to apply a hierarchy to the actions; e.g. you may prefer an action that eliminates the risk over one that simply reduces it, which in turn you may prefer over one that merely reports the risk if it occurs.

All High Risk / High Probability risk areas should be formally reviewed at each subsequent risk review group meeting, although for practical reasons it may not be worth reviewing all areas at every meeting. However, the responsible person must monitor his/her risks and warn Management immediately of any increase in the probability or impact of that risk, or of other related concerns. Any new risks should also be identified at each meeting.

The risk review matrix or table is usually reported upwards to senior management or, if this provides too much detail, simple Key Performance Indicators can be derived.

A particularly useful measure is to show whether risks are reducing over time, as the mitigation actions start to kick in, or whether they are growing; a simple red / yellow / green traffic light colour code can be effective in drawing attention to risks that have suddenly worsened or don’t seem to be under control.

It is important for risk review and management to be a proactive process; that’s why it’s a preventive technique rather than a corrective one. The mitigating actions in the table should be seen as a starting point rather than the only action ever required; the person responsible for managing each risk should continuously monitor their risk area, take further actions to reduce the probability or impact of the risk, and report back to the risk review group.

I think you’ll find this approach to be simple, easily understood and effective. The management of risk in this way helps you to become more in control of your own destiny rather than continually responding to events as a knee-jerk reaction!

The secrets of Poka Yoke

Poka Yoke (“poh-ka yoh-kay”), translated as mistake-proofing, was developed by Toyota manufacturing engineer Shigeo Shingo in the 1960s. (Its original name, ‘fool-proofing’, was changed because some people were offended by its implications.) It’s another preventive technique that I recently promised to explain in more detail.

Poka Yoke is a simple but effective approach to reducing errors and defects in any business or manufacturing process by removing the opportunity to make the mistake in the first place; it eliminates the need for particular concentration or skill or memory to get the process right.

Often several different Poka Yoke techniques are used at the same time on the same process or assembly, each preventing a different potential error so that the process is robust and virtually impossible to get wrong. Having said that, some people also use the term for early detection of errors by making them immediately obvious; I’m not keen on that interpretation – I prefer to keep it purely for preventing the problem occurring in the first place.

Poka Yokes usually involve devices like fixtures, jigs, mechanical interlocks or switches, and warning mechanisms that prevent people from making mistakes even if they want to! They automatically stop machines or mechanisms, prevent components being assembled the wrong way round, guard the users against hazards or warn them if something starts to go wrong.

Yes, they could involve sophisticated computer vision systems, sensors and lots of software but more often than not they use something like a peg fitted to a block of plastic or a mechanical part that is asymmetrical so it only fits in its hole one way round. The most effective Poka Yokes are usually very cheap and very simple.

How to develop Poka Yokes

The great thing about this technique is that anyone can do it; once you have the right mindset it’s something that a bit of common sense, some creative thinking or brainstorming, and a little experimentation can deliver.

The first action is to look at what can go wrong, because – as per Murphy’s Law – anything that can go wrong will go wrong. What sort of mistakes can and should be prevented? Could the wrong number of parts be used, or the wrong type of parts (e.g. too few screws, or screws of the wrong length)? Could you forget to apply thread-locking compound or miss out the adhesive from an assembly operation? Could you use incorrect machine settings, or make measurement or calibration errors? Could you work to the wrong assembly documentation, or miss a key stage of the assembly, or fit the wrong connectors or cables together or fit them in the wrong orientation?

Do you have examples of where things have already gone wrong? You could have a brainstorm or any other structured creativity session about what errors might possibly occur, however far-fetched. Try using ‘reversals’ – rather than looking at how to assemble it right, look at how you could assemble the item incorrectly if your life depended on doing so.

Then come up with the simplest possible mechanism, or technique, or tool, or jig that would eliminate each error at source. If you can’t possibly stop an error, as a second-best how can you show that it has occurred as quickly and obviously as possible?

Then test the mechanisms or techniques or jigs to see which combination works best, then put them in place and train your staff in their use.


Examples of good Poka Yokes

Good Poka Yokes include things like ‘keyed’ plugs and sockets that prevent the wrong connectors being fitted together or fitted the wrong way round, or asymmetrical hole patterns in matching plates so they can only be screwed together the right way round, or cut-outs in printed circuit boards that only allow them to be fitted the correct way into an enclosure. The bevelled edge on a mobile phone SIM card is a good example, as it stops you inserting it the wrong way round, as is a guard over a button that stops it being pressed by accident.

As another example you may decide to pack exactly the right number of nuts and bolts for a given assembly in a container that travels with each assembly; if you have any left over at the end, or if you run out, you can easily see this and look into what has gone wrong – this can save you from leaving fixings off the assembly or from the ‘loose screw problem’ – spare fixings rattling round loose inside the unit because they were dropped in there.
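In software terms, the kitted-fixings check might be sketched like this; the part names and quantities are invented for illustration:

```python
# Sketch of the 'kitted fixings' Poka Yoke: issue exactly the fixings
# an assembly needs, then flag any surplus or shortfall at the end.
# The fixing names and counts below are invented for illustration.

REQUIRED_FIXINGS = {"M3x8 screw": 6, "M3 nut": 6, "washer": 12}

def check_kit(remaining):
    """remaining maps fixing name -> count left in the tote after assembly."""
    problems = []
    for name, count in remaining.items():
        if count > 0:
            problems.append(f"{count} x {name} left over - a fixing was missed")
    for name in REQUIRED_FIXINGS:
        if name not in remaining:
            problems.append(f"{name} missing from kit record")
    return problems

# A leftover nut means one wasn't fitted (or is rattling round inside!):
print(check_kit({"M3x8 screw": 0, "M3 nut": 1, "washer": 0}))
```

An empty problem list at the end of every build is the signal that the assembly got all its fixings, and only its fixings.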

Do a Poka Yoke on your Poka Yoke

Never underestimate the capacity of some folks to get things wrong! People can be ingenious. Although colour coding has its place, don’t over-depend on it as a significant proportion of the population is colour-blind. If you’re designing mechanical interlocks remember that people are stubborn and may force things even when you think it’s obvious they shouldn’t be fitted that way round. Safety interlocks can be defeated – try to defeat yours and see if it’s possible. Do a Failure Mode and Effects Analysis on your design – what would have to go wrong to render the Poka Yoke ineffective?

And when you’ve implemented them, review the effectiveness of your Poka Yokes; keep an eye on them and make sure they are delivering the results you expect after a period of time.

Is Poka Yoke difficult? Frankly, no; as with many Japanese quality techniques it’s mainly applied common sense but the trick is to actually apply it in a structured, planned way and make it stick. Mistakes will be made – people are only human – so find ways of preventing those mistakes leading to defects in your products or services. Educate your colleagues, set up some Poka Yoke sessions, get some quick wins under your belt and show how everyone can help reduce costs and reduce waste through the application of good, sound, common sense Poka Yoke techniques.

Then keep doing it!

In praise of Design Reviews

I worked in one of the large Cambridge-based technology consultancies for many years and was privileged to have clients from small, inexperienced start-ups to large, established mature enterprises. Sometimes we developed products from scratch but sometimes we were brought in late in the day to sort out a client’s project that had gone wrong.

One of the key tools that we used was the formal design review. We used it during complete product developments – it was an integral part of our ISO 9001 processes and we were trained how to do it – but it was of even greater benefit when we were parachuted in to rescue a project.

I have used the same technique with clients ever since.

However clever the designers, however sophisticated the design, this approach finds bugs. Peer-reviewing new designs before you commit a lot of time and money can be hugely beneficial in preventing problems further downstream… if done properly.


Your people will naturally be capable of finding design weaknesses if given the opportunity, environment and culture that encourages them to do so, even if – especially if – they aren’t personally involved in that part of the design.

The review is done by a selected group of peers (colleagues) from different disciplines – electronic engineers, mechanical engineers, system architects, manufacturing people, software experts, etc – under the chairmanship of an experienced reviewer, not the designer/s themselves. The timing of the review is usually set by project management but is typically at a point in the project where a significant commitment of time, money or risk is about to be made, e.g. release of design details into prototype manufacturing.

For a complex design the requirements specification, functional specification and the design documentation should be circulated in advance so the attendees can spend time understanding it and assessing it for themselves. The design review meeting then reviews and challenges these findings.

For a simple design, or an iteration, the findings can usually be derived on-the-fly during the meeting itself.

In both instances the meeting decides on the relative importance of the findings and identifies the actions that need to be taken. These are documented in a meeting note or minutes, and the actions are progressed to a conclusion through project or line management.

Check List

To help guide the review, give it structure, and avoid omitting key questions, I have always found it beneficial to use a detailed checklist. This is added to over time so that it becomes a ‘superset’ of all possible questions. Many will be not applicable for any given circumstance so can be omitted, but it’s a way to avoid leaving anything out; it captures best practice for your products and industry.

There isn’t room here to reproduce a generic checklist – in any case it should be bespoke to you and your business – but, for illustration, I would expect an electronic or electro-mechanical checklist to cover:

Specifications, risks, safety-critical areas, design fail-safes, use of unproven technologies or new design techniques.

Schematic design: gate and bus loading, I/O loading and protection, devices within Safe Operating Areas, production tolerancing, PSU monitoring / watchdogs, high current or voltage designs, spare IC pins (especially inputs), timing and synchronisation. Thermal effects, heat generated and heat dissipation, power distribution. Design for Manufacture / Test / Environment / EMC. FMEA. Product costing. Production test design and coverage.

PCB layout: layout rules, RF design constraints, lay-ups, design for EMC, test points, mechanical interfaces, component sourcing.

Software / firmware: design and prototyping, BITE, software to test hardware at different stages, GUI design, interoperability and standards compliance. ASIC design, timing analysis, hardware and/or software simulation.

Mechanical and Industrial Design: tolerancing, tooling, robustness or life testing, mechanism optimisation, stress testing, HALT / HASS, pressure relief, fluid handling…

…and so on (the full checklist asks much more detailed questions, of course).

I suggest that you draw up a checklist specific to your own products and technologies, then evolve the list over time on the basis of experience and as a Corrective Action if you find that design shortcomings have slipped through its safety net.

In any case, it isn’t the list itself that’s important; it’s the issues that going through the list – and asking questions of each other in a constructive way – brings up.

And, as a bonus, it’s a very effective way of addressing ISO 9001 Section 7.3.4.

They don’t like it…

Instinctively, some design engineers don’t like this process. If not done well they can feel like they are under unfair pressure or criticism. I had one engineer say to me recently “it was a waste of time, most questions were irrelevant, it took too long”. “Sorry to hear that”, I replied, “so you didn’t find anything that could be improved?” “Oh yes, we spotted some things we definitely needed to change…”


The fix for their reluctance? Make it constructive not critical, make it relevant, show how effective it can be as a design safety-net, and make them part of developing the process so they are passing on their experience and knowledge to others.

By finding and fixing the design shortcomings and risks at this stage you can prevent hugely expensive field failures or product recalls; I suspect that, given their recent experience, Toyota wish they had done better design reviews…

More ‘Snickers’ than Marathon…


Windsurfing4CancerResearch, Grafham Water 2 May 2010


Gosh that was hard…

After a month of lovely warm, dry weather, the day of the Sunrise Sunset event dawned with the thermometer well under 10 degrees C, heavy rain and blustery winds. Lovely!

Although only 15 of us were participating at Grafham, there were over 200 windsurfers across the UK all trying to raise money for the cancer charity. Everyone had their own goals; mine were 50 miles if the weather was grotty or 100 if it was great. I think we can safely say it fell into the grotty category…

We knew that however fast we went in a straight line the corners would slow us down; we had to do long straight runs. There was a big national dinghy sailing event at Grafham – 300 teenagers in little ‘Topper’ boats – so the windsurfers’ strategy was to get out early and clock up as many miles as we could before the lake got boat-logged and we were stuck in a corner. We took to the water at 9am with the dinghies due out at 10.30.

Within a few minutes I found the first problem. Yes, I could get up a decent speed but as soon as I hit the corner it all went pear-shaped. I could turn the board OK but when I grabbed the mast or boom on the other side I couldn’t grip it – my hands were too cold, so the sail just pulled itself out of my hands and I went for a little swim.

That was the pattern for the first hour and a half – blast along for a mile at something over 20mph then have a little swim for a few minutes. And again. And again.

The wind was very up and down. Sometimes it would go from a hardly-moving-at-all-5mph to a rip-the-sail-out-of-your-hands-30mph in just a second or two… or the reverse. Very difficult conditions as I could never settle on the board, I was continually moving around trying to get some control. The driving rain was stinging my face and hands so it was difficult to see as I had my eyes half shut!

Two and a half hours gone, wind dropping, time to come in and change to a larger board and sail; this might help to reduce my swimming time as it will give me longer to persuade my hands to work. Well, it was a good theory…

The wind decided to get back up again with a vengeance.
The board was hardly controllable, I was bouncing all over the place as the water was really rough. I was limping into the beach almost completely out of control when the board and sail just got ripped out of my hands and thrown downwind. Oh great! The problem then was that the wind blew my kit away quicker than I could swim. I got tantalisingly close but then it went again. So I had a happy half hour swim to the bank. Hang on, I thought this was a windsurfing marathon, how come it has turned into a triathlon without so much as a by-your-leave?

OK, so I’m covered in mud but at least I’ve got the kit back. Limp across to the beach for a very necessary break. Pasta and soup although I couldn’t finish it. Too cold; starting to shiver continuously, even after diving – wetsuit-clad – into a hot shower. Not a good sign!

So I had a long break and that was probably a bad idea as it was all downhill from then on; longer breaks, shorter time on the water. I was completely shattered; I’d go out feeling OK but within 5 minutes I was falling off repeatedly and didn’t have the energy to get going properly. I know what the marathon runners mean by ‘hitting the wall’. You get into a vicious cycle of making a silly mistake, falling off, struggling to get going, making more mistakes, falling off again, etc.

The wind was, by now, gusting like before but with very vicious strong peaks of more than 30mph. I had started on a 7.2m sail, changed up to 8.5m, gone back down to 7.2, and now rigged a 6.0 on a small board. It was very quick in the gusts but I’ve never really liked the 6.0, it doesn’t ‘rotate’ properly when you change direction, so each turn was accompanied by having to kick the sail, with my foot, at about chest-height to get it to rotate. Not exactly the best way to stay upright so, yes, it did add to the swimming and cursing tally more than somewhat.

I could only manage about half an hour without a break or I risked not being able to get back into the beach at all. I couldn’t stand the ignominy of being rescued! I hadn’t so much ‘hit the wall’ as run into it headlong and had it collapse down all over me!

But gradually, in little slow chunks, I ate into the 50 mile target. The dinghies had, by now, abandoned racing as the weather was much too vicious so we had the whole lake to ourselves again. Back to my large board with the 7.2m sail and 2 mile runs across the whole length of the lake to haul in the 50 mile target. And by late in the afternoon I got there; just over 51 miles when I got back to the beach!

I don’t think I could have gone another hundred yards, but I made it. Some windsurfers did less, some did more, but given the conditions, the cold, the numbness of the hands, the swimming, the exhaustion, the lack of fitness despite hours in the gym, I thought that 50 was OK for an unfit old git of 55 with little windsurfing ability!

Total distance covered = 51.2 miles

Top speed recorded = 28mph, although my average speed was clearly a lot lower than that!


Calories burnt over 8 hours = 4600 (although I really can’t recommend it as a viable diet). Maximum heart rate = 162. Average heart rate, over the whole 8 hours = 122.

But, most importantly, money raised for cancer research = £250 (and about £15,000 in total by everyone participating in the event across the country).

If you feel moved by my efforts, however humble, and feel that you can contribute just a little to Cancer Research, please visit http://www.justgiving.com/tom-gaskell

Quality is a strategic issue

I’d like to take an overview of what quality is… and why it’s strategically important to your business.

What is Quality?

Quality means meeting requirements. It isn’t about providing more features, or complexity, or performance that increases cost, takes longer to provide or makes it more difficult to use and may not be required. A good quality product or service or business process, in the words of Ronseal, “does exactly what it says on the tin”.

The business leader and academic Peter Drucker explains that “Quality in a product or service is not what the supplier puts in. It is what the customer gets out and is willing to pay for. A product is not quality because it is hard to make and costs a lot of money, as manufacturers typically believe. This is incompetence. Customers pay only for what is of use to them and gives them value. Nothing else constitutes quality.”

The quality guru W. Edwards Deming tells us “quality is everyone’s responsibility” but, of course, it needs leadership and example-setting from the top as nothing will undermine a quality improvement initiative more than management paying lip-service to the initiative whilst not following it themselves.

Quality needs to become part of the organisational culture and part of the product lifecycle; it needs to be built into the product from the start, it isn’t something that can be ‘sprayed on’ later. It has to be automatic and implicit; as Henry Ford said, “quality means doing it right when no one is looking.”

A huge benefit of improving quality is that you can save both time and money by producing quality products in a quality way – keeping things consistent and simple, doing the work correctly once rather than badly several times, and not wasting money or development time.

A Quality Strategy

I believe that quality should be a critical part of a company’s strategy. Quality of product and of business operations is key to satisfying customer needs and expectations and also to a company’s success and profitability.

Philip Crosby’s Quality Management Maturity Grid gives some very clear pointers as to the goals for quality. His most advanced stage of quality management has six preventive, consistent and assured characteristics:

Management understanding and attitude: Consider quality management an essential part of the company system.

Quality organisation status: Quality manager on board of directors. Prevention is main concern. Quality is a thought leader.

Problem handling: Except in the most unusual cases, problems are prevented.

Cost of Quality as % of sales: Reported 2.5%; actual 2.5% (i.e. the company knows exactly what the CoQ is, and it is very low).

Quality improvement actions: Quality improvement is a normal and continued activity.

Summary of company quality posture: “We know why we do not have problems with quality.”

Few companies match all of these characteristics but Crosby’s approach can help you develop a strategic quality route-map to move in the right direction.

So what should you include in your Quality Strategy? Here are some suggestions, in no particular order:

  • What are your customers’ quality requirements and expectations? How can you work together with your customers to improve quality rather than rely on the traditional supplier/buyer relationship?
  • What recognised industry quality standards do you need to have? What do your competitors offer? How can you ‘punch above your weight’ and gain strategic advantage in a crowded marketplace through improved quality?
  • What quality management standards will you adopt – ISO 9001? TL 9000? TickIT? Good Manufacturing Practice? Etc. How will you ensure these really benefit the company and are not merely badges? If you are simply going it alone, how will you ensure that you adopt best practice?
  • Quality ownership and management; who will provide leadership and management and continuous improvement in this area? How will you train your staff to contribute to quality? Who will devise and improve your operational processes and systems, and how?
  • Preventive Action processes and escalation paths; how do you prevent things going wrong before they cause you a problem? Philip Crosby says that “quality has to be caused, not controlled”. How are you going to design inherent quality and reliability into your products and services?
  • Corrective Action processes and escalation paths; what do you do when things go wrong and how do you make sure the problems have really been fixed and the lessons learnt?
  • Measurement and feedback; what are your quality Key Performance Indicators, what actions will you take to meet them, how will these change over time? What levels of defects on delivery, or in warranty, are acceptable – 5%? 1%? Zero Defects?
  • How can you ensure that your supply chain manages quality to your expectations? How can you work with your suppliers to improve quality rather than rely on the traditional buyer/supplier relationship?
  • And consider how the strategy will change – and how your quality will be continuously improved – over time.

Quality is Free

Improved quality does not need to be a cash-drain on the company. It should not slow things down or make things more difficult. In fact, the converse; business management expert Tom Peters tells us that “almost all quality improvement comes via simplification of design, manufacturing… layout, processes, and procedures.”

Philip Crosby’s book ‘Quality is Free’ is based on the premise that, by improving quality, you can save far more than you spend doing it; it can directly lead to increased profits. He explains that, if you don’t yet analyse and understand it, your Cost of Quality is probably around 20% of your turnover; possibly more than your margin. Even if you do analyse it, you are very possibly under-valuing it by several percentage points.

In many companies there is, therefore, a huge opportunity for improvement. The most quality-mature organisations know what quality really costs and can drive it down to below 5%. Can you afford not to improve quality?
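To make Crosby’s point concrete, here is a minimal sketch of the arithmetic; the turnover figure and percentages are illustrative assumptions, not data from any real company:

```python
# Illustrative Cost of Quality arithmetic (all figures are assumptions).
def coq_saving(turnover, current_coq_pct, target_coq_pct):
    """Annual saving from reducing Cost of Quality as a % of turnover."""
    return turnover * (current_coq_pct - target_coq_pct) / 100.0

# Crosby's rule of thumb: an unmeasured CoQ is often around 20% of turnover.
turnover = 10_000_000  # assumed £10m annual turnover
saving = coq_saving(turnover, current_coq_pct=20, target_coq_pct=5)
print(f"Potential annual saving: £{saving:,.0f}")  # → £1,500,000
```

On those assumed numbers the saving dwarfs many companies’ net margin, which is exactly Crosby’s argument.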

Prevention is better than cure

Many moons ago I was blogging about Corrective Actions and said that, whilst they were invaluable, taking Preventive Actions was even better, as it should stop the problems occurring in the first place, but is considerably more difficult!

I thought I should elaborate…

It is obviously more difficult to say whether it will rain tomorrow than to say if it is raining now. For Preventive Actions you are trying to predict future problems so that you can take action to prevent them occurring.

A number of preventive techniques are available for incorporating into your normal working practices on a regular planned basis – say monthly or quarterly – or at key stages of projects; make them part of the way you do business. The techniques include:


FMEA

Failure Mode and Effects Analysis, or its process equivalent, is a well-established technique for identifying what might go wrong with a product or a design or a process, what the probability is and what the consequence would be if it did go wrong. You can then look at the most damaging failure modes and take preventive action, perhaps by changing the design or process parameters.
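As a minimal sketch of the conventional FMEA arithmetic (the failure modes and ratings below are invented purely for illustration), the Risk Priority Number is Severity × Occurrence × Detection, each rated 1–10, and preventive effort goes to the highest RPNs first:

```python
# Minimal FMEA Risk Priority Number (RPN) sketch.
# RPN = Severity x Occurrence x Detection, each conventionally rated 1-10.

def rpn(severity, occurrence, detection):
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings are conventionally 1-10")
    return severity * occurrence * detection

# (name, severity, occurrence, detection) - invented examples
failure_modes = [
    ("Connector corrosion", 7, 4, 6),
    ("Solder joint crack", 8, 3, 5),
    ("Firmware watchdog miss", 9, 2, 3),
]

# Rank worst-first so preventive action targets the highest RPNs.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{name}: RPN = {rpn(s, o, d)}")
```

The ratings themselves are the subjective part; the value of the exercise is in the team debating them.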

Risk Analysis / Risk Reviews

A little like FMEA, risk management involves looking at where the risks are in an activity (such as an R&D project) and their likelihood of occurrence and impact. Once you have carefully evaluated what might go wrong you can devise mitigating actions to reduce their likelihood or impact. FMEA is really part of risk management, as are activities like Health and Safety and Fire Risk assessments.

SPC trend analysis

I blogged about Statistical Process Control last October and it is highly relevant to preventive techniques. At its heart lies the Process Chart, data that shows the variation in parameters and enables you to get processes under control. SPC helps to spot trends in data that aren’t causing current problems but, if left unchecked, could lead to future problems.
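As a sketch of the idea (the measurements below are invented), a simple individuals chart computes control limits from baseline data at the mean ± 3 sigma and flags any new point outside them:

```python
# Sketch of an individuals control chart: flag points outside mean +/- 3 sigma.
# The measurements are invented; real SPC would also apply run rules
# (trends, shifts) to catch drift before a limit is actually breached.
from statistics import mean, stdev

def control_limits(samples, n_sigma=3):
    """Lower and upper control limits from baseline measurements."""
    centre = mean(samples)
    spread = stdev(samples)
    return centre - n_sigma * spread, centre + n_sigma * spread

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
lcl, ucl = control_limits(baseline)

new_points = [10.0, 10.1, 11.5]  # the last point suggests the process has drifted
out_of_control = [x for x in new_points if not lcl <= x <= ucl]
print(out_of_control)  # → [11.5]
```

The point of the chart is that 11.5 gets investigated now, before the process produces scrap.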

Customer satisfaction trend analysis

Just as SPC can tell you if your manufacturing processes are starting to drift out of control before they actually go out of spec, customer satisfaction monitoring can spot emerging discontent before it needs a knee-jerk reaction. Do you know what your customers really think about you? Are any of them becoming less content? Do you need to do something about it?


HALT

‘Highly Accelerated Life Testing’ works on the basis that high stresses applied for a short time will cause the same failures as low stresses over a long time. By applying increasing amounts of stress to a product you can reveal hidden shortcomings in the design which you can iteratively improve until they are no longer weaknesses; see my blog about HALT (June 2009).

Design Reviews

My old friend Nick Goy (sadly no longer with us) was a no-nonsense technology consultant who pooh-poohed management fads but was absolute master of the design review; he taught the rest of us how it should be done. Whenever he went to help new clients he did a formal design review; however clever the designers, however sophisticated the design, Nick would find the bugs. You can do the same: peer-reviewing new designs before you commit a lot of time and money can, if done properly, be hugely beneficial in preventing problems further downstream.

Design For Manufacture

DFM aims to optimise a product’s manufacturability. Instead of designing the product first then working out a way to manufacture it, you start by optimising the production and test processes that are repeated hundreds or thousands of times then make the design (which you only do once) fit with them. It’s a great discipline to build in to your processes. Production Tolerancing (including techniques such as Monte-Carlo Analysis) is one of the best known but, too often, least well applied parts of DFM.
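As an illustration of Monte-Carlo tolerancing (the circuit, component values, tolerances and spec limits below are all assumptions), you simulate thousands of virtual builds drawn from the component tolerance bands and estimate what fraction would meet specification:

```python
# Monte-Carlo tolerance analysis sketch for a simple resistor divider.
# Values, tolerances and spec limits are assumed for illustration only.
import random

def divider_ratio(r1, r2):
    """Output ratio of a two-resistor potential divider."""
    return r2 / (r1 + r2)

def sample(nominal, tol_pct):
    # Uniform spread across the tolerance band; real parts often cluster
    # differently, so this is a deliberately simple assumption.
    return nominal * random.uniform(1 - tol_pct / 100, 1 + tol_pct / 100)

random.seed(42)  # repeatable run for illustration
trials = 10_000
ratios = [divider_ratio(sample(10_000, 1), sample(10_000, 1))
          for _ in range(trials)]

# Fraction of builds within an (assumed) spec of 0.5 +/- 0.5%
in_spec = sum(1 for r in ratios if 0.4975 <= r <= 0.5025) / trials
print(f"Yield estimate: {in_spec:.1%}")
```

If the estimated yield is too low, you relax the spec, tighten the component tolerances or change the design – before committing to production, not after.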


Poka-Yoke

We had to get a Japanese buzz-phrase in somewhere! Poka-Yoke is a method for ‘mistake-proofing’ a process so that it can’t be implemented incorrectly through lack of skill or concentration or random error. For instance, you might ‘key’ connectors so that only the correct combinations fit together, or you might safety-interlock doors so they cut power when a door is opened, or provide assembly jigs so that components can only be fitted the right way round. Poka-Yoke is a ‘fail-safe’ technique.
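The connector-keying idea has a direct software analogy (the connector names below are invented for illustration): distinct types act as the ‘key’, so a mis-mated pair is rejected automatically rather than relying on the assembler’s concentration.

```python
# A software analogy of poka-yoke 'keying': each connector family is a
# distinct type, and mating checks the key, so the wrong combination
# simply cannot be assembled. Names are invented for illustration.

class PowerPlug: pass
class PowerSocket:
    accepts = PowerPlug

class DataPlug: pass
class DataSocket:
    accepts = DataPlug

def mate(plug, socket):
    """Refuse any plug/socket combination that isn't keyed to fit."""
    if not isinstance(plug, socket.accepts):
        raise TypeError("keying prevents this combination being assembled")
    return "mated"

print(mate(PowerPlug(), PowerSocket()))  # → mated
# mate(DataPlug(), PowerSocket()) would raise TypeError - mistake prevented
```

The principle is the same whether the ‘key’ is a moulded lug, an assembly jig or a type system: the mistake is made impossible, not merely inspected for.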

…and I haven’t even touched on Product or Service Readiness Reviews, Preventive Maintenance, Competitor and Market Analysis, Lessons Learnt exercises, and a host of other valid preventive approaches.

Preventive Action is difficult to justify with a conventional cost/benefit analysis because how do you know what would have happened if you hadn’t used it? But if the alternative is simply to wait for problems to strike, then react when they do, you can see how taking Preventive Action can be attractive.

The quality gurus say that if you rely purely on corrective (Quality Control) techniques rather than preventive ones (Quality Assurance) you will suffer from problems that are expensive and damaging but can never be completely eliminated; a sort of ‘background radiation of quality problems’ that keep you in fire-fighting mode.

So, over the next few weeks, my plan is to expand on some of these techniques; I hope you will find the blogs interesting or, at least, a little thought-provoking.

Windsurfing 4 Cancer Research


It seemed like a good idea to contribute to the 2010 Windsurfing 4 Cancer Research event, having lost both an uncle and a friend to the disease recently and with a member of the family currently undergoing treatment.

And yes it will be cold and knackering but it wouldn’t mean as much if it was easy, would it? The most I’ve sailed in a day is round Hayling or round Mersea or round Rutland which I guess is about 30 miles so it should be easy enough to beat that; I’d like to do 50, or even 100 miles if my dodgy back holds up. (Victoria the Osteopath is lined up for the Monday…)

If you feel like donating through JustGiving it’s simple, fast and secure: http://www.justgiving.com/tom-gaskell

Your details are safe – they’ll never sell them on or send unwanted emails. Once you donate, they’ll send your money directly to the charity and make sure Gift Aid is reclaimed on every eligible donation by a UK taxpayer. So if you can spare even a small amount it would be hugely appreciated. Thanks



Postscript: 2 hours of practice yesterday showed how rusty I am! It’s 10 years since I was last on a long raceboard and it showed in my complete incompetence. 50 miles will be a challenge! At least I didn’t fall in – the water is very cold at this time of year – and to my great surprise I can actually walk today.