Job Shop SIG Forum

Topic: Job Shop forum / Quality - Measurement and Benchmarks

Posted: 23-09-2008 18:57 by Microm1

Brethren

We recently discussed our quality performance here and wondered if what we did to measure quality was common and how our results compared with others.

We do have our share of cock-ups!

Would people be prepared to share how they measure quality - particularly the more imaginative measures, not just scrap counts?

Also, what is the average (mean, SD) for this industry? Perhaps this might be best done as one of those anonymous questionnaires, like the gas price ones. If there was enthusiasm and a common method of measurement, then I would be prepared to analyse the results if Mike Green would anonymise the responses. What do people think?
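If it helps, the analysis itself would be trivial once the numbers were in. A minimal sketch in Python (the scrap rates below are invented, purely to show the idea):

```python
import statistics

# Hypothetical anonymised responses: each figure is one shop's monthly
# scrap rate, as a percentage of jobs with some scrap.
scrap_rates = [2.1, 3.8, 1.5, 5.2, 2.9, 4.0]

print(f"Mean: {statistics.mean(scrap_rates):.1f}%")
print(f"SD:   {statistics.stdev(scrap_rates):.1f}%")  # sample standard deviation
```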

To kick off:-

We count parts and jobs that have some scrap. We categorise the reasons (for instance bad material, programming faults, machine faults and subcontract faults) and show them monthly as a graph. Obviously the biggest problems are tackled first (Pareto). The categories are not set in stone: as problem areas are 'solved', the bigger remaining areas are split into separate new categories. This makes improvement continuous and dynamic, but it means that comparing last year with this year is difficult.
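For anyone who fancies trying this without waiting for their software vendor, the monthly Pareto view is only a few lines. A sketch (the categories and counts are invented):

```python
from collections import Counter

# One month's NCRs, each tagged with its failure category (invented data).
ncrs = ["programming", "bad material", "machine fault", "programming",
        "subcontract", "programming", "bad material"]

# Pareto view: biggest problem category first.
for category, count in Counter(ncrs).most_common():
    print(f"{category:<15} {count:>2}  {'#' * count}")
```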

We also assign a cost to each failure, made up of the cost of the parts, re-doing the job, material and so on, plus an overhead charge of £50 for each NCR. This does give us a baseline year on year.
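In other words, per NCR (the £50 overhead is as described above; the job figures below are just for illustration):

```python
NCR_OVERHEAD = 50.00  # flat overhead charge per NCR, in pounds

def ncr_cost(parts, rework, material):
    """Cost of one failure: parts + re-doing the job + material + overhead."""
    return parts + rework + material + NCR_OVERHEAD

# Illustrative figures only.
print(f"£{ncr_cost(parts=120.00, rework=80.00, material=45.00):.2f}")  # £295.00
```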

In common with many, what we do is controlled to a great extent by our production control software.

So, who does it better and is willing to say so?

Neil 

Posted: 24-09-2008 10:20 by Lpro

Good morning Neil,

By the sound of it we handle ours in much the same way as you. It is very useful to log the type of non-conformance and show the results graphically, so that the common ones (wrong thickness etc.) can be quickly eliminated. The question of cost is a difficult one because, generally, whether the resulting cost is £10 or £10,000, it was probably only one mistake; it is therefore the number of mistakes that has to be tackled (obvious, or what?).

When we apportion costs to 'guilty' operatives, we allocate the entire cost to everyone in the chain who could have (should have) spotted the error and put it right. In the case of something that went wrong at the quoting stage, for example, this could involve any mix of estimators, contract review, CAD, operatives or inspectors. On this analysis the total cost shown exceeds the actual cost, but it does bring home to the staff exactly what they are costing us and, ultimately, themselves.
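To make that concrete (the roles and the £400 are invented; the point is that the full cost is booked against everyone in the chain, so the apportioned total deliberately exceeds the actual cost):

```python
# One quoting-stage mistake that actually cost £400 (invented figure).
actual_cost = 400.00

# Everyone who could have (should have) caught it.
chain = ["estimator", "contract review", "CAD", "operative", "inspector"]

# Each person in the chain is charged the full cost of the mistake.
apportioned = {role: actual_cost for role in chain}

print(f"Actual cost:       £{actual_cost:.2f}")
print(f"Total apportioned: £{sum(apportioned.values()):.2f}")  # £2000.00
```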

I think your question should throw up some interesting results and I look forward to reading further.

Dave

Posted: 24-09-2008 11:16 by Subcon Laser

Hello Chaps,

We follow the same pattern as you guys and I guess anyone with the same quality accreditations would do likewise.

We do, however, on long-running or potentially expensive jobs, independently check things at the programming stage, as everything stems from there. If the programmers make a mistake, more often than not everyone else down the chain makes exactly the same mistake. If, on the other hand, inspection detects an error, it is too late by then, as the parts have already been cut. We have saved a lot of money by taking care at the front end rather than at the back end.
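Even a crude automated comparison at that stage pays for itself. A sketch only (the fields and figures are invented; in practice both sides would come out of your CAD system and your nesting/program files):

```python
# Invented example: what the CAD data says vs what the programmer nested.
cad     = {"part_no": "A123", "qty": 250, "thickness_mm": 3.0}
program = {"part_no": "A123", "qty": 250, "thickness_mm": 2.0}

mismatches = [f"{field}: CAD says {cad[field]}, program says {program[field]}"
              for field in cad if cad[field] != program[field]]

if mismatches:
    print("HOLD JOB - do not cut:")
    for m in mismatches:
        print(" ", m)
else:
    print("Program matches CAD - release to machine.")
```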

However, it does not matter how good your system is, because the people working it are human. As we all know, that is the variable element that can send our best control plans into meltdown.

Regards

Tom

Posted: 03-10-2008 17:21 by Microm1

What, no other contributions?

To summarise: everybody collects some stats on what goes wrong (à la ISO 9000).

Testing or measuring the program (or at least the CAD data before post-processing) is recommended, particularly for long runs/expensive bits.

To pass on one thing: on a customer's quality board in their workshop, alongside 'internal rejects', was 'rejects we missed and our customer found'. Rather better than 'customer returns', I think; it puts the blame where it belongs.

Neil

