According to Little’s Law (TH = WIP/CT), the same throughput can be achieved with a long cycle time and large WIP or with a short cycle time and small WIP. I think the latter option is the better choice for everyone.

Example line 1 in the previous post reaches its maximum throughput when the WIP level equals the critical WIP, which is 4 pieces. If the line behaves like the practical worst case (PWC), we need a WIP level of 27 to reach 90 % throughput, and for 95 % throughput we need a WIP level of 57 pieces. What causes this big difference? The answer is variability.
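To see where these numbers come from, here is a minimal Python sketch of the PWC throughput relation TH(w) = w/(W0 + w − 1) · rb from the previous post, solved for the WIP w that reaches a given fraction of the bottleneck rate (the function name is mine):

```python
# Practical worst case: TH(w) = w / (W0 + w - 1) * rb.
# Setting TH/rb = f and solving for w gives w = f * (W0 - 1) / (1 - f).

def pwc_wip(f, critical_wip):
    """WIP needed under the PWC to reach fraction f of the bottleneck rate."""
    return f * (critical_wip - 1) / (1 - f)

W0 = 4  # critical WIP of example line 1
for f in (0.90, 0.95):
    print(f"{f:.0%} of max TH needs WIP of {pwc_wip(f, W0):.0f} pieces")
# 90% of max TH needs WIP of 27 pieces
# 95% of max TH needs WIP of 57 pieces
```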

Are these machines equal?

Let’s take an example of two almost identical machines. Both machines can produce 4 pieces per hour, and the daily demand for both is 69 pieces, which is 2.875 pieces per hour. Both machines break down irregularly. A breaks down less often but stays down longer; B breaks down more frequently but for shorter periods. Availability for both machines is 75 %, which means the capacity of each machine is 4 products/hour × 75 % = 3 products per hour.

Because both machines have the same capacity, availability and demand, their cycle time (CT), WIP, throughput (TH) and service level should be equal. Right? Wrong! Machine A is worse by almost every measure. Why? Again, the answer is variability.

Variability is present in every production system, and it decreases throughput. This is why measuring, understanding and controlling variability is crucial for managing production effectively.

Controlling variability

Variability is almost the same thing as randomness, but not identical to it. To understand the causes and consequences of variability we must first understand randomness and probability, especially the mean and standard deviation of a random variable.

Controllable variability is caused by decisions made in production. For example, a factory that makes multiple products faces parts of different sizes and shapes and variability in production times. Also, if products are moved from one workstation to another in batches, the first product has to wait longer in the queue than the last, so queuing times differ.

Random variability is caused by things we cannot influence directly, for example demand changes and machine breakdowns. Because breakdowns can’t be fully predicted, they will always increase a product’s cycle time.

Intuition

Intuition plays a huge part in our everyday life. When driving a car, a human will automatically slow down for intersections and curves. Not because we understand the physics, but because our intuition has developed over years of practice behind the wheel.

In many cases our intuition works fine at the first stage. For example, when we speed up the bottleneck of a production line, we expect higher throughput. We expect the world to be deterministic, without any randomness. In scientific terms, our estimate is based on the mean value of the probability distribution. If we increase a machine’s average speed enough compared to its variability, our intuition will usually work fine.

Our intuition usually stops working at the second stage. For example, can you tell which is more variable: the time spent producing one product, or a batch of products? Which of the machines above (A or B) is more disruptive to the production line? Which improves our line more: speeding up the machines near the beginning, or near the end? Many of these questions involving variability need more sophisticated intuition than simply speeding up the bottleneck.

Variability in process time

Our interest is focused on the effective process time. By this I mean the time that is “seen” as work at the workstation. From a logistical point of view it doesn’t matter why the workstation is idle: whether it is waiting for parts from the previous station, the machine is broken, setups are being made, or the worker is on a break. From the workstation’s perspective the effect is the same: no production. So we combine all of these into one factor.

Measures and formulas

Variability is usually denoted by σ² (sigma squared), which is a measure of absolute variability. Often absolute variability is not as important as relative variability. For example, a variability of 10 micrometers in the length of a bolt is relatively small if the bolt is 3 centimeters long. But if the bolt is only 50 micrometers long, it is quite a large variability.

A reasonable measure of variability is the standard deviation divided by the mean, called the coefficient of variation (CV). If we use t as the mean and σ as the standard deviation, the CV can be expressed like this:

c = σ / t

Often it is more convenient to use the squared coefficient of variation (SCV):

c² = σ² / t²

With CV and SCV we can easily separate different kinds of production. We call a process low variability (LV) if its CV is under 0.75, medium variability (MV) if its CV is between 0.75 and 1.33, and high variability (HV) if its CV is over 1.33.

Variability classes:

LV (low variability): CV < 0.75
MV (medium variability): 0.75 ≤ CV ≤ 1.33
HV (high variability): CV > 1.33
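As a quick illustration, here is a small Python sketch that computes the CV of a sample of process times and classifies it; the sample values are made up:

```python
import statistics

def variability_class(times):
    """Classify process times as LV / MV / HV by their coefficient of variation."""
    cv = statistics.stdev(times) / statistics.mean(times)  # c = sigma / t
    if cv < 0.75:
        return cv, "LV"
    if cv <= 1.33:
        return cv, "MV"
    return cv, "HV"

cv, cls = variability_class([14.2, 15.8, 15.1, 16.0, 13.9, 15.3])
print(f"CV = {cv:.2f} -> {cls}")  # a tight sample lands in the LV class
```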

When talking about process time we often mean the actual time spent on production, without outages and setup times. This kind of time usually follows the classic bell-shaped distribution:

(Figure: classical bell-shaped distribution of process times)

The CV of this particular process is 0.32, clearly in the low variability class. Now take another process whose mean time is the same but whose CV is 0.75. The job is quick and easy, but random faults cause long production times. Most of the production times fall below the mean, as you can see from the diagram below (red line).

(Figure: production time distributions of the LV and MV processes)

As a practical example, think of an LV process feeding an MV process. At first the MV process keeps up at a good pace, but when problems and long production times occur, parts stack up in front of the second station. Because most production times are below the average, which is the same for both processes, the second station can work through the queue. After that the second station idles, because it cannot do jobs in advance.

High variability process times

Consider a machine whose average process time is 15 minutes with a CV of 0.225 when there are no outages. Now add an outage averaging 248 minutes after every 744 minutes of production. This increases the mean process time to 20 minutes (availability is 744/(744 + 248) = 0.75, so te = 15/0.75 = 20 min) and the CV to 2.5.

Below is a diagram of the process time probabilities for the high variability process (blue) and the low variability process (red). The red process is the one from the earlier example, with a process time of 20 min and a CV of 0.32.

(Figure: HV (blue) and LV (red) production time distributions)

Because the high variability curve is tall and narrow, you might assume it is less variable than the red process. But we can’t see what happens after the 40-minute mark or so. Here is a magnification of the curves beyond 40 minutes:

(Figure: HV and LV production time distributions beyond 40 minutes)

From the diagram we can see that the low variability process drops almost immediately to negligible probabilities, while the blue line decreases only slightly. So there is a small but real probability of very long process times. The nominal process time is 15 minutes, but roughly every 50th job (744/15 ≈ 50) hits an outage and takes about 17 times longer. Together with the variable repair times this leads to a mean of 20 min and a CV of 2.5.

Variability this large has a significant impact on production. Suppose we need to produce one product every 22 minutes. Based on capacity this should be doable, because the mean process time including outages is 20 minutes.

However, an outage of 248 minutes builds a queue of over 11 products (248/22 = 11.27). When the machine is back in production it works the queue down at a rate of 1/15 − 1/22 = 7/330 parts per minute. So discharging the queue takes 11.27 / (7/330) = 531 minutes, assuming no further outages occur.

The probability of a new outage during the queue discharge, when the time between failures is exponentially distributed, is 1 − e^(−531/744) = 0.51 (the exponent is the discharge time divided by the mean time between failures). So there is over a 50 % probability that a new outage occurs before the previous queue is discharged, and our average WIP would be over 12 parts.
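The arithmetic above is easy to check. A short Python sketch, using the figures from this example:

```python
import math

takt = 22     # one product demanded every 22 minutes
t0 = 15       # process time when the machine is up, minutes
mttr = 248    # average outage length, minutes
mttf = 744    # mean time between failures, minutes

queue = mttr / takt                # parts piling up during one outage
drain_rate = 1 / t0 - 1 / takt     # parts per minute, 7/330
drain_time = queue / drain_rate    # minutes to empty the queue
p_outage = 1 - math.exp(-drain_time / mttf)

print(f"queue after outage: {queue:.2f} parts")         # 11.27
print(f"time to drain it: {drain_time:.0f} min")        # ~531
print(f"P(new outage while draining): {p_outage:.2f}")  # ~0.51
```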

Causes of variability

Natural variability

Natural variability means variation in the natural process time, which excludes outages, setup times and other external factors. Because natural variability is mostly caused by the worker, automated processes are less variable than manual ones. This doesn’t mean, however, that an automated process has no natural variability.

The coefficient of variation for natural variability is:

c0 = σ0 / t0

Preemptive outages, breakdowns

Breakdowns and other outages that can happen in the middle of a job are all combined into the same category when computing the MTTF and MTTR values (mean time to failure, mean time to repair).

Let’s take a closer look at machines A and B mentioned above. The natural process time is t0 = 15 min for both machines, and the natural standard deviation is σ0 = 3.35 min. So the natural coefficient of variation for both machines is c0 = 3.35/15 = 0.223, and the SCV is c0² = 0.05.

The long-term availability of both machines is 75 %. Outages on A are long but rare, while outages on B are shorter but more frequent. MTTF (mf) for A is 12.4 hours (744 minutes) and MTTR (mr) is 4.133 hours (248 minutes). For B, mf = 1.9 h (114 min) and mr = 0.633 h (38 min). Notice that in both cases the mean time to failure is triple the repair time. For the calculations we assume that repair times are variable, with CV cr = 1.

Availability (A) can be calculated like this:

A = mf / (mf + mr)

The effective process time te is the natural process time divided by the availability:

te = t0 / A

The effective process time te is 20 minutes for both machines. Both the effective process time and the effective capacity (re = 3 products/hour) are the same for the two machines. Because most systems used to analyze outages are based on these factors, the machines are considered equal. But once we add variability, we get quite different results.

Think of these machines as part of a production line. After machine A we need WIP equivalent to 4.13 hours of production to survive an outage. With machine B we need only about one sixth of that inventory. (In reality we would need more, because these figures are only average times.)

The SCV of the effective process time can be calculated from this formula:

ce² = c0² + (1 + cr²) · A(1 − A) · mr / t0

For machine A this gives:

ce² = 0.05 + (1 + 1) × 0.75 × (1 − 0.75) × 248/15 = 6.25

which gives ce = 2.5.

And for machine B:

ce² = 0.05 + (1 + 1) × 0.75 × (1 − 0.75) × 38/15 = 1.0

and ce = 1
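These calculations are simple enough to script. A minimal Python sketch applying the formula above to both machines:

```python
def effective_scv(c0_sq, cr_sq, availability, mr, t0):
    """ce^2 = c0^2 + (1 + cr^2) * A * (1 - A) * mr / t0 (preemptive outages)."""
    return c0_sq + (1 + cr_sq) * availability * (1 - availability) * mr / t0

t0, c0_sq, cr_sq = 15, 0.05, 1.0  # natural time 15 min, c0^2 = 0.05, repair CV = 1

for name, mf, mr in (("A", 744, 248), ("B", 114, 38)):
    A = mf / (mf + mr)   # availability, 0.75 for both machines
    te = t0 / A          # effective process time, 20 min for both
    ce_sq = effective_scv(c0_sq, cr_sq, A, mr, t0)
    print(f"machine {name}: A = {A:.2f}, te = {te:.0f} min, "
          f"ce^2 = {ce_sq:.2f}, ce = {ce_sq ** 0.5:.2f}")
# machine A: A = 0.75, te = 20 min, ce^2 = 6.25, ce = 2.50
# machine B: A = 0.75, te = 20 min, ce^2 = 1.00, ce = 1.00
```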

We can see that the production line with machine A has more variability. So a machine that breaks down frequently but for short periods is a better option than one with rare but long outages. This may run contrary to your intuition: you would think it is better to have infrequent outages than to fight them every day. But logistically, short daily outages are easier to manage.

Nonpreemptive outages

Nonpreemptive outages are those that are unavoidable, but whose timing we can control to some degree. Preemptive outages are breakdowns that happen regardless of what stage the work is at. Nonpreemptive outages develop slowly; for example, a blade gradually becomes dull, and we can wait for the current job to finish before changing it.

Setup times can also be treated as nonpreemptive outages when they are caused by changes in the process, for example changing the blade. Product changeovers are not included here, because their timing is entirely in our control.

Other nonpreemptive outages are caused by repairs, breaks, worker meetings, shift changes and so on. These are typically done between jobs, not during them.

Here is another example with two machines. The more flexible machine 1 produces one product every 1.2 hours and needs no setups. Machine 2 produces one product per hour and needs 2 hours of maintenance after every 10 pieces.

For machine 1, re = 1 part / 1.2 hours = 0.833 parts/hour, and for machine 2, re = 1 part / (1 + 2/10) hours = 0.833 parts/hour. This is exactly the same for both machines, so we could consider them equal. But machine 1 has less variability, so if all other factors were equal, it would be the better choice in a production environment, reducing the variability of the process times.

This equality assumes that the natural variability is the same for both machines. If the natural variability of machine 1 were larger, we would have to take the calculation further.

To calculate the variability of an effective process time that includes setups, we need the natural process time parameters, i.e. the mean t0 and variance σ0². We then account for the setups: Ns is the number of pieces produced between setups, ts is the mean setup time, and cs is the coefficient of variation of the setup times. Now we can use these formulas:

te = t0 + ts / Ns
σe² = σ0² + σs²/Ns + ((Ns − 1) / Ns²) · ts²
ce² = σe² / te²

From these formulas we get ce² = c0² = 0.25 for the first machine and ce² = 0.31 for the second, so the first machine has lower variability. If we could reduce the second machine’s setup time from 2 hours to 1 hour and perform it after every 5 pieces instead, its ce² would drop to 0.16. The second machine would then have the lower variability and would be the better choice for production.
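The post does not spell out the natural and setup variances behind these numbers, so the Python sketch below assumes σ0² = 0.0625 and σs² = 0.25 for machine 2 (and σs² = 0.0625 for the shorter setup); these assumed values reproduce the reported results:

```python
def setup_effective(t0, var0, ns, ts, var_s):
    """te = t0 + ts/Ns; sigma_e^2 = sigma_0^2 + sigma_s^2/Ns + (Ns-1)/Ns^2 * ts^2."""
    te = t0 + ts / ns
    var_e = var0 + var_s / ns + (ns - 1) / ns**2 * ts**2
    return te, var_e / te**2  # (effective time, ce^2)

# Machine 2: t0 = 1 h, a 2-hour setup every 10 parts.
print(setup_effective(1.0, 0.0625, 10, 2.0, 0.25))    # (1.2, ~0.31)

# Same machine with a 1-hour setup every 5 parts:
print(setup_effective(1.0, 0.0625, 5, 1.0, 0.0625))   # (1.2, ~0.16)
```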

Variability from rework

Variability is also increased by the inspections and possible rework caused by poor quality. Rework can be treated like a nonpreemptive outage, since it is done between jobs. It affects production the same way as setups: it steals capacity and adds variability to the effective process time.


Written by Jesse Uitto

Entrepreneur, Purchasing Professional
