Wednesday, June 2, 2010
Process Improvement, Problem Solving & Cost of Poor Quality (COPQ)
Process improvement is the concept of measuring the output of a particular process or procedure, then modifying the process or procedure to improve the output, increase efficiency, or increase the effectiveness of the process or procedure.
Problem solving is a mental process and is part of the larger problem process that includes problem finding and problem shaping. Considered the most complex of all intellectual functions, problem solving has been defined as a higher-order cognitive process that requires the modulation and control of more routine or fundamental skills.
Process improvement and effective problem solving strategies are two weaknesses of many companies. Every company has problems. That's right, every single one. The difference between the outstanding performers and the average companies is how well they react to issues when they arise. I have found that the outstanding performers proactively implement measures before problems arise and become costly to correct. The better performing companies have found ways to get better at what they do; the average companies have not. Is this your company?
Some companies believe that scrap, waste and defects are just the cost of being in business. They're right! It is a cost, and it is called the Cost of Poor Quality (COPQ). These are dollars, euros, pesos, etc. that can and do affect your company's ability to remain profitable and stay in business.
A company that wants to thrive can’t continue to throw money away. More and more organizations are realizing the bottom-line effect of COPQ, and taking the necessary steps to enhance their performance by measuring cost of poor quality and understanding these costs to improve profitability.
Remember, you cannot manage what you do not measure. Measurement is critical! The best quality improvement initiatives are driven by data! Why? How else are you going to know how much you have improved if you don’t measure it?
All of you have been exposed to measures in many situations, and most of them were important. In school, you were graded. Perhaps you own shares of stock; you judge its success by its increase in value (a measurement). How do you know if your team wins? By the final score of the competition (a measurement). The fact is that many daily activities in life have accompanying measures to judge their success.
You might argue that you know if things are getting better; you can just tell. I am sure you can, but this is not enough; it is important to measure improvement. One reason to measure improvement is to prove to others that things are improving. Another is to demonstrate the time and money saved. We need to find ways to lower costs to save the company money, and lowering costs usually improves staff morale as well. I hope that I have convinced you that to make significant quality improvements, you should be actively measuring.
What should you be measuring? The most common measurements are in design and manufacturing. Several problem solving structures are currently popular. One of them is a process improvement tool that can be used in any industry, whether manufacturing, service, or information. No matter the complexity of a problem, your organization may choose to use our patented software platform, Explicore. Why, you ask? Explicore enables companies to test the robustness of their manufacturing and design processes. Explicore takes all parameters related to a product, process, and/or system and, within minutes, identifies the parameters in need of correction or improvement. Remember, a parameter does not need to fail to require improvement. The output of Explicore is a statistically based report that identifies the Key Process Indicators (KPIs) so a company can quickly see where to put its resources to fix problem areas before they become too costly to fix.
Once you reach your goals, continue collecting new data on the variables to ensure the health of the product, process, and system. Doing so will help keep you from backsliding to the old ineffective state. Backsliding is all too easy. It is hard to break old habits, but keeping data will help you. This is, in fact, what drives the success of many programs.
I hope we have convinced you that data driven change is the best way to approach quality improvement. It will concretely demonstrate how you are progressing and will prove to others that your site is doing much better. It will help you earn your just rewards!
Monday, May 3, 2010
Handling Statistical Variation in Six Sigma
The term "Six Sigma" is defined as a statistical measure of quality, specifically a level of 3.4 defects per million opportunities, or roughly 99.99966% defect-free output. To put the Six Sigma management philosophy into practice and achieve this high level of quality, an organization implements a Six Sigma methodology. The fundamental objective of the Six Sigma methodology is the implementation of a measurement-based strategy that focuses on process improvement and variation reduction through the application of Six Sigma improvement projects. The selected projects support the company's overall quality improvement goals.
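To make the arithmetic behind that famous 3.4 figure concrete, here is a minimal sketch (in Python, using scipy) that converts a sigma level to defects per million opportunities, assuming the conventional 1.5-sigma long-term shift used in standard Six Sigma tables. The script is illustrative only and is not part of any product mentioned here.

```python
# Sketch: converting a sigma level to defects per million opportunities (DPMO),
# assuming the conventional 1.5-sigma long-term shift used in Six Sigma tables.
from scipy.stats import norm

def dpmo(sigma_level, shift=1.5):
    # One-sided tail area beyond (sigma_level - shift) standard deviations.
    return norm.sf(sigma_level - shift) * 1_000_000

for level in (3, 4, 5, 6):
    print(f"{level} sigma is about {dpmo(level):,.1f} DPMO")
# 6 sigma works out to roughly 3.4 DPMO, i.e. about 99.99966% defect-free output.
```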
A Six Sigma project begins with the proper metrics. Six Sigma produces a flood of data about your process. These measurements are critical to your success. If something is not measured, it cannot be managed. Through those measurements and all the data, you begin to understand your process and develop methodologies to identify and implement the right solutions to improve your process. Six Sigma's clear strength is a data-driven analysis and decision-making process – not someone's opinion or gut feeling.
Metrics are at the heart of Six Sigma. Critical measurements that are necessary to evaluate the success of the project are identified and determined. The initial capability and stability of the project is determined in order to establish a statistical baseline. Valid and reliable metrics monitor the progress of the project. Six Sigma begins by clarifying what measures are the Key Process Indicators (KPIs) to gauge business performance, and then it applies data and analysis to build an understanding of key variables and optimize results. Fact driven decisions and solutions are driven by two essential questions: What data/information do I really need? How do we use that data/information to maximize benefit?
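As an illustration of what that initial capability baseline can look like, here is a minimal sketch assuming normally distributed measurements; the sample data and the specification limits are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=10.02, scale=0.05, size=200)   # illustrative measurements
lsl, usl = 9.85, 10.15                               # hypothetical spec limits

mean, std = data.mean(), data.std(ddof=1)
cp  = (usl - lsl) / (6 * std)                        # potential capability
cpk = min(usl - mean, mean - lsl) / (3 * std)        # actual capability (accounts for centering)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```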
Six Sigma metrics are more than a collection of statistics. The intent is to make targeted measurements of performance in an existing process, compare it with statistically valid ideals, and learn how to eliminate variation. Improving and maintaining product quality requires an understanding of the relationships between critical variables. Better understanding of the underlying relationships in a process often leads to improved performance.
To achieve a consistent understanding of the process, potential key characteristics are identified; control charts may be incorporated to monitor these input variables. Statistical evaluation of the data identifies the key areas on which to focus process improvement efforts, areas that can have an adverse effect on product quality if not controlled. Advanced statistical software, such as Explicore, is very useful for gathering, categorizing, evaluating, and analyzing the data collected throughout a Six Sigma project. Explicore automatically captures, characterizes, evaluates, and analyzes all parametric data very quickly, completing the analysis within a few minutes to validate the robustness of manufacturing and design processes. Special cause variation is automatically documented and analyzed. When examining quality problems, it is useful to determine which of the many types of defects occur most frequently in order to concentrate one's efforts where the potential for improvement is the greatest. A classic method for determining the "vital few" is Pareto analysis, which Explicore uses to separate the significant few from the trivial many.
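For readers who want to see the "vital few" idea in code, here is a small Pareto tabulation sketch; the defect categories and counts are hypothetical, and this is not Explicore's actual output.

```python
import pandas as pd

# Hypothetical defect tallies from final test; the category names are made up.
defects = {"solder bridge": 120, "missing part": 45, "cosmetic": 30,
           "wrong label": 12, "firmware": 8, "other": 5}

s = pd.Series(defects).sort_values(ascending=False)
pareto = pd.DataFrame({"count": s,
                       "cum_pct": 100 * s.cumsum() / s.sum()})
print(pareto)
# The categories reaching ~80% cumulative share are the "vital few" to attack first.
```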
Many statistical procedures assume that the data being analyzed come from a bell-shaped normal distribution. When the data do not fit a normal bell-shaped distribution, the results can be misleading and difficult to interpret. When such a distribution is encountered, other statistical techniques can be used to assess whether the observed process can reasonably be modeled by a normal distribution. If not, either a different type of distribution must be selected or the data must be transformed to a metric in which they are normally distributed. In many cases, the data sample can be transformed so that it is approximately normal. For example, square roots, logarithms, and reciprocals often take a positively skewed distribution and convert it to something close to a bell-shaped curve. This process uncovers significant statistical variation, separating the important data from meaningless data or, if you will, "noise."
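Here is a short sketch of that workflow: check normality, then apply a Box-Cox transform to positively skewed data. The sample is simulated and the calls are standard scipy routines, not anything product-specific.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=0.6, size=300)      # positively skewed sample

stat, p = stats.normaltest(x)                         # D'Agostino-Pearson test
print(f"raw data: p = {p:.4f}")                       # small p -> not normal

x_bc, lam = stats.boxcox(x)                           # Box-Cox transform (requires x > 0)
stat2, p2 = stats.normaltest(x_bc)
print(f"Box-Cox (lambda = {lam:.2f}): p = {p2:.4f}")  # larger p -> closer to normal
```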
Once the data is crunched and a problem's root causes are determined, the project team works together to find creative new improvement solutions. The data is used and relied upon – it is the measurements of the realities you face! Yet it is smart measurement and smart analysis of the data – and above all the smart creation of new improvement solutions and their implementation – that create real change. The Six Sigma statistical tools are only the means to an end and should not be construed as the end itself. Using tools properly is critical to getting the desired result. Through a successful use of statistics in uncovering significant data, Six Sigma methodology and tools will drive an organization toward achieving higher levels of customer satisfaction and reducing operational costs.
Monday, April 12, 2010
Lean Manufacturing – Are You Ready For Process Improvement?
Now that we are deeply entrenched in lean manufacturing, many small to medium businesses are nonetheless abandoning it. Just as we saw company after company abandon the ISO certification process because it was time consuming, costly, and wasn't bringing all of those customers to our doorstep, many companies are abandoning Lean, or in some cases, taking a random or unsystematic approach to the concept.
The major stumbling block in achieving a successful lean approach is usually shortsightedness. We mean well, we want to improve, but we really don't have a genuine understanding of what is involved. For lean to work we need to approach it as a long-term solution that requires continuous attention, involvement, and commitment.
Improvement Practices must:
- be viewed as a major overhaul and not a short-term fix
- be forward thinking
- be driven from the top down
- empower people and involve all departments, especially Leadership (upper management)
- account for processes, procedures and tasks that do not currently exist, but are essential to success
- develop and follow a detailed plan
- be well documented and controlled
- be allowed to grow and adapt to your changing needs
- have a committed, long-term budget and resource pool
- have well defined goals that are reasonable and achievable
- celebrate success, and
- share the wealth
Lean manufacturing is incremental continuous improvement and a living entity. To approach it as a quick fix that will bring increased profits and efficiencies is a setup for failure. Although the lean process has been documented in countless papers, lectures, seminars, and accredited training programs, many organizations lack the basic fundamental processes and tools required to implement lean manufacturing. Most failures can be attributed to this lack of a solid foundation on which lean is built, and as a result, we are quickly overwhelmed by the amount of work and capital that must be spent before we can even begin to implement lean initiatives. In a relatively short time we lose enthusiasm, get increasingly bogged down in seemingly mundane tasks, eventually succumb to immediate time and budget constraints, and before we realize it, the program has been pushed to the back seat or shelved altogether.
Commitment
To be successful, any program which requires us to stray from our normal day-to-day behavior patterns has to be embraced and driven from the top down. This approach lends credibility to our actions, provides visibility, and assures that motivation and direction will be maintained. Anyone who has tackled lean manufacturing will tell you it is a constant uphill struggle that requires good leadership with a strong will and an ambitious outlook. The major players must be forward-thinking, self-motivated people who understand the big picture and can effectively break it down into smaller, manageable tasks to move the project forward. Team leaders must possess an inherent ability to fully understand the day-to-day operational procedures of the company and to pull all departments together in a common purpose toward the same goal. Management's direct participation is essential in providing support, motivation, and cooperation for everyone involved. Make no mistake – everyone must be involved.
Foundation
Though the fundamental procedures outlined in the lean initiatives seem simple and systematic, implementation of Lean principles can be overwhelming. For instance, taking a close look at those processes that are too long or are wasting valuable time and improving on those numbers through proven manufacturing methods is in itself somewhat obvious, but exposing those processes often is not. How do you know which processes are questionable? Do you have valid, proven process sequences? Do you even have documented process models? Are these sequences measured against valid cost studies or accurate estimates? Before you can begin to solve problems you will need to put into place all those procedures, time studies, controls, records, documents, etc. Who will perform this work? Are they capable of managing these tasks in addition to the daily workload? These are just a few of a multitude of tasks that will be required before the actual cost saving can begin, and it is precisely at this point that the project begins to lose effectiveness.
You are suddenly faced with mountains of work that drain resources and don't show results on the bottom line. Experienced personnel are reduced to clerks, measuring and documenting data. The forms and documents produced are not universal across all departments. There is no system in place to file, maintain or control the mountains of paper and the endless sea of data collected. More often than not, as the day-to-day problems continue to pile up, you are forced to commit your resources to keeping the company afloat and lean manufacturing fades into the background.
Goals
Only by having well defined goals can you hope to succeed. Outlining precisely each and every task that is required, and not just on the improvement side, but in developing the basic infrastructure that will be required to implement those improvements, will put the project in perspective.
Once you have a complete and concise understanding of what is truly involved you can determine reasonable time frames, budget resources accordingly, assign the appropriate personnel, and develop acceptable expectations. Break the processes down into smaller, more manageable tasks that can be scheduled into reasonable time frames. Commit to addressing these tasks on a regular basis. And most importantly, understand that it is going to take time.
Take advantage of the projects you've identified to groom your employees, set short- and long-term goals, involve the entire organization and truly promote a team approach. As goals are met, publish your success and reward the participants.
Taking small steps in the early stages has several key advantages. Small successes foster a sense of worth in those involved and provide a means of gaining the experience and confidence to take you to the next level. Smaller projects take less time, tie up significantly less capital, and still provide the ability to handle your day-to-day business. These smaller projects offer two major advantages that are often overlooked: first, they generate savings that can be used to offset current losses from poor productivity and inaccurate estimates, which can defer the costs of overtime, additional equipment and customer dissatisfaction; and second, they provide a means to identify and finance your future projects.
Continued Change
No matter how accomplished you become in the short-term, continued success can only be achieved by the realization that the system complete with procedures, documentation and control is a living entity and must be allowed to grow and adapt. If you think your organization will get it right the first time, every time, you are destined to fail. As you improve your processes you will inevitably become aware of opportunities that were previously unforeseen. Changes in company direction or market demands will move you into new territories and you must adapt quickly. If something doesn't work as well as it should, change it. Don't be afraid to admit your shortcomings. Audit your system, evaluate your control, assess your need, and when warranted, adapt your system so it serves your purpose.
The Devil is in the Details
Documentation, documentation, documentation: it can't be said enough. If there is one area that tends to be overlooked, it is proper documentation. Record keeping is a mundane and time consuming effort that is most often incomplete or ignored altogether. Documentation should be approached as a tool that provides many advantages: examples of our successes and failures; milestones of our progress; foundations for value stream mapping, process mapping and work instructions; quality, safety and process controls; and an excellent means of disseminating information within the organization.
Documenting successes provides substantiated evidence of cost savings which can be utilized to bankroll future projects, new equipment, and personnel, in addition to being a great source of direction in moving the program forward. If a particular solution to a problem in one area yields impressive results, it only makes sense to adapt this solution to other processes or problems within the company. We are very good at documenting what "went wrong" and trying to prevent it from happening somewhere else, but all too often we overlook the opportunity to take what "went right" and apply it to the entire company.
No matter how successful the company becomes at improving its processes, without properly documented controls in place, things will inevitably return to their old problematic ways. Controls are our best means of ensuring that what was put in place remains in place. They also provide an avenue to review our accomplishments from time to time to ensure things haven't gone awry. When new problems arise, the first place we look is at the controls. How did this happen? What can we do to prevent it from happening again? Properly developed controls will tie us back to supporting documents, which can be a great source of troubleshooting. They give us concrete data to determine if the process itself needs changing or if the process is not being followed. They will illustrate lapses in training, inspection, materials, maintenance, etc., and will quickly point us toward the root cause. In addition, properly documented procedures will alert us to other areas that might be prone to the same setbacks.
Celebrate Our Successes
Success is important to the progress of any endeavor and, therefore, should not be ignored. Success should be celebrated, and celebrated often. The tendency is to wait until a project is complete and all the savings have been tallied before we think of celebrating the accomplishments. This will usually be seen as too little, too late by many of the individuals involved, and it invariably tends to overlook some of the participants. Every person involved did his or her part. Each individual will view their contribution as important, and every participant will consider their input as having come about through hard work and added effort, above and beyond their day-to-day responsibilities. By celebrating success as it happens, no matter how big or how small, everyone involved will not only feel appreciated and important, but will be motivated and driven to succeed further. Acknowledgment of a job well done promotes the spirit we all want to see in our employees and co-workers, it instills pride in their work, and it fosters a sense of worth that culminates in a workforce that looks for problems and willingly brings forth solutions. Participation on a team will no longer be viewed as an added burden to an already heavy workload, but as an honor and a responsibility. This mindset will bring more to the bottom line and the future success of your business than you can imagine.
Reap the Rewards
Don't stop at simply celebrating your success, reap the rewards. The old school approach has always been that since the company footed the bill to complete the projects, purchase the equipment, rearrange the facility, etc. they should reap the rewards. You'll discover that sharing the savings will go much farther and pay higher dividends than pocketing the profits. Sharing the wealth shouldn't be seen as distributing the cash, but reinvesting in the company. Reinvesting in equipment, facilities, personnel, and let's not forget, reinvesting in our customers.
As your success mounts, your business will grow, your profits will increase, and so will the workload placed on your employees. You've already invested heavily in the training and development of your personnel; you don't want to lose them now. In actuality, you should be looking to your employees to take on more of a management role than a laborer's role, to mentor and train your new hires, and to apply their experience in finding and minimizing defects in the system.
Nothing will stop a person in their tracks and send them packing faster than the thought that their hard work was taken for granted. So make sure the raises you give out are commensurate with the employees' worth. Increase your perks and benefits and promote from within. Make certain that it is understood that when a person embraces the responsibilities imposed upon them, the experience gained has value, and as the company grows so will its employees. It is important to keep this in mind when setting employee goals and determining what training you will provide. Make a commitment to grow your employees just as you have committed to growing your business.
Spend some of that money on improving your equipment. Get that newer, faster, more accurate equipment on the floor – it increases productivity, keeps you on the leading edge of technology, and opens new doors to capabilities and customers you didn't have before. Purchase software to reduce the documentation burden, improve the look and conditions of your facility, and hire a higher caliber of employee.
And don't forget about your customers. Passing the savings on to the customer will pay back exponentially. By reducing your costs you demonstrate a commitment to improvement and cost control, and you lend your organization more credibility than you might think. When everyone else is raising prices and tacking on surcharges, if you can hold, or better yet, reduce prices, who are your customers going to deal with? You'll not only retain the customers you already have, but you'll attract new ones looking to increase their own bottom line. Additionally, when you inevitably underestimate that one quote, you'll stand a better chance of having your customer accept an adjustment. If you can demonstrate time after time that you are reducing your prices, then when the time comes that you need to raise one, your customers will be much more willing to accept it and still feel confident that, overall, they are getting the best value.
Do not shy away from lean manufacturing – embrace it. It is vital to your future. If you approach Lean with an open mind, look at the long term, and strive for incremental continuous improvement, you will be successful. Keep in mind there is no finish line to reach; true success is found in the continued pursuit of improvement.
Saturday, March 6, 2010
Master the Power of Your Data
Performance management helps companies reach "top third" status by enabling them to clearly articulate their business strategy, align their business to that strategy, identify their Key Performance Indicators (KPIs), track progress, and deliver the information to the decision-makers. Just how do organizations monitor and manage this process?
A data analysis scorecard is an effective technology for measuring and monitoring product, process, and system performance. It enables the alignment of KPIs and provides the ability to track and optimize performance based on those indicators. KPIs are measured with a set of metrics that consider multiple interdependent perspectives, and they help organizations balance their focus on more than just the "bottom line." This approach ensures that design, manufacturing, quality (Six Sigma), and customer service are weighted appropriately, which results in well-rounded, successful companies. Until recently, however, data analysis scorecards were limited in the level of information they provided and in how far they let users drill down into that information for deeper analysis. Data warehousing is helping to eliminate those limitations.
The data analysis scorecard and data warehousing are a perfect combination for achieving the most efficient and accurate performance analysis and management yet. The combined technologies enable organizations to balance their resources and manage their business functions according to process and key performance indicators. The data warehouse is the infrastructure that supports the performance management process, providing a means for collecting and storing the data. Together, the data analysis scorecard and data warehousing give decision-makers the ability to drill down on the data delivered by the scorecard.
The data warehouse provides companies with the ability to compare and analyze identified KPIs against actual data, allowing for benchmarking and performance improvement tracking. This is a critical piece of performance management – knowing where you have been, where you are now, and where you are heading. Unlike the annual performance appraisal, the data warehouse-supported data analysis scorecard utility enables organizations to monitor their product, process, and system progress continuously.
There are a few critical components in an effective data analysis scorecard utility:
1. Getting last month's results 30 days late doesn't cut it in today's rapidly changing world. Some measures should be tracked to the day or shift; some are acceptable weekly, while others are acceptable at month end. Determining what is needed when and why should be driven by product volume.
2. Presenting the measures of manufacturing test data is one element of the data analysis scorecard utility, but determining and correcting the problematic areas in the product, process, and/or system is what turns the scorecard into a true management cockpit. It becomes a communications tool that supports upward and downward feedback at the management level and across business functions.
3. A good data analysis scorecard utility should contain the finest level of detail in order to craft the complete picture of all the parametric measures. For instance, it should contain manufacturing measures (e.g., variable or attribute parametric test data), design measures (e.g., specifications developed before and after performing a Monte Carlo analysis) and sub-tier data (e.g., supplier test data).
There is considerable analysis involved in implementing a performance management solution. Businesses must be carefully examined, and the metrics must be broken down by:
• Time dimensions
• Data availability
• Data access requirements
• Completeness and requirements
• Frequency of use
That information translates into the architectural components that deliver the warehouse and, ultimately, the data analysis scorecard utility. Therefore, the initial commitment to defining the appropriate strategies and measures at every level of the business is crucial.
KPIs are the primary means of communicating performance across the organization. KPIs should be balanced and not just focus on the traditional failure mode measures used to monitor performance. The combination of data warehousing and data analysis scorecard technology maintains KPIs when and where they are needed most, helping companies better satisfy customers, monitor progress, benchmark processes and activities, drive change, show signposts of improvement, create balance and provide relevant information.
The data warehouse and data analysis scorecards are where strategy, corporate performance management and business intelligence come together. When implemented properly, they show how strategy execution becomes manifest. They display results and progress. The metrics should translate an organization's strategy into observable outcomes and allow performance to be compared against goals. This is where the strategy rubber meets the road.
By incorporating into the data analysis scorecard not just financial and operational indicators but complete measures on all the manufacturing (and design) test data, companies amplify the voice of the customer within the corporation. A company is only as good as the compounded decisions its people make. Data analysis scorecards not only supply a set of software tools, but also enable better decision processes that allow management to act on product, process, or system data. When implemented well, data analysis scorecards and proper data warehousing help steer the business.
Data analysis scorecard solutions promote performance visibility of the product, process, and system. By providing insight, they allow for optimization and alignment of corporate resource planning, improved product reliability, and better control of processes, products, and systems. R&D will benefit from the use of the data analysis scorecard as well: it allows them to deliver a robust product to manufacturing, since the data is characterized prior to production release. Otherwise, manufacturing may face pre-production/production issues that become increasingly difficult to solve and are sometimes a detriment to product release.
In light of these realities, senior management must get a grip on this large chunk of discretionary corporate spending. The fact that results are known to be structurally variable and hard to predict makes the quest for valid measurement even more critical. On top of that, companies don't just operate to make a profit today; they must be constantly focused on ensuring sustainable profits in the future as well.
The idea behind data analysis scorecards is that noteworthy fluctuations in performance become instantly visible. When change occurs, the underlying root cause needs to be made visible, too! By drilling down or through, explanations for change are surfaced. By disaggregating the numbers in the scorecard along critical dimensions, new insights emerge as well. This is where the data analysis scorecard really adds value.
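In practice, disaggregating a scorecard number along a critical dimension is simply a grouped query against the warehouse. Here is a minimal pandas sketch; the table layout and column names (week, line, failed) are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical warehouse extract: one row per tested unit.
df = pd.DataFrame({
    "week":   ["W1", "W1", "W1", "W2", "W2", "W2"],
    "line":   ["A", "B", "A", "A", "B", "B"],
    "failed": [0, 1, 0, 1, 1, 0],
})

# Top-level KPI: weekly failure rate.
print(df.groupby("week")["failed"].mean())

# Drill down: the same KPI disaggregated by production line
# reveals whether a blip is broad-based or localized.
print(df.groupby(["week", "line"])["failed"].mean().unstack())
```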
Without such drill-down functionality, any blip on the radar might send managers on a search frenzy, looking for an underlying explanation of what caused the change. A scorecard that lacks it (combined with fruitless searches in the past) will lead managers to 'learn to ignore' some failures. This may lead to poor product performance once fielded.
The whole idea of business metrics is that they should drive the business forward and provide the best possible implementation of corporate strategy. However, translating strategy into action is never 'done,' and one must look at the KPI drivers on a regular basis to ensure they remain important.
About the Companies
TestSoft, Inc. and HELM Analytics have teamed up to provide manufacturers with comprehensive manufacturing quality control, production process, and supply chain management tools.
HELM Analytics system integration and data capture capabilities coupled with TestSoft’s Explicore™ software provide powerful solutions to increase manufacturing efficiency and identify cost-cutting measures. Explicore™ software, a lean Six Sigma based tool, is a Data Analysis Scorecard Utility that provides professionals with a powerful, reliable, and easy-to-use utility to fully analyze data and manage large and complex projects. Customers use a data parser as part of Explicore™ to capture data more efficiently and effectively. Explicore™ automatically characterizes successful and failed test data, providing a solid evaluation of the manufacturing process. TestSoft’s patented process enables users to quickly capture, characterize, analyze, assess and measure product parameters. This enables users to determine the reliability of their processes and products, avoiding costly design and testing mistakes and production downtime.
Leveraging these solutions creates cost savings, increases efficiency and ensures product quality is supported throughout the manufacturing process. Design flaws and testing issues can be identified early in the process and adjustments can be made quickly limiting cost overrun and production downtime. Developing cost saving initiatives is critical. HELM solutions help manufacturers track customer orders, production information and historical use to provide an overview of the total customer relationship and create opportunities for additional sales or product placement. Integrating disparate systems and providing a single access point for comprehensive business information creates opportunities and positions manufacturers to capture both operational and supply chain actionable data to help drive their business forward.
Together, the HELM Analytics and TestSoft, Inc. solutions help cut internal costs and maximize production efficiency.
HELM Analytics™, a division of CCS Global Tech delivers Comprehensive Enterprise Applications powered by Business Intelligence solutions. The company automates business processes and workflows within complex supply chain relationships. Our solutions include compliance, predictive analytics, data security, regulatory automation, reporting, and enterprise-wide business analytic capabilities for companies of all sizes. For more information, visit www.helm360.com
TestSoft delivers problem-solving technologies through software, training, and comprehensive services. The company also provides automated data capture and characterization to the customer when and where they need it most. TestSoft delivers world-class service and customized support solutions designed to reduce the costs and uncertainties associated with managing and manufacturing products and components. Our services are designed to help solve specific problems encountered during the manufacturing process. For more information, visit www.testsoftinc.com.
Thursday, February 4, 2010
Statistical Process Control (SPC) and Beyond
Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes seldom produces a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (for example, Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process (common causes of variation), while others display uncontrolled variation that is not present in the process causal system at all times (special causes of variation).
In 1989, the Software Engineering Institute introduced the notion that SPC can be usefully applied to non-manufacturing processes, such as software engineering processes, in the Capability Maturity Model (CMM). This idea exists today within the Level 4 and Level 5 practices of the Capability Maturity Model Integrated (CMMI). This notion that SPC is a useful tool when applied to non-repetitive, knowledge-intensive processes such as engineering processes has encountered much skepticism, and remains controversial today.
The crucial difference between Shewhart's work and the way SPC later came to be applied, which often involved mathematical distortion and tampering, is that his developments were made in the context, and with the purpose, of process improvement, as opposed to mere process monitoring (i.e. they could be described as helping to get the process into that "satisfactory state" which one might then be content to monitor). W. Edwards Deming and Raymond T. Birge took notice of Dr. Shewhart's work. Deming and Birge were intrigued by the issue of measurement error in science, and upon reading Shewhart's insights, they wrote to wholly recast their approach in the terms that Shewhart advocated. Note, however, that a true adherent to Deming's principles would probably never be content to stop at monitoring, following instead the philosophy and aim of continuous improvement.
Statistical process control (SPC) is an effective method of monitoring a process through the use of control charts. Control charts enable the use of objective criteria for distinguishing background variation from events of significance based on statistical techniques. Much of its power lies in the ability to monitor both process center and its variation about that center, by collecting data from samples at various points within the process. Variation in the process that may affect the quality of the end product or service can be detected and improved or corrected, thus reducing waste as well as the likelihood that problems will be passed on to the customer. With its emphasis on early detection and prevention of problems, SPC has a distinct advantage over quality methods, such as inspection, that apply resources to detecting and correcting problems in the end product or service. In conjunction with SPC, TestSoft’s automated lean six sigma product, Explicore, enables companies to test the robustness of their manufacturing and design processes quickly. Explicore takes all parameters related to a product, process, or system and within minutes the product identifies the parameters in need of correction or improvement. The output of Explicore is a statistically based report which identifies the key process indicators so a company can quickly identify where to put their resources to correct or improve problem areas.
In addition to reducing waste, SPC and Explicore can lead to a reduction in the time required to produce the product or service from end-to-end. This is partially due to a diminished likelihood that the final product will have to be reworked, but it may also result from using Explicore and SPC data to identify bottlenecks, wait times, and other sources of delays within the process. Process cycle-time reductions coupled with improvements in yield have made Explicore and SPC a set of valuable tools from both a cost reduction and a customer satisfaction standpoint. Explicore is able to harvest a vast amount of data automatically, effectively and consistently evaluates the product, process, and system health, and saves precious time and money – reducing the total cost of product ownership. That said, when we speak about SPC, we are also speaking about using Explicore to facilitate harvesting the data at an accelerated rate.
General
The following description relates to manufacturing rather than to the service industry, although the principles of SPC in conjunction with Explicore can be successfully applied to either. For a description and example of how SPC applies to a service environment, refer to Lon Roberts' book, SPC for Right-Brain Thinkers: Process Control for Non-Statisticians (2005). SPC has also been successfully applied to detecting changes in organizational behavior with the Social Network Change Detection method introduced by McCulloh (2007). Paul Selden, in his book Sales Process Engineering: A Personal Workshop (1997), describes how to use SPC in the fields of sales, marketing, and customer service, using Deming's famous Red Bead Experiment as an easy-to-follow demonstration. In conjunction with SPC, Explicore can be used in a number of different settings: simply submit the data to Explicore and the product will indicate the areas in need of improvement or correction.
In mass-manufacturing, the quality of the finished article was traditionally achieved through post-manufacturing inspection of the product; accepting or rejecting each article (or samples from a production lot) based on how well it met its design specifications. In contrast, Statistical Process Control uses statistical tools to observe the performance of the production process in order to predict significant deviations that may later result in rejected product.
Two kinds of variation occur in all manufacturing processes, common cause and special cause, and both types of process variation cause subsequent variation in the final product. The first is known as natural or common cause variation and may be variation in temperature, properties of raw materials, strength of an electrical current, etc. This variation is small, the observed values generally being quite close to the average value. The pattern of variation will be similar to those found in nature, and the distribution forms the bell-shaped normal distribution curve. The second kind of variation is known as special cause variation, and it happens less frequently than the first.
For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of product, but some boxes will have slightly more than 500 grams, and some will have slightly less, in accordance with a distribution of net weights. If the production process, its inputs, or its environment changes (for example, the machines doing the manufacture begin to wear), this distribution can change. For example, as its cams and pulleys wear out, the cereal filling machine may start putting more cereal into each box than specified. If this change is allowed to continue unchecked, more and more product will be produced that falls outside the tolerances of the manufacturer or consumer, resulting in waste. While in this case the waste is in the form of "free" product for the consumer, typically waste consists of rework or scrap.
By observing at the right time what happened in the process that led to a change, the quality engineer or any member of the team responsible for the production line can troubleshoot the root cause of the variation that has crept in to the process and correct the problem.
SPC indicates when an action should be taken in a process, but it also indicates when NO action should be taken. An example is a person who would like to maintain a constant body weight and takes weight measurements weekly. A person who does not understand SPC concepts might start dieting every time his or her weight increased, or eat more every time his or her weight decreased. This type of action could be harmful and possibly generate even more variation in body weight. SPC would account for normal weight variation and better indicate when the person is in fact gaining or losing weight.
How to use SPC
Initially, one starts with an amount of data from a manufacturing process with a specific metric, e.g. the mass, length, or surface energy of a widget. One example might be a nanoparticle manufacturing process in which two parameters are key: particle mean diameter and surface area. With the existing data, one would calculate the sample mean and sample standard deviation. The upper control limit of the process would be set to the mean plus three standard deviations and the lower control limit would be set to the mean minus three standard deviations. The action taken depends on the statistic and on where each run lands on the SPC chart, the aim being to control, but not tamper with, the process. How strictly the process is judged depends on which Western Electric run rules are applied. The only way to reduce natural variation is through improvement to the product, process technology, and/or system.
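Here is a minimal sketch of that calculation: estimate the mean and standard deviation from existing data, set control limits at plus and minus three standard deviations, and flag points that fall outside them. The data are simulated, and a fuller implementation would also apply run rules such as the Western Electric rules.

```python
import numpy as np

rng = np.random.default_rng(2)
diameter = rng.normal(50.0, 0.8, size=60)   # simulated particle mean-diameter readings
diameter[45:] += 3.0                        # inject a process shift (special cause)

baseline = diameter[:30]                    # estimate limits from stable history
mean, sd = baseline.mean(), baseline.std(ddof=1)
ucl, lcl = mean + 3 * sd, mean - 3 * sd

for i, x in enumerate(diameter):
    if x > ucl or x < lcl:
        print(f"point {i}: {x:.2f} outside control limits [{lcl:.2f}, {ucl:.2f}]")
```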
Explanation and Illustration:
What do “in control” and “out of control” mean?
Suppose that we are recording, regularly over time, some measurements from a process. The measurements might be lengths of steel rods after a cutting operation, or the lengths of time to service some machine, or your weight as measured on the bathroom scales each morning, or the percentage of defective (or non-conforming) items in batches from a supplier, or measurements of Intelligence Quotient, or times between sending out invoices and receiving the payment etc.
A series of line graphs or histograms can be drawn to represent the data as a statistical distribution. It is a picture of the behavior of the variation in the measurement that is being recorded. If a process is deemed "stable," the concept is that it is in statistical control. The point is that if an outside influence impacts the process (e.g., a machine setting is altered or you go on a diet), then, in effect, the data are no longer all coming from the same source. It therefore follows that no single distribution could possibly serve to represent them. If the distribution changes unpredictably over time, then the process is said to be out of control. As a scientist, Shewhart knew that there is always variation in anything that can be measured. The variation may be large, or it may be imperceptibly small, or it may be between these two extremes; but it is always there.
What inspired Shewhart’s development of the statistical control of processes was his observation that the variability which he saw in manufacturing processes often differed in behavior from that which he saw in so-called “natural” processes – by which he seems to have meant such phenomena as molecular motions.
Wheeler and Chambers combine and summarize these two important aspects as follows:
"While every process displays variation, some processes display controlled variation, while others display uncontrolled variation."
In particular, Shewhart often found controlled (stable) variation in natural processes and uncontrolled (unstable) variation in manufacturing processes. The difference is clear. In the former case, we know what to expect in terms of variability; in the latter we do not. We may predict the future, with some chance of success, in the former case; we cannot do so in the latter.
Why is "in control" and "out of control" important?
Shewhart gave us a technical tool to help identify the two types of variation: the control chart.
What is important is the understanding of why correct identification of the two types of variation is so vital. There are at least three prime reasons.
First, when there are irregular large deviations in output because of unexplained special causes, it is impossible to evaluate the effects of changes in design, training, purchasing policy etc. which might be made to the system by management. The capability of a process is unknown, whilst the process is out of statistical control.
Second, when special causes have been eliminated, so only common causes remain, improvement has to depend upon someone’s action. For such variation is due to the way that the processes and systems have been designed and built – and only those responsible have authority and responsibility to work on systems and processes. As Myron Tribus, Director of the American Quality and Productivity Institute, has often said: “The people work in a system – the job of the manager is to work on the system and to improve it, continuously, with their help.”
Third, and of great importance, managers who do not have this understanding of variation will, by (in effect) misinterpreting either type of cause as the other and acting accordingly, not only fail to improve matters – they will literally make things worse.
These implications, and consequently the whole concept of the statistical control of processes, had a profound and lasting impact on Dr. Deming. Many aspects of his management philosophy emanate from considerations based on just these notions.
Why SPC?
The fact is that when a process is within statistical control, its output is indiscernible from random variation: the kind of variation which one gets from tossing coins, throwing dice, or shuffling cards. Whether or not the process is in control, the numbers will go up, the numbers will go down; indeed, occasionally we shall get a number that is the highest or the lowest for some time. Of course we shall: how could it be otherwise? The question is - do these individual occurrences mean anything important? When the process is out of control, the answer will sometimes be yes. When the process is in control, the answer is no.
So the main response to the question Why SPC? is this: it guides us to the type of action that is appropriate for trying to improve the functioning of a process. Should we react to individual results from the process (which is only sensible, if such a result is signaled by a control chart as being due to a special cause) or should we be going for change to the process itself, guided by cumulated evidence from its output (which is only sensible if the process is in control)?
Process improvement needs to be carried out in three chronological phases:
- Phase 1: Stabilization of the process by the identification and elimination of special causes;
- Phase 2: Active improvement efforts on the process itself, i.e. tackling common causes;
- Phase 3: Monitoring the process to ensure the improvements are maintained, and incorporating additional improvements as the opportunity arises.
An Introduction to Numerical Evaluation of Metrics
Numerical evaluation of metrics (NEM) indicates that one must manage the input (or cause) instead of managing the output (results). A transfer function, Y = f(X), describes the relationship between lower-level requirements and higher-level requirements. The transfer function can be developed to define the relationship of elements and help control a process. By managing the inputs, we will be able to identify and improve the multiple processes that each contribute something to the variation of the output, which is necessary for real improvement. We will also need to manage the relationship of the processes to one another. Another consideration is process variation versus design tolerances, where the center of the process is independent of the design center and of the upper and lower specification limits. Before we discuss NEM further, let's understand common cause and special cause variation: common cause variation exists within the upper and lower control limits, and special cause variation exists outside the upper and lower control limits.
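To make the Y = f(X) idea concrete, here is a small Monte Carlo sketch that propagates the variation of the inputs through a transfer function and shows roughly how much each input drives the output variation; the function and the input distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical transfer function relating two inputs to one output.
def f(x1, x2):
    return 2.0 * x1 + 0.5 * x2**2

x1 = rng.normal(10.0, 0.20, n)      # input 1 with its process variation
x2 = rng.normal(4.0, 0.10, n)       # input 2 with its process variation

y = f(x1, x2)
print(f"Y: mean = {y.mean():.2f}, std = {y.std():.3f}")

# Crude sensitivity check: hold one input at nominal to see the other's contribution.
print(f"std with X1 fixed: {f(10.0, x2).std():.3f}")
print(f"std with X2 fixed: {f(x1, 4.0).std():.3f}")
```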
Variation exists in everything. However...
"A fault in the interpretation of observations, seen everywhere, is to suppose that every event (defect, mistake, accident) is attributable to someone (usually the nearest at hand), or is related to some special event. The fact is that most troubles with service and production lie in the system. Sometimes the fault is indeed local, attributable to someone on the job or not on the job when he should be. We speak of faults of the system as common causes of trouble, and faults from fleeting events as special causes." - W. Edwards Deming
We are able to go beyond SPC by using the numerical evaluation of metrics (NEM).
- Measurements will display variation.
- There is a model (the common cause / special cause model) that separates this variation into assignable variation, due to "special" sources, and common cause variation. Control charts must be used to differentiate the two.
- This is the Shewhart / Deming Model
- Remember, a metric that is in control implies a stable, predictable amount of variation (common cause variation only). This, however, does not mean a "good" or desirable amount of variation – see the Western Electric Rules (a couple of which are sketched in code after this list).
- A metric that is out of control implies an unstable, unpredictable amount of variation; it is subject to both common cause and special cause variation. To diagnose which is which, rational sub-grouping (discussed below) asks:
- What x’s and noise are changing within the subgroup
- What x’s and noise are not changing within the subgroup
- What x’s and noise are changing between the subgroups
- What x’s and noise are not changing between the subgroups
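As promised above, here is a sketch of two of the classic Western Electric signals (one point beyond three sigma; two of three consecutive points beyond two sigma on the same side). It is an illustrative subset of the rules, run on simulated data with known mean and standard deviation.

```python
import numpy as np

def western_electric_signals(x, mean, sd):
    """Flag two classic Western Electric rule violations (illustrative subset)."""
    z = (np.asarray(x) - mean) / sd
    signals = []
    for i in range(len(z)):
        # Rule 1: one point beyond 3 sigma.
        if abs(z[i]) > 3:
            signals.append((i, "beyond 3 sigma"))
        # Rule 2: two out of three consecutive points beyond 2 sigma, same side.
        if i >= 2:
            window = z[i - 2:i + 1]
            if (window > 2).sum() >= 2 or (window < -2).sum() >= 2:
                signals.append((i, "2 of 3 beyond 2 sigma"))
    return signals

rng = np.random.default_rng(4)
data = rng.normal(0, 1, 40)
data[25:28] += 2.5                       # simulated special cause
print(western_electric_signals(data, mean=0.0, sd=1.0))
```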
Our goal is to improve the processes – SPC/NEM helps by recognizing the extent of variation that now exists so we do not overreact to random variation. To accomplish this we need to study the process to identify sources of variation and then act to eliminate or reduce those sources of variation. We expand on these ideas to report the state of control, monitor for maintenance, determine the magnitude of effects due to changes on the process, and discover sources of variation.
Some typical numeric indicators are: labor, material, and budget variances; scrap; ratios such as inventory levels and turns, and asset turnover; gross margin; and schedules. Any measure will fluctuate over time regardless of the appropriateness of the numerical indicator. These numbers are the result of many activities and decisions that are frequently outside the range of a manager's responsibility.
Some metrics, including organizational metrics (the output "Ys"), are influenced by multiple tasks, functional areas, and processes. If we believe in managing the 'causes' (the input "Xs") instead of the 'results' (the "Ys" in the transfer function Y = f(X)), then identifying and improving the multiple processes that each contribute something to the variation in the "Ys" is necessary for improvement. The management of the relationship of these processes to each other is also required. Otherwise, attempting to directly manage these "Ys" can lead to dysfunctional portions of the product, process, or system.
Currently, judgments as to the magnitude of the variation in these numerical indicators are based on: comparison to a forecast, goal, or expectation; comparison to a result of the same kind in a previous time period; and intuition and experience. Judgments made in this context ignore: the time series that produced the number; the cross-functional nature of the sources of variation; the appropriateness of that measure for improvement purposes; and the complete set of "Xs" that must be managed to improve product parameters.
A review of the preliminary guidelines for NEM:
- Control limits are calculated from a time series of the metric
- Different formulas are available – depending on the type of data.
- The observed metrics (and control limits) are a function of the sampling and sub-grouping plan
- Variation due to ‘assignable cause’ is often the easiest variation to reduce
- The most commonly monitored metrics are the outputs (“Ys”)
- These metrics may be a function of one process or many processes
- Control limits are not related to standards nor are they specifications. Control limits are a measure of what the process does or has done. It is the present / past tense, not the future – where we want the process to be
- Control limits identify the extent of variation that now exists so that we do not overreact to ‘random’ variation.
Diagnosing the causes of variation
Charts that are 'in control' tell you that the variation is contained within the subgroups, and charts that are 'out of control' tell you that there is variation between subgroups. We can use control charts and the related control limits to view and evaluate some of the potential causes of variation within a process. This view of the process can be used to estimate the improvement opportunity if the identified cause or causes of variation were removed or reduced. The technique that we use is called rational sub-grouping. Rational sub-grouping means that we have some kind of "rationale" for how we sub-grouped the data. In other words, we are conscious of the question we are asking with our sub-grouping strategy and will take appropriate action based on the results the control charts provide. This is where Explicore is able to help identify where the problems, if any, exist. Explicore captures like data, characterizes that data, and performs a preliminary analysis of it. This quickly and consistently identifies the parameters that require attention.
In rational sub-grouping, the total variation of any system is composed of multiple causes: setup procedures, the product itself, process conditions, and maintenance processes. By sub-grouping the data so that samples from each of these separate conditions land in separate sub-groups, we can explore the nature of the variation of the system or equipment. The goal is to establish a subgroup (or sampling strategy) small enough to exclude systematic, non-random influences. The intended result is data that exhibit only common cause variation within groups and special cause variation (if it exists) between groups; a brief sketch of this within/between comparison follows.
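The sketch below (with made-up subgrouped data) illustrates the rational sub-grouping question in code: the X-bar limits are predicted from within-subgroup variation, so any subgroup average falling outside them signals additional between-subgroup variation worth investigating.

```python
# Sketch: compare between-subgroup variation with limits predicted from
# within-subgroup variation (X-bar / R logic). Data are illustrative only.
import numpy as np

# Each row is one rational subgroup of 4 consecutive measurements.
subgroups = np.array([
    [10.1, 10.3,  9.9, 10.0],
    [10.2, 10.0, 10.1, 10.3],
    [ 9.8, 10.0,  9.9, 10.1],
    [10.6, 10.7, 10.5, 10.8],   # e.g., a different setup or lot
    [10.0, 10.2,  9.9, 10.1],
])

x_bars = subgroups.mean(axis=1)
ranges = np.ptp(subgroups, axis=1)

# SPC constant A2 for subgroup size 4 (from standard tables).
A2 = 0.729
grand_mean = x_bars.mean()
r_bar = ranges.mean()
ucl, lcl = grand_mean + A2 * r_bar, grand_mean - A2 * r_bar

out = np.where((x_bars > ucl) | (x_bars < lcl))[0]
print(f"X-bar limits from within-subgroup variation: [{lcl:.3f}, {ucl:.3f}]")
print("subgroups whose averages signal between-subgroup variation:", out)
```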
Numerical Evaluation of Metrics is being usefully applied to non-manufacturing processes including the Capability Maturity Model (CMM). NEM is a useful tool when applied to non-repetitive, knowledge-intensive processes such as engineering processes. Remember, if we can control the input to a product, process, service, system, and the like, then the output should remain in control.
Overall, we are interested in three things: the average of the process, the long-term variation, and the short-term variation. Understanding the variation as quickly as possible matters to the success of a company: how fast we find problems, correct or improve them, and stabilize the product, process, or system drives company performance and financial health. This is where TestSoft's automated product, Explicore, can help. Explicore consistently identifies the key process indicators so a company can quickly decide where to put valuable resources to correct or improve problem areas effectively and efficiently, ultimately reducing the total cost of product ownership.
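For readers who like to see the short-term/long-term distinction in numbers, this small sketch (data made up) estimates short-term variation from the average moving range and long-term variation from the overall standard deviation of the same series; a long-term sigma well above the short-term sigma points to shifts and drifts rather than pure random noise.

```python
# Sketch: estimate short-term vs long-term variation from one time series.
# Short-term sigma comes from the average moving range (local, point-to-point
# variation); long-term sigma is the overall sample standard deviation.
import numpy as np

values = np.array([10.1, 10.0, 10.2,  9.9, 10.1,     # early production
                   10.5, 10.6, 10.4, 10.7, 10.5])    # after an (unnoticed) shift

sigma_short = np.abs(np.diff(values)).mean() / 1.128   # d2 = 1.128 for n = 2
sigma_long = values.std(ddof=1)

print(f"short-term sigma ~ {sigma_short:.3f}")
print(f"long-term sigma  ~ {sigma_long:.3f}")
# A long-term sigma well above the short-term sigma points to shifts or drifts
# worth investigating, not just random noise.
```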
Tuesday, January 5, 2010
Variation – Why Do We Measure?
Measurements may be influenced by the equipment used to make them or may be influenced by the process of making the measurements. The Measurement System includes the equipment and the process used in making the measurements. Therefore, we should perform a Measurement System Analysis (MSA) / Measurement System Evaluation (MSE) on a periodic basis (worst case it should be performed annually). In conjunction with an MSA / MSE we should capture, characterize, and analyze a sample set of manufacturing test data on a recurring basis, based on product volume, to ensure the continuing health of the product, process, and system. This ensures the product, process, and system remains in control during process changes, engineering change implementation, part changes, etc. throughout the product life cycle.
“We don’t know what we don’t know. If we cannot express what we know in numbers, we don’t know much about it. If we don’t know much about it, we cannot control it. If we cannot control it, we are at the mercy of chance.” – Mikel Harry
“If we cannot express what we know in terms of numbers then our knowledge is of a meager and unsatisfactory kind.” – Lord Kelvin
Variation exists in everything. However,
"A fault in the interpretation of observations, seen everywhere, is to suppose that every event (defect, mistake, accident) is attributable to someone (usually the nearest at hand), or is related to some special event. The fact is that most troubles with service and production lie in the system. Sometimes the fault is indeed local, attributable to someone on the job or not on the job when he should be. We speak of faults of the system as common causes of trouble, and faults from fleeting events as special causes." - W. Edwards Deming
Statistical techniques have long played an important role in maintaining a competitive position in manufacturing. To stay competitive, companies must manufacture products with almost perfect consistency and repeatability, which requires the ability to measure the variability of highly complex manufacturing processes. Capturing, characterizing, and analyzing the data on a recurring basis enables companies to keep the product, process, and system in control.
There are two methods for evaluating the data on a recurring basis. The first is an MSA / MSE, which may be performed semi-annually or annually; the other is Statistical Process Control (SPC) / Numerical Evaluation of Metrics (NEM), performed on a recurring basis that depends on product volume. We will discuss these processes individually, starting with the Measurement System Analysis; SPC / NEM is covered in our next article.
Objectives of a Measurement System
- Understand the data required and analysis techniques of MSA
- Use Explicore, TestSoft’s Lean Six Sigma Tool (Data Analysis Scorecard Utility), to quickly identify the significant few parameters in need of correction or improvement.
- Review Control Chart Methods and Data.
- Understand Common and Special Cause Variation (including the application of Western Electric Rules).
- Understand How to Use Information on the Variation Attributable to the Measurement System.
An MSA is a specially designed experiment that seeks to identify the components of variation in the measurement. Just as the processes that produce a product may vary, the process of obtaining measurements and data may vary and produce defects. An MSA evaluates the test method, measuring instruments, and the entire process of obtaining measurements to ensure the integrity of data used for analysis (usually quality analysis) and to understand the implications of measurement error for decisions made about a product or process. MSA is an important element of Six Sigma methodology and of other quality management systems. MSA analyzes the collection of equipment, operations, procedures, software, and personnel that affects the assignment of a number to a measurement characteristic. Processing the data through TestSoft’s product, Explicore, saves valuable resource time and money in capturing, characterizing, and performing the initial analysis of the data.
An MSA considers the following:
- Select the correct measurement and approach.
- Assess the measuring device.
- Assess procedures & operators.
- Assess any measurement interactions.
- Calculate the measurement uncertainty of individual measurement devices and/or measurement systems.
An MSA identifies and quantifies the different sources of variation that affect a measurement system. Variation in measurements can be attributed either to variation in the item being measured or to the measurement system itself; the variation contributed by the measurement system itself is the measurement error.
MSA Elements
- The objective of an MSA/MSE is to learn as much as possible about the measurement process in a short amount of time (e.g., potential study).
- The strategy is to include equipment, operators, parts, etc. that will usually be elements of the measurement process.
- A random selection of parts representing inherent process variation from production should be made.
- The parts should be labeled in a way to record measurements and remove possible operator bias (blind marking).
- Each part is then measured multiple times (at least twice) by each operator using the same equipment; this can be repeated for each set of equipment.
As part of ISO 9001:2008, the MSA is defined as an experimental and mathematical method of determining how much the variation within the measurement process contributes to overall process variability. There are five parameters to investigate in an MSA: bias, linearity, stability, repeatability, and reproducibility. A general rule for measurement system acceptability is: under 10 percent error is acceptable; 10 percent to 30 percent error suggests the system may be acceptable, depending on the importance of the application, the cost of the measurement device, the cost of repair, and other factors; over 30 percent error is considered unacceptable, and the measurement system should be improved.
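As a quick illustration of that rule of thumb, here is a small Python helper (the function name is our own) that classifies a measurement system from its gage error expressed as a percentage of total observed variation.

```python
# Sketch: classify measurement-system acceptability from percent gage error
# (gage variation as a percentage of total observed variation). The thresholds
# follow the common 10% / 30% rule of thumb described above.
def classify_measurement_system(percent_error: float) -> str:
    if percent_error < 10:
        return "acceptable"
    if percent_error <= 30:
        return "marginal: acceptability depends on application, device cost, repair cost, etc."
    return "unacceptable: improve the measurement system"

for pct in (6.5, 22.0, 41.0):
    print(f"{pct:5.1f}% gage error -> {classify_measurement_system(pct)}")
```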
Also, we need to understand the Western Electric Rules when reviewing the graphical data. In Statistical Process Control, the Western Electric Rules are decision rules for detecting "out-of-control" or non-random conditions on control charts. Locations of the observations relative to the control chart control limits (typically at ±3 standard deviations) and centerline indicate whether the process in question should be investigated for assignable causes. Their purpose was to ensure that line workers and engineers interpret control charts in a uniform way.
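The zone rules are easy to express in code. The sketch below is illustrative only and implements just two of the four classic Western Electric rules: a single point beyond 3 sigma, and eight consecutive points on the same side of the centerline. The centerline and sigma would normally come from the control chart calculation; here they are assumed values.

```python
# Sketch of two of the four classic Western Electric rules applied to a
# control chart. Data, centerline, and sigma are illustrative only.
import numpy as np

def western_electric_flags(x, center, sigma):
    x = np.asarray(x, dtype=float)
    flags = []
    # Rule 1: any single point beyond 3 sigma from the centerline.
    for i, v in enumerate(x):
        if abs(v - center) > 3 * sigma:
            flags.append((i, "rule 1: beyond 3 sigma"))
    # Rule 4: eight consecutive points on the same side of the centerline.
    side = np.sign(x - center)
    run = 0
    for i in range(1, len(x)):
        run = run + 1 if side[i] == side[i - 1] and side[i] != 0 else 0
        if run >= 7:   # 8 points in a row equals 7 consecutive same-side steps
            flags.append((i, "rule 4: 8 in a row on one side"))
    return flags

data = [10.1, 9.9, 10.0, 10.2, 11.8,                        # a 3-sigma excursion
        10.3, 10.4, 10.2, 10.5, 10.3, 10.4, 10.2, 10.3]     # a long run above center
print(western_electric_flags(data, center=10.0, sigma=0.5))
```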
In addition to percent error and the Western Electric Rules, you should also review graphical analysis over time to decide on the acceptability of a measurement system.
Basic Parameters of the MSA
Any measurement process typically involves both measurement precision and measurement accuracy of the system variables, subject to the constraints of the system. Statistically analyzing a system requires determining the variation from the mean (central) location, which is essential for analyzing measurement accuracy while taking into account bias, stability, and linearity.
The parameters of MSA can be described as follows:
Bias refers to factors in a system that cause measurements to deviate from the standard by a fixed amount; biased sampling yields data that, on analysis, appear different from the actual or anticipated data set. Measuring bias for a determinate measurement process requires calibration against a reference, which goes beyond simply comparing data averages. For an indeterminate measurement process, owing to practical constraints, the data averages are normally compared with the standard values. In other words, a bias effect is an average of measurements that differs from the reference by a fixed amount. Bias effects include:
- Operator bias - different operators get detectably different averages for the same thing.
- Machine bias - different machines get detectably different averages for the same thing, etc.
- Others - day to day (environment), fixtures, customer and supplier (sites).
Stability refers to a measurement process that is free from special cause variation over time. Analyzing a system for stability typically involves standard statistical tools such as SPC control charts, scatter plots, ANOVA techniques, and other measures of standard deviation. Determining stability requires data sampled across a wide range of possible variation factors, with statistical tests covering variation in operators, tools, parts, time, and location.
Linearity refers to how measurement results, and in particular measurement bias, change across the operating range of the instrument. Linearity is assessed using calibration at several levels of the measurement range, guided by the interaction factors influencing the system. For instance, non-linearity may result from equipment (or tools) not being calibrated across the full operating range, from a poor system design, or from some other system constraint.
Discrimination is the ability of the measurement system to adequately differentiate between values of a measured parameter.
Accuracy or instrument accuracy is the difference between the observed average value of measurements and the master value. The master value is an accepted, traceable reference standard (e.g., NIST).
Variable Measurement Systems: Repeatability, Reproducibility
For measurement systems that result in quantitative measurements such as weight, concentration, or strength, it is important to determine the magnitude of any error in the resulting measurements. If the error is large, it may be impossible to determine whether or not an individual sample is within spec. In addition, designed experiments rely on the ability to separate real effects of making changes from the background noise and could be sabotaged by an inadequate measurement system.
When quantifying measurement error, it is common to separate the error into two components (a variance-component sketch follows this list):
- Repeatability (or measurement precision – the error due to the instrument or measurement procedure) and
- Reproducibility (the difference in the average of the measurements made by different persons using the same or different instrument when measuring the identical characteristic).
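A minimal sketch of separating those two components is shown below, assuming a small crossed study in which every operator measures every part twice (data made up). It is a deliberately simplified estimate; a full Gage R&R study would use the ANOVA or average-and-range method and also account for the operator-by-part interaction.

```python
# Hedged sketch: rough repeatability / reproducibility estimate from a small
# crossed gage study (every operator measures every part twice).
import numpy as np

# measurements[operator, part, trial]
measurements = np.array([
    [[10.1, 10.2], [12.0, 11.9], [ 9.5,  9.6]],   # operator A
    [[10.4, 10.3], [12.2, 12.3], [ 9.8,  9.7]],   # operator B
])
n_ops, n_parts, n_trials = measurements.shape

# Repeatability: average within-cell variance (same operator, same part).
repeatability_var = measurements.var(axis=2, ddof=1).mean()

# Reproducibility: spread of operator averages, less the repeatability share.
operator_means = measurements.mean(axis=(1, 2))
reproducibility_var = max(
    operator_means.var(ddof=1) - repeatability_var / (n_parts * n_trials), 0.0
)

gage_var = repeatability_var + reproducibility_var
print(f"repeatability sigma   ~ {np.sqrt(repeatability_var):.3f}")
print(f"reproducibility sigma ~ {np.sqrt(reproducibility_var):.3f}")
print(f"total gage sigma      ~ {np.sqrt(gage_var):.3f}")
```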
Attribute Measurement Systems
When the results of a measurement system are PASS or FAIL rather than a quantitative value, special procedures are necessary. There are three procedures to deal with such systems: risk analysis method, signal theory method, and analytic method. In the risk analysis method, multiple appraisers measure samples with known characteristics. Statistics are calculated based on how often the appraisers correctly characterize each sample and how frequently they agree with themselves and each other.
Assessing Measurement Error
Measurement error can be evaluated by comparing the width of the control limits on the averages chart (which reflect measurement variation) with the spread of the plotted part averages (product variation); a short sketch follows the list below.
- If all of the points fall within the control limits of the average chart, the measurement variation will over-shadow any product variation. Thus, any improvements to the production process may be undetectable due to extreme measurement error.
- If less than half of the average points are outside of the control limits, the measurement system may be inadequate for detecting product variation.
- If one fourth or fewer of the average points fall inside the control limits, the measurement system is capable of assessing product parameters.
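Here is a small sketch of that assessment (data made up): the averages-chart limits are built from the repeated-measurement ranges, so they reflect measurement variation only, and we simply count how many part averages escape them.

```python
# Sketch: judge a measurement system by how many part averages fall outside
# averages-chart limits built from repeated-measurement (gage) variation.
# Data are made up: 8 parts, each measured twice.
import numpy as np

parts = np.array([
    [10.1, 10.2], [11.5, 11.4], [ 9.2,  9.3], [12.0, 12.1],
    [10.8, 10.7], [ 9.7,  9.8], [11.1, 11.0], [10.4, 10.5],
])

averages = parts.mean(axis=1)
ranges = np.ptp(parts, axis=1)

A2 = 1.880                        # SPC constant for subgroups of size 2
center = averages.mean()
half_width = A2 * ranges.mean()   # limits reflect measurement variation only
outside = np.abs(averages - center) > half_width

frac_outside = outside.mean()
print(f"fraction of part averages outside the limits: {frac_outside:.0%}")
if frac_outside < 0.5:
    print("measurement system may be inadequate for detecting product variation")
else:
    print("measurement system appears able to detect product variation")
```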
A typical MSA / MSE follows these steps:
- Pick the measurement system to be evaluated
- Map the process
- Conduct data collection rigorously and use Explicore to quickly identify where to put resources to fix problem areas
- Think about the measurement process when drawing conclusions from the MSA/MSE using TestSoft’s product, Explicore. Remember, Explicore identifies the problems quickly up-front and gives a company the ability to correct problems before they become very costly to fix.
- Pay attention to the likely cause of measurement variation
- Implement the countermeasures
- Hold the gains: Control and improve the Measurement System
- Continually capture, characterize, and analyze the Product, Process, and System health to ensure the gains remain in control. The frequency of the evaluation depends on product volume.
Thursday, December 3, 2009
A Lean Six Sigma Strategy
No matter how you approach deploying improvement teams in your organization, they will all need to know what is expected of them. That is where having a standard improvement model such as DMAIC (Define-Measure-Analyze-Improve-Control) is extremely helpful. It provides teams with a roadmap. DMAIC is a structured, disciplined, rigorous approach to process improvement consisting of the five phases, where each phase is linked logically to the previous phase as well as to the next phase:
- Define the problem, the voice of the customer, and the project goals, specifically.
- Measure key aspects of the current process and collect relevant data.
- Analyze the data to investigate and verify cause-and-effect relationships. Determine what the relationships are, and attempt to ensure that all factors have been considered. Seek out root cause of the defect under investigation.
- Improve or optimize the current process based upon data analysis using techniques such as design of experiments, poka yoke or mistake proofing, and standard work to create a new, future state process. Set up pilot runs to establish process capability.
- Control the future state process to ensure that any deviations from target are corrected before they result in defects. Control systems are implemented such as statistical process control, production boards, and visual workplaces and the process is continuously monitored.
There are many resources that describe the DMAIC process. Our purpose here is to focus on special considerations for using the Lean Six Sigma DMAIC process in a manufacturing environment, including TestSoft’s scorecard utility, Explicore, that is particularly helpful to root out areas in need of improvement or correction.
The roots of both Lean and Six Sigma reach back to the time when the greatest pressures for quality and speed were on manufacturing. Lean rose as a method for optimizing automotive manufacturing; Six Sigma evolved as a quality initiative to eliminate defects by reducing variation in processes in the semiconductor industry. It is not surprising that the earliest adopters of Lean Six Sigma arose in the service support functions of manufacturing organizations like GE Capital, Caterpillar Finance, and Lockheed Martin.
A Key Concept: In short, what sets Lean Six Sigma apart from its individual components is the recognition that you cannot do "just quality" or "just speed"; you need a balanced process that helps an organization focus on improving product, process, system, and service quality, as defined by the customer, within a set time limit.
Lean Six Sigma is a business improvement methodology that maximizes shareholder value by achieving the fastest rate of improvement in customer satisfaction, cost, quality, process speed, and invested capital. The fusion of Lean and Six Sigma improvement methods is required because:
- Lean cannot bring a process under statistical control
- Six Sigma alone cannot dramatically improve process speed or reduce invested capital
- Both enable the reduction of the cost of complexity
Six Sigma:
- Emphasizes the need to recognize opportunities and eliminate defects as defined by customers
- Recognizes that variation hinders our ability to reliably deliver high quality services
- Requires data driven decisions and incorporates a comprehensive set of quality tools under a powerful framework for effective problem solving
- Provides a highly prescriptive cultural infrastructure effective in obtaining sustainable results
- When implemented correctly, promises and delivers $500,000+ of improved operating profit per Black Belt per year (a hard dollar figure many companies consistently achieve)
Lean:
- Focuses on maximizing process velocity
- Provides tools for analyzing process flow and delay times at each activity in a process
- Centers on the separation of "value-added" from "non-value-added" work with tools to eliminate the root causes of non-valued activities and their cost
- The 8 types of waste / non-value added work
- Wasted human talent – Damage to people
- Defects – "Stuff" that’s not right & needs fixing
- Overproduction – "Stuff" too much/too early
- Transportation – Moving people & "Stuff"
- Waiting Time – People waiting for "Stuff" to arrive
- Inventory - "Stuff" waiting to be worked
- Motion – Unnecessary human movement
- Processing Waste – "Stuff" we have to do that doesn’t add value to the product or service we are supposed to be producing.
- Provides a means for quantifying and eliminating the cost of complexity
The two methodologies interact and reinforce one another, such that percentage gains in Return on Invested Capital (ROIC%) come much faster when Lean and Six Sigma are implemented together.
Within the individual phases of a DMAIC or DMADV project, Six Sigma utilizes many established quality-management tools that are also used outside of Six Sigma. This is where TestSoft’s product, Explicore, is able to help. Explicore is a Lean Six Sigma data analysis scorecard utility that combines the use of Lean and Six Sigma.
Let’s define what Explicore, a Data Analysis Scorecard Utility, does for the organization. Explicore is a patented software solution that has been created and refined since 1998. It enables companies to test the robustness of their manufacturing and design processes. It takes all parameters related to a product, process, or system and within minutes Explicore identifies the parameters in need of correction or improvement. The output of Explicore is a statistically based report that identifies the Key Process Indicators (KPIs) so a company can quickly identify where to put their resources to correct problem areas.
Explicore helps tie Lean and Six Sigma together. From a Six Sigma perspective, Explicore identifies which parameters require improvement or correction. From a Lean perspective, it identifies defects quickly and helps in the identification of waste. For example, a company started shipping a product after a design change was installed. The product went through production without much of an issue. However, several customers complained that the product did not function properly, and the product was sent back to the factory. Production was stopped and we began investigating the customer complaints. We deployed Explicore and discovered a number of design and test related issues: three major issues with the design and several measurement problems in test. We were able to re-design the product, correct the test problems, and make improvement adjustments in production. The results were significant. The mean time between failures increased from ten hours to more than ten thousand hours, the first pass yield improved from sixty-seven percent (67%) to ninety-three percent (93%), and warranty costs decreased from $9,300,000 to $600,000 year-over-year. The production line went through the lean process and reduced cycle time from 48 hours to 18 hours.
Intuitively, we know that Explicore saves a company resource time and cost and improves product reliability. A customer will achieve quick results with TestSoft’s balanced approach, which helps protect your investment. We believe Explicore is an excellent Lean Six Sigma tool that should be in your toolkit.