Archive for the ‘Jackie Roberts’ Category

Product Information Management (PIM) Data Governance

Thursday, February 21st, 2013

One constant truth in the business of data is change. The most critical factor in master data management is agility, within both process and software design. Agility means responding to and managing never-ending changes in the critical data used to support operational decision-making. I firmly believe the ability to respond to changes in the data is a must in the world of Product Information Management, or PIM, as the specialty is sometimes known.

Master Data Management is a broad classification of processes, governance, and software tools used to manage information such as customer and/or product data. CRM (customer relationship management) applications can support the organization’s interactions with employees, members, clients, customers and the supply base for marketing, customer service and technical support. Typical data elements in CRM include names, titles, email addresses, physical addresses, phone and fax numbers, etc.

One crucial note is that a different expertise is required to manage and structure a data governance policy for customer (CRM) data than for product (PIM) related data.

Product Data

PIM is the management of information about products, which may also include services. Product data can include equipment, assemblies, spare part components, and commodity-type items such as office supplies or hardware items, e.g. bolts, screws, nuts, etc.

The data management and governance challenges escalate with the vast number of variations used to describe a single product. Adding further complexity to product data management is the distinction between sell-side and buy-side. Sell-side data is a controlled data set of product information; as a result, the data and the governance are structured and owned internally by the manufacturer or supplier. If the governance is structured intelligently, the structure enables multiple data uses such as exports to web catalogs, print catalogs, engineering libraries, and more.

Buy Side of Product Information Management

Buy side is more complex, depending on the size of the operation. Buy side is the collection of transaction data (product or service) to support the operation of the plants and facilities. A critical aspect of collecting and managing spare part information is supporting the planning of physical inventory to enable the uptime of the facility. There is a fine balance between inventory cost and ensuring maximum uptime, which can be very challenging. It is not uncommon for a large global manufacturer to exceed 15,000 suppliers / OEMs, equating to more than half a million submitted data records a year. These data records need to be reviewed, verified, classified, structured and referenced to ensure no duplicate records are created in the ERP system.

There are many variations in the data sources that support the operations of a facility. For instance, spare parts data is submitted by the equipment designers via engineering bills of material, or through individual purchasing requests from maintenance staff. The spare parts data includes the manufacturer name, part number, classification name (noun), unit of measure representing how it is sold, and additional information describing the physical characteristics. However, that same OEM part could be submitted as a supplier part with a different supplier part number, a different classification name and no mention of the OEM part number.

The result is the same part set up as two different master records, each with different contracts for purchases at different prices, and stored in inventory multiple times. This results in either excess cost and inflated inventory, or a spare part that is not available, resulting in the shutdown of the plant line. This is why data governance and master data management business processes are so critical to an efficient and streamlined operation.
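A first pass at catching this kind of duplication can be as simple as normalizing the manufacturer name and part number into a comparison key before a new master record is created. The sketch below (Python, with made-up record IDs and a deliberately crude normalization rule) illustrates the idea; a production matcher would also need supplier cross-references and governed manufacturer alias tables:

```python
import re

def normalize_key(manufacturer: str, part_number: str) -> str:
    """Build a comparison key: uppercase, strip punctuation and whitespace."""
    clean = lambda s: re.sub(r"[^A-Z0-9]", "", s.upper())
    return f"{clean(manufacturer)}|{clean(part_number)}"

def find_duplicates(records):
    """Group master records that normalize to the same manufacturer/part key."""
    seen = {}
    for rec in records:
        key = normalize_key(rec["manufacturer"], rec["part_number"])
        seen.setdefault(key, []).append(rec["record_id"])
    return {k: ids for k, ids in seen.items() if len(ids) > 1}

records = [
    {"record_id": "M-001", "manufacturer": "SKF", "part_number": "6205-2RS"},
    {"record_id": "M-002", "manufacturer": "S.K.F.", "part_number": "62052RS"},
    {"record_id": "M-003", "manufacturer": "Timken", "part_number": "HM212049"},
]
print(find_duplicates(records))  # M-001 and M-002 collapse to one key
```

Even this toy key collapses the two spellings of the same bearing into one candidate group for a human reviewer.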

PIM Data Governance

A product information governance project may appear to be a daunting effort when you’re beginning to structure the data rules.

My best advice is to take the time to develop a data roadmap that provides a clear and precise understanding of the data and its use within the organization. The roadmap should detail how data is requested and submitted for use within the enterprise, account for the multiple uses of the data (purchasing, engineering, marketing, and maintenance), and specify the data elements and structure needed to accommodate each software system.

As an example, let’s explore how the data is provided to the enterprise. There are multiple sources of data, from engineering drawings created internally or provided by suppliers, maintenance requirements, buyer requests, and more. With an understanding of the source data, a clear data requirement enables an improvement in the quality of the data provided to the organization.

Starting at the initial contract to source a new piece of equipment for the plant, you may include with the equipment specification a spare part data requirement, plus a template for the supplier to provide the spare part information. The contract deliverables should include the completed spare parts list required by the data governance. Now your master data management process has been simplified and data quality has improved.

In the roadmap, the required data elements are defined to support the business requirements. For a large global manufacturer the governance may include a structure for equipment numbers, location structure for the equipment and basic data governance elements specific to the master record.

Commonly required elements will include:

  • Manufacturer name,
  • Manufacturer’s part number,
  • Noun classification,
  • Technical descriptive attributes,
  • Sequencing of attribute display order,
  • Units of measure (typically a purchasing rather than a use UOM, sometimes referred to as a disbursement UOM),
  • Price,
  • Volume purchase prices,
  • Purchasing category,
  • Lead time,
  • Warranty information,
  • Language translation requirements,
  • Other classifications such as ECCN and UNSPSC,
  • Any other descriptive elements to ensure smart purchasing decisions and a stocking strategy for inventoried items.
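A required-elements list like the one above lends itself to an automated gate: a record can be checked against the governed minimum before it enters the master. The sketch below uses hypothetical field names and an abbreviated element list, just to show the mechanic:

```python
# Abbreviated, hypothetical required-element list; a real governance policy
# would enumerate the full set agreed in the data roadmap.
REQUIRED_ELEMENTS = [
    "manufacturer_name", "manufacturer_part_number", "noun_classification",
    "unit_of_measure", "price", "purchasing_category", "lead_time",
]

def missing_elements(record: dict) -> list:
    """Return the governance-required elements absent or blank in a record."""
    return [e for e in REQUIRED_ELEMENTS
            if not str(record.get(e, "")).strip()]

candidate = {
    "manufacturer_name": "Parker Hannifin",
    "manufacturer_part_number": "P123-456",
    "noun_classification": "VALVE, SOLENOID",
    "unit_of_measure": "EA",
}
print(missing_elements(candidate))  # the elements still to be acquired
```

A record that returns a non-empty list would be routed back for enrichment rather than set up in the ERP system.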

Benefits as a Result of Data Governance

There are many benefits of implementing an innovative data governance and master data management system. Many of the basic benefits, both in process and cost, are:

  • Reducing inventory through identification of duplicate items,
  • Facilitation of inventory sharing and internal purchasing programs,
  • Reduced employee time spent searching for items,
  • Common spare part usage strategies,
  • Reduced downtime in manufacturing equipment due to lack of information availability,
  • Ability to manage inventory using a just-in-time model.

Data Governance supports both indirect and direct cost savings. Businesses can begin to embrace the definition of operational data as an asset of the corporation, ensuring improved data accuracy and confidence of the data users.

Informational Data Handicap Score (IDHS) for your BI analysis and reporting

Thursday, October 20th, 2011

I believe that every Business Intelligence report or analysis should have an informational data handicap score (IDHS) listed as a reporting element. The handicap is the sum of data scores for accuracy of context, standardization, structure of use, completeness and the ability to extract the information for reporting. The Informational Data Handicap Score should be applied to all reporting and analytics used in every business decision where data is the foundation of information. The cold hard fact is that BI reports and analyses used in critical business decisions, budgets and plans are built from data that may be inaccurate, incomplete or unavailable. A report or analysis with its IDHS is a true informational element for BI.

I spend a lot of time analyzing product data quality, missing data elements, and system accessibility, because data elements are often impossible to pull out of the system or were never collected to support our clients’ enterprise requirements for purchasing, engineering and maintenance decisions. I have to admit, I am always astonished by what I see (or don’t see) and by the time and cost to pull data from a system. The reality is that the data entered in these systems, and the systems themselves, are considered a support function (an indirect or non-product activity) rather than the core revenue-generating stream for the business. However, the data is the life support of BI: accurate and available data is critical for smart and efficient business decisions. The missing gap in most business intelligence programs is a foundational flaw referred to as data integrity and data quality, or the lack thereof.

A business has two options. The first is to augment BI decisions with a data quality scoring model, the IDHS. A simple example: “I am confident that our inventory budget should be 1 million dollars this year; based on the IDHS (+/- 30%), the actual budget could range from 700,000 to 1.3 million.” The easiest approach is to budget the 1.3 million with the plan to come in under budget; the 0.3 million provides a safe cushion. This also alleviates over-budget spending and the tedious tasks of re-budgeting or canceling other important initiatives mid-quarter or mid-year.
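The IDHS arithmetic in that example can be sketched in a few lines:

```python
def idhs_range(estimate: float, handicap_pct: float):
    """Return the (low, high) budget range implied by an IDHS of +/- handicap_pct."""
    delta = estimate * handicap_pct / 100.0
    return estimate - delta, estimate + delta

# The worked example from the text: a 1 million dollar estimate with a 30% handicap.
low, high = idhs_range(1_000_000, 30)
print(f"Budget range: {low:,.0f} to {high:,.0f}")  # 700,000 to 1,300,000
```

Budgeting to the high end of the range is then the conservative choice the paragraph describes.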

The other option is to incorporate a structured and standardized Master Data Management process with Data Governance to collect, manage, cleanse (legacy / new data), enrich and disseminate information to the various systems. The goal is to create one master record set to ensure that decisions are based on accurate and complete data sets to implement meaningful BI reporting and analytics.

The results of data quality improvements are because of the requirements and processes of MDM. My definition is “An MDM program includes the Data Governance to define data requirements (structure, format and content), and the data processes to manage data activities such as collecting (extraction of BOM data or the data request web form), evaluating, matching (auto and mismatch), structuring, verifying and enriching to minimum data requirements, tracking history of change and data use, quality-assurance, reporting and distributing data (MAXIMO, ORACLE, SAP or another client’s systems) throughout an enterprise to ensure consistency and control. The MDM program will also include an on-going data maintenance process to manage data updates for this information.”

The following elements of data quality should be part of the governance program for your master data. This is critical to support a global enterprise. The discussions and metrics should include:

Accuracy: We intellectually understand the meaning of accuracy. An email address is either right or wrong; in the product information world, however, it can be a little more complex, and this is where data governance is instrumental. The same spare part can be purchased from the manufacturer (one part number) or from a supplier (another part number). A part number can have many different versions. For instance, a master org record may be set up with a part number to purchase safety gloves, except that the part number alone can’t buy you safety gloves; you must include the size as a description element in order to purchase them. The result of an inaccurate glove record is that you may receive all small gloves when you really wanted large, or you may not receive any gloves at all. Different manufacturers and suppliers have different ordering and purchasing rules.

Standardization: Standardization is absolutely critical to BI reporting. It is the map for how data is entered, referenced and stored to support ease of data access. The standardized data elements should include classification naming, attributes, part numbers (including formats), units of measure, manufacturer and supplier names, addresses, web URLs, relationships to parent companies and so forth.
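Standardization rules of this kind are often implemented as governed alias tables that map every observed spelling to one standard form. The sketch below uses invented aliases purely as an illustration of the mechanic:

```python
# Hypothetical alias tables; a real program maintains these under governance
# and grows them as new spellings are observed in submitted data.
MANUFACTURER_ALIASES = {
    "ALLEN BRADLEY": "Rockwell Automation",
    "ALLEN-BRADLEY": "Rockwell Automation",
    "A-B": "Rockwell Automation",
}
UOM_ALIASES = {"EACH": "EA", "PC": "EA", "PCS": "EA", "BX": "BOX"}

def standardize(record: dict) -> dict:
    """Apply alias tables so every record stores the governed standard form."""
    out = dict(record)
    mfr = record["manufacturer"].upper()
    out["manufacturer"] = MANUFACTURER_ALIASES.get(mfr, record["manufacturer"])
    uom = record["uom"].upper()
    out["uom"] = UOM_ALIASES.get(uom, uom)
    return out

print(standardize({"manufacturer": "Allen Bradley", "uom": "each"}))
```

With every record passed through the same tables, a BI report grouping spend by manufacturer sees one Rockwell Automation, not three spellings of it.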

Structured to support multiple uses: If you have one master organization and are only concerned with purchasing systems, then structure may not be a concern. For a global enterprise with multiple systems, however, the structure of use is extremely important as the data is disseminated to maintenance or inventory systems. In a purchasing system, a ‘Bearing, Ball’ with part number ‘12345’ should only be set up once, but in an “end use” structured environment that ‘Bearing, Ball’ is referenced to many pieces of equipment, used on other equipment and in other plants, listed in engineering drawings, and so on. If the multiple-use structure is set up correctly, you can report “where used” for inventory sharing and internal purchasing programs that support a reduction in inventory.
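The “where used” reporting described here is essentially an inverted index from a part to every location that references it. A minimal sketch, with invented plant and equipment names:

```python
from collections import defaultdict

def build_where_used(references):
    """Index part number -> every (plant, equipment) location referencing it."""
    index = defaultdict(list)
    for part, plant, equipment in references:
        index[part].append((plant, equipment))
    return index

# Illustrative reference data: the same ball bearing serves two plants.
refs = [
    ("12345", "Plant A", "Conveyor 7"),
    ("12345", "Plant B", "Palletizer 2"),
    ("99901", "Plant A", "Press 1"),
]
where_used = build_where_used(refs)
print(where_used["12345"])
```

When one master record carries all its references, a single query answers whether Plant B can borrow Plant A’s stock instead of buying more.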

Completeness: Having all the data elements required for the safe and efficient use of each item entered into the system. If your data set is missing some prices and a report of inventory value is produced, the report is obviously inaccurate. The governance requirements include minimum required data elements. In the world of product data, the process may allow a special speedy setup for an urgent, critical item; however, the MDM process includes going back to acquire the missing information.
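Completeness can be measured directly as the fraction of required fields that are actually populated across a record set, which is one natural input to an IDHS. A minimal sketch with hypothetical fields:

```python
def completeness(records, required_fields):
    """Fraction of required fields populated across a record set (0.0 to 1.0)."""
    filled = total = 0
    for rec in records:
        for field in required_fields:
            total += 1
            if str(rec.get(field, "")).strip():
                filled += 1
    return filled / total if total else 0.0

# One of the two items is missing its price, so the inventory-value
# report built from this set is known to be incomplete.
items = [
    {"part_number": "12345", "price": "14.20"},
    {"part_number": "67890", "price": ""},
]
print(completeness(items, ["part_number", "price"]))  # 0.75
```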

Accessibility: The ability to pull information from a system is the foundation of reporting. This is a continual struggle when I am working with a new client. I often ask the questions: “Is the expertise available to query and pull data as needed from existing systems?” “Is the data stored parametrically or as concatenated text fields?” “Is the table structure extremely complicated?” Accessing the business’s information and providing the ability to slice and dice it is critical to BI.

In this fast-moving, big-data-intense world of collecting and storing information for businesses, the reporting and analytics that enable meaningful decision making are critical. So I ask the question, “What does data have to do with business intelligence?” EVERYTHING.


The Master Data Management and Governance of Maintenance Data

Monday, March 14th, 2011

My strong belief in Master Data Management (MDM) incorporates the management of data from the entry point through its multi-channel uses throughout the enterprise. This philosophy results in a holistic understanding of the data content and its uses, achieving data quality enterprise-wide. Yes, it is an overwhelming task, but it can be achieved if you take a step back from the one-dimensional software thought process centered around a single software product. In my experience, the lack of ownership within the enterprise results in a chain of isolated data islands, each concerned only with performing its isolated activity. MDM is much more than a single data activity or transaction within the operation, or a software system to perform said activity.

In the perfect MDM world, the data (products, services, spare parts) naturally adheres not only to governance, structure of classification, quality and content, but also to a data structure for location of use. Such a structure could incorporate naming standards for the location of use, for example plant or office. Within the plant, the use could be referenced to a department, to a piece of equipment, and to a specific location within the department. This type of structure is preset in an MDM plan and will benefit the maintenance data structure. The MDM data plan and structure meet the requirements of the complete enterprise: the purchasing department may only require 5 or 6 data elements, but the maintenance department will require 10 or more. This is why Master Data Management requires a complete view of all data concepts and uses.

Think of how powerful the analytics are if the enterprise is set up with established standards, through governance, for plant and facility location structure, location names, equipment location structure and equipment naming. The benefits include the ability to view equipment and spare parts enterprise-wide, enabling the initiation of common spare parts strategies and spare parts sharing programs that support inventory planning and reduction.

This type of MDM planning also supports equipment moves or disposals with a view of the spare parts associated to the equipment. The spare parts can be packaged and moved, or disposed of, at the time of the disposition of the equipment. I can’t count the number of times I have been told, “I am not even sure we still have the piece of equipment that these inventoried spare parts are used on.”

Now the beauty, yes I said beauty, is that the required data structure can be set up with templates and written into requirements and contracts with equipment suppliers. When the bill of material data deliverables are sent to the engineering department of the enterprise (the entry point), the data location governance structure is audited and either accepted, to start the data cleansing and purchasing setup, or rejected, to fix the data structure errors. Other key data activities are classification, verification, enrichment and translation before the data is set up in any of the enterprise systems.
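The accept-or-reject audit at the entry point can be sketched as a template check over the submitted bill of material. The column names below are illustrative, not a standard:

```python
# Hypothetical governed template columns for a supplier BOM submission.
REQUIRED_BOM_COLUMNS = ["equipment_id", "location", "manufacturer",
                        "part_number", "noun", "uom"]

def audit_bom_submission(rows):
    """Accept a supplier BOM only if every row carries the governed columns."""
    errors = []
    for i, row in enumerate(rows, start=1):
        for col in REQUIRED_BOM_COLUMNS:
            if not str(row.get(col, "")).strip():
                errors.append(f"row {i}: missing {col}")
    return ("accepted", []) if not errors else ("rejected", errors)

submission = [
    {"equipment_id": "EQ-100", "location": "Plant A/Dept 3",
     "manufacturer": "SKF", "part_number": "6205-2RS",
     "noun": "BEARING, BALL", "uom": "EA"},
    {"equipment_id": "EQ-100", "location": "Plant A/Dept 3",
     "manufacturer": "SKF", "part_number": "",
     "noun": "BEARING, BALL", "uom": "EA"},
]
status, issues = audit_bom_submission(submission)
print(status, issues)  # rejected, with the offending row flagged
```

A rejected submission goes back to the supplier with the row-level errors, so the correction cost stays at the entry point rather than in the ERP system.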

The by-product of a well-executed MDM governance plan is that once the spare parts data is processed, the cleansed record is propagated into the purchasing system, engineering library and maintenance system. The maintenance system is then fully loaded with spare parts information associated to equipment and locations of use, ready for the maintenance staff to set up their tasks for equipment maintenance and planning strategies.


The Act of Data Migration is not Master Data Management

Tuesday, March 1st, 2011

Let’s face it, if an organization is spending millions of dollars to purchase and integrate an ERP system, then the project requirements and schedule will be driven by the IT department. Unfortunately, in defining the scope of the project, most will only focus on moving the “dirty” data from the legacy system to the bright, new and shiny ERP system. The IT implementation then moves to the next integration, and the users of the data have the same data issues that plagued them in the legacy system, now also in the new ERP system.

A flawed philosophy is to migrate the legacy data, meet the deadline, and call the project green and successful while the users must figure out how to correct and update the data. These ERP systems are not designed to handle the volume of change to the data, they provide no simple method to track change, and obsoleting a record with history view and archive functionality is non-existent. Another reason this philosophy is flawed is that purchasing contracts are set up based on the “bad” data; a unit of measure, part number or manufacturer change will void a contract, wasting the time of valuable resources and, at the end of the day, leaving an inability to source the item, which could set in motion a critical manufacturing line shutdown. Let’s face it, an ERP system is designed to store a product or service record and give the business a method to transact, not to cleanse a record into a single master version of an accurately classified, verified and technically described Master Record. Therefore the activity of migrating data to a new system is not Master Data Management.

Master Data Management needs to be independently structured and separately managed in the organization, not run through IT. It is critical that the Master Data Management organization properly represent the business assets of the data (engineering, purchasing, customer, etc.). The data is the core information used as the foundation to run the operations, sometimes referred to as the BI for the analytics of sound decision-making processes. If the data is incorrect in the new systems, how is the BI improved? How is the business case ever calculated and successfully achieved? I can’t even imagine trying to tally up the potential “cost savings” when bad data is migrated to a new system.

Establishing an MDM program requires clear and well-defined ownership, a stake in the end-user organizations, and representation in the design and schedule of the software rollouts, with full participation in every project where data is involved. The MDM program should also participate in the project design strategy for systematically cleansing, classifying and migrating the data to the new system. The strategy should include an audit of data in the legacy system; let’s face it, there may be 20-year-old records with no transactional history or balance on hand in inventory. Should this data be moved to the new system? The answer is NO.

An MDM data strategy to support the IT team can encompass a number of options. A simple option is publishing a long-term schedule that establishes adequate time for the data group to meet the data cleansing and classification requirements. This is not always possible, so what about a phased strategy? Some of the possible steps include:

  • Evaluate data to review transactional use,
  • Evaluate the stock on the shelf and confirm that none of the inventory should be obsoleted and disposed of,
  • Review data related to the equipment but not inventoried,
  • Identify data that should not be moved to the new system,
  • Establish data cleansing priorities, with high-transaction-use and stocked items classified and cleansed first.

It is imperative to establish an ongoing maintenance and new-setup process, with an easy method to request an urgent record during the data migration to support the day-to-day operations of the business.

We need to get out of the mindset that MDM is simply a data migration to a new system. MDM is a business process to establish the single version of accurate information which is then propagated throughout the organization, part of which is the proper migration of data from legacy systems.


We Had a Data Cleansing Project and It Did NOT Work

Thursday, December 16th, 2010

Lately I have had a number of meetings with material and purchasing managers, and I have come to two distinct conclusions from the feedback. First, businesses recognize the importance of data quality and have attempted to improve their information, either by implementing an internal program or by hiring a company to provide data cleansing services. Second, the activity of Data Cleansing has an incomplete and overly broad definition; I reference the blog post by Koa Beck on Gartner Releases Its Magic Quadrant for Master Data Management: “while we continue to monitor the aggregate MDM market, we still believe that it is premature.”

A key component of Master Data Management (MDM) is data cleansing, which has multiple disciplines such as address cleansing or PIM (product information management). My expertise is in PIM; therefore my meetings have focused on data in the ERP and inventory systems.

My latest meeting was with an informed Material Manager who understood the concepts of master data management. After the introduction meeting, he stated, “We had a data cleansing project and it did not work; I ended up going back and correcting the data.” Through the discussion, I came to believe that the data cleansing company extracted the data and attempted to auto-classify half a million records. As a purchaser of these types of services, I asked: what was the process for mapping and quality checks?

The business issue was the buying team’s inability to utilize spend analytics, and the proposed solution was to reference the data to the UNSPSC® (The United Nations Standard Products and Services Code®). The scope of the project was mapping the purchasing data to the UNSPSC®. In my experience, there are four general levels of PIM data cleansing: 1) auto mapping, 2) auto mapping with a manual review, 3) verification and 4) enrichment. The cold hard fact is “buyer beware.”

The details of the levels are:

  1. Auto mapping: if you have a large collection of data, automation is a requirement; however, there are some issues. First, auto mapping incorrect, incomplete and inconsistent data will result in a system that still has incorrect, incomplete and inconsistent data. The quality of the auto mapping depends on the structure of the data. If the data is structured to a noun or class, the auto mapping process will have a high quality rate. If the data is set up as “free text”, the results will be dismal. This method will not address duplication or data quality in your system.
  2. Auto mapping with a manual review: this process takes the results of the auto mapping and adds a manual review of the data. The questions for the review: will all records be audited, or only the records where the auto mapping failed? How will the consistency of the audit be managed? Again, the inherent issues described under auto mapping remain.
  3. Verification: in order to improve data quality, the data cleansing process requires verification with the manufacturer (of the service or product). The verification process assures that the purchasing record is set up with the correct manufacturer (referenced to the supplier via the contract), the part number for restock ordering, the UOM (purchasing unit of measure), a correctly classified description (i.e. BEARING, TAPER) and the UNSPSC®. Our process is to request the UNSPSC® from the manufacturer. If the manufacturer cannot provide the UNSPSC® but the item is correctly classified, the auto map to the UNSPSC® will be successful. The verification process positions the data to identify duplication, manufacturer obsolescence, and inaccurate data requiring additional information from the business to reconcile.
  4. Enrichment: the fourth level of data cleansing quality. In addition to being verified, the data is enriched; this can mean obtaining a price, warranty terms, additional description attributes, the ECCN (Export Control Classification Number), recommended repair spare part information, eCl@ss, the NSN (National Stock Number) or any other data element your business requires.
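The dependence of auto mapping (level 1) on data structure can be illustrated with a toy keyword rule set: a structured description hits a class, while free text falls through to manual review. The rules here are invented and vastly smaller than any real rule base:

```python
# Hypothetical keyword-to-class rules; real auto mapping uses far larger
# rule sets, but the dependence on input structure is the same.
CLASS_RULES = {
    "BEARING": "BEARING, TAPER",
    "GLOVE": "GLOVE, SAFETY",
    "VALVE": "VALVE, SOLENOID",
}

def auto_map(description: str):
    """Return (class, True) on a keyword hit, else (None, False) for review."""
    text = description.upper()
    for keyword, cls in CLASS_RULES.items():
        if keyword in text:
            return cls, True
    return None, False

print(auto_map("Timken tapered bearing 4T-HM212049"))  # structured: mapped
print(auto_map("misc part per attached drawing"))      # free text: dismal
```

The second record is exactly the kind that ends up in the manual-review queue of level 2, or back with the business for level 3 verification.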

The conclusion: asking the right questions about how a data cleansing project will be implemented and managed is essential to making it successful.

What do you say to . . . I get all the spend details from the supplier and quote this on occasion.

Thursday, July 29th, 2010

And he continued, “That’s the area where we would need the least amount of help, given that we outsourced these parts ten years ago and the low-hanging fruit is not around any longer. What do you say to the outsourced scenario, where the management of use, cost and inventory is out of the control of the buying teams?”

My first question is, “How would you get information when it’s not in your system?” Does your supplier manage inventory for all of your plants and facilities, resulting in a global view of spend? Does your supplier manage your data to the OEM or to suppliers, leaving you with duplicate inventory costs?

Considering just the MRO items, the information could come from engineering or from the integrated supplier. Logically, the integrated supplier would have been provided the part information by your company in order to set up and purchase the items in the first place. It is likely that they have the records as they were given and that those records are linked to the item setup in the purchasing system. The top-level source would have been engineering, who would either have had the equipment constructed or been responsible for the equipment purchase and the parts along with it. If, after or during the purchasing activity, the “key” item record is set up in the purchasing system using the part supplier information rather than the OEM information, this will lead to item duplication. Duplication then creates overstock, variant pricing, variant lead times and other inconsistencies that add unnecessary cost.

Based on what you are saying, it sounds like the items in your system are based either on the part supplier’s data or on items specifically identified by the integrated supplier (their item number). The best scenario is when the OEM part is set up as the key item, with the purchase action going to the OEM directly (the OEM set up as a supplier), removing the “middle man” cost. Second best is having the OEM part as the item, linked to the specific supplier(s) for purchase; local purchase suppliers are also linked to the same item. Having the same item record used across the enterprise is optimal.

I would also add that there should be a means to discover OEM part information when a reactive purchase need comes from maintenance. Parts are typically identified physically with OEM information; for example, an Allen Bradley/Rockwell module will have the Allen Bradley part number physically stenciled on it. If a part breaks and maintenance needs one, there must be a way to find out if that part is in stock and a way to buy it if it is not. We believe that enterprise-wide viewable, verified and standardized OEM part information will reduce maintenance costs by eliminating the time-consuming discovery of part information in your systems and by ensuring the correct parts are stocked. This approach also enables part sharing between facilities, which is limited without common data. Part sharing in turn reduces overall cost through a reduction of inventory. With plants here in the U.S. and worldwide, this type of advanced planning is where the true brunt of the savings comes through.

Obviously, much depends on the specific agreements with your integrated supplier, but consider the following questions. If the data stored in your system is not the OEM information, then it’s logical to assume that it is data created by the integrated supplier from the OEM data.

    1) How does your company know that the information is accurate? Are there any checks between the data given to the integrated supplier and what you have in your system?

    2) How does your company know if they have the correct parts setup in the system and stocked appropriately? It seems that there is an opportunity for the integrated supplier to setup and stock items which aren’t necessary and would only be discovered through data transparency.

    3) How does your company know that you are getting the best price on parts? Even if there is a cost savings agreement with the integrated supplier, when there are duplicates the opportunity for piece cost reduction is lost because the true usage is not known.

My second question is this. It seems from your response that everything is running quite smoothly. But is that true in Manufacturing? Do they ever experience loss of production because a vital part could not be found or was out of stock? How about Maintenance? Inventory management? Engineering? These are the departments that should be surveyed, because there is a benefit for them too.

Hey baby, what is your material type and material status . . .

Tuesday, June 15th, 2010

You would never believe the discussions around the “ho-hum” or “don’t sweat the small details” elements of a data cleansing project. Believe it or not, understanding your material type and material status is critical to being able to automate system updates. I firmly believe that data updates to legacy systems should be completed as a night job or a direct feed based on a series of programmed templates. In one recent example, we created an Oracle system update process for a new item referencing a material type template, and another update process for when an item is already set up for another location of use but is new to the requesting location; this is sometimes referred to as a location setup or purchasing organization update. You can start to imagine the amount of pre-planning work and data mapping required for a data cleansing program.
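The routing between a full new-item template and a location (purchasing organization) setup can be sketched as below. The template names and the existence lookup are illustrative only, not an Oracle interface:

```python
# Hypothetical lookup of items already set up, keyed by item number,
# with the set of locations each is established for.
EXISTING_ITEMS = {"12345": {"Plant A"}}

def route_update(item_number: str, location: str) -> str:
    """Decide which programmed template a requested setup should use."""
    locations = EXISTING_ITEMS.get(item_number)
    if locations is None:
        return "new_item_template"         # never seen: full setup
    if location not in locations:
        return "location_setup_template"   # known item, new location of use
    return "no_action_already_set_up"

print(route_update("12345", "Plant B"))  # location setup only
print(route_update("99999", "Plant A"))  # full new-item setup
```

Deciding this routing up front is what makes an unattended night-job update feasible; a human only sees the requests that fall outside the templates.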

The first fundamental rule is that the customer’s business doesn’t stop. For all you data purists out there who believe that one day you will simply flip a switch to turn on the cleansed database: if that day comes, please include me, I would like to see it. Most master data management projects involve years and years of legacy data; therefore it is accepted practice to draw a line in the database by last-used date. When I design a data cleansing project, I have the new item setup process reference legacy items; this way the client’s business continues, and as new items are analyzed and set up, we can reference and update the legacy item information. Independently, the legacy data cleansing always runs parallel to the new setup process.

As the data cleansing project is designed, let’s start to explore the data elements and classifications. Every client will have their material types and material statuses set up, but during the data / systems assessment there should be a thorough review of industry standards vs. company processes. I find that our clients appreciate the opportunity to benchmark their processes and data structure elements such as material types and statuses. We will start with material type and material status.

Material Type

Material types can be as simple as goods and services, or as complicated as service, critical spare, spare part, commodity, generic, blueprint, etc. The material type is the critical element that determines which template is used for setup in the downstream legacy systems and which inventory stocking strategy is applied.
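The template-routing idea above can be sketched in a few lines. This is a minimal illustration, not any particular legacy system’s API; the template names and required fields are hypothetical assumptions:

```python
# Minimal sketch: route an incoming item to a setup template by material type.
# Template names and required-field lists are illustrative, not a real schema.
SETUP_TEMPLATES = {
    "SERVICE":        ["class", "properties", "cost_basis"],
    "CRITICAL_SPARE": ["class", "properties", "min_max", "reorder_qty", "stock_location"],
    "SPARE_PART":     ["class", "properties", "unit_of_measure"],
    "COMMODITY":      ["class", "unit_of_measure"],
}

def select_template(material_type: str) -> list[str]:
    """Return the required setup fields for a material type, or flag for analyst review."""
    try:
        return SETUP_TEMPLATES[material_type.upper()]
    except KeyError:
        raise ValueError(f"Unknown material type {material_type!r}: route to analyst review")
```

A night-job update process would select the template per record and reject anything with an unrecognized type rather than guessing.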

Obviously a service can be standardized by the class type used to describe it, so that a cost for the service can also be standardized. The definition of the service is described by its properties; for instance, a service class of CLEANING, OFFICE can be set up with descriptive elements such as 10,000 square feet, light cleaning (dusting / vacuuming), etc. From a purchasing perspective, the buyer can run reports globally to determine how much is spent on office cleaning, then evaluate the costs and utilize best-practice sourcing strategies and other global supply chain processes to lower them. The purpose of the standard naming conventions for classes and properties is to provide enough standardized information to compare and cost services or products.

If a critical spare is being set up for sourcing and inventory, it means the part has been evaluated by maintenance or engineering and determined to be critical for production uptime. An inventory plan is developed for stocking the critical spare, including an initial buy quantity and a plan for stores (inventory) setup of the item’s unit of measure (each, assembly, package, etc.), min / max, reorder quantity, stocking location, etc.

Material Status

In addition to applying a “material type” to the item records, due to the longevity of materials used in the manufacturing operation, a material status should be utilized as a long-term data maintenance process. In dealing with component manufacturers and suppliers, a component may be active from a plant-use perspective even though the component manufacturer no longer manufactures the item. How is that possible? A piece of equipment can have a 10-year or a 50-year life span; to maintain it, a list of recommended spare parts is identified and set up for equipment maintenance. If the spare part component has been made obsolete by the manufacturer but the piece of equipment is still in use on the production line, the material status would be “obsolete active.” A different buy / stock strategy would then be implemented, such as purchasing all available stock from the manufacturer, sourcing through unconventional channels such as eBay, or contracting the item to be built by a local shop.

Typical material statuses that I have experienced are: active; inactive, referenced to an active item; obsolete active; obsolete inactive (typically the status that starts the disposal process); and archive. The archive status is a classification used by the analysts that allows viewing of the item information, but the record is not visible to the client and is not exported to the client systems.
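The statuses above, and the rule that archived records never reach client systems, can be sketched as a simple lookup. The status names follow this post; the boolean flag is an illustrative assumption, not a specific PIM schema:

```python
# Material statuses as described in the post, with one export rule attached:
# ARCHIVE stays visible to analysts but is never exported to client systems.
MATERIAL_STATUSES = {
    "ACTIVE":              {"export_to_client": True},
    "INACTIVE_REFERENCED": {"export_to_client": True},   # inactive, referenced to an active item
    "OBSOLETE_ACTIVE":     {"export_to_client": True},   # manufacturer-obsolete, still in use
    "OBSOLETE_INACTIVE":   {"export_to_client": True},   # typically starts the disposal process
    "ARCHIVE":             {"export_to_client": False},  # analyst-visible only
}

def exportable(records):
    """Filter item records down to those whose status is exported to client systems."""
    return [r for r in records if MATERIAL_STATUSES[r["status"]]["export_to_client"]]
```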

I would appreciate any input, or better yet a discussion, on the different material types and material statuses used in Product Information Management (PIM) or Master Data Management (MDM). As an industry we inherited material types and material statuses from purchasing and maintenance systems designed to meet a business function, not from a data quality or master data management perspective. What are the proper data requirements for a material type or material status? The MDM and PIM software companies and data quality consultants need to provide input from the data management perspective to deliver long-term data management functionality.

View Jackie Roberts's profile on LinkedIn

Did we forget the old adage “Garbage In, Garbage Out”? I mean Garbage Extracted, Garbage Migrated

Friday, April 23rd, 2010

When it comes to Master Data Management, the implied definition is an à la carte menu of detailing and normalizing activities including data cleansing, data verification, data profiling, data governance, de-duplication, data enrichment and data provenance, among other tasks. If you are managing or participating in the activities of a Master Data Management program, you are progressing in the right direction toward achieving data quality. If you are NOT participating in the activities of MDM, then you are part of a company-wide initiative of “Garbage In, Garbage Out (GIGO).” By the way, GIGO in this case is not environmentally responsible or a “green” behavior.

Wikipedia’s definition of Garbage In, Garbage Out: “a phrase in the field of computer science or information and communication technology. It is used primarily to call attention to the fact that computers will unquestioningly process the most nonsensical of input data (garbage in) and produce nonsensical output (garbage out).”

If you enter garbage into a computer system, passing the data through some very expensive ERP or CMMS software isn’t going to change the data quality; the business results are equivalent to “garbage out,” which will be apparent in day-to-day business activities and in the subsequent reporting used to determine the health of your business. Isn’t it obvious that data should not simply be moved from one system to a new system without an MDM program?

Let us now explore the concept of data migration. Wikipedia’s definition of Data Migration: “the process of transferring data between storage types, formats, or computer systems. Data migration is usually performed programmatically to achieve an automated migration, freeing up human resources from tedious tasks. It is required when organizations or individuals change computer systems or upgrade to new systems, or when systems merge.”

If an MDM program is not in place when implementing new software or upgrading existing software, the project should include an evaluation of the data and of the additional functionality in the “to be” model of the new software — identifying the new data required for improved business processes and reporting, along with a plan for legacy data cleanup. A data migration project needs to be more than moving data from a legacy system to the new system.

During a site visit at a plant, I had the opportunity to put this question to a user of a maintenance software package implemented a number of years earlier. The software had awesome abilities to create and manage the relationships between equipment and spare parts and supplier contacts, as well as the potential to improve processes, reporting and the streamlining of information required by a maintenance organization. The company invested in the software and hardware and understood the ROI, but lacked an understanding of the data needs and data management. The software was implemented; however, the majority of the functionality was not used, and therefore the ROI was never achieved. When I asked why, I was told, “No data, and we don’t have time to add the data.”

In another scenario I came across, purchasing moved data from a legacy system to a new ERP system. The data wasn’t set up according to a data governance or MDM procedure; the legacy data was riddled with duplication, obsolete information, unstructured descriptions and so forth. Different system, same legacy data quality — and the ROI was never achieved.

I have one simple question: why invest in a software product if the data is not going to be treated as an asset? The result of a successful implementation is that business processes are streamlined and simplified, and reporting capabilities are enhanced, by enabling both master data management and software functionality.

Garbage In, Garbage Out — or Garbage Extracted, Garbage Migrated, as we move to the next generation of technology. Are we relying on skewed, nonsensical output based on low-quality data to make our critical business decisions?


Data Cleansing to Achieve Information Quality

Wednesday, March 10th, 2010

Those of us who work around or manage the day-to-day operations of MDM, data governance, or data cleansing projects understand the challenges and effort needed to transform “raw” data through multiple stages of analytics and processes into information quality fit for use in our customers’ CRM, CMMS, PIM and ERP systems. The result of an un-cleansed product record can be a production line staying offline because an inventory item wasn’t ordered due to incomplete information, the added inventory cost of ordering an incorrect item (we can be talking about a $10,000 motor), or multiple entries and setups in the material master due to data duplication.

Data vs. Information definition: to simplify the concept, data is managed by a combination of a team of analysts and software to achieve the goal of a cleansed record or useable information. Data is imported and profiled, classified, structured, verified, enriched, translated and reports generated; we create useable information from low quality data for use in decision making related to engineering, purchasing, maintenance, marketing, sales, etc. The data that is exported into client systems is information that will meet a predetermined set of data governance rules and information quality requirements.

Data quality experts, let’s have a discussion on the definitions of data quality. Does an address or a product detail meet the requirement if it is only classified? Or should verification at source (the contact for an address; the manufacturer / supplier for a product) be required at initial setup of the data in the system, with maintenance scheduled as part of the data governance program? Is the data incomplete? Does the MDM process include a question / answer scenario to complete the data?

MDM software designers and developers, can we also have a discussion on the software’s ease of use in managing the stages of data cleansing to support an MDM philosophy — using advanced techniques to automate the management, and adding intelligence to data imports, workflows, and the data cleansing stages of classifying, profiling, matching, translation, data audit analytics, exception reporting and status reporting of a data record?

I believe these are great discussion points and will serve as great blog topics.


It Is Not So Easy to Build a Data Cleansing Logic

Tuesday, March 2nd, 2010

During my morning data quality, MDM and data cleansing reading, I happened upon this million $$ question on a help site:

I have a scenario to build a data flow task for Data Cleansing.

Logic 1 to be built:
Source data would be like 1050 and I should convert it to 1.050
Source data would be like 085 and I should convert it to 0.85

Profiling, structuring, or normalizing data without any referential information risks errors in business use, especially if the data is used for purchasing or maintenance. If the goal is to automate data normalization, the data needs to be referenced to metadata. Could 1050 be a part number? Or a quantity? It could be an attribute representing a measurement such as length or diameter. Is it in inches, feet, or meters?
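The point can be made concrete with a small sketch. The field types and rules below are hypothetical; the idea is that the transformation must be keyed off metadata about the field, never guessed from the raw string alone:

```python
# Why "1050 -> 1.050" cannot be automated safely without metadata: the correct
# transform depends on what the field *is*. Rules here are illustrative only.
def normalize(raw: str, field_type: str) -> str:
    if field_type == "part_number":
        return raw                       # part numbers are never reformatted
    if field_type == "quantity":
        return str(int(raw))             # "085" -> "85"
    if field_type == "dimension_mm":
        return f"{int(raw) / 1000:.3f}"  # "1050" -> "1.050" only if thousandths of a unit
    raise ValueError(f"No normalization rule for field type {field_type!r}")
```

The same raw value produces three different answers depending on the metadata, which is exactly why the help-site question is unanswerable as posed.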


Data Quality Open Issues and Questions?

Tuesday, March 2nd, 2010

Now that we have determined that MDM, data governance, data cleansing and data quality are important — and noted the new trend for blogging, tweeting and discussion in general — I ask the most important question . . . HOW? When do we get to the discussions on the content?

I am a very detail-oriented person; I have to be, as one of my largest accounts requires me to participate in the day-to-day deployment of global MDM processes for one of the largest automotive manufacturers! I am very interested to learn how businesses in other industries manage their data. I would hope that sharing information and best practices among industry partners will be a win-win situation. At a minimum the discussion will be refreshing; the sharing of innovative information will spawn the creative improvements needed to create truly efficient, knowledge-driven business processes, data classifications, metadata, definitions and translation. . . is anyone interested in discussing the logistics of managing translation as part of Master Data Management?

Is anyone interested in discussing my struggles — and sharing yours — in trying to find standard global translations for ISO UOM (Units of Measure)?

Is anyone interested in discussing what fields should be included in an MDM data governance program for MRO data: UNSPSC, warranty, term of warranty, lead time, estimated price, ECCN, etc.?

What schema or classification structures are you using for spare parts and maintenance items? What about a discussion on using a public vs. proprietary classification system?

What are some best practices for migrating, profiling, structuring, mismatching and re-verifying legacy system data?

We have a nifty data mismatch process for manufacturer contact information; will this be easily implemented for a CRM data project? What about patient contact information in the healthcare industry?

There are a few bloggers out there who continually add content to their writings, but it is starting to appear to be a small group. Anyone out there interested in achieving data quality want to discuss real-life best practices, lessons learned, or the HOW of MDM, data quality or data cleansing?


Open Letter to Gartner

Thursday, February 4th, 2010

Dear Andrew White,

Thank you for your comments in “Something beyond MDM is coming your way – would MDM 2.0 fly?” and starting the discussion to expand the definition of MDM to include data integrity, data quality, entity resolution, matching, data integration, governance, metrics and analysis. The topics discussed should also include work flow (management of data and analysts), translation management, data structuring, data profiling, duplication removal, data change management, verification contact management, etc.

The MDM and PIM software industry needs to take a step back to understand the actual day-to-day business requirements of data management needed to achieve master data quality. Lesson one is that data is created and supplied by many sources, in many different formats, at various quality levels. Data is created by engineering and submitted by integrators, manufacturers and suppliers. To add to the complexity of the information flow, data is introduced into business systems in different departments (engineering, purchasing, or perhaps the plant via maintenance), each with different data requirements to meet the needs of that job function. The next dynamic is mashing new data into existing legacy data across a number of systems — ensuring no duplicates are created and managing obsolete items, recommended replacements and functional equivalents. The old philosophy of a PIM or MDM software that will “hold the data, provide search functionality and maybe a shopping cart” isn’t going to meet the true requirements of the new definitions of Master Data Management.
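The duplicate-prevention step above can be sketched simply. This is a hedged illustration of the normalization idea only — real matching would add fuzzy comparison and attribute-level checks; the field names and sample part numbers are hypothetical:

```python
import re

# Sketch: before new data is mashed into legacy data, compare a normalized
# manufacturer + part-number key so formatting noise doesn't hide duplicates.
def match_key(manufacturer: str, part_number: str) -> str:
    """Strip punctuation/spacing so 'SKF 6205-2RS' and 'skf 62052rs' collide."""
    norm = lambda s: re.sub(r"[^A-Z0-9]", "", s.upper())
    return f"{norm(manufacturer)}|{norm(part_number)}"

def find_duplicates(new_item: dict, legacy_items: list) -> list:
    """Return legacy records whose normalized key matches the incoming item."""
    key = match_key(new_item["mfr"], new_item["pn"])
    return [it for it in legacy_items if match_key(it["mfr"], it["pn"]) == key]
```

Any hits would be routed to an analyst rather than auto-merged, since two items with the same key may still be different packaging or revisions.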

To meet the new definitions, the MDM or PIM software needs to provide the horsepower to electronically and intelligently process data, identifying exceptions for manual intervention by an analyst. Data should be processed one time, ensuring the data record is enriched to meet the requirements of the enterprise; the record is then moved to a maintenance program (also managed by the MDM or PIM software). The processing of data needs to be efficient and cost effective; from my perspective, the cost of data management should be covered by the cost savings achieved through MDM.

I look forward to the discussions as the definition of MDM is expanded to include data quality, data governance and data provenance, and as the software industry provides the intelligence, functionality and business processes to cleanse, enrich and manage data for my clients — ensuring their ability to make confident business decisions based on data integrity and accuracy.

Here is to the future of PIM and MDM!

Jackie Roberts


Data Management: What to Consider in Tracking Change in Information

Monday, January 25th, 2010

Our work encompasses a large number of spare part records, and each part record flows through our data management and verification process. A large number of spare parts can mean 250,000 to 300,000 records referenced to as many as 10,000 pieces of equipment for just one program. As you can imagine, tracking each part record is a challenge, and the complexity of maintaining data change history for your business should be evaluated when considering a PIM software deployment.

Our clients’ business requires complete documentation of spare part record change history, including: who submitted the spare parts list, the equipment the part is used on, the location of that equipment, and verification and data enrichment (who verified, what information changed, when, etc.). Why is this information important?

1. Spare Parts List – The supplier submitted spare parts list should be made a mandatory requirement for equipment design and build. In order to support a maintenance organization all suppliers should submit a full bill of material with recommended spare parts identified for the equipment they plan to deliver. The supplier requirement should include the original manufacturer for each spare part. Additional information tracked should include who submitted, file name, equipment name, equipment warranty, terms of warranty, when submitted and all contact information.

2. Use on Equipment – each spare parts list should include equipment part or model number, standardized name and a category of equipment. The standardized naming conventions are extremely beneficial for multi-facility maintenance use and will support common tasking procedures.

3. Location of Equipment – this information is essential for the export to a CMMS, enabling spare parts to be set up for maintenance, work orders to be created and tracked, and asset management.

4. Verification – essential for accuracy of data quality. The verification process for a spare part is sometimes a true investigation. We receive data with suppliers listed as the manufacturer, partial part numbers, conflicting descriptions, incomplete descriptions, etc. Each data element change should be documented with when it changed, who revised it, what was changed, and why.

5. Data Enrichment – What does the full enterprise (purchasing, engineering or maintenance) need to support the business activity? A spare part record should be touched 1 time and all information required should be included at the time the record is set up. Data Enrichment will include a reference to a class (category), required attributes to describe the part supporting the technical long description, estimated price, ECCN (Export Compliance Classification Number), UNSPSC® (United Nations Standard Products and Services Code®), lead time, warranty, terms of warranty, tasking information, etc.
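The change-history requirement in point 4 boils down to an append-only log per record. A minimal sketch, assuming illustrative field names rather than any specific PIM schema:

```python
from datetime import datetime, timezone

# Sketch of the change-history requirement: every data element change is
# logged with what changed, the old and new values, who, when, and why.
def log_change(history: list, field: str, old, new, who: str, why: str) -> None:
    history.append({
        "field": field,
        "old": old,
        "new": new,
        "who": who,
        "why": why,
        "when": datetime.now(timezone.utc).isoformat(),  # timestamp in UTC
    })
```

Appending rather than overwriting is the key design choice: the full trail of who changed what, and why, stays queryable for audits and verification disputes.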

In order to implement an accountable data governance program and a useable data structure, a well-planned data mapping should be documented for the enterprise’s legacy systems. A complete data governance program will enable new efficiencies in data processing and the management of improved business processes such as parts sharing, identifying critical spares, strategic spare parts purchasing, and warehousing.


Data Quality: Classifying and Describing

Wednesday, December 2nd, 2009

As the Master Data Management industry matures, the industry focus is not only on software to collect product records but on software to implement data quality process solutions supporting data governance and provenance — including record history, structure, completeness and accuracy — ensuring our customers are able to make confident, informed business decisions. The first step in implementing a data governance program is implementing a naming classification system.

I have had experience working with single-business home-grown classification structures and third-party structures developed for purchase; currently I have chosen the open and public classification structure provided by ECCMA. This is beneficial to the customers I support, ensuring that they will always have access to the classification structure — sometimes referred to as the schema — used to classify their data.

Implementing a classification requires setting up an Identification Guide (IG) to establish the template definition that technically describes the product or service, with enough information to support engineering, maintenance or purchasing while recognizing the character-length limits of software short and long description fields. The IG template supports and simplifies the information requests our analysts send to manufacturers and suppliers to verify all information and standardize the description.

To create an IG, we search the ECCMA class list; fortunately many of the classes are already established. As the IG is set up, we use the established ECCMA class naming convention; this ensures that every item is set up with the same name and format — every ball bearing item submitted will be classified as BEARING, BALL.

The next step is to set up the properties required to describe the BEARING, BALL and, for each property, to designate the data type requirements such as numeric, text string or a designated unit of measure. The property requirements for a BEARING, BALL might include TYPE, BORE DIAMETER, OUTSIDE DIAMETER, WIDTH, DYNAMIC LOAD CAPACITY, STATIC LOAD CAPACITY, MATERIAL and so forth. Our analysts verify the data with the original manufacturer, sometimes using XML to exchange the product information (referred to as “Cataloging at Source”); the information requests are standardized and remove many of the quality issues commonly found in a non-standardized data verification or description process.

The description build is controlled by the sequence number of each property. For item data that will make its way into a length-restricted description field, we place the most important information at the beginning of the auto-generated description.
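The sequencing-plus-truncation behavior described above can be sketched as follows. The property names and the 40-character limit are illustrative assumptions, not ECCMA requirements:

```python
# Sketch of the auto-generated description: properties are emitted in IG
# sequence order so the most important values survive a length-restricted field.
def build_description(class_name: str, properties: list, max_len: int) -> str:
    """properties: (sequence_number, rendered_value) pairs from the IG."""
    desc = class_name
    for _, value in sorted(properties):       # lowest sequence number first
        candidate = f"{desc}, {value}"
        if len(candidate) > max_len:
            break  # lower-priority properties are dropped, never cut mid-value
        desc = candidate
    return desc
```

Because the loop stops at the length limit, a short description field still leads with the class name and the highest-priority attributes.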

Setting up the Identification Guides requires upfront strategic planning and detailed work; as you can imagine, a classification schema can contain up to 10,000 classes depending on the industry. But it provides a multitude of benefits, including standardized requirements, a road map for our analysts to facilitate the process, improved data management reporting and metrics, and enhanced language translation for the global organization.


Implementation and Use of MRO Naming Standards

Friday, October 23rd, 2009

With all the discussion focusing on Master Data Management and data quality, I always come back to these questions: How is the data structured, and how are accuracy and content completeness measured? In our business of managing the coding and verification of items and spare part information needed to keep manufacturing plants running, a structured schema of naming conventions (class), descriptive attribute standardization (properties) and verification at the source of manufacture (coding @ source) is key to quality and completeness measurement. We are managing the ECCMA eOTD for the Automotive Industry Content Standards Council (AICSC), focusing on MRO naming definitions — the foundation of a spare part description, just as a table of contents is the foundation of a textbook.

The first step is to develop the Identification Guide (IG) in order to baseline the properties needed to best describe the class. For example, take the class SCREW, SHOULDER and the properties TYPE, MATERIAL, FINISH, THREAD SIZE, DRIVE SIZE, SHOULDER DIAMETER, SHOULDER LENGTH, THREAD LENGTH, HEAD DIAMETER, HEAD HEIGHT, SHOULDER LENGTH TOLERANCE, MINIMUM TENSILE STRENGTH, CLASS, HARDNESS RATING and PACKAGE QUANTITY. The IG also provides the information our analysts need to acquire the properties, and our applications use it to sequence the properties within the short and long descriptions that are built.


Each time an item is submitted for coding or processing, the item is imported into a master database. With intervention by our data analysts, the item navigates its way through a number of checkpoints, including an auto-suggest that proposes a class. The class and properties defined by the IG are the requirements our coding analysts use to verify the accuracy and completeness of the information submitted, and to acquire the additional information needed to build an item or spare part description on which our clients can base real business decisions.
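The auto-suggest checkpoint might look like the sketch below: propose a class from keywords in the submitted description, for an analyst to confirm or reject. The keyword table is an illustrative assumption; a production system would use much richer matching:

```python
# Hedged sketch of the auto-suggest checkpoint: propose a class from keyword
# hits in the submitted description. Keyword table is illustrative only.
CLASS_KEYWORDS = {
    "BEARING, BALL":   ["bearing", "ball"],
    "SCREW, SHOULDER": ["screw", "shoulder"],
}

def suggest_class(description: str):
    """Return the class with the most keyword hits, or None for full analyst review."""
    text = description.lower()
    best, best_hits = None, 0
    for cls, keywords in CLASS_KEYWORDS.items():
        hits = sum(1 for k in keywords if k in text)
        if hits > best_hits:
            best, best_hits = cls, hits
    return best
```

Returning `None` on no match is deliberate: an uncertain suggestion is worse than routing the item straight to an analyst.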

The implementation of the eOTD is a two-process scenario when working with our clients. First, the legacy data is mapped to the classes; the item data is profiled, cleansed and enhanced to meet the requirements of the eOTD IG, ensuring the client’s data quality goals are met. The updated item information then needs to be applied to existing client item data, and it is critical that all changes to the data be tracked and logged. A properly planned and executed update to legacy ERP and CMMS systems should be initiated to incorporate the enhanced and corrected item information into the user-facing systems. This is an extremely critical step, as the downstream information flow will affect systems and uses such as inventory re-distribution, purchasing and contract management, engineering bills of materials and maintenance schedules. A thorough and complete mapping of data through the enterprise should be used to understand data flow across all business units. The mapping should include data entry points and data use points through all departments, which establishes the cost-saving pay points as the data processing is streamlined.

The second process is an on-going data maintenance plan for new items introduced into the organization, starting at the point item information enters the system. All item and spare part information should be verified with the manufacturer and classified to the eOTD before setup or use in any system. The length of time the coding process requires is a critical element: the item or spare part information should be as complete as possible while being ready and waiting for the buyer to put the item on a contract, or for a maintenance employee to set up the tasking information in the CMMS for a piece of equipment. The only requirement for the employees who use the information after its initial entry should be to perform the actual requirements of their job — not to decipher a cryptic, unstructured description.

If the items are pre-processed using the eOTD and the associated ISO standards, every item and spare part will be structured and standardized. The engineering, purchasing and maintenance departments will focus on the core of their day to day specialized responsibilities instead of searching for parts or dealing with trying to purchase items that a supplier does not recognize or have to acquire the missing information.

We all agree on some of the basic benefits, both in process and cost: reducing inventory through the identification of duplicate items, facilitation of inventory sharing and internal purchasing programs, reduced employee time searching for parts, common spare part usage strategies, reduced downtime in manufacturing equipment due to lack of information, and the ability to manage using a just-in-time inventory model. The eOTD and its Identification Guides are the building blocks and the roadmap to achieving structured, accurate data that can reliably be used to make real-world decisions.

For more information on the eOTD please visit


Data Quality: Software Innovation Please

Thursday, October 1st, 2009

I am all about the data, location management (to location and equipment), data quality, and methods to improve auto-processing, enhancing data, providing data reports and results that support our customer’s data requirements in their day to day activities.

Here is the million dollar question, for one scenario: over a million records a year, legacy and new, submitted for processing by 2,500 different users through two different business processes (single submit and BOM extract). What technology would be required to intelligently automate the processing of these records to a master data quality standard?

Remember, this is an on-going maintenance process, not a one-time migration of non-cleansed data to a new ERP or maintenance system; nor am I referring to parsing the records into different fields of the new ERP system, but to ensuring that the records are verified, structured, and properly attributed with full descriptions and the additional information needed to support the business.

First, let’s look at the Wikipedia definition of Product Information Management (PIM) “PIM systems generally need to support multiple geographic locations, multi-lingual data, and maintenance and modification of product information within a centralized catalog to provide consistently accurate information to multiple channels in a cost-effective manner.”

Future PIM software purchasers, what evaluation methods are you using to ensure that your PIM software purchase will support the continuous update and flow of data across your entire enterprise system? Here are some items to take into consideration during your evaluation; these are all items that I ask about, and I recommend that you request the answers in writing:

1. How is the change history of the data stored in the system and how easily can it be retrieved?
2. Has the performance of all modules of the software been tested and what is the base line?
3. Request references (at least three) for each module of the software.
4. What is the software product work flow and how is the data processing assigned to employees?
5. Ask to review the documentation and take the time to review; this should be a window into the complexity of the system.
6. Request the design process model, and ask how the software company incorporates customer feedback.
7. What is the bug fix process? What is the quality system to implement a bug fix?
8. What is the software company’s philosophy on customizations at your cost?
9. How is language handled? Translations referenced to a master record?
10. If the software solution is multi module system, how are the master records referenced through
the entire solution?
11. What are the long term design strategies or road maps for each module of the software solution? Ask for the earlier road maps and the software release note to evaluate the how well the software company plans and implement updates to the systems.
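To make question 1 concrete, here is a minimal sketch (in Python, with invented field and record names) of what an easily retrievable change history looks like: every field-level change stored as its own entry, retrievable by record ID. This is an illustration of the idea, not any vendor’s actual design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeEntry:
    # One field-level change to one master record.
    record_id: str
    field_name: str
    old_value: str
    new_value: str
    changed_by: str
    changed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ChangeHistory:
    def __init__(self):
        self._entries = []

    def log(self, entry: ChangeEntry):
        self._entries.append(entry)

    def for_record(self, record_id: str):
        # Retrieval should be this easy: filter by record, ordered by time.
        return sorted((e for e in self._entries if e.record_id == record_id),
                      key=lambda e: e.changed_at)

history = ChangeHistory()
history.log(ChangeEntry("P-1001", "description", "SWITCH", "SWITCH, LIMIT, 24VDC", "jroberts"))
history.log(ChangeEntry("P-1001", "uom", "EA", "EACH", "jroberts"))
print([e.field_name for e in history.for_record("P-1001")])
```

If a vendor cannot show you something at least this direct, the change history may exist only in backup tapes or application logs.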

And I could go on and on. Licensing, customizing, and implementing software in your environment can be extremely costly and time consuming. Does caveat emptor, “let the buyer beware,” apply in the business world, or is there a “lemon law” for purchasing software?

View Jackie Roberts's profile on LinkedIn

Who Represents the Data in your Master Data Management Software Systems Designs?

Thursday, September 17th, 2009

Those of us who represent Master Data Management initiatives, data quality projects, and the users working the processes developed by software makers have a difficult journey in front of us. It seems that for years software developers have designed cumbersome transactional data management systems that do not begin to address real-time data management or the effort it really takes to sustain an ongoing Master Data Management program. I have two initial questions. Do the software companies touting one press release after another about master data quality management even understand the importance of ongoing change management for a master data record? And how does a business stay in front of the information flow if the software system does not dynamically adapt to the ebb and flow of data volumes and requirements? Software companies track updates and revisions to software code; data is of the same importance, sometimes of greater importance, and the number of data-level updates can be monumental depending on the size of the company. Isn’t the end result of a multi-million-dollar software implementation supposed to drive efficiencies and streamline the activities that support the business? Cost savings and real-time data management are the name of the game.

Here are a few data management tips:

1. Data needs a simple way to be imported into the system. Data comes from a number of sources, so a dynamic mapping and import procedure into an internal processing area is useful for data analysis.
2. Yes, there needs to be an area to work on data before it is promoted to master data status. Software developers need to understand that data is never in a pristine state, ready to be entered as a master data record. Never!
3. Data processing requires a managed workflow through the system. Imagine having thousands of records to analyze and many employees trying to track, outside the system, who has which records. That is just not a functional work scenario.
4. Never copy data from one software module or grid to another; always reference it. The cost per record to manage the data increases every time a person has to manually update an aspect of a record more than once.
5. Performance of the software is imperative. To really capitalize on software and technology, reporting and analysis need to be done on thousands of records at a time. Time is money.
6. Provenance tracking is essential, especially when “Cataloging @ Source” is the foundation of record quality. Data should carry its history: where it originated, contact information, date and time, a revision level, the file name, all associated records in the file, etc. MDM system developers, can you start to see the importance of this information?
7. Data needs to be cleansed and profiled, and it is important that the software processing tools understand all aspects of the data. For instance, search rules should not be so rigid that it takes manual action by an analyst to find a duplicate record hidden by an extra space or a slash. The worst-case scenario is taking the data out of the system to work it in Excel; I will not comment further except to say that removing data from a system to normalize it is totally unacceptable. Remember, a lot of data is brought into the business, and the cost of managing it is not core to the primary business; it is an indirect cost. The solution is not outsourcing to a “low cost, low skilled” worker in another country when much of the preprocessing can be done at the expense of CPU time.
8. Data changes. If you have a number of different modules in your software package, what is the strategy to propagate changes to the different business units using the data? Does your software update only one module, leaving the other modules out of sync? Again, software should be designed to simplify the processes that support the business.
9. We live in a global economy; language translation and localization of data are more important now than ever. What are the methods to translate and maintain localized data?
10. Reporting and exporting of information are critical. It must be possible to export a data segment to send to a business customer or to run a report of work activities. An MDM system must be able to audit data activities through the complete process, from import through promotion to a master record.
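Tip 7’s point about rigid search rules can be sketched in a few lines of normalization. The specific rules below are illustrative assumptions, not a full matching standard:

```python
import re

def normalize_description(text: str) -> str:
    # Collapse the spacing and punctuation noise that hides duplicates
    # from exact-match search rules.
    text = text.upper().strip()
    text = re.sub(r"[/\\,;]+", " ", text)  # slashes and separators become spaces
    text = re.sub(r"\s+", " ", text)       # collapse runs of whitespace
    return text

a = "Switch,  Limit / 24VDC"
b = "SWITCH LIMIT 24VDC"
print(normalize_description(a) == normalize_description(b))
```

Two records an analyst would instantly recognize as the same part now compare equal in software, with no manual intervention and at the cost of only CPU time.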

I am a firm believer that software should not dictate a business process; it should be designed to streamline the activity, add efficiency, and lower its cost. If you are designing MDM systems, your team should include experts in data management, data quality, and business process with applicable experience. Businesses should not have to pay for customizations to your software to support basic, 101-level management of data.


Life Cycle Data Management Strategy

Thursday, September 3rd, 2009

Life Cycle Management implies a single “cradle to grave” plan that integrates production support planning, acquisition and sustainment strategies. Think about the importance of data flow and the criticality of accurate data throughout the complete life cycle of a piece of equipment: design, build, install, spare part acquisition, inventory management, maintenance, spare parts sharing and finally, asset disposal. From a data perspective, remember the old computer motto: “Garbage In, Garbage Out”.

What is your Life Cycle Data Management Strategy?

1) Drawing Libraries – The items in the library need to be cleansed and profiled to a classification schema. The schema requires standard naming conventions and technical descriptions. It can be designed within your company, purchased from another vendor, or you can opt for an open classification dictionary for public use such as the ECCMA eOTD.

2) Common Component Listing – Provides a listing of preferred components that support the inventory management strategies for your organization. All equipment designers and builders are required to use the identified common components. Note: common components are set up in the drawing libraries.

3) Spare Part Acquisition – Place the components on purchasing contracts at the beginning of design; this will facilitate spare parts planning and purchasing. An item on contract gives purchasing the data needed to run analytical algorithms and negotiate better pricing organization-wide. If the item is set up accurately, one time, against a standardized classification dictionary with technical descriptions, the whole organization can realize the benefits of the Life Cycle Data Management Strategy.

4) Inventory – Supports optimal inventory management by promoting the ability to plan stocking levels and strategies with nearby facilities. Think about implementing spare-parts sharing or an internal purchase-first program. The most important requirement is the standardization, or normalization, of the data: each part needs to be classified only one way and shown the same way in every system.

5) Maintenance – The use of standardized components coupled with a data management strategy allows the organization to streamline the number of different components used to serve the same function on different equipment. This also reduces the number of parts in inventory and the associated maintenance management tasks.

A Life Cycle Data Management Plan starts with component standardization and with cleansing the data in your equipment drawing libraries and all downstream systems, including maintenance. This strategy avoids duplicate inventory items and at the same time promotes an internal-purchase philosophy that puts a priority on inventory sharing before issuing supplier purchase orders. Standardizing inventory with information elements such as predefined stocking levels, identification of critical inventory, functionally equivalent item identification, and purchasing analytics, as well as enhanced vendor management, are all necessary steps for a manufacturing business to remain competitive in today’s world of lean, low-overhead manufacturing.


Why Data Cleansing?

Thursday, August 27th, 2009

The statistics around data cleansing are overwhelming, and there are mountains of discussions, white papers, and tweets pertaining to data quality, data profiling, and Master Data Management. I think we need to take a step back and try to understand how and why data cleansing has become such a hot topic. You may have realized that business data typically isn’t as streamlined and efficiently maintained as we thought it was. Your organization may have shipped purchased items back because they were not what you thought you had ordered. In some cases another department was found to have the item in inventory even while it was on urgent delivery status from a supplier; because the item was set up under a different number or description, you could not possibly have known it was available from existing stock.

The data quality issues that industries around the world are experiencing are the result of many years of manual inventory and purchasing record maintenance, of mergers and acquisitions of companies and business units, and of data migrations from various legacy systems into newfangled ERP black holes.

A common data trap is assuming that just because you are implementing a new ERP system, your organization will now have quality data. Remember the old computer motto: “Garbage In, Garbage Out”. Let me tell you from first-hand experience that there is nothing “sexy” about bad data when the production line is down, or at any other time.

Data cleansing and data profiling are tedious, detail-oriented services. There are a number of key rules to follow, whether the profiling and cleansing work is done internally or outsourced to a specialist in data cleansing. Here are some rules to consider before a project is started:

1) Conduct a detailed and comprehensive data mapping through all internal systems, including engineering, purchasing, asset management, plant inventory management, etc. The goal is to standardize and document all data sources within the enterprise one time, and to ensure that each department is accounted for and determines which data elements it requires to complete its business tasks.

2) Build a central data cleansing database and make sure all locations using each item are referenced. This ensures that updated information can be passed back to the various legacy systems. You will need both the old and the updated information for this stage of the process.

3) The data cleansing database should include a balance of electronic scripting for data corrections and manual auditing. A solid process for answering questions needs to be set up. My preference is a system with a web utility that tracks data change history and other data-related information such as contact information, issue resolution status, classification, questions and answers, etc.

4) The data needs to be referenced to a classification schema, with a standard implemented for descriptions and properties. The schema can be designed within your company, purchased from another vendor, or you can opt for an open classification dictionary for public use such as the ECCMA eOTD.

5) Free text is not our friend in the data standardization world. If at all possible, use a system with built-in data rules, and ensure that anyone entering data understands the standards, the importance of quality data, and the high cost to businesses of using bad data.

6) Data cleansing and profiling done the proper way are not “cheap”, but the cost of cleansing the bad data once, properly, is always less than the expenditure of cleansing your data multiple times, or of continuing to operate your organization on erroneous information generated from one or more dirty databases.
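Rule 3’s balance of scripted corrections and manual auditing can be sketched as follows. The unit-of-measure mappings and field names are invented for illustration, not an official standard:

```python
# Apply a scripted correction where a rule is unambiguous, record what was
# changed (for the change-history requirement of rule 3), and flag anything
# the script cannot resolve for manual audit.
UOM_MAP = {"EA": "EACH", "PC": "PIECE", "BX": "BOX"}

def cleanse_uom(record):
    raw = record.get("uom", "").strip().upper().rstrip(".")
    if raw in UOM_MAP.values():
        return record, None                      # already standard
    if raw in UOM_MAP:
        fixed = dict(record, uom=UOM_MAP[raw])
        change = (record["id"], "uom", record["uom"], UOM_MAP[raw])
        return fixed, change                     # scripted correction, logged
    return dict(record, needs_audit=True), None  # unknown: route to a human

rec, change = cleanse_uom({"id": "P-1", "uom": "ea."})
print(rec["uom"], change)
```

The point is the split: the machine fixes what it can prove, logs every change it makes, and leaves genuine questions to the analysts rather than guessing.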

Cleansed data permits the removal of duplicate inventory items; an internal-purchase philosophy that prioritizes inventory sharing before issuing supplier purchase orders; standardized inventory with predefined stocking levels; identification of critical inventory and of functionally equivalent items; and use of engineering component standardization libraries. It also facilitates purchasing analytics as well as enhanced vendor management.


The Spare Parts World

Tuesday, August 11th, 2009

Spare parts management at a high level is perceived, and often approached, as a process that should be simple. Looking at it from the perspectives of the many different entities that form the supply chain and are required to work together (component manufacturers, tier 1 suppliers, tier 2 suppliers, and manufacturers), the logistical expertise needed to coordinate the information flow is anything but simple.

To realize cost savings from new process efficiencies, these separate legal entities need to “integrate” the information flow to manufacturers; and within each manufacturer, internal groups such as purchasing, manufacturing engineering, plant maintenance, facilities management, warehousing, commodity management, and asset sharing/recovery need to share the mission-critical master data related to the spare parts. A truly integrated information flow could conceivably touch a number of business units that indirectly work together across the supply chain to deliver just one item to a manufacturer. What everyone involved in the supply chain of the spare parts that keep the equipment running needs most, from and for each other, is data standardization, data quality, and an electronic method of transmittal. A study of large companies, a majority of which have revenues of more than $1 billion, found that 31% believe that their costs for incorrect data are $1 million or more per year.1

Data standardization and data cleansing costs should be covered by cost-saving initiatives. In addition to the initial data cleanup, strong data governance processes should be implemented for ongoing data setups.

1 Dave Waddington, “Growing Adoption of Master Data Management by Business?”, citing an Information Difference survey of 112 companies, 65% of which had revenues of more than $1 billion, IT Analysis Communications Ltd., June 23, 2008.


Outsourcing: how do I compete?

Tuesday, July 28th, 2009

I get it: you operate globally, and the cost of labor in the States is 4 to 5 times higher than the wages in the countries that typically receive outsourced work. I have only one question: is cost the only factor taken into account when deciding to outsource from the US to a foreign country? When the RFP is evaluated, do intellectual property protection and security, quality of the work product, time-zone communication issues, the geopolitical climate, or rising price trends enter into the decision-making process?

I once spoke with a purchasing agent employed by a Fortune 500 company, and this is how outsourcing was explained to me: “Even if it takes someone in a foreign low-wage country three attempts to get the work correct, we are still saving 25% over their competitors in the US.” Of course, I had a number of responses, including: was the cost to manage and audit the work three times included in the cost-saving analysis? Of course not; the cost savings are only documented at the RFP phase.

Each day our company evaluates our internal and customer processes to build automation and intelligent software applications that increase throughput, improve accuracy without manual intervention, and provide our customers with a continuous stream of process improvements. I believe our costs are competitive in the long term; the challenge is educating new customers to understand the unique and beneficial processes that allow them to capitalize long term by implementing our data quality solutions.

My hope is that I will never again see a response to an RFP that reads, “Need more competitive pricing or to include ‘off shore’ solution – this is required for more competitive proposal and for further consideration.”

How long will it take for US salaries to race to the bottom so that work can be outsourced back to the States? I hope that is not the answer. Instead, let’s discuss what US vendors need to do to offer the long-term, value-add processes that offshore options do not.


What is the Cost of Bad Data?

Friday, July 10th, 2009

How does a company apply a “cost” to bad data when the costs are so fragmented across the organization? There are obvious costs, such as a part not being in inventory: purchasing has tried to buy the part, but the supplier didn’t recognize the part number; now production is down, and everyone is scrambling to find a replacement. In this case the cost of the bad data can be assigned.

What about the other costs? What does the lack of visibility into “spend”, or the inability to manage vendors selling like or equivalent products, cost a global manufacturer?

It’s estimated that process failures and bad information cost $1.5 trillion or more in the U.S. alone.[i]

[i] Larry English, “Information Quality Tipping Point: Plain English about Information Quality,” DM Review, July 2007.


Attending the National Summit

Friday, June 26th, 2009

First, I want to thank my company, DATAForge LLC, for suggesting that I attend the National Summit. My leadership understands the importance of participating in the comprehensive dialog around the direction of US strategy in the technology, energy, environmental, and manufacturing industries.

The Detroit Economic Club put together a dynamic three days of thought-starting talks and panel discussions with business leaders and state, federal, and education representatives. I am looking forward to reading the report of the action items gleaned from the National Summit, to be presented by Bill Ford and Andrew Liveris to Secretary Locke in Washington.

The dialog was refreshing and free of stereotype as prominent business leaders discussed the “how to” and the “what is” of the future direction needed to support “clean energy” and “smart technology” for our environment. The discussions were the first steps toward a roadmap that sets the long-term policies and standards needed to allow our innovative research universities and businesses to work together to develop and manufacture the next generation of global environmental and manufacturing technologies. This is the old-fashioned “American” spirit, and I am thrilled to be a part of the revolution.

So let’s get the plan in motion: communicate and discuss our action items, and support the research institutions, businesses, and startup ventures that will make America the next generation of innovators and competitors in this global economy.


Data Quality – What is a good description?

Monday, June 1st, 2009

A spare part record: sounds fairly simple? Yes, I used the same concept earlier. Ask the questions: What is the end use of this record? Is this information only for purchasing the item? What if we broaden the scope of data quality enterprise-wide, say to Engineering or Plant Maintenance: is this description enough to describe the part technically?


A full technical description provides many benefits that support engineering and maintenance, such as identifying functional equivalents or using another part in inventory when the requested item is out of stock. The description provides enough information for purchasing to ensure that the correct item is being bought, and it also provides the ability to reconcile legacy data.
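As a rough illustration (the attribute names and rendering order here are my own assumptions, not a published standard), a structured technical description can be assembled from a classification noun, a modifier, and attribute/value pairs, so that every system renders the part the same way:

```python
# Build a technical description from structured elements rather than free text.
def build_description(noun, modifier, attributes):
    pairs = ", ".join(f"{name}: {value}" for name, value in attributes)
    return f"{noun}, {modifier}, {pairs}"

desc = build_description(
    "SWITCH", "LIMIT",
    [("VOLTAGE", "24VDC"), ("CONTACT", "SPDT"), ("MOUNTING", "SIDE ROTARY")],
)
print(desc)
```

Because the description is generated from the same structured attributes everywhere, a search on any one element (the voltage, the contact type) can find the part or a functional equivalent.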

What additional elements should be included in a description?


I am not sure what the legacy system’s search capabilities are, but if these descriptive elements are included in a description, it ensures the ability to find the part (or a substitute part) quickly, minimizing equipment downtime and employee sourcing time.

Question: What level of description is required enterprise-wide to optimize both employee performance and inventory?


Data Integrity – How is this really achieved?

Thursday, May 21st, 2009

Data integrity is the assurance that data is consistent and correct. Spare parts: sounds fairly simple? What are the basic elements of a part record: name, part number, description? “Data integrity” is a term used far too often, yet it remains vague. Let’s look just at the purchasing department: it is easy if the part records are used only by purchasing, where the main objective is to purchase the item. This example is all the data a buyer would need to purchase this switch.


How would a buyer know that these are the same part? Two different manufacturer names and two different part numbers: this scenario will cause duplication in a purchasing system. The result is not only the additional work of creating and maintaining two contracts, but also downstream effects such as excess inventory in more than one stocking location and the loss of a volume purchase or a global view.
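One sketch of how a system might catch this scenario: normalize manufacturer names and part numbers, then resolve both through a cross-reference to a single confirmed manufacturer record. All names, numbers, and normalization rules below are invented for the example:

```python
def norm(manufacturer, part_number):
    # First word of the maker name, punctuation stripped from the number,
    # so "Acme Controls"/"LS-100" and "ACME"/"LS100" compare equal.
    return (manufacturer.strip().upper().split()[0],
            part_number.upper().replace("-", "").replace(" ", ""))

# Cross-reference from normalized (manufacturer, part number) pairs to one
# confirmed master manufacturer record.
XREF = {norm("Acme Controls", "LS-100"): "MFG-SWITCH-001"}

def same_part(m1, p1, m2, p2):
    a, b = XREF.get(norm(m1, p1)), XREF.get(norm(m2, p2))
    return a is not None and a == b

print(same_part("ACME", "LS100", "Acme Controls", "LS-100"))
```

The cross-reference, once populated, stops the second contract from ever being created; the open question below remains how the reference gets confirmed in the first place.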

Question: Is the answer always to confirm the actual manufacturer and set up supplier references?


Language Fonts

Tuesday, May 5th, 2009

I had the fortunate opportunity to participate in a large data migration, and over the next couple of weeks, as I work through my lessons learned, I will update my blog.

Language Font Lessons:

As a data migration plan is put in place to confirm that data has migrated accurately to a new system, audits and verification processes are planned: counting database rows, verifying that tables are populated, etc. Yet in this data we sometimes find question marks or squares, indicating that the method used to transfer the data did not migrate some of the special characters of a number of different languages . . . and a unit of measure that appears as a question mark will create multiple issues for the purchasing process.
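A simple automated check can sit alongside the row counts: scan migrated text fields for “?” and the Unicode replacement character (U+FFFD) and route the hits to a human reviewer. This is a minimal sketch, and the field names are assumptions; a literal question mark can of course be legitimate, which is exactly why the flagged rows go to a person rather than a script:

```python
# Characters that commonly signal lost special characters after a migration.
SUSPECT = {"?", "\ufffd"}

def suspect_fields(record):
    # Return the names of text fields containing a suspect character.
    return [name for name, value in record.items()
            if isinstance(value, str) and any(ch in SUSPECT for ch in value)]

migrated = {"part": "P-1", "uom": "?", "description": "VENTIL, 10 BAR"}
print(suspect_fields(migrated))  # the unit of measure failed to migrate
```

A scan like this narrows thousands of rows down to the handful the SMEs actually need to look at.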

The solution is to design the verification process to include the users of the data in confirming the migration. The migration cannot be planned from only an IT or software supplier’s perspective. I find that the SMEs, aware of all aspects of “their” data, need to participate in the review and confirm migration success based on data quality.