March 17, 2009

Keeping Your CRM Data Quality at Its Best

Filed under: Data Cleansing, Data Quality — Olga Belokurskaya @ 4:59 am

What is one of any organization's most valuable assets? CRM data is certainly a strong candidate. Companies go to great lengths to protect and secure their CRM data, but what about its quality? Data quality management is often one of the most neglected areas of CRM management and one of the major pain points for administrators and managers.

While trying to learn more about the problem, I came across some practices that can help maintain and enhance the value of that data.

  • Do not ignore bad data until it starts affecting your work. Keep an eye on your data and monitor any changes in its quality.
  • It's good practice to manage, normalize, format, qualify, and filter your leads outside your CRM before uploading them, so that low-value or poor-quality records never get added.
  • Periodic data appending is important, even though it involves a lot of manual effort and may seem time-consuming.
  • Duplication is possibly the most common problem; it creates redundancy as well as inaccurate reports, so it has to be kept in check.
  • The same may be said about expired data, which simply junks up your CRM. The more regularly you check for expired data, the healthier your CRM is.
  • And, finally, if data cleansing is what helps you maintain your database quality, then data enrichment is what will help you enhance that quality and make your data more valuable to end users.
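A minimal sketch of what the pre-upload cleansing steps above might look like in practice. Everything here is illustrative: the field names (`email`, `name`, `updated`) and the sample records are assumptions, not a real CRM schema.

```python
# Illustrative lead records; dates are ISO strings so plain string
# comparison orders them chronologically.
leads = [
    {"email": "Ann@Example.com ", "name": "Ann Lee", "updated": "2009-02-01"},
    {"email": "ann@example.com",  "name": "Ann Lee", "updated": "2008-01-15"},
    {"email": "bob@example.com",  "name": "Bob Roe", "updated": "2009-03-01"},
]

def normalize(lead):
    """Format and qualify a lead before it reaches the CRM."""
    lead = dict(lead)
    lead["email"] = lead["email"].strip().lower()
    return lead

def dedupe(leads):
    """Keep only the most recently updated record per e-mail address."""
    best = {}
    for lead in map(normalize, leads):
        key = lead["email"]
        if key not in best or lead["updated"] > best[key]["updated"]:
            best[key] = lead
    return list(best.values())

def drop_expired(leads, cutoff):
    """Filter out records that have not been touched since the cutoff date."""
    return [l for l in leads if l["updated"] >= cutoff]

clean = drop_expired(dedupe(leads), "2009-01-01")
```

Running the pipeline on the three sample leads collapses the two "Ann" records into the fresher one and keeps only records updated since the cutoff.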

And here's the conclusion: with good data management practices and a constant process of cleansing and enrichment, your CRM data really becomes an asset.

March 21, 2008

Stop Blaming IT for Dirty Data

Filed under: Data Cleansing, Data Quality — Alena Semeshko @ 4:19 am

IT is the easiest party to blame for gaps and holes in your data; that's no news. Whenever you don't get the results and the information you need (provided your business processes are set up to deliver quality data), you naturally start looking for someone or something to blame, and IT seems the perfect scapegoat. Little do we realize that the problems usually lie in the business, not in IT.

The thing is, we associate data with IT, consider it a part of IT, and don't realize the two are quite different. Gartner research VP Ted Friedman suggests a solution that should keep the blame off IT and cause fewer data quality problems:

“Business needs to be in the driver’s seat,” Friedman said. “At the moment we feel that the focus on the topic is way way too much in the IT camp.”

To advance data quality, Friedman suggests appointing a data steward, who is responsible for benchmarking current levels of data quality and measuring the business impact of bad data. The data steward looks at the data transfer processes, making sure, for instance, that the data passes through as few hands as possible.

Data stewards will come from a business background but have good relations with IT, Friedman said. They will only be effective if they are held accountable for their progress and receive bonuses for meeting quality targets.
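Benchmarking "current levels of data quality", as Friedman recommends, can start very simply. Here is a sketch of two common metrics a steward might track over time; the metric choices and the record layout are my assumptions, not anything from the article:

```python
def completeness(records, field):
    """Fraction of records where the given field is present and non-empty."""
    filled = sum(1 for r in records if r.get(field))
    return filled / len(records)

def duplicate_rate(records, key):
    """Fraction of records that repeat a key value seen in an earlier record."""
    seen, dupes = set(), 0
    for r in records:
        k = r.get(key)
        if k in seen:
            dupes += 1
        seen.add(k)
    return dupes / len(records)

# Toy sample: four contact records with one duplicate e-mail
# and two missing phone numbers.
records = [
    {"email": "a@x.com", "phone": "555-0100"},
    {"email": "a@x.com", "phone": ""},
    {"email": "b@x.com", "phone": "555-0101"},
    {"email": "c@x.com", "phone": None},
]
```

Tracking numbers like these before and after each cleansing pass is what turns "our data is bad" into a measurable target the steward can be held accountable for.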

March 12, 2008

Data Migration Talk

Filed under: Data Cleansing, Data Migration, Data Quality — Alena Semeshko @ 4:37 am

Clean up! You first hear these words as a kid from your parents. Clean up! When you hear this, you usually know you've made a mess. Clean up! This is what you shouldn't be hearing, or, for that matter, thinking, with regard to your company's data. Or, at least, if the prospect ever crosses your mind, it shouldn't look as nasty and unpleasant as it did in your childhood. =)

But nonetheless, clean up you should, if your source systems and initial data are a mess. The obsession with clean data is well justified in this world of Business Intelligence, where looking at the picture as a whole and thinking big is no longer an encouraged yet infrequent occurrence, but a requirement.

One of the key elements of keeping your data clean and having a global view of your organization's lifecycle is data migration. Wise data migration, that is, with an appropriate strategy and the right tools, not the sort where you splash money around and end up in the same spot you started.

Anyway, a whitepaper I came across got me thinking about this, so you can download it and check it out for yourself over here. It's called The Hidden Costs of Data Migration, and it touches upon data migration, whether to employ it or not, and the costs associated with it:

Data migration has become a routine task in IT departments. However, with the need for critical systems to be available 24/7 this has become both increasingly important and difficult. This White Paper will outline the factors that are driving data migration and examine the hidden costs that may be encountered when data is moved.

March 7, 2008

Data cleansing…cleans data

Filed under: Data Cleansing, Data Quality — Alena Semeshko @ 5:42 am

As I mentioned in the previous post, data cleansing deserves a post of its own. Even more than just one post actually.

Well, it's obvious that data is the key player in business decision-making. Good, clean data provides the platform for wise decisions that put a company's profits onto an upward curve.

Acquiring the right data, however, is not always as simple as it seems. The techniques are many, but their results don't always meet expectations. That's where data cleansing technologies come into play. Data cleansing software cleanses the initial data, making it more precise, usable, and up to date. Techniques used in data cleansing include, among others:
• Data merge from data sources
• Record matching and synchronization
• Data type and format conversion
• Data segmentation

In this post I want to focus more on record matching and data synchronization.

An example often used in this regard is name and address data. Name, address, and phone information is the quickest to get outdated and the easiest to get wrong. Of course, there are directories and yellow pages you can always check… but if you do it by hand each time you encounter a mistake, that's an impermissible luxury: it takes way (I mean waaaaaaay) too much time.

That's pretty much the rationale behind data synchronization technologies. They process the data, compare it to a standard, and return a valid, quality dataset with the most common mistakes (misspellings, wrong street-type abbreviations, city and state names) eliminated. Apatar's StrikeIron US Verification data quality service, for instance, is one such tool.
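To make the "compare it to the standard" step concrete, here is a toy version of record matching: incoming addresses are scored against a small reference table, and the closest standardized form is returned. The reference data, the similarity measure, and the threshold are all illustrative assumptions; this is not how StrikeIron's actual service works internally.

```python
from difflib import SequenceMatcher

# Stand-in for a postal reference database; real services use far
# larger, authoritative address files.
REFERENCE = [
    "123 Main Street, Springfield, IL",
    "45 Oak Avenue, Portland, OR",
]

def match_address(raw, threshold=0.8):
    """Return the closest reference address, or None if nothing matches well.

    Uses a simple character-level similarity ratio; production matchers
    parse addresses into components before comparing.
    """
    raw = raw.strip().lower()
    best, score = None, 0.0
    for ref in REFERENCE:
        s = SequenceMatcher(None, raw, ref.lower()).ratio()
        if s > score:
            best, score = ref, s
    return best if score >= threshold else None
```

A misspelled entry like "123 Main Str, Springfeild, IL" scores high enough against the reference to be corrected, while an address with no counterpart falls below the threshold and is flagged (here, by returning `None`) for review.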

Employing sophisticated matching and data synchronization technology, it first closely inspects each address to ensure its validity, then updates incorrect addresses according to postal standards, cleaning customer data before it gets into CRM/ERP systems, databases, flat files, and RSS feeds. It also adds ZIP+4 data, specifying congressional districts, carrier routes, etc. Data cleansing tools of this sort are indispensable in business today: they allow companies to increase productivity, improve sales strategies, and deliver better, more accurate customer service.