Deduping: The Easy Way to Save Time and Resources

As part of your most recent mailing (to find new customers, new students, etc.), one letter is sent to John Smith at 1 Someplace Blvd and another to J. Smith at 1 Smplace Boulevard: two records, one person.

This is a very common occurrence: between 5% and 15% of the records in a typical database are duplicates. The result is wasted time, wasted money, and customer confusion. Deduping a database also becomes more time consuming as the database grows and as the need to link to outside databases increases.

 

The Data Warehousing Institute (TDWI) estimates that poor-quality customer data costs U.S. businesses $611 billion a year in postage, printing, and staff overhead (an estimate based on cost savings cited by survey respondents and others who have cleaned up name and address data). The true cost may be much higher once damaged customer satisfaction, redundant pricing promotions, and missed opportunities are factored in.

Manually deduping a list quickly becomes unrealistic, and simple exact-match deduping is soon exhausted: it cannot catch near-duplicates like the John Smith records above.

The solution is to correctly standardize and parse each record, then use fuzzy matching and machine learning to identify duplicate records quickly and easily. For a free, customized demonstration of how our solutions can help you, please contact us, or download a free, no-obligation trial to Get The Most Out of Your Data today.
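To make the idea concrete, here is a minimal sketch of the approach, assuming a small abbreviation table and an arbitrary 0.85 similarity threshold. It uses Python's standard-library difflib and is only an illustration of standardization plus fuzzy comparison, not our product's actual matching engine.

import difflib
import re

# Illustrative street-suffix abbreviations; a real solution would use full postal reference tables.
ABBREVIATIONS = {"blvd": "boulevard", "st": "street", "ave": "avenue", "rd": "road"}

def standardize(address):
    """Lowercase, strip punctuation, and expand known abbreviations."""
    words = re.sub(r"[^\w\s]", "", address.lower()).split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

def is_probable_duplicate(rec_a, rec_b, threshold=0.85):
    """Flag two records as likely duplicates when their standardized addresses are highly similar."""
    a = standardize(rec_a["address"])
    b = standardize(rec_b["address"])
    return difflib.SequenceMatcher(None, a, b).ratio() >= threshold

records = [
    {"name": "John Smith", "address": "1 Someplace Blvd"},
    {"name": "J. Smith",   "address": "1 Smplace Boulevard"},
]

if is_probable_duplicate(records[0], records[1]):
    print("Probable duplicate:", records[0]["name"], "/", records[1]["name"])

Exact matching would treat these two addresses as entirely different strings; after standardization, the fuzzy comparison scores them well above the threshold, so the pair is flagged for review.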

 

-Linda Boudreau
