October 14, 2009

Getting Prepared for Salesforce Integration

Filed under: Data Integration, Data Migration — Olga Belokurskaya @ 8:48 am

To make the process of integrating data with 3rd-party applications and databases successful, it’s important to know how to solve typical challenges and avoid the most common mistakes at each of the steps required to get customer and enterprise information integrated, replicated, or migrated to a new Software-as-a-Service package, wherever that information currently resides.

The first step includes preparation and planning. At this stage, it’s essential to think over and clarify the goals of the upcoming integration process:

  • What data (tables/fields/rows) should be extracted?
  • What data (tables/fields/rows) should be considered as targets?
  • Do I need to integrate with one single database or multiple data sources?
  • Is it enough to perform a one-time migration, or do I need an ongoing synchronization?
  • Do I need to have data backed up?
  • Do I have enough experience to do manual coding, or would the use of visual data integration tools be the best decision?

Keep in mind that the lack of a strategic vision and insufficient evaluation criteria are the most common mistakes made at the preparation stage. So it’s really important to set goals and objectives properly.

Find out more about successful integration from our whitepaper on “Five Steps to Integrate with 3rd-Party Systems and Avoid the Most Common Mistakes.”

October 12, 2009

Improving Data Integration with ODBC

Filed under: Data Integration — Olga Belokurskaya @ 7:04 am

Recently, I touched upon the topic of data integration with ODBC. Today, I’d like to add a few more words. Successful data integration with ODBC sources depends on many things, not the least of which is the performance of ODBC applications. Several factors affect ODBC performance; improving them makes ODBC applications faster, which, in turn, helps avoid issues in data integration. Here are the factors:

  1. Network communication
    Reducing network communication can increase ODBC performance several times over. For example, using arrays of parameters instead of individual INSERT statements reduces the time required to complete the operation.
  2. Choosing the way the transactions are handled
    To improve ODBC performance, it’s essential to choose the right way to handle transactions. For example, using manual commits instead of auto-commit gives better control over the work being committed.
  3. Connection pooling
    When an ODBC application has several users, connection pooling is a good way to increase connection efficiency.
  4. SQL queries
    The efficiency of SQL queries is an important factor affecting ODBC performance. A poorly formed query may filter data incorrectly, causing the driver to fetch unnecessary data (sometimes in very large amounts), which slows down application performance. Well-formed and properly executed queries improve performance greatly.

ODBC provides good opportunities for data integration, giving access to multiple data sources through one application. So keeping the application’s performance high will benefit the whole data integration process.
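The first two factors can be sketched with Python’s DB-API, which pyodbc follows for ODBC sources; sqlite3 stands in below only so the sketch runs without an ODBC DSN:

```python
import sqlite3

# With pyodbc the same pattern would start with pyodbc.connect(dsn),
# where autocommit defaults to False, i.e. manual commit.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE contacts (id INTEGER, name TEXT)")

rows = [(1, "Alice"), (2, "Bob"), (3, "Carol")]

# Factor 1: one executemany() call with an array of parameters
# instead of a separate round-trip per INSERT statement.
cur.executemany("INSERT INTO contacts (id, name) VALUES (?, ?)", rows)

# Factor 2: a single explicit commit covering the whole batch,
# rather than auto-committing after every statement.
conn.commit()

cur.execute("SELECT COUNT(*) FROM contacts")
print(cur.fetchone()[0])  # 3
```

The table and data are, of course, just placeholders for whatever your ODBC source holds; the point is the batched parameters and the single commit.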

October 7, 2009

Data Integration with the Cloud: Challenges and Issues

Filed under: Data Integration, Data Migration — Olga Belokurskaya @ 7:33 am

While clouds keep gaining popularity, issues occur from time to time as more organizations move into them. One of the most important is the data issue: companies that have moved their data to the cloud sooner or later face the fact that it needs to be integrated or synchronized with the data in on-premise enterprise applications.

In fact, the possibility of data integration between on-premise and cloud systems is something a company should provide for before turning to clouds. However, this is often overlooked, which leads to data integration problems later on, especially as the amount of data grows.

Here are some issues to consider before moving to the cloud, as outlined by David Linthicum, a cloud computing and application integration expert:

  • Firewall limitations – a way must be found to externalize and consume data that is not port-80-compliant.
  • The speed of moving data should be considered, so that transformation and routing mechanisms can be tuned to perform properly.
  • Provide for maintenance and support of cloud systems.
  • And security, which is still a big issue for clouds.

To conclude, proper planning, provisions for data integration, and well-thought-out technologies to address issues should all be in place prior to migrating any systems to the cloud.

October 5, 2009

Integrating with Legacy Apps: Best Practices

Filed under: Data Integration, Data Migration, ETL — Olga Belokurskaya @ 6:36 am

Delivering products and services faster and managing more complex marketing programs are among the major challenges most companies face.

A proven, non-intrusive, and scalable on-demand platform allows companies to solve these challenges. However, companies must realize that to fully leverage its benefits, they have to figure out how to connect the information residing in it with 3rd-party systems, such as ERP, accounting, and CRM packages, custom applications, and databases.

Below are some best practices for solving challenges and avoiding mistakes when integrating with legacy applications:

  • Formalize Data Schemas – Custom objects, data fields, and tables created by individuals should be documented, aligned with all applications and processes within the integration environment, and visible to other users.
  • Update the Information – Information should be updated on a regular basis or, if possible, in real time. Out-of-date views are useless, so keep an eye on this.
  • Maintain the Integration – Even the most defined integration process requires maintenance. New tables may be added, data structures may change, and so on. Having no plan or budget for an ongoing integration is a mistake, which may become expensive to fix.
  • Verify and Clean Up the Data – Perform data cleansing and verification required by your business and industry. Each industry will have its own baseline, inputs/outputs, and best practices for such data quality management. However, it is critical to check the names, addresses, and e-mail details of your prospects and customers.
  • Transform Raw Data into Business Information – Business users are typically looking for useful information that can be applied across the enterprise and support business decision-making. That’s why raw data needs to be aggregated, filtered, enriched, and summarized.
  • Stick to Business Value – Don’t forget that the integration processes should bring value and align with your business processes.
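The verification and cleanup point can be sketched in a few lines; the record layout and baseline checks below are hypothetical examples, not an industry standard:

```python
import re

# Crude plausibility check for e-mail addresses (an assumption for this sketch).
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def clean_record(rec):
    """Return a cleaned copy of a contact record, or None if it fails verification."""
    name = rec.get("name", "").strip()
    email = rec.get("email", "").strip().lower()
    if not name or not EMAIL_RE.match(email):
        return None  # fails the baseline checks; route to manual review
    return {"name": name.title(), "email": email}

records = [
    {"name": "  alice smith ", "email": "Alice@Example.com"},
    {"name": "bob", "email": "not-an-email"},
]
cleaned = [r for r in (clean_record(x) for x in records) if r]
print(cleaned)  # [{'name': 'Alice Smith', 'email': 'alice@example.com'}]
```

Your own industry baseline would define what the real checks are; the structure, however, stays the same: verify, normalize, and divert the failures.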

Look through our “Five Steps to Integrate with 3rd-Party Systems and Avoid Most Common Mistakes” white paper to learn more.

September 29, 2009

Getting Ready for Data Migration: Data Quality Issues

Filed under: Data Migration, Data Quality — Olga Belokurskaya @ 1:42 am

Careful analysis of the quality of the data in your current system should be done prior to migration. Good data can save quite a bit of time (and budget) during migration. In fact, this analysis, plus the replacements set up during the migration, will help you cleanse and improve your data. Below are some points to pay attention to.

  • Required fields
    There are fields that must be filled in for records to be migrated, such as company names for accounts, customer names for contacts, and so on. The main issue here is that some of these fields are missing information.

    To cope with this issue, think about what values should be entered into the empty fields. The key factor is making the data convenient for you to use once it is migrated.

  • Data type transformations
    It is important to ensure that values entered in the fields of one system meet the data type requirements of the system the data is migrated to, so that no conflict occurs during the migration.

    One more issue is logical duplicates: the same value presented in different ways in a field. Since there may be different ways of naming the same value, you need to replace all the variants with one. If automatic replacement is not possible in your current system, you’ll need to add these replacements to the data transformation built for the project.
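Both issues above can be handled in one small preparation step; a minimal sketch, assuming hypothetical `company` and `country` fields and an agreed placeholder value:

```python
# Logical duplicates: different names for the same value, mapped to one
# canonical form (the mapping itself is an assumption for this example).
CANONICAL = {"USA": "United States", "U.S.": "United States"}

def prepare_row(row, required_default="UNKNOWN"):
    out = dict(row)
    # Required field: a record with an empty company would be rejected,
    # so enter an agreed placeholder instead of leaving the field blank.
    if not out.get("company"):
        out["company"] = required_default
    # Unify the different spellings of the same country value.
    country = out.get("country", "")
    out["country"] = CANONICAL.get(country, country)
    return out

rows = [
    {"company": "", "country": "U.S."},
    {"company": "Acme", "country": "USA"},
]
print([prepare_row(r) for r in rows])
# [{'company': 'UNKNOWN', 'country': 'United States'},
#  {'company': 'Acme', 'country': 'United States'}]
```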

Learn more about the data quality issues that may occur during data migration from the whitepaper “The Three Most Common Data Integration Problems.”

September 21, 2009

Understanding System Capabilities to Avoid Data Migration Issues

Filed under: Data Migration — Olga Belokurskaya @ 7:44 am

As your business grows, the requirements for your data storage change. At the same time, the IT market offers new, diverse software to catch up with or even get ahead of your needs. At a certain stage, you face the necessity of switching to a new system. What are the hidden problems related to data migration (DM)? How can you avoid them and get the most out of your DM initiative?

Studying the new system’s capabilities is an important step, not just in preparing requirements for your data migration/integration project, but in selecting the system in the first place. Here are some things to do to ensure your choice is right and you understand the system:

  • Make sure the system really does meet your requirements and will be able to satisfy your needs.
  • Make sure the components and functions of the system are convenient for you to work with and let you perform the actions you need to perform with your data.
  • Analyze how the new system communicates with other systems. A good import/export mechanism included in the new system is a useful feature, but it’s not always sufficient. So it is important to study the system’s API or SOAP capabilities in order to determine whether the chosen system provides a full-scale toolset for data integration, or whether additional integration methods will be needed, such as direct database access, if the system allows it. The latter will definitely make the project more labor-intensive, time-consuming, and, as a result, more costly.
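The third point above can be sketched as a quick probe of a candidate system’s export API; the endpoint, payload shape, and token below are hypothetical, not any particular system’s API:

```python
import json
import urllib.request

def can_export(base_url, token):
    """Try a small export call; return True if the API answers with records."""
    # /api/contacts?limit=1 is a made-up endpoint for this sketch.
    req = urllib.request.Request(
        f"{base_url}/api/contacts?limit=1",
        headers={"Authorization": f"Bearer {token}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            payload = json.load(resp)
            return isinstance(payload.get("records"), list)
    except (OSError, ValueError):
        # No usable API answer: plan for an alternative integration
        # path, e.g. file export or direct database access.
        return False
```

A probe like this, run against a trial account before the contract is signed, turns the “study the API” advice into a concrete pass/fail check.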

Learn more about the challenges of data migration in our white paper “The Three Most Common Problems Faced During Data Migration.”

September 15, 2009

Connectors that Simply Connect: Are They Enough for Data Integration?

Filed under: Data Integration — Olga Belokurskaya @ 12:05 pm

What are connectors for data integration tools? By minimizing or even eliminating the hand-coding required to move data into and out of different data sources, they have become a significant part of data integration solutions. However, it’s not right to think that simply having a bunch of connectors is always enough for easy and smooth data integration. Apart from providing connectivity, there may be some particular requirements for connectors. According to John Bennett, a marketing consultant and business writer whose article I read lately, they are:

  • Different levels of security and access requirements for data sources. – In other words, when integrating protected data, you need to ensure that it remains protected after the integration. This is especially relevant for data in the cloud.
  • The necessity of transformation. – Very often, data needs to be transformed somehow before being integrated into an application or another data source. This may be data format conversion or unification, etc.
  • Being part of an integration solution that supports access controls, transformation, auditability, etc. – This means ensuring that all operations over the data are complete, and the data is up to date, before integrating it.

Well, what’s all this about? It’s obvious that very often data needs a lot more than to be integrated as is. So when searching for a data integration tool or solution, make sure its connectors can cope with the operations your data may require. Otherwise, you’ll still need lots of hand-coding, which is neither pleasant nor cheap, and carries risks of mistakes and bugs leading to data integration nightmares – things you definitely wish to avoid. =)
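The transformation requirement is easy to picture as a connector-side step; here is a minimal sketch that unifies a few date formats before loading (the set of source formats is an assumption):

```python
from datetime import datetime

# Source formats we expect to encounter (hypothetical for this example).
KNOWN_FORMATS = ("%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y")

def to_iso(date_text):
    """Convert a date string in any known source format to ISO 8601."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(date_text, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue  # not this format; try the next one
    raise ValueError(f"Unrecognized date format: {date_text!r}")

print(to_iso("Oct 5, 2009"))  # 2009-10-05
print(to_iso("05/10/2009"))   # 2009-10-05
```

A connector that cannot run a step like this forces the transformation back into hand-written glue code, which is exactly the situation the article warns about.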

September 7, 2009

Why Enterprises Go Open Source for Data Integration?

Filed under: Data Integration, Open Source — Olga Belokurskaya @ 12:37 am

Some ten years ago, using open source was unlikely for big, serious companies. Now that the benefits of freely distributed software have become more evident than ever, and an abundance of such products has appeared on the market, more and more companies go open source.

Open source data integration is no exception. Freely available solutions are doubly beneficial, bringing license costs to a minimum and enabling companies to save dramatically on maintenance. Organizations rely on open source when it comes to integrating their data. But it is not only about money. Open source also has a number of advantages over traditional software, such as:

- Better performance and reliability
Open source solutions have vast communities of developers, which ensures that a product’s full functional range is tested on different platforms before release; bugs are found and fixed rapidly, and enhancements to the code are easier to make thanks to the availability of the source code.

- Multi-platform support
Typically, open source software supports numerous platforms, leaving it to the user to choose the one that fits their requirements best – something many proprietary software solutions cannot offer.

- Higher level of security
With the source code publicly available, open source software typically suffers fewer vulnerability attacks than proprietary solutions.

- Flexibility
Most open source developments allow a tremendous degree of flexibility and can be reused in a vast range of cases with little to no customization required.

- Easier deployment
Open source software tends to concentrate on the essential features instead of implementing dozens of secondary features that hardly anyone uses. Because of that, such software is usually more straightforward to use than proprietary products.

You can learn more about why enterprises go open source for data integration from the “Guide to Reducing Data Integration Costs.”

August 18, 2009

Reducing Data Integration Costs with Open Source ETL

Filed under: Data Integration, Open Source — Olga Belokurskaya @ 1:53 am

The right data integration and data quality are critical for companies wishing to achieve fast time-to-market and manage complex sales and marketing programs. However, when budgets are limited, it becomes difficult to cope with the increasing expenses of data integration. In fact, the amount an enterprise spends on Extracting, Transforming, and Loading (ETL) may reach millions of dollars, which makes executives (especially non-IT ones) clutch their heads in horror and spend sleepless nights analyzing whether their ETL data-integration technology is as beneficial as claimed.

Factors such as license costs, labor costs, and hardware costs drive data integration costs up. As projects become more complex and the amount of data increases, taking good care of data becomes more and more expensive.

Here are some best practices on how to lower data integration costs:

  • Consider using commercially supported open source tools for your integration projects.
  • Verify licenses to make sure the product you’re using is really an open source solution and the license terms suit you.
  • Remember that openness of the source code is an advantage of open source over proprietary software. You are always free to view, fix, and modify the code to make your open source tool better suit your particular requirements.

However, weigh all the pros and cons. License costs are not the only expense when implementing software. If an open source solution is difficult to learn or implement, you’d better think about whether it’s worth it.

Look through our Guide to Reducing Data Integration Costs to discover more.

June 29, 2009

Common Rules for Data Migration

Filed under: Data Migration, ETL — Olga Belokurskaya @ 1:21 am

Data migration is an inevitable evil for any enterprise. As business requirements change and demand new applications, or there is a need to remove duplications and inefficiencies, business-critical data migration can’t be avoided. Unfortunately, data migration initiatives still have a high rate of failure due to the complexity of the process. According to a CRN Australia article I found lately, there are four common data migration rules which can help drive the initiative to success.

The first rule says that data migration is primarily a business issue. The technical side is important, but only as a means to fulfill the process. IT people don’t always have the power or the necessary knowledge to deliver what is required by the business. It is the business that should make the decisions on:

  • What is the purpose of migrating the data?
  • What data should be migrated?
  • When should the data be migrated?

The second rule claims that business goals define the solution and approach selected for data migration. This rule supports the first one. It is important to bear in mind that the best technical solution is not always the best business solution. It is IT’s responsibility to make sure the chosen migration technology can support possible changes in priority and direction without restarting every time. IT should also provide for the issues that may occur during the migration.

The third rule is about setting the acceptable level of data quality. Many data migrations have been scuppered by overestimating or not understanding the quality of the source data. While enhancing data quality is a worthy goal, it is really important not to go off on a data perfection crusade. It is a trap that drives many, many projects to inflate both the cost and the time to deliver. To avoid this trap, data owners and users need to determine the level of quality they require at the start of the project, so that the technologists have an appropriate goal.

The fourth rule is about measuring data quality in order to determine the level of quality business users require. Those measures should make sense to business users, not just to technologists. This makes it possible to measure deliverables properly, perform gap analyses, and monitor and improve ongoing data quality. It also ensures that efforts are concentrated where business users see value and can quantify the benefits.
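The fourth rule can be made concrete with a couple of simple measures business users understand; a minimal sketch, assuming hypothetical `name` and `email` fields and a deliberately crude validity rule:

```python
def quality_report(rows):
    """Compute simple, business-readable data quality measures."""
    total = len(rows)
    # Completeness: share of rows where all required fields are filled in.
    complete = sum(1 for r in rows if all(r.get(f) for f in ("name", "email")))
    # Validity: share of rows whose e-mail looks plausible (crude rule).
    valid = sum(1 for r in rows if "@" in r.get("email", ""))
    return {"completeness": complete / total, "validity": valid / total}

rows = [
    {"name": "Alice", "email": "alice@example.com"},
    {"name": "", "email": "bob@example.com"},
    {"name": "Carol", "email": "carol"},
]
print(quality_report(rows))
```

Measures like these give a gap analysis its baseline: agree on target percentages with the business users at the start, then track the same numbers during and after the migration.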
