Since the widespread adoption of computers in business over fifty years ago, companies have needed to integrate data from different systems for reporting, analytics, and application development. As software systems advanced and proliferated, companies found themselves supporting a wide range of dissimilar hardware, software, and applications. Faced with demands from management, users, and regulatory authorities to access and report on this disparate data in an integrated way, IT departments often took the expedient approach of hand-writing integration code for each request. When a particular report or application proved popular, requests to add more data sources typically followed, requiring still more time-consuming hand coding and placing an excessive drain on IT resources. Overwhelmed by these demands, IT departments often had to delay their responses, creating a backlog and an ever more frustrated stakeholder base.
Integrating businesses during an acquisition or merger is hard, so hard, in fact, that many attempts fail. While there are many reasons for this (cultural, financial, political), many acquisitions fall short of either party's expectations because the companies fail to properly plan for and execute the integration of their dissimilar technologies. A McKinsey study found that while "more than half the synergies in a merger are strongly related to IT…fewer than 40% of companies realize these benefits due to a lack of planning during the due diligence phase of the process." One approach that can substantially help ensure the planned IT benefits are realized is the introduction and use of Data Virtualization.