This is the most common and unpleasant pitfall a customer will encounter. It appears because the human mind is not prepared to reason about exponential growth.
The team doing the migration makes an initial estimate based on the number of entities to be migrated. They may even account for a risk factor. But migration complexity grows exponentially with the number of entities, because of relations, constraints, circular dependencies, unique fields, and so on, so the team will need to put in more effort for each entity added to the migration.
Infosistema DMM’s cost grows linearly with complexity, and it can be even cheaper if you take into account that you can further reduce complexity by breaking the migration into smaller logical chunks.
Time to Market
OutSystems customers are used to agile development and quick prototyping. Why is data migration so different?
How long do you think it takes to produce a script for migrating a single table? Add the time to test and execute it, and keep that number in mind. What if the table has several foreign keys: will they be migrated as well? If not, what will happen to the destination data? Now let’s say some of the fields are mandatory… or unique. What if some of the FKs are also mandatory and unique, or if some of the FKs point to OutSystems entities such as Users, Roles or eSpaces?
All this time for a single table, from understanding it to building the script to testing it. However, two tables won’t take double the time. They must be considered together, because they may have dependencies between them; they may even have circular dependencies! That’s when exponential growth creeps in and consumes all the time you’ve allocated to the migration team.
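To make the dependency problem concrete, here is a minimal sketch (purely illustrative, not part of DMM; the function and table names are hypothetical) of ordering tables for migration so that every foreign-key target is loaded first. The moment a cycle appears, no valid order exists and a plain script-per-table approach breaks down:

```python
from collections import defaultdict, deque

def migration_order(tables, fks):
    """Return an order in which tables can be migrated so that every
    foreign-key parent is loaded before the table that references it.
    Raises ValueError when circular dependencies make that impossible.

    tables: list of table names
    fks:    list of (child_table, parent_table) foreign-key pairs
    """
    tables = list(tables)
    deps = defaultdict(set)        # table -> tables it depends on
    dependents = defaultdict(set)  # table -> tables that depend on it
    for child, parent in fks:
        if child != parent:        # self-references need separate handling
            deps[child].add(parent)
            dependents[parent].add(child)

    # Start with the tables that reference nothing, then peel layer by layer.
    ready = deque(t for t in tables if not deps[t])
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for child in dependents[t]:
            deps[child].discard(t)
            if not deps[child]:
                ready.append(child)

    if len(order) != len(tables):
        cyclic = sorted(set(tables) - set(order))
        raise ValueError(f"circular dependency involving: {cyclic}")
    return order
```

With three hypothetical tables where Order references Customer and OrderLine references Order, the only valid order is Customer, Order, OrderLine; add a single FK from Customer back to Order and the function raises instead of returning an order.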
Furthermore, development won’t stop, so the migration must be implemented in parallel. How will we handle changes to entities whose scripts were created weeks ago? How will the inclusion of a new field be handled in the migration scripts?
Infosistema DMM’s time to deploy is measured on a much shorter time scale. You install it from the Forge, set up the database connections and a new migration configuration, and within minutes you’re watching your data peacefully flowing between environments.
Data migration should not be taken lightly. A problem during script execution can be disastrous, up to and including breaking your entire destination environment.
Your migration script changed some flag on some field and now your application won’t start, or after the migration the environment started behaving erratically… everyone who works in data migration has gone through these kinds of problems, which only appear during live execution. The team will start saying things like “It’s Murphy’s Law” or “I don’t get it, it was working fine on my machine”. Assuming you have a backup, you’ll have to restore it, with all the hassle that involves.
The probability of errors increases not only because of the exponential growth, but also because of the time it takes to deliver the migration scripts. During this time the development team kept creating entities that tie in with the entities being migrated, removed some fields, changed their data type or size, and may even have moved entities from one eSpace to another… however, they generally “forget” to tell the migration team about those changes, causing scripts to fail and starting a non-productive blame game.
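One way to catch this kind of drift before a live run, sketched here purely as an illustration (this is not DMM’s implementation; in practice the column metadata would be read from something like INFORMATION_SCHEMA.COLUMNS), is to diff the schema the script was written against with the schema as it stands today:

```python
def schema_drift(expected, live):
    """Compare the columns a migration script assumes (name -> data type)
    with the columns of the live table, and report every difference."""
    issues = []
    for col, dtype in expected.items():
        if col not in live:
            issues.append(f"column removed: {col}")
        elif live[col] != dtype:
            issues.append(f"type changed: {col} {dtype} -> {live[col]}")
    for col in live:
        if col not in expected:
            issues.append(f"column added: {col}")
    return issues
```

Running a check like this before every execution, and aborting when the list is non-empty, turns a silent mid-migration failure into an early, explicit warning that the scripts need updating.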
Infosistema DMM is a product. While we don’t claim it is error free, we assure continuous improvement and error correction. So, in the unlikely event that your data structures have some strange corner case not already handled by DMM, the product will grow to support that case as well. This way all customers benefit from it.
All of the above only takes into account a simple migration of data between environments. If you also need data anonymization, data filtering, unmanned execution and so on, you’re adding layers of complexity to an already very complex scenario.
DMM will handle all these and much more for you.
Making Data Migration Easier
A data migration component between different environments. It allows the selection of multiple entities for the migration process through a dynamic selection screen.
The migration process is configured through a direct connection to the databases.
Resolution of the data inconsistency issues that usually occur in direct migrations (related tables and foreign keys, differences in the physical names of tables between databases).