Moving legacy data from a relational database to the cloud is becoming a common problem.

Given that databases like MySQL degrade in performance as the data grows, they are often demoted to simple configuration storage or debugging tools.

If you want to collect massive amounts of data, you need a distributed cloud-based tool like Apache Hadoop.

The South China University of Technology submitted a draft to the IETF (Internet Engineering Task Force) describing a new framework for migrating relational data to the cloud.

The framework is simple and consists of four steps (a minimal sketch of the request flow follows the list):

  1. The user submits a request from a browser.
  2. The web server receives it and submits a data migration request.
  3. A migration engine receives the request and spawns a task.
  4. The task connects to the relational database and copies the data to the cloud-based storage.
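
Here is a minimal sketch of steps 2 and 3, assuming a hypothetical Flask web server and an in-process migration engine; the endpoint name, payload fields, and the `MigrationEngine` class are illustrative placeholders, not something the draft specifies.

```python
# A minimal sketch of steps 2-3: the web server receives a migration request
# and a migration engine spawns a task for it. Names and payloads are assumptions.
import threading
import uuid

from flask import Flask, jsonify, request

app = Flask(__name__)


class MigrationEngine:
    """Receives migration requests and spawns one task per request (step 3)."""

    def spawn_task(self, job_id: str, spec: dict) -> None:
        # In a real system this would submit an ETL job (see the Spark sketch
        # below); here a background thread stands in for the task.
        thread = threading.Thread(target=self._run, args=(job_id, spec), daemon=True)
        thread.start()

    def _run(self, job_id: str, spec: dict) -> None:
        print(f"[{job_id}] migrating table {spec.get('table')} "
              f"({spec.get('mode', 'full')} import)")


engine = MigrationEngine()


@app.route("/migrations", methods=["POST"])
def submit_migration():
    # Step 2: the web server receives the user's request and submits a
    # data migration request to the engine.
    spec = request.get_json(force=True)  # e.g. {"table": "orders", "mode": "partial"}
    job_id = str(uuid.uuid4())
    engine.spawn_task(job_id, spec)
    return jsonify({"job_id": job_id, "status": "submitted"}), 202
```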

It relies heavily on the cloud database's features to support full imports, partial imports, and real-time synchronization.

The migration engine is a series of Spark ETL jobs (written in Python, Scala, or Java) that connect to the relational database, extract the data, and save it in the target database.
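
Here is a minimal PySpark sketch of what one of those jobs could look like, assuming a MySQL source reachable over JDBC and an S3 bucket as the cloud target; the connection details, table name, and partial-import predicate are placeholders, not details from the draft.

```python
# A minimal Spark ETL sketch: extract from MySQL over JDBC, transform,
# and load into cloud storage. URLs, credentials, and the predicate are assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("mysql-to-cloud-migration")
    .getOrCreate()
)

# Extract: read the source table over JDBC. For a partial import, push a
# predicate down to MySQL instead of reading the whole table.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://legacy-db:3306/shop")
    .option("dbtable", "(SELECT * FROM orders WHERE created_at >= '2023-01-01') AS t")
    .option("user", "etl_user")
    .option("password", "etl_password")
    .load()
)

# Transform: whatever cleanup the target schema needs (a trivial example here).
orders = orders.dropDuplicates(["order_id"])

# Load: write to cloud storage, where the target database can ingest it.
orders.write.mode("overwrite").parquet("s3a://my-data-lake/orders/")

spark.stop()
```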

The draft doesn’t provide many details, but it does give you a clear idea of the process you need to follow. If you want to see a real-world implementation of this, check out AWS Glue, Amazon’s managed ETL service.


Leo Celis