Data Migration: Moving the Actual Data

There are two key elements to the exchange of any information between databases. One is the data structure used for the exchange, and the other is the transformation required to reach that structure.

Some of these are driven by the source database, others by the target database. For example, moving data from an RDBMS to a NoSQL database generally requires constructing documents from what might be tabular, or joined-tabular, data. This may involve both join operations on the relational side and formatting on the NoSQL side. The eventual aim is to ensure that the data reaches the target database in the right format, without corruption, and in a form that is the most appropriate or efficient for its intended use. That ultimately depends on what you are using the transferred data for.
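As a minimal sketch of that kind of transformation, the JavaScript below folds the rows produced by a relational JOIN into nested documents ready for a document store. The recipe and ingredient column names are invented for illustration, not part of any real schema.

// Hypothetical sketch: fold joined recipe/ingredient rows into nested documents.
// Assumes each row already carries the columns named below.
function rowsToDocuments(joinedRows) {
    var docs = {};
    joinedRows.forEach(function (row) {
        // Create the parent document the first time we see this recipe ID
        if (!docs[row.recipeid]) {
            docs[row.recipeid] = { id: row.recipeid, title: row.title, ingredients: [] };
        }
        // Nest the joined ingredient row inside the parent document
        docs[row.recipeid].ingredients.push({ ingredient: row.ingredient, meastext: row.meastext });
    });
    return Object.keys(docs).map(function (id) { return docs[id]; });
}

// Two joined rows for the same recipe become one document with two ingredients
var docs = rowsToDocuments([
    { recipeid: 347, title: "Mozzarella and olive pizza", ingredient: "mozzarella", meastext: "150 g" },
    { recipeid: 347, title: "Mozzarella and olive pizza", ingredient: "olives", meastext: "50 g" }
]);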

The other aspect is the difference between source and target data types – that is, the format and construction of the individual fields or components of the data. Document databases and Big Data stores tend not to care about data types, whereas RDBMSs cannot live without them.

Some important considerations for how we use this information:

  • The data structure must be retained (i.e., we must be able to identify the fields, columns or other elements of the data).
  • The data format and integrity must be maintained (the data should not be corrupted, shortened or reduced in any way).
  • The data must be able to be efficiently transferred (sending a 1GB file that only contains 15KB of valid information is not efficient).

In this chapter we’ll examine some of the key differences and problems with transferring data that transcend the mechanics of the process, and how to deal with them effectively. Although the three primary aspects (basic formatting, structural comparisons and data type limitations) are handled separately here, there are few occasions when you can truly treat these elements individually. We’ll see some examples of this as we go through.

Basic Interchange Formats

When you start to move data efficiently between the different database types that exist, you will find a number of different possible interchange formats, and the primary issue with all of them is exactly how efficiently, and more importantly how accurately, they enable the information to be exchanged.

First, the data will need to have been encoded with some kind of encapsulation format. This format is what describes the individual structure, and it is entirely dependent on the underlying data that is being exchanged. For example, if the data has a very rigid structure then that will normally dictate the format of the information. Row-based data, for example, can normally be encoded using CSV or even a fixed-width record format.

The second aspect is the encoding and formatting of the information itself. Exchanging data using CSV is fine, providing that you can correctly identify the format of the text itself. Should it be encoded in UTF-8? Or UTF-32? Is plain ASCII better? What if it contains a mix of these encodings? Should UTF-8 be used as the standard, with the actual encoding handled by the target database when the data is imported?
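As a small sketch of how you might guard against this, assuming a Node.js environment and a hypothetical export file name, the snippet below reads the raw bytes and decodes them with an explicit encoding, failing loudly on invalid sequences rather than silently corrupting the text:

// Read the export as raw bytes, with no encoding assumed
var fs = require('fs');
var raw = fs.readFileSync('recipes-export.csv');

// Attempt a strict UTF-8 decode; { fatal: true } throws on invalid sequences
var text;
try {
    text = new TextDecoder('utf-8', { fatal: true }).decode(raw);
} catch (e) {
    // Fall back to a legacy single-byte interpretation, or reject the file outright
    text = raw.toString('latin1');
}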

In fact, many of the principles for translating information between different databases also rely on basic best practice for how you design and structure the data in the target system to begin with. Normalisation of the data structure can be applied to almost any database; even those that have a loose or undefined structure have conventions. It’s unlikely, for example, that you will call a recipe title field ‘title’ in one record and ‘recipename’ in another record of the same database, because your application would be a mess.

Of course, there are times when you may be merging, combining or otherwise consolidating data from a wide variety of different documents, records or blocks of information. There it is ultimately up to you to pick a standard structure so that the data is useful to you once it has been moved into the target system.

As a rough guide for the types of operation and translation that might take place, the following table highlights the kind of structural transformation and changes you might need to make when moving between some of the most common database environments.

Table 2-1: Structural Mappings between database environments

Source \ Target                  | RDBMS                | Columnar Store       | Document Database    | Freetext/unstructured data store
---------------------------------+----------------------+----------------------+----------------------+---------------------------------
RDBMS                            | Vendor specific only | Vendor specific only | Field mappings only  | Application specific
Columnar Store                   | Vendor specific only | Vendor specific only | Field mappings only  | Application specific
Document Database                | Field mappings only  | Field mappings only  | Vendor specific only | Application specific
Freetext/unstructured data store | Application specific | Application specific | Application specific | Application specific

Notes:

  • Vendor specific only changes are those that are directly related to the capabilities of the source or target database. MySQL, for example, supports the ENUM and SET compound field types, whereas Oracle, PostgreSQL and SQL Server do not. Moving from one to the other may require changes.
  • Field mappings only refers to how you map the source fields or columns to the target fields/columns. Depending on the target this may include compound and/or JOIN based translation. For example, when moving from a document database to an RDBMS you might convert a compound field into a single field, or a lookup table. When translating from an RDBMS to a document store, the data might be combined using a JOIN into a single target field.
  • Application specific changes are those that will entirely depend on how you use the information. Translating document data into freetext databases is unlikely to require any changes, but converting freetext information into an RDBMS format is going to require some significant identification and translation.

Let’s dig into some more of the specific challenges.

Row-Based Data

For row-based data, the information can generally be represented and formatted in one of the regularly used and displayed formats, such as Character Separated Values (i.e. CSV), or in a fixed-width format. Row-based data (which includes the column-based data used in big data stores) is probably one of the easiest formats of data to exchange. In nearly all cases the list of columns is fixed, and because of that the format of the data is well known.

Character Separated Values (not Comma-Separated Values) is one of the oldest methods of exchanging fixed-format data like this; it was often the only available method for exchanging information in a reliable fashion. Historically, most tabulated data like this tended to be financial information, and so the content and format of the information was relatively simple. As such, the most common format was to use carriage returns (or the operating system equivalent, which could be newlines or carriage-return and newline characters) to separate the records, while the individual fields in each row of data were separated by a comma (hence Comma-Separated Values as the more common reading of CSV).

For example:

1085,Creamy egg and leek special,,4,1,0,0,0.0,0.0,0
87,Chakchouka,A traditional Arabian and North African dish and often accompanied with slices of cooked meat                      ,4,1,0,0,0.0,0.0,0
347,Mozzarella and olive pizza,A very simple pizza base made without yeast topped with traditional Italian ingredients. Look out for a low fat type of Mozzarella or Cheddar cheese if a low fat diet is being followed.,4,1,0,0,0.0,0.0,0
720,Savoury pancakes,Basic pancake recipe. Allow 30 min standing time for batter.,8,1,0,0,0.0,0.0,0
477,Seafood filling for pancakes,,8,1,0,0,0.0,0.0,0

The problem with commas and carriage-return characters is that, as computers got more complex, and the data they stored got equally more complex, how do you distinguish between a comma in some text and a comma separating a field? What if you transfer a text string that contains a newline or carriage return? You don’t want that interpreted as the end of the record if it happens to be part of the field. The initial solution is to use some kind of further delimiter. For example, using double quotes:

"1085","Creamy egg and leek special","","4","1","0","0","0.0","0.0","0"
"87","Chakchouka","A traditional Arabian and North African dish and often accompanied with slices of cooked meat                      ","4","1","0","0","0.0","0.0","0"
"347","Mozzarella and olive pizza","A very simple pizza base made without yeast topped with traditional Italian ingredients. Look out for a low fat type of Mozzarella or Cheddar cheese if a low fat diet is being followed.","4","1","0","0","0.0","0.0","0"
"720","Savoury pancakes","Basic pancake recipe. Allow 30 min standing time for batter.","8","1","0","0","0.0","0.0","0"
"477","Seafood filling for pancakes","","8","1","0","0","0.0","0.0","0"

This doesn’t really fix the problem; it just diverts your attention long enough to realize that you now have to ask what happens if one of the delimiting characters needs to be used in the text itself. We could escape it, by prefixing it with a backslash:

"700","Garlic mushroom kebabs","The longer you leave these mushrooms to marinate, the better they will taste.\nGood for barbecue.","8","1","0","0","0.0","0.0","0"

But now we’re getting complex, both to read and to write the information, and the level of complexity is increasing to the point of introducing further possible ways of corrupting the data as it gets transferred.
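One common way of keeping this manageable is the convention used by RFC 4180 style CSV: quote every field and double any quote that appears inside it, rather than backslash-escaping. A minimal sketch:

// Quote a field RFC 4180 style: wrap it in double quotes and double any embedded
// quotes; commas and newlines inside the quotes are then harmless.
function csvField(value) {
    return '"' + String(value).replace(/"/g, '""') + '"';
}

function csvRow(fields) {
    return fields.map(csvField).join(',');
}

console.log(csvRow([700, 'Garlic mushroom kebabs',
    'The longer you leave these mushrooms to marinate, the better they will taste.\nGood for barbecue.']));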

The alternative is to use a different delimiter that is unlikely to be used within the text in any form. Hadoop in fact follows this model, using the byte value 0x0A (newline) to delimit records and 0x01 to delimit fields. As control characters, these are unlikely to appear in what is normally human-readable text. Of course, once you start transferring binary data, you need to find another method, such as hex- or Base64-encoding the binary data, as in this Base64 example:

"700","Garlic mushroom kebabs",VGhlIGxvbmdlciB5b3UgbGVhdmUgdGhlc2UgbXVzaHJvb21zIHRvIG1hcmluYXRlLCB0aGUgYmV0dGVyIHRoZXkgd2lsbCB0YXN0ZS5cbkdvb2QgZm9yIGJhcmJlY3VlLgo=,"8","1","0","0","0.0","0.0","0"

The other alternative is to use a fixed-width format. This has the advantage that, providing you know the exact widths of the individual fields, delimiter and encoding errors are eliminated because individual characters are no longer significant to the format of the information.

The downside of the fixed-width format is that the size of the fields, records, and ultimately files, can become prohibitively large if you are exchanging potentially large or unlimited fields. For example, BLOB types in most databases can be MB or GB in size; expressing that in fixed width format is obviously not a practical solution.
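As a sketch of the trade-off, encoding a fixed-width row is trivial as long as every value fits within its agreed width; the widths below are invented for illustration:

// Pad each field to an agreed width; the reader slices on the same offsets,
// so no delimiter characters are needed at all.
var widths = { id: 8, title: 40, servings: 4 };   // assumed column widths

function fixedWidthRow(row) {
    return String(row.id).padEnd(widths.id) +
           String(row.title).padEnd(widths.title) +
           String(row.servings).padEnd(widths.servings);
}

console.log(fixedWidthRow({ id: 720, title: 'Savoury pancakes', servings: 8 }));

A multi-megabyte description column would force every row to reserve that full width, which is exactly why BLOB-heavy data makes this format impractical.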

Record-Based Data

Record-based data is information that may not necessarily be identifiable or resolvable into an easy-to-use, row-specific format or structure such as that used in CSV exchange. Complex table data, or information that is made up of a combination of fixed fields and BLOB fields, for example, is unlikely to be reliably, or efficiently, transferred. The problems of character and structural formats will ultimately make using that information difficult or computationally expensive when actively sharing the information – for example by making the file sizes too big to be practically exchanged.

A very typical example of record-based information is data from a document-based database, a free-text database, or a source where the majority of the content is really an attachment rather than an inline field of database data. Think of an email message: the message body, the address, the from and subject fields are all examples of easily identifiable and classifiable database information. But what do you do with an attachment that might have been sent along with the record?

How about documents generally? Metadata about those documents could be represented in a typical RDBMS row, but not the document itself. The combination of the two – the metadata and the actual document – makes up a ‘record’ that you may want to share, in whole or in part, with another database.

When transferring record-based data, think first about what constitutes the record and how that can be represented in the different environments. Then move on to understand how the fields and individual data points can be translated into a format suitable for the target database. With record-based data, it may be that you have a massive volume of data and compound records that, when moved from a document store to a transactional RDBMS, require 20, 30 or 200 rows of data to be represented properly; this is not a problem, providing you have a method for identifying all of the row data that refers back to the original record.

In general with a record based database the easiest approach is to actually translate the data at the source into something that can be imported directly into the target format. For example, from a record-based environment you can generate four different blocks of CSV import data, one for each table and portion of the source information.

The opposite is actually also true; when converting data from a column or table store into a record based format, it normally makes sense to do this on the basis of translating the key data into the new structure before doing the transfer. As a general rule, either use the native target format if you can, or make use of representable formats such as JSON to do the actual encapsulation of the information. Many record or document-based environments already use JSON, or a format similar to this.

{
    "title": "Fried chilli potatoes",
    "preptime": "5"
    "servings": "4",
    "totaltime": "10",
    "subtitle": "A new way with chips.",
    "cooktime": "5",
    "ingredients": [
        {
            "ingredtext": "chilli powder",
            "ingredient": "chilli powder",
            "meastext": "3-6 tsp"
        },
        {
            "ingredtext": "potatoes, peeled and cut into wedges",
            "ingredient": "potatoes",
            "meastext": "900 g"
        },
        {
            "ingredtext": "vegetable oil for deep frying",
            "ingredient": "vegetable oil for deep frying",
            "meastext": ""
        }
    ]
}

One final consideration is those situations where there is no structure – or the structure is so ephemeral or complex that there is no way to map information. You cannot, indeed should not, either impose a structure just for the sake of one, or inversely rely on sub-standard features in the target database just because it makes your life easier.

Some examples of this include trying to extract key fields or information from unstructured data that are complicated either to identify, or to map back to the original. Extracting a telephone number from a massive text string just because you can does not mean that the telephone number you have extracted is really the one that should be associated with this record in your database. Equally, relying on full-text searching engines within RDBMS environments can be problematic.

Is that a Column or a Field?

Not all fields and columns are created equal, and many of these difficulties come from the complexities or abilities of the database environment being used. Specifically, at which point do you treat a fragment of the data that you are dealing with as a column, or a field, or even just a uniquely identifiable piece of information?

As you move between different environments, the differences become more pronounced or more difficult to identify. True columnar stores, for example, tend to have a significantly reduced set of supported datatypes, and that often limits your ability to store certain values and information. For example, compound values, or specialist types such as XML, GeoData and spatial points, may be storable in one database but not another.

Consider this record, taken from a CouchDB (JSON document) database:

{
   "title" : "Chicken Curry",
   "Keywords" : [
      "Indian",
      "Chicken",
      "Okra",
      "Spicy"
   ],
   "id" : "8023754"
}

Now, is the ‘Keywords’ compound object in the document a field, or is it a column? In MySQL we could translate this into a SET datatype, a special datatype, providing we knew what all the possible values for that column are. In Oracle, a field that has multiple possible values like this would normally either be split into separate columns as bit or boolean values, or be converted to a lookup table, as in the diagram below.

[Diagram: the Keywords compound field converted into a separate lookup table]

Depending on your use case, within a strict columnar environment such as Cassandra or HP Vertica you might actually consider going the other way and repeating the information with the keyword data in individual rows, like this:

dbadmin=> select * from recipes_kwbytext;
   id    |        title         |     kw
---------+----------------------+------------
 8023754 | Chicken Curry        | Indian
 8023754 | Chicken Curry        | Chicken
 8023754 | Chicken Curry        | Okra
 8023754 | Chicken Curry        | Spicy
(4 rows)

With a column store this can be more efficient if what you are looking for is patterns in the data, because repeated data like this is easy to extract and identify. In this case, what we’ve done is convert something that is a compound field in a document store into multiple rows with the same ID in a column store. This solution can also be used in environments where there are no JOIN operations, or a JOIN is expensive, but where the information is still required at each level. Good examples here are many of the document stores and structureless environments such as Hadoop.
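As a sketch of that expansion, the snippet below takes the CouchDB-style document shown earlier and emits one (id, title, kw) row per keyword, matching the layout of the Vertica output above:

// Expand a compound keyword field into one flat row per keyword, suitable for
// loading into a column store such as Vertica or Cassandra.
function explodeKeywords(doc) {
    return doc.Keywords.map(function (kw) {
        return { id: doc.id, title: doc.title, kw: kw };
    });
}

var rows = explodeKeywords({
    id: "8023754",
    title: "Chicken Curry",
    Keywords: ["Indian", "Chicken", "Okra", "Spicy"]
});
// rows => four objects, one per keyword, all sharing the same id and title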

Can you Bypass Datatypes?

Another temptation when translating data between very different database formats is simply to ignore the formatting, and especially the very strict datatypes, that might normally define the data being stored.

This is particularly a problem within those environments where there may be a very limited set of datatypes to work with, and where behaviour can be application specific. For example, the Hive database environment within Hadoop is really little more than a thin veneer over a text format used to store the actual data. When you define a table within Hive and then select the rows from the table, Hive parses each row and uses that to display the value in the corresponding format.

This can cause problems for certain data, for example numbers that are too big, or dates that don’t match the very limited set of date formats supported by the Hive parser. In the long term, this causes corruption of the data that you have transferred.

For this reason, some people choose to create tables within Hive that use the Text datatype to hold the information, rather than the true underlying Integer or Floating Point type, as it ensures the raw value, not the interpreted value, will be used.

The same process can be used when moving data; extract the ‘raw’ value rather than hope the source or target database will interpret, store and display the information in the right format.

If you are only sharing or displaying the information in the new target database then there is probably no reason to worry. If you start processing or actively using the data, this is where corruption can occur if you are not storing the information correctly. For example, if an integer is stored and then incremented, you want 10,000 to become 10,001, not 100001.
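The difference is easy to demonstrate: if the ‘integer’ has been carried over as text, arithmetic silently becomes string concatenation.

var asNumber = 10000;
var asString = "10000";          // what you get if the datatype was bypassed

console.log(asNumber + 1);       // 10001, the increment you wanted
console.log(asString + 1);       // "100001", concatenation and silent corruption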

The bottom line, you can bypass the datatype, but probably shouldn’t if you hope to use the information in the format in which you’ve decided to store it. If you have a datatype, and can identify it from the source material, then use it if the target environment can handle and interpret it correctly. See the notes later in this chapter on limitations in different environments.

Optimization and Performance

Irrespective of the database environment and the reason for moving the data, the end goal should always be to move data into the target in a format that will be efficient to use at the target end.

In many document or unstructured systems, or those with very flexible storage mechanisms such as Hadoop, the performance will often be predicated not on the structure of the information, but on what information is closest to you, or what can be pre-mapped or organized through a MapReduce or index generation exercise.

Conversely, RDBMSs require highly structured and organized data structures, both with and without indexing, to provide the best performance. Columnar stores are often much more efficient if you can logically group or sort information together. Some will handle this automatically for you; others are more efficient if you can pre-determine the distribution of the data on which you are most likely to sort and query. That might mean that when you transfer the data, you sort the generated file by that column or columns before loading. In some extreme examples it may be that you load the data in an unordered format and then move it again into a new table with the right column structure.
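A sketch of that pre-sorting step, assuming the rows have already been extracted and the target store is organised on a customerid column (an invented name):

// Sort extracted rows on the column(s) the target store is organised by
// before writing the bulk-load file, so the load lands in a useful order.
function toLoadFile(rows) {
    rows.sort(function (a, b) {
        return a.customerid - b.customerid;    // assumed numeric sort key
    });
    return rows.map(function (r) {
        return [r.customerid, r.total, r.saledate].join(',');
    }).join('\n');
}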

Don’t be afraid of making the wrong decision, because you can often sort this structure out during a secondary or tertiary stage, but equally don’t ignore it. Having to parse or process large bodies of data a second or third time will be impractical if you are sharing or replicating data compared to single, isolated, dumps.

Ensure Two-way Validity

Without good reason, you should always try to avoid making any kind of translation of format that cannot be reversed, undone, or translated back into its original format, even if that might make the process a little more complicated. Remember that data is often a living organism when being actively used and employed. Doing too much to format, combine, extract or otherwise manipulate the information can therefore make it difficult to use again elsewhere.

Note that this is not about normalization. Normalization in typical database parlance means finding the right, fixed, database type for the field data, making it the most efficient choice, and understanding the limits and structure of the data so that you can decide whether a field should be 10 bytes or 12 bytes long. Doing this normally results in identifying the data structure, lookup tables, relations and joins so that you have the right structure. In this context, normalization is really about making the data look like a typical table structure in an RDBMS; normalization for document databases is entirely different. Normalisation for data interoperability is another level still, and we’ve already seen a number of different examples of that.

Instead, think about the quality of the data and how it should be used, while keeping in mind that the structure required for efficiency in an RDBMS may be completely different to the efficient storage of the same information in a document DB, or when resolved down to a text fragment in an unstructured data store.

To return to a well-trodden example, in the section ‘Is that a Column or a Field?’ we looked at the translation of compound values into single or multiple fields. Using a comma to separate the potential values means that we could split the value back out. If the field has been correctly translated, either to boolean columns or to a linked table, it is easier to translate back again into a whole variety of formats.

When representing a compound type, think about how you would reverse the structure so that it could be used the other way round. For example, if you decide to dump the information out to another table or column structure, make sure that you add identifiers so that you can track the record it came from (which you’ll probably need anyway), and can reformat it back into that format. If you’ve split it out into multiple rows in a columnar store, make sure you know how to combine it back and deduplicate the information again if you need to send it the other way.
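Going the other way is equally mechanical, provided the identifier survived the split; this sketch regroups the flat keyword rows from the earlier example and deduplicates them on the way back in:

// Rebuild compound documents from flat (id, title, kw) rows, removing any
// duplicate keywords picked up along the way.
function implodeKeywords(rows) {
    var docs = {};
    rows.forEach(function (row) {
        if (!docs[row.id]) {
            docs[row.id] = { id: row.id, title: row.title, Keywords: [] };
        }
        if (docs[row.id].Keywords.indexOf(row.kw) === -1) {   // dedupe
            docs[row.id].Keywords.push(row.kw);
        }
    });
    return Object.keys(docs).map(function (id) { return docs[id]; });
}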

The compound types are the most complex single field type here because there are so many opportunities for you to mess up the translation, but the same is also true for basic structural information, and even more so if you decide that you only want to transfer a smaller number of fields of data from one database to another. Either transfer everything, or transfer what you need and include information (like a unique identifier) so that you can associate it back with the data you extracted it from. Once you’ve lost the context of the information, it can be impossible to get it back and the result is a useless dataset.

Database Metadata

We’ve concentrated very heavily on the actual data you are storing and want to work with, but what about the metadata? For example:

  • Sequences and current auto-increment values? Whether you transfer or record these is going to depend on exactly how you want to use the information when it reaches your destination (or comes back).
  • How about the definition of the structure that you are using? Do you want to be able to share and use that? What happens when the structure changes. Do you want to track and identify those changes?
  • When doing a one-time export of information from one database to another you can be sure about the structure and what you expect to get from it. But what happens when you repeat the same export multiple times? Or when replicating?
  • If you are sharing data between two different systems and integrating them, knowing the sequence number may be irrelevant unless you can synchronize the generation of the number so that it can be used by the multiple databases in an effective manner. Perhaps your databases could use a better unique identification method than a monotonically increasing sequence number, such as UUIDs or a central identifier registry? (See the sketch below.)
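As a sketch of that last point, most modern environments can generate a collision-safe UUID at the moment a record is created, so no two databases ever need to coordinate a counter; this example uses Node’s built-in crypto module:

var crypto = require('crypto');

// Assign a UUID when the record is first created; the RDBMS and the document
// store can then both refer to the same record without sharing a sequence.
var record = {
    id: crypto.randomUUID(),     // e.g. "6f1c0e2a-...", unique without coordination
    title: 'Chicken Curry'
};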

How to address these different problems will be covered in later chapters, but it’s important to think about them here as they have a knock-on effect on other areas. For example, when moving unstructured or document-based databases into multiple separate tables, you need to identify and tie that information together; this is where a UUID is important, and it therefore becomes a critical part of the data structure that you exchange.

Data Migration: Methods

Throughout the series we will examine four distinct methods for moving and sharing information stored in databases, and each have their specific meanings, techniques and tricks to get the best out of the process. The four methods are:

  • Moving – this refers to the process of moving data to another database for a simple export or data exchange.
  • Migrating – this refers to exchanging the data to a different database to take advantage of other database features.
  • Sharing – this refers to the exchange of data where the full data set needs to be used alongside the existing database, such as analytics.
  • Integrating – this refers to the exchange of data where the data may be combined across multiple databases within the application.

In the next few posts, we’ll start to look at the core structural and data format differences that will affect any movement of information. The series will be divided into specific sections that look at the challenges for specific database types and data exchange operations, grouped around the four main types of data exchange: moving, migrating, sharing and integrating. Despite appearances, these four types are not the same. Each has different traps and considerations before you can make them work.

Moving Data

There are times when you simply want a copy of a selection or subset of the data so that it can be used elsewhere. A classic example is exporting information from a database for the purposes of a mail merge: the address data is extracted and moved into a format that can be used by the target software (CSV) for this single purpose.

In this situation, the movement of the data is generally temporary; that is, we’re moving the data from the actively used source database into a target database so that we can use it for a specific purpose. Once that purpose is over, the data is deleted or ignored. The source database never stops being the canonical source of the data, and we don’t care about keeping the moved data up to date with the source material; we can just perform another export of the data to achieve that.

Migrating Data

Data migration is where the information needs to be moved, wholesale, into another database system, perhaps because you have decided that you no longer want the application to use MySQL as its data store, but MongoDB. Over the life of many different applications the decision is made to move to a different underlying database system, either to take advantage of its scalability or its functionality.

As the internet explosion hit, many applications were migrated entirely from their MySQL database to various NoSQL solutions in the belief that this would solve their scalability problems. They weren’t always successful, but the principle is sound. Of course, migrating the data to NoSQL is not without some serious consideration of how the data is moved to the new target.

For the migration to work, the structure, format and, most of all, the usability of the data in its new database are the primary considerations. For all of the promise of improved performance and scalability to be realised, the data must be migrated properly. Simply copying the data over and hoping the database will take care of it is not enough.

Chief among the considerations is the requirement that the migrated data has to be updateable in the easiest fashion, and has to suit the application needs and requirements in this respect. Unlike moving data, where the original source of the information is not removed or switched off, in a migration we have to consider what happens to keep the data updated.

Sharing Data

Occasionally you have information in your core database that you need somewhere else for a very specific purpose. When sharing data, the canonical version of the data does not change. For example, you may have a MySQL database and you want to share the data with a key/value store such as Memcached in order to improve the performance of the database. Although we might place the data into Memcached for this purpose, updates to the information are always stored within MySQL. We literally only share the data with Memcached for as long as is needed for the specific situation.

Another good and recent example is the movement of data from an existing datastore, such as Oracle or MySQL into an analytics platform such as Vertica or more recently into Hadoop. The data is moved into this platform only for the purposes of more detailed analysis than would be possible on a single transactional database, or where the data from multiple database shards is being concentrated into a larger single data store for the purposes of analysis.

Again, the data is only shared with the analytics engine – the transactional data store that holds the active records is not changed or altered, and the data is never physically removed from the source database. However, you may want to keep the information synchronised; that is, when a change happens in the source database, it must be updated in the target database.

Sharing raises some interesting problems when talking about the movement of data; mostly these are related to the ‘liveness’ of the information being transferred. The physical act of sharing the information is not complex, but doing so in a way that ensures the information is up to date with the live database presents some issues, particularly if the process is, as with the Memcached example, designed to speed up access to the information.

Integrating Data

There are times when you want to use multiple formats and databases of data together, whether that is within the same application, different elements of the same application, or linked or connected applications that are able to make better use of specific areas.

For example, you may store your core customer database in MySQL, but want to store a cached version of information for when the customer visits their account page in your web application within a NoSQL or even key/value store so that you have quicker, ready access to the information.

Integration also occurs when you are mixing and matching data from different sources for different purposes. For example, actually natively keeping your customer data in a NoSQL database, while storing transactional information, such as sales, in a transactional RDBMS.

Finally, building on the sharing data example above, a common deployment is to share transactional data with a big data store, perform analytics, and then combine the output to be used back in the web application. Online stores use this method to concentrate sales information from sharded transactional stores, calculate the most popular items or reviews and ratings, and then provide that information back to the web application to be displayed with the data.

The process of integration is different to other systems; you need to consider how the two systems will work together, what information will be shared, and how you relate the information on the target system to the information on the source system. This encompasses elements of the three previous methods, plus some new ones to ensure the integration works as expected.

Extending the Tungsten Replicator Core JS Filter Functionality

Tungsten Replicator has a really cool feature in that we can filter data as it goes past on the wire.

The replicator itself is written entirely in Java, and writing filters for it in Java is not as straightforward as it looks, which is why the much better option is to use the JavaScript mechanism and write filters that way instead. I’ll save the details of how you can write filters to process and massage data for another time, but right now I wanted to find a good way of improving that JavaScript environment.

There are filters, for example, where I want to be able to load JSON option and configuration files, or write out JSON versions of information, and plenty more.

Mozilla’s Rhino JS environment is what is used to provide the internal JS environment for running filters. The way this is supported is that, rather than creating a Rhino JS environment that can do whatever it wants, we create a JS instance specifically for executing the required functions within the filter. One of these instances is created for each filter that is configured in the system (and for each batch instance too).

The reason we do this is that, for each filter, we want every transaction event that appears in the THL log to be passed through the JS instance, where the filter() function in the JS filter is executed with a single argument: the event data.

The limitation of this model is that we don’t get the full Rhino environment: because we execute the JS function directly, certain top-level items and functions like load() or require(), or utilities like JSON.stringify(), are not available. We could address that by changing the way we do the configuration, but that could start to get messy quickly, while also complicating the security aspects of how we execute these components.

There are some messy ways in which we could get round this, but in the end, because I also wanted to add some general functionality into the filters system shared across all JS instances, I chose instead to just load a set of utility functions, written in JavaScript, into the JS instance for the filter. The wonderful thing about JS is that we can write all of the functions in JS, even for classes, methods and functions that aren’t provided elsewhere.

So I chose the path of least resistance, which means loading and executing a core JS file before loading and executing the main filter JS, so that we can place into that core JS file all of the utility functions we want to be available to all of the filters.

So, to enable this, the first thing we do is update the core Java code so that when we load the filter JS we load our core utility JS first. That occurs in replicator/src/java/com/continuent/tungsten/replicator/filter/JavaScriptFilter.java, within the prepare() function, which is where we instantiate the JS environment.

String coreutilssrc = properties.getString("replicator.filter.coreutils");

// Import the standard JS utility script first
try
{
    // Read and compile the core script functions
    BufferedReader inbase = new BufferedReader(new FileReader(coreutilssrc));
    script = jsContext.compileReader(inbase, scriptFile, 0, null);
    inbase.close();

    script.exec(jsContext, scope);
}
catch (IOException e)
{
    throw new ReplicatorException("Core utility library file not found: "
            + coreutilssrc, e);
}
catch (EvaluatorException e)
{
    throw new ReplicatorException(e);
}

This is really straightforward, we obtain the path to the core utilities script from the configuration file (we’ll look at how we define that later), and then compile that within the jsContext object, where our JavaScript is being executed. We add some sensible error checking, but otherwise this is simple.

It’s important to note that this is designed to load that core file *before* the main filter file just in case we want to use anything in there.

Next, the configuration line: we can add this into a default config by creating a suitable ‘template’ file for tpm, which we do by creating the file replicator/samples/conf/filters/default/coreutils.tpl. I’ve put it into the filters section because it only applies to filter environments.

The content is simple, it’s the line with the location of our core utility script:

# Defines the core utility script location
replicator.filter.coreutils=${replicator.home.dir}/support/filters-javascript/coreutils.js

And lastly, we need the script itself, replicator/support/filters-javascript/coreutils.js :

// Core utility JavaScript and functions for use in filters
//
// Author: MC Brown (9af05337@opayq.com)


// Simulate the load() function to additional external JS scripts

function load(filename) {
    var file = new java.io.BufferedReader(new java.io.FileReader(new java.io.File(filename)));

    var line;
    var sb = "";
    while ((line = file.readLine()) != null)
        {
            sb = sb + line + java.lang.System.getProperty("line.separator");
        }
    file.close();

    eval(sb);
}

// Read a file and evaluate it as JSON, returning the evaluated portion

function readJSONFile(path)
{
    var file = new java.io.BufferedReader(new java.io.FileReader(new java.io.File(path)));

    var line;
    var sb = "";
    while ((line = file.readLine()) != null)
        {
            sb = sb + line + java.lang.System.getProperty("line.separator");
        }
    file.close();

    var jsonval = eval("(" + sb + ")");

    return jsonval;
}

// Class for reconstituting objects to and from JSON

JSON = {
    parse: function(sJSON) { return eval('(' + sJSON + ')'); },
    stringify: (function () {
      var toString = Object.prototype.toString;
      var isArray = Array.isArray || function (a) { return toString.call(a) === '[object Array]'; };
      var escMap = {'"': '\\"', '\\': '\\\\', '\b': '\\b', '\f': '\\f', '\n': '\\n', '\r': '\\r', '\t': '\\t'};
      return function stringify(value) {
        if (value == null) {
          return 'null';
        } else if (typeof value === 'number') {
          return isFinite(value) ? value.toString() : 'null';
        } else if (typeof value === 'boolean') {
          return value.toString();
        } else if (typeof value === 'string') {
          // Escape embedded quotes, backslashes and control characters using escMap
          return '"' + value.replace(/["\\\b\f\n\r\t]/g, function (m) { return escMap[m]; }) + '"';
        } else if (typeof value === 'object') {
          if (typeof value.toJSON === 'function') {
            return stringify(value.toJSON());
          } else if (isArray(value)) {
            var res = '[';
            for (var i = 0; i < value.length; i++)
              res += (i ? ', ' : '') + stringify(value[i]);
            return res + ']';
          } else if (toString.call(value) === '[object Object]') {
            var tmp = [];
            for (var k in value) {
              if (value.hasOwnProperty(k))
                tmp.push(stringify(k) + ': ' + stringify(value[k]));
            }
            return '{' + tmp.join(', ') + '}';
          }
        }
        return '"' + value.toString() + '"';
      };
    })()
  };

For the purposes of validating my process, there are three functions:

  • load() – which loads an external JS file and executes it, so that we can load other JS scripts and libraries.
  • readJSONFile() – which loads a JSON file and returns it as a JSON object.
  • JSON class – which does two things: it provides a JSON.parse() method for parsing JSON strings into JS objects, and a JSON.stringify() method which turns a JS object back into a JSON string.

Putting all of this together gives you a replicator where we now have some useful functions to make writing JavaScript filters easier. I’ve pushed all of this up into my fork of the Tungsten Replicator code here: https://github.com/mcmcslp/tungsten-replicator/tree/jsfilter-enhance

Now, one final note. Because of the way load() works (it runs an eval() on the code to import it), there is one final step needed to make loaded functions usable. To explain what I mean, let’s say you’ve written a new JS filter using the above version of the replicator.

In your filter you include the line:

load("/opt/continuent/share/myreallyusefulfunctions.js");

Within that file, you define a function called runme():

function runme()
{
     logger.info("I'm a bit of text");
}

Now within myreallyusefulfunctions.js I can call that function fine:

runme();

But from within the JS filter, runme() will raise an unknown function error. The reason is that we eval()‘d the source file within the load() function, and so its context is wrong.

We can fix that within myreallyusefulfunctions.js by exporting the name explicitly:

if (runme.name) this[runme.name] = runme;

This points the parent namespace at the runme() defined in this context; we put that line at the end of the myreallyusefulfunctions.js script and everything is fine.

I’m lazy, and I haven’t written a convenient function for it, but I will in a future blog.

Now we’ve got this far, let’s start building some useful JS functions and functionality to make it all work nicely…

Data Migration: Database Terms and Structures

In the previous post we looked at a number of different database types and solutions, and it should be clear that there are a huge range of different terms for the different entities that make up the database structure. All the different entities fit into one of four categories, and they have significance because when moving and migrating data you need to know the source and destination type and whether you should be creating a database for every document (bad) or a document for every record (good). The components can be described as shown in Figure 1-6.

Figure 1-6: Database Terms and Structures

Most databases support the notion of four different components:

  • Field – generally the smallest piece of addressable data within any database. However, not all databases identify information down to the field level. Others don’t even recognise fields at all.
  • Record – a group of fields, or, a single block of identifiable information. For example, your contact information is a record made of the fields that define your name, your address, and your email address. Some databases only support the notion of a block of information and don’t care what it contains, whether that is fields or a binary string of data. Records may also involve either a fixed set of fields, or a variable group.
  • Table – a group of records. Some databases assign a specific group of fields to a specific table. Others just use a table to hold or identify a collection of records with largely similar information. Some database types, such as NoSQL, do not support a table, but immediately jump from record to database.
  • Database – a group of tables. Not all databases support this additional level of organisation, and in fact it tends to be those that have a significant structure at the lower levels (field, record). The database is usually used in the role of multi-tenancy, that is, the ability to store a collection of data related to a single application.

Of course, the problem is that different databases apply and support these terms differently, many use different terms, and some may blur the lines between each term to such an extent that it is impossible to tell where the different elements exist.

Let’s explain this a little further by providing some explicit examples:

  • MySQL, Oracle database, IBM DB2, Microsoft SQL Server, Microsoft Access, and other relational databases tend to support all four levels with a very rigid structure in place, as you would expect from a structured RDBMS.
  • Memcached knows only records (values) identified by a supplied key, and those records have no fields.
  • CouchDB, MongoDB and Couchbase support different databases, and within those databases you have documents, which are logically similar to records. These documents have fields, but there is no requirement for the fields within each document to be the same from document to document. MongoDB also supports collections, which are akin to tables.
  • Hadoop in its bare Hadoop Distributed File System (HDFS) native structure doesn’t understand any of these components, although you can place files into different directories to mimic a structure. If you use a system on top of HDFS, such as Hive, HBase or Impala, you are normally implying a typical four-level data architecture.

In general, the ability to identify different components within the database depends on the database type, and a summary of these is provided in the table below.

Database         | Fields | Records        | Tables | Databases
-----------------+--------+----------------+--------+----------
RDBMS            | Yes    | Yes            | Yes    | Yes
NewSQL           | Yes    | Yes            | Yes    | Yes
NoSQL            | Mostly | Documents/Rows | Maybe  | Yes
Key/Value Stores | No     | Yes, by ID     | No     | Maybe
Unstructured     | No     | No             | No     | Maybe

Now let’s have a look at the specific example database solutions, including the term used for the corresponding value:

Database Type | Database    | Fields                   | Records                | Tables                   | Databases
--------------+-------------+--------------------------+------------------------+--------------------------+------------------
RDBMS         | Oracle      | Yes                      | Yes                    | Yes                      | Yes
RDBMS         | MySQL       | Yes                      | Yes                    | Yes                      | Yes
RDBMS         | PostgreSQL  | Yes                      | Yes                    | Yes                      | Yes
NewSQL        | InfiniDB    | Yes                      | Yes                    | Yes                      | Yes
NewSQL        | TokuDB      | Yes                      | Yes                    | Yes                      | Yes
NoSQL         | CouchDB     | Yes, embedded in JSON    | Documents              | No                       | Yes
NoSQL         | Couchbase   | Yes, embedded in JSON    | Documents              | No                       | Buckets
NoSQL         | MongoDB     | Yes, embedded in BSON    | Documents              | Collections              | Yes
NoSQL         | Cassandra   | Implied in column family | Yes, implied by key ID | Implied in column family | No
NoSQL         | HBase       | Implied in columns       | Yes                    | Implied in column family | No
Key/Value     | Memcached   | No                       | Yes, by key ID         | No                       | Maybe
Key/Value     | Redis       | Yes                      | Yes, key/value pair    | No                       | No
Key/Value     | Riak        | Yes                      | Yes                    | Schema                   | No
Unstructured  | Hadoop/HDFS | No                       | No                     | No                       | By HDFS directory
Unstructured  | Hive        | Yes, if implied          | Yes, if implied        | Yes                      | Yes

 

Although it won’t be covered in this series to any significant degree, these different levels also tend to support one further distinction, and that is security. Different database solutions provide security at a variety of levels, and some allow you to restrict access down to the record level. Database systems where different databases are supported, and where there is some level of security or protection between them, are called multi-tenant databases.

As we start moving the data between databases, understanding the importance of these elements is critical. For example, when moving data from an RDBMS to Hadoop, the distinction of table or database may disappear, and the significance of individual records may be deliberately removed entirely to enable the information to be processed effectively.

In contrast, moving data from MongoDB into MySQL is easier because we can identify specific elements such as a database and a table. Where we start to come unstuck is that although documents contain a collection of fields, the fields may not be the same from document to document.

Homogeneous vs. Heterogeneous

The primary issue with exchanging information is whether you are moving data between homogeneous or heterogeneous databases. Homogeneous databases are those that are of the same type, for example moving data from Oracle to MySQL; both are RDBMSs, both have databases, tables, records and fields, and therefore moving data between them is straightforward from a structural perspective. But the datatypes supported are not the same. What do you do about CLOB or RAW datatypes in Oracle when migrating to MySQL?

In a similar vein, the actual procedural process of moving data between database types is similarly affected. MongoDB and Couchbase, for example, support the same structure; JSON and BSON are largely identical, and although there are some differences, reading the data from MongoDB and writing it to Couchbase can be achieved with functions that are almost identical – get the document by its ID on MongoDB and set the document on Couchbase with the same ID.
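As a rough illustration (the driver calls shown are from the MongoDB Node.js driver and the Couchbase Node SDK 3.x, and the connection details are invented, so treat this as a sketch rather than a recipe):

var { MongoClient } = require('mongodb');
var couchbase = require('couchbase');

async function copyDocument(id) {
    // Read the document by its ID from MongoDB...
    var mongo = await MongoClient.connect('mongodb://localhost:27017');
    var doc = await mongo.db('recipes').collection('recipes').findOne({ _id: id });

    // ...and write it to Couchbase under the same key.
    var cluster = await couchbase.connect('couchbase://localhost',
        { username: 'user', password: 'pass' });
    await cluster.bucket('recipes').defaultCollection().upsert(String(id), doc);

    await mongo.close();
}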

Most RDBMSs can be accessed through SQL and front-ends like JDBC or ODBC, so opening two connections and reading/writing between them is easy to do. Most support SELECT INTO and LOAD DATA INFILE style SQL to export and import data in larger chunks. But in heterogeneous deployments the same tools are not always available. A quick, but not always accurate, summary of where these issues arise is shown in this table.

Issue          | Homogeneous | Heterogeneous
---------------+-------------+--------------
Data structure | No          | Yes
Data types     | Yes         | Yes
Data loading   | No          | Yes
Data usability | Yes         | Yes

Defining the Problem

Now that we have a good grasp of the different databases, their abilities, and their differences, it is time to take a closer look at what we mean by moving and migrating data and the problems associated with this kind of operation. We can finally start to define the problem of exchanging data between different databases and how that process can be tackled and resolved.

All of the following aspects must be considered in their entirety before you start to exchange data, but think about it logically and holistically – you have to decide how the data will be formatted, how the data is going to look (structure), how the data is physically going to be transferred, and finally how it is going to be used.

Altering the Format

All data is not created the same, or in the same format, and furthermore, not all data is supported or acknowledged. Within NoSQL, for example, there may be no datatypes other than string, so you need to consider how you are going to move the data to the right type and the right format without (unnecessarily) losing data. The main considerations are:

Differences in supported types – you may have to choose between migrating to the next nearest, or most appropriate, type. NoSQL and Big Data targets tend not to have strong datatypes, whereas RDBMS databases have very strong typing. You must choose a type that is able to handle the data in the way you want, and that is able to hold the size of the information being inserted. Large text data, for example, may be too long to fit in a CHAR or VARCHAR column, and may need to be inserted into a BLOB or RAW column.

Differences in type definitions – databases have different definitions of different types. For example, Amazon Redshift supports only 19 digits of precision for floating-point values, while MySQL supports up to 53. Dates and times are also typically represented differently, with some databases only supporting an explicit date type, or a combined date-time type, or a time with heavily restricted precision. All these differences mean that you may wish to store values outside the supported range as a different type; for example, storing floating-point values and dates as strings so as not to lose data.

Differences in type interpretation – generally a difficult problem to resolve without extensive testing, as some datatypes can be interpreted incorrectly when the data is moved into a target database. String encodings (for example ASCII versus Unicode) or bit-specific fields can cause issues. So can timestamps, which may be interpreted during import as being subject to time zone differences; for example, if you exported on a server using Pacific Standard Time (PST) but imported on a different database using Central European Standard Time (CEST).
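A common defence for the timestamp case is to normalise to UTC (or an epoch value) in the export itself, so that neither side gets the chance to apply its local zone:

// Export timestamps as ISO 8601 UTC strings (or epoch milliseconds) so the
// importing server's local time zone cannot reinterpret them.
var exportedAt = new Date('2015-06-01T09:30:00-07:00');   // written on a US Pacific server

console.log(exportedAt.toISOString());   // "2015-06-01T16:30:00.000Z", unambiguous UTC
console.log(exportedAt.getTime());       // 1433176200000, epoch milliseconds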

These issues must be considered in entirety before you exchange data; getting it wrong could lead to incorrect, invalid, and even completely corrupt information.

Altering the Structure

It should be clear by now that there are differences in the structure of the different database types. What may not be clear is that there are more options available to you than a simple direct association from one type to another. Instead you must make sure that the data is exchanged in an effective manner appropriate to the information that is being exchanged.

For certain combinations the structure may appear obvious, but there is always the possibility that the structure and information can be more effectively organised. For example, when moving from an RDBMS to a document store, the first intention is simply to take the different tables and structure them as different documents within the target database. This is fine, but it adds complications you may want to avoid when you come to use the data. Instead, merging the different tables into one larger document with nested components may simplify the use of the data in the target application.

The same can be true in reverse, exploding a single document into multiple, related, tables. Alternatively, you may want to take advantage of specific functionality in the RDBMS, such as XML fields, sets, enums or even convert the information to embedded JSON or serialised language variables if that makes sense to your application.

Loading the Information

Physically transferring the information seems like the most mundane of the processes in the entire scheme of exchanging data between systems, but in actual fact it is less clear-cut than you might think. We’ll look at this in more detail when examining specific examples and database exchange projects, but some upfront issues to consider:

Does the solution include a native bulk loading system? Some databases specifically support a method of importing data, whether large or small. For example, in MySQL the LOAD DATA INFILE SQL statement can do this for you. Cassandra supports a COPY command in CQL, and various Hadoop interfaces such as HBase and Hive enable you to access CSV files directly without explicitly importing them.

Custom loading may be required if no built-in solution exists. This can take many forms, including writing your own, or, if they are available, using specialised tools like Tungsten Replicator or Sqoop. The exact method is going to depend on the data exchange type, the data size, and the complexity of the load process.

Application loading can be used in those situations where the application is running and a different version or format of the information is used. For example, when caching with a NoSQL engine on top of an RDBMS, you might adapt your application to automatically generate the NoSQL record. Similarly, during a migration, you might configure your application to look in the new database, and if it doesn’t exist, load it from the old database and generate the new record.
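A sketch of that application-level approach, with the database calls reduced to hypothetical helper objects (newStore and oldDatabase are placeholders, not real APIs):

// Cache-aside style lookup during a migration or caching deployment: try the
// new store first, fall back to the old database, and populate the new store
// with the result on the way back.
async function getCustomer(id) {
    var customer = await newStore.get(id);               // e.g. NoSQL or Memcached lookup
    if (customer == null) {
        customer = await oldDatabase.fetchCustomer(id);  // canonical RDBMS source
        await newStore.set(id, customer);                // lazily populate the new store
    }
    return customer;
}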

Data sizes must be a consideration. It seems ridiculous in this age when disk sizes are so large, but database sizes can be huge too. A recent project I was involved in required migrating just under 150TB of information. Storing all of that data in one go would have required a mammoth-sized disk array before the data was loaded into a Hadoop/Hive database. There are solutions for moving and migrating such large volumes of data without it ever touching the disk and using up all that space.

Depending on your data exchange requirements, any, or all of these may be an issue you have to contend with.

Making the Data Usable

Exchanging data between systems is only any good if, once there, the data is usable. Nobody would consider releasing a physical book in the USA and a digital book in France without translating it. The same is true of data. Exchanging the data between databases requires you to take these issues into account during the movement of the data; it’s no good just blindly copying the data over and hoping it will be usable.

To make the data usable the following aspects must be considered:

  • Data accessibility – we’ve already talked about the key structural translation that needs to take place, but you also need to think about the effect on elements such as searching and indexing. Certain indexing methods are more complex (and therefore computationally expensive) than others. Some are more efficient. Some database environments support a limited number, quantity or complexity of indexing and querying that can only be addressed if the format and structure of the data is correct to begin with.
  • Data validity – if you change the structure of the data, does that change the ability to validate or otherwise ensure the quality of the information? For example, moving from RDBMS to NoSQL you may lose the ability to single out duplicate entries for certain types and fragments of the dataset, because relational constraints are not enforced within non-relational databases. Data format differences may also present problems; in a NoSQL database, for example, strict database types such as dates, times or numbers do not exist. How do you prevent an invalid date, or a non-date value entirely, being inserted into what should be a date field, when previously it would have been rejected during the database write? (A minimal validation sketch follows after this list.)
  • Application usability – if the data is moved, can you still access and update it in the same way? RDBMSs tend to be transactional, providing stability and support; NoSQL databases, as a rule, are not, particularly across multiple silos. If an invoice is updated, how do I guarantee that the customer’s account is also updated, especially if one operation, or the database itself, fails during the process?

These are some, but not all, of the issues you need to be aware of. Regardless of the method, you want to be able to use the data at the end, so don’t forget how you might query or index the data once it’s moved. Keep in mind that not all data moves require heavy consideration. If you are exporting the data to be loaded for a mail merge, for example, the usability aspects may be minor compared to the format and quality of the information.
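
To illustrate the data validity point above, here is a minimal sketch of pushing a date check up into the application layer before a value reaches a schema-less store; the field name and accepted format are assumptions.

# Enforce a date type in the application, since the target store will not.
from datetime import datetime

def validate_order_date(value):
    # Accept only YYYY-MM-DD dates; reject anything else before it is stored
    try:
        return datetime.strptime(value, "%Y-%m-%d").date().isoformat()
    except (ValueError, TypeError):
        raise ValueError("order_date must be a valid YYYY-MM-DD date: %r" % (value,))

print(validate_order_date("2016-02-21"))    # ok
# validate_order_date("21/02/2016")         # would raise ValueError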

How to Buffer posts+hashtags from your Blog using Zapier

I try to automate as much of my life as possible, particularly when it comes to computers.

I’ve been using the automated ‘Social Sharing’ on WordPress.com (and indeed, my blogs in general) for years. However, I’m also a keen Buffer user and WordPress.com does not offer a Buffer connection. Because I also use Buffer to handle my Patreon posts, concentrating them all in one place would make things a lot easier.

What I wanted to do was something quite straightforward: I wanted to turn a blog post entry into a post to Twitter (and others) that turned the list of tags I created on the post into #hashtags. This doesn’t seem like a particularly complex or uncommon request, but apparently it’s not a standard offering. What I was even more surprised at was that nobody else seemed to have done the same, which has me confused…

Now, there are many options for doing this kind of automated posting. I could have used IFTTT, but IFTTT, while incredibly useful (I have about 60 recipes on there), is also incredibly simplistic and your options are limited. That means I can’t post from WordPress to Buffer with the required hashtags.

Zapier is very similar to IFTTT, but it has the option of running multistep Zaps that do more than one thing (IFTTT is limited to one target), and better than that, you can include a step that runs information through a JavaScript (or Python) script to do some additional processing.

And this is the key that enables me to do precisely what I need: take a blog post from one of my blogs, process the list of tags into a list of (de-duplicated) hashtags, and then post it into my Buffer queues.

So, here’s how to get Zapier to do what you need. There are going to be five steps to this:

  1. Identify when a new post appears on a WordPress blog
  2. Run a short JavaScript program to turn the list of tags (actually Terms) from the blog post into a de-duplicated and hashtagged version
  3. Add it to my Twitter Buffer
  4. Add it to my Facebook Buffer
  5. Add it to my LinkedIn Buffer

Here’s how to get it set up. I’m going to assume you know Zapier and can follow the onscreen instructions; it’s not that complex.

Step 1

  • Register for a Zapier account, if you don’t already have one.
  • Connect your Zapier account to your WordPress blog
  • Connect your Zapier account to your Buffer account

Step 2

Create a new Zap on Zapier.

Select ‘WordPress’ as your trigger app.

Screenshot 2016-02-21 13.45.18.png

Now configure how you want the trigger to occur. I basically want every post in every category, but if you want to add specific categories or other filtering, feel free.

Step 3

For the Action select ‘Code </>’

Screenshot 2016-02-21 13.45.28.png

Now select ‘JavaScript’

Screenshot 2016-02-21 13.45.34.png

When it gets to the Edit Template stage, you’ll need to specify the input variable to the JavaScript. In this case, create one called ‘tags’, then select the ‘Terms Name’ from the WordPress trigger (Step 1), and you’ll be ready to go.

Screenshot 2016-02-21 13.45.40.png

The variables that you select here are placed into a hash (associative array) in the JavaScript context called ‘input’, so in this case we’ll have the item ‘input.tags’ to parse in our JavaScript code. The actual list of terms will come through as a comma-separated string.

The code itself is quite straightforward:

var hashlist = {};

// Split the comma-separated term list, normalise each term, and use the
// object keys as a set so that duplicates collapse into a single hashtag.
input.tags.split(',').forEach(function(item,index)
{
  var res = item.replace(/ /g,'');  // strip spaces: 'data migration' -> 'datamigration'
  res = res.toLowerCase();          // normalise the case
  res = '#' + res;                  // add the hashtag prefix
  hashlist[res] = 1;                // keys are unique, so this de-duplicates
});
// Return the unique hashtags as a single space-separated string
return({'hashlist' : Object.keys(hashlist).join(' ')});

We iterate over the terms by using ‘split’ to separate the string on commas, then we replace any spaces with nothing (so we turn things like ‘data migration’ into ‘datamigration’), convert it to lower case, add the # prefix, and add that to a new associative array. The reason for the associative array is to get rid of duplicates, so even if we have ‘data migration’ and ‘datamigration’ in the input, we only get one in the output. This is particularly useful because the ‘Terms’ list from WordPress is actually composed of both the tags and the categories for each post.

Finally, we return all of that as a string with all the keys of the hash (i.e. our nicely formatted tags) separated by a space. However, just like the input value, we return this as an object with the string assigned to the field ‘hashlist’. We’ll need this when creating the Buffer post.

I recommend you test this thoroughly and make sure you check the output.

Step 4

Choose your target Buffer.

The Buffer API only allows you to post to one queue at a time, but brilliantly, Zapier lets us add multiple steps, so we can do one for each Buffer queue – in my case, three. The benefit of this is that I can customise and tune the text and format for each. So, for example, I could omit the tags on Facebook, or, as I do, give a nice intro to the message (‘Please read my new blog post on…’) on FB because I’m not character (or attention span) limited.

Now for each Buffer queue, create your message, and when it comes to choosing the output, make sure you select your JavaScript output (which will be Step 2) and the ‘hashlist’ value.

Step 5

That’s it! Test it, make sure your posts are appearing, and check your Buffer queue (deleting the entries if required so you don’t double-post items).

You can duplicate and use this as many times as you like; in fact, I’ve done this across my two blogs and am now looking into where else I can use the same method.

Data Migration: Understanding the Challenges

Data migration – that is, the practice of sharing and distributing information between databases – requires some very careful consideration. Are you moving the data permanently or temporarily, or sharing it between applications? Do you want to share all of it, or only some of it? Are you changing databases, or trying to move some data so you can access or use it in a more efficient system?

Let’s start by looking at what we mean by a database, and what the myriad of different databases are out there.


 

Walk up to any person at an IT conference or gathering twenty-five years ago and ask them to name a database, and most would probably have selected one of the couple of tools available at the time. All of those databases would have been of the same type: some kind of fixed-record database management system, along the lines of dBase III+ or Oracle.

These had some very specific layouts and formats – each record would have had a fixed size, based on fixed fields, often with fixed widths. The reasons for this were largely technical – the way to store data efficiently was in records of a fixed size, each made up of fields, each with a fixed size. To read a record, you needed the definition and then just extracted the bytes, as shown in Figure 1-1.

Figure 1-1.png

Figure 1-1: Fixed Record and Field Sizes

To access a different record, you could ‘seek’ ahead in the file according to the size of the records and the number of the record you wanted. For example, to read record number 15 you would skip forward by physically seeking to the byte at 14 x RECORDSIZE, reading RECORDSIZE bytes, and then extracting the field data using the known record structure. This meant that records were treated as one big, long block of bytes, as shown here in Figure 1-2.

Figure 1-2.png

Figure 1-2: Fixed Records as a stream of data
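
As a minimal sketch of this model, the following Python fragment writes a couple of fixed-width records and then seeks straight to one of them; the field layout (a 30-byte name, a 30-byte email address and a 4-byte integer) is an assumption for illustration only.

# Fixed-width records: record N starts at (N - 1) * RECORD_SIZE bytes.
import struct

RECORD_FORMAT = "<30s30sI"                       # name, email, age
RECORD_SIZE = struct.calcsize(RECORD_FORMAT)     # 64 bytes per record

def write_record(f, name, email, age):
    f.write(struct.pack(RECORD_FORMAT, name.encode(), email.encode(), age))

def read_record(f, record_number):
    f.seek((record_number - 1) * RECORD_SIZE)    # skip the earlier records
    name, email, age = struct.unpack(RECORD_FORMAT, f.read(RECORD_SIZE))
    return name.rstrip(b"\0").decode(), email.rstrip(b"\0").decode(), age

with open("contacts.dat", "w+b") as f:
    write_record(f, "MC Brown", "mc@example.com", 42)
    write_record(f, "A N Other", "another@example.com", 35)
    print(read_record(f, 2))                     # jumps straight to record 2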

In fact, this was a very simple data model that was (and still is) thoroughly practical – many young developers and programmers may well have created a database using this very model. It even works if you use indexes – you can point directly to a record using the same system.

It may surprise you to know that for some databases this is still the fundamental model at the lower levels, although there may be some additional complexities and features. But over those same 25 years some other things have changed in two different directions: data formats and data diversity. Those two have led to a level of complexity in terms of the database systems that manage them.

Although it may be useful to understand these low-level formats and how the data is physically stored by the database, the focus of this series is one level higher. We want to consider how the data is structured – fields, records, documents – along with the formatting and character encoding of the information, and finally how the entire database appears and is usable within your chosen database system. More importantly, we want to know how to move it all elsewhere. Before we get there, let’s look at the top level: database types.

Database Types

My earliest database – at age eight – was one that I built to catalogue my book collection using my Sinclair ZX81, with the software written entirely in BASIC. By the time I was 13 I had started to build custom applications using dBase III+ to manage my father’s accounts. When I left college, my first job was to move data, first from an old Digital Unix system to the new Sun Solaris 2 using the same database, and then from that database engine, called BRS/Search, to Oracle. BRS/Search was a completely free-form database.

The aim of this process was to move that free-form store into a structured format – Oracle, an RDBMS – and to access it using a front-end built with a Macintosh-specific RDBMS engine called 4th Dimension. In the background, we also started putting different classes of data into the then-brand-new Macintosh-specific database called Filemaker.

Since those early days I’ve worked with (and on) PostgreSQL, MySQL, Oracle, Microsoft SQL Server, Microsoft Access, CouchDB, Berkeley DB, SQLite, Couchbase, MongoDB, Cassandra, DB2, and most recently Hadoop, to name just a few. They all have different characteristics – this is the primary reason they exist at all, in fact – and capturing the essential essence of each group of databases is our first step on the road to understanding how to move data between these databases.

The point here is not that I’ve got experience of all of these (although hopefully that helps explain the reasoning and experience behind the content here), but to demonstrate that there is a huge array of choice out there today. They all have different parameters, different methods of storing data, different supported formats, and a huge array of methods for reading, querying and extracting the information.

But what exactly moves a collection of data from just that – a string of bytes – into a database? And how does that affect how we move data between them? Let’s look at some basic database principles. This will not be new information, but these are vital concepts to understand so that we can translate and refer to these elements through the rest of the series.

Database Principles

What is a database?

That is not an innocent question; the answer depends entirely on the database system, type and individual solution.

However, it can be summed up in two sentences:

A database enables individual, addressable blocks of information to be stored efficiently. These blocks can also be retrieved, and potentially searched and indexed, to enable the information to be effectively retrieved.

Whenever you look at a database and how to store, retrieve and update the information, you need to consider how the information within the database is accessed.

All databases share the same basic principles when it comes to working with the information itself; they must all provide the following functionality, referred to as CRUD – Create, Read, Update, Delete (a minimal sketch follows the list):

  • Create – data must be able to be created within the database, whether on a record or block basis, or in a batch mode where data is created in bulk.
  • Read – data must be able to be read back out. By their very nature, all databases must be able to do this on a selective basis, either by record, or by a group of records. More complex databases enable you to achieve this more selectively, for example by selecting all of the cars that are blue, or all the invoices raised for Acme Inc.
  • Update – data must be able to be updated. Again, as with reading, this must be possible on a record by record basis. Updates may also involve bulk modification of multiple records and even multiple fields simultaneously.
  • Delete – data must be deletable or removable on a record by record basis, involving either single or multiple records simultaneously.
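
As a minimal, self-contained sketch of those four operations, here they are against an in-memory SQLite database; the table and values are illustrative only.

# CRUD against an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

# Create
cur.execute("INSERT INTO contacts (name, email) VALUES (?, ?)",
            ("MC Brown", "mc@example.com"))

# Read – selectively, by a criterion
cur.execute("SELECT id, name, email FROM contacts WHERE name = ?", ("MC Brown",))
print(cur.fetchall())

# Update
cur.execute("UPDATE contacts SET email = ? WHERE name = ?",
            ("mcbrown@example.com", "MC Brown"))

# Delete
cur.execute("DELETE FROM contacts WHERE name = ?", ("MC Brown",))

conn.commit()
conn.close()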

Understanding the significance of these different operations within different databases is important to getting the movement and migration of information correct. Some databases can, by design, only support certain levels of these operations. Some provide implicit and explicit deletion of records, and others may deliberately not support update operations.

To further complicate matters, performance should always be a consideration for certain types of data migration. Most analytical and data warehouse platforms benefit from large, batched, or combined updates. Hadoop, for example, works badly with a large number of small files, because these cannot easily be distributed across the cluster. Hadoop is also, by design, an append-only system, which means updates are more complex to handle.

Contrast this with Memcached, where bulk writes or updates are supported, but where for reasons of cache efficiency you do not want large batches of data to be updated simultaneously as it would invalidate large portions of the cache.

Data Formats

Different databases store and structure information differently. Some use records, some use fields, some use documents. Some expect data to be highly structured, where a single ‘database’ may consist of tens, hundreds or even thousands of different tables for different pieces and types of information. At the opposite end of the scale, some just have a record with no further classification or identification.

These structures, and how to migrate between them, will be discussed throughout the series, and the general principles for moving between them will be examined in closer detail in a future post, when we look at Data Mapping and Transformations.

Datatypes

Depending on the database in use, different databases may use or enforce specific datatypes on the data that is stored. For example, there may be both character (string) and numeric datatypes. Although it is possible to store numeric information in a string column, there are often benefits to retaining the numerical identity, including more efficient storage (and therefore faster operation), and the ability to run or perform specific operations, such as a SUM() or AVERAGE() function, on a numeric column without having to translate each individual string into an integer or floating-point value.
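
As a trivial sketch of why that numeric identity matters, compare a column of prices held as strings with one held as numbers; the values are made up.

# Numbers stored as strings must be converted before every calculation.
prices_as_strings = ["12.99", "8.50", "23.00"]   # what a loose string field holds
prices_as_numbers = [12.99, 8.50, 23.00]         # what a numeric column holds

# The string version needs a per-value conversion...
total = sum(float(p) for p in prices_as_strings)

# ...whereas the numeric version can be aggregated directly, which is roughly
# what SUM() and AVERAGE() rely on inside the database.
print(total, sum(prices_as_numbers) / len(prices_as_numbers))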

Datatypes and their identification and translation are a major focus of a future post on Data Mapping and Transformations.

Indexes

All databases are predicated on the need to access the information within them very quickly. Consider a simple contact database with just 20 records in it. To look for the record with the name ‘MC Brown’ in it requires us to look at every record until we find the matching one. Of course, there may be more than one such record, so even if we find that the first record matches, we still have to iterate over 20 records to find all the matching entries.

With 20 records this isn’t a problem; with 20,000,000 records it is hopelessly inefficient. Indexes bridge the gap by allowing the database to be addressed more efficiently. There are different algorithms for creating indexes that are beyond the scope of this text, but in all cases the role of the index is to provide quicker access to information than could be achieved through a sequential scan.
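
A crude sketch of the difference in Python terms: a full scan touches every record, while an index built once maps a value straight to the matching record positions. The contact data is invented.

# Full scan versus index lookup.
contacts = [
    {"name": "MC Brown", "email": "mc@example.com"},
    {"name": "A N Other", "email": "another@example.com"},
    {"name": "MC Brown", "email": "mc2@example.com"},
]

# Scan: inspect every record (fine for 20 records, painful for 20,000,000)
scan_matches = [c for c in contacts if c["name"] == "MC Brown"]

# Index: built once, then each lookup goes straight to the right positions
index = {}
for position, contact in enumerate(contacts):
    index.setdefault(contact["name"], []).append(position)
index_matches = [contacts[p] for p in index.get("MC Brown", [])]

print(scan_matches == index_matches)    # same answer, very different amount of work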

Database Types

There are a myriad of different ways in which you can identify and classify different databases, and the classification depends on which aspect of the database you are looking at. For example, SQL was for a long time associated exclusively with structured RDBMS engines, but has now become a data interface standard of its own and is used in both RDBMS and non-RDBMS environments. For the purposes of our understanding, we’ll examine them according to how they organise and classify their data.

Through the rest of this series, we concentrate on three major types: RDBMS, NoSQL and Big Data.

Structured and Relational Database Management Systems (RDBMS)

Examples: Oracle, MySQL, PostgreSQL, Microsoft SQL Server, Microsoft Access, Filemaker Pro

Most structured database systems tend to have a relational database core (RDBMS), and most often, but not always, are interacted with through the Structured Query Language (SQL). When talking to people about databases, an RDBMS and SQL is what they will think of first, because it matches the idea of a strict database with strict types. The highly structured and rigid nature requires a rigid method of storing and retrieving information. It also places limitations and rigidity on your database types and structure. A simple layout is shown in Figure 1-3.

Figure 1-3.png

Figure 1-3: A structured RDBMS table diagram

Structured databases have a few specific characteristics:

  • Strict data structure – data is stored within fixed named silos (databases), within named tables, and with each table having a fixed number of named columns. Every single record within each table has the same number of fields (columns), and each column is used for a specific purpose or piece of information.
  • Strict data types – for example, an RDBMS will store integers and floats differently, and may have additional data types designed to provide fast access to specific information, for example, the SET and ENUM types within MySQL.
  • Data Definition Language (DDL) – related to the elements above, the DDL within any database is important because it provides a reference structure which can be used to replicate that structure in another database. Depending on the database system, the DDL may be implicit in the way the data is accessed or stored, or in the API and interfaces provided, or it could be more explicit, as in the dialects of SQL and similar statement-based interfaces.
  • Data Manipulation Language (DML) – typically, but not always, SQL. The DML enables you to perform the CRUD operations needed to manage the information. Like the DDL, the exact interface is very database specific. Some databases and systems rely entirely on a statement-based language like SQL, which has its own dialects and structures for performing updates. Others rely entirely on the API that interfaces between client applications and the database storage.
  • Relational capability – because the data is in a fixed format and with fixed types, it is possible to create specific relations between a field in one table and a field in another table. This enables the data to be JOINed together to provide a unified output. For example, if you have orders and invoices, it’s possible to link the order and the invoice by a unique ID, and the database can either use or explicitly enforce the relationship. Joins are further characterised by their type, enabling many-to-one relationships (for example, multiple invoices relating to one client), one-to-many relationships (one invoice number referring to multiple invoice lines) and one-to-one relationships (invoice to payment received).
  • Constraints and Indexes – constraints enable data to be created within a limited subset, or to identify rows uniquely. For example, a primary key constraint can force the table to create new records only with a new unique identifier. Indexes are used to create efficient methods for looking up and identifying data according to criteria. Within an RDBMS indexes are generally used to speed up access on a specific column, or multiple columns, to improve the speed of access during specific queries. Without an index, the RDBMS will default to performing a full table scan.

Structured/RDBMS solutions provide some of the easiest methods for exchanging data – it is generally easier to move data from a structured store to elsewhere. However, most destination databases do not support the same range of indexes. Conversely, moving data from unstructured databases of any kind into a structured/RDBMS store is harder, because you have to decide what goes where.

NewSQL Databases

Examples: Clustrix, VoltDB, InfiniDB, TokuDB

Traditional RDBMS and SQL databases are designed to run on a single machine. This has performance and hardware limitation issues: there is only so much memory and hard disk space that can be installed in a single machine, and if your database or performance requirements are high enough, a single server is not the solution. There are strategies, such as sharding the database (splitting it up by an identifiable key, such as ID, name or geographical location) or otherwise dividing the database across machines, but these place a different load on your application layer and are beyond the scope of this book.

NewSQL databases are a modification of the Structured/RDBMS that use multiple machines in a cluster to support the database requirements. Unlike the sharding and other methods, NewSQL solutions automatically distribute the load across the machines and handle the interface, indexing and querying required to access the data.

The main elements of the database and structure, such as databases, records and fields, and all other data migration considerations are the same as for traditional RDBMS environments.

NoSQL/Document Databases

Examples: Couchbase, CouchDB, MongoDB, Cassandra, HBase

NoSQL databases actually span a wide range of different databases, originally classified by their rejection of SQL as the DDL and DML language of choice, more usually resorting to the use of a direct API for accessing information. There was a resurgence of these different solutions in the early 2000s as people sought alternatives that were faster and simpler than the transactional RDBMS for web applications and websites.

Most NoSQL databases rely on simpler methods for accessing the information, for example by using a single document ID to retrieve a record of information. This document ID could be derived from the user’s email address, so when a user logs in or registers on a website, the document associated with that email address is accessed directly, rather than ‘looking up’ the record in a larger table of user records.

NoSQL databases of this type can be roughly split into two groups: the columnar/tabular databases, and the document databases. The columnar/tabular type includes Cassandra, Apache HBase (part of Hadoop), and Google’s BigTable. Data is organised through an identifiable row ID and a collection of associated column IDs that classify the data structure. They can look, and even act and operate, in a similar fashion to the structured RDBMS table/row/column structure. A sample column-style database (in this case Cassandra) looks roughly like the one in Figure 1-4.

Figure 1-4.png

Figure 1-4: A columnar (Cassandra) database structure

Document databases are completely different. Instead of a table structure, data is organised into a document, usually using JSON or a JSON-like structure. A document often combines different fragments of information – for example, a contact record may store all the phone numbers, email addresses and other components within a single document for a given person. Documents, especially JSON-based documents, are also very flexible and can consist of nested fields, such as an array of phone numbers, or even entire nested structures, such as the individual rows (qty, product id, description, price) for an invoice or order, all encapsulated into a single document. A simple document database structure can be seen in Figure 1-5.

Figure 1-5.png

Figure 1-5: Document Databases

Perhaps most importantly, documents in a document database do not need to be identical. In a structured RDBMS environment, every record contains every field, even if the field is not actually used for that record. In a document database, different documents, even within the same database or group, may have only one field, or may have 20. This variable nature is part of their appeal, but it also represents an area of complexity when migrating information, as the sketch below illustrates.
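
To make that concrete, here is a sketch of two documents that might sit in the same (hypothetical) contacts bucket: one nests several related fragments, the other has a field the first does not.

# Two documents from the same store need not share the same fields.
import json

doc_one = {
    "name": "MC Brown",
    "emails": ["mc@example.com", "mc@work.example.com"],          # nested array
    "phones": [{"type": "mobile", "number": "+44 7700 900000"}],  # nested structure
}
doc_two = {
    "name": "A N Other",
    "company": "Acme Inc",    # a field doc_one does not have at all
}

print(json.dumps(doc_one, indent=2))
print(json.dumps(doc_two, indent=2))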

Most NoSQL systems have no concept of an explicit relation or join – this is often one of the aspects that makes them faster. However, the lack of this element means that different techniques are required to store and interact with complex data.

Depending on the NoSQL solution, you may or may not have access to an index or quicker method of accessing the data. In CouchDB and Couchbase, for example, the fields of a document can be used to generate an index that provides quick searching and retrieval of information.

NoSQL databases can be easy to interact with and migrate data to and from, provided you account for the presence (or absence) of a strict schema at either end. For example, moving from an RDBMS to a document-based NoSQL database can be as simple as converting the table records into documents identified by the primary key. It can also pay off in the long term to perform a more concerted conversion and translation of the source tables into unified documents.

Key/value (KV) Stores

Examples: Memcached, Redis, Riak

In most classifications, key/value stores are treated as NoSQL, but I’ve split them out here because they have some interesting attributes that affect data exchange. A key/value store is exactly what it sounds like: a single blob of data (the value) is stored against a given key identifier. You store the information by giving the key, and retrieve the information by giving the same key. In most cases, the information can only be retrieved if you know the key. Iteration over the stored data, or indexes, are generally not available.

The roots of the key/value store go back to the attempt to speed up access to data where a given identifier is known, such as a user ID or email address. The best known key/value store is probably memcached, which was originally developed to make use of the spare RAM of machines supporting a website (LiveJournal, a blogging platform) and enable fast access to blog entries. Since the ID of the blog entry could be derived from the URL being accessed, the entry could easily be looked up in memcached. If it didn’t exist, it was looked up from a MySQL database, and the formatted/retrieved version placed into the cache against the identifying URL.

Most document databases are really a modification of the key/value store. The value portion can be any data you like, from a simple string, through a serialised object from C, Java or another language, to a JSON document. In fact, some databases support both models, and the only distinction between a key/value store and a document database is whether the database engine itself can identify and interact with the embedded structure. MongoDB and Couchbase, for example, illustrate this distinction: MongoDB enables the database engine to update fields within the BSON (JSON-like) values, while Couchbase supports indexing of the JSON fields.

Key/value stores are some of the harder databases to migrate and move data between. The lack of structure, or the custom nature of the value (for example, a serialised language object), and the requirement to identify each record by a specific ID make exchanging data more complex.

Big Data (aka Unstructured, Semi-structured and Implied Structure Databases)

Examples: Hadoop, Apache Solr, ElasticSearch, Lucene

BRS/Search was, for its time and technology, relatively groundbreaking in that it was a full-text retrieval system. Today we would probably classify it as a ‘document’ based database, that is, one that has a structured format, although the power behind BRS/Search was the ability to perform a free-text search across an entire collection.

Today, we generally refer to these types of database as unstructured; that is, there is no discernible format or structure to the information. Although there are many different examples of this, probably the best known today is Hadoop. Without getting into the functionality or history of Hadoop, its power comes from its ability to distribute the raw data and also to process and extract usable information from the unstructured data.

Within Hadoop, the normal workflow is to load Hadoop with raw data, for example the text from tweets or web pages, and then use that information to build an index or data structure around the information so that it can be analysed or searched. Solutions such as Solr, Lucene and ElasticSearch work in similar ways, accessing the raw text and either indexing it so that it can be searched, or using whatever structure is available to provide searching and indexing on a more specific area.

This is where ‘semi-structured’ data applies. Twitter data, for example, consists of the Twitter name, the tweet itself, and any tags or Twitter users the tweet was directed to. The fixed fields and the free-form tweet go together to make it semi-structured, as it consists of both structured and free-form information.

Implied structure databases are those where the structure of the data is implied by the database, even though the underlying data may only be partially structured and described. Apache Hive, part of Hadoop, is an example of this. Hive can natively read text files and interpret them with a specific structure, converting CSV files into columns so that they can be queried by HiveQL, a simplified form of SQL. Hive can also parse more complex data, including CSV that embeds JSON and serialised data structures, all so they can be queried through a familiar interface.

However, unlike a true RDBMS, Hive only interprets the underlying format, and it performs this interpretation every time the data is accessed. At no time does the data have to be translated into Hive format (nor, really, is there one), and no indexes are created to enable quick access to the data.
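
As a loose sketch of the implied-structure idea (in Python rather than Hive itself), the raw file below stays as plain text and the structure, including an embedded JSON column, is applied only at read time; the sample data and field names are assumptions.

# Apply structure to raw text at read time, including an embedded JSON column.
import csv, io, json

raw = io.StringIO(
    'user,tweet,meta\n'
    'someuser,"Loading a CSV into Hive","{""tags"": [""hadoop"", ""hive""]}"\n'
)

for row in csv.DictReader(raw):
    row["meta"] = json.loads(row["meta"])    # interpret the embedded JSON
    print(row["user"], row["meta"]["tags"])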

All of these individual types are wrapped up into what I’ve classed as ‘Big Data’. This is not to say that the data needs to be of a specific size or complexity, only that it may consist of structured or unstructured data, or any variant in between.

Moving data to and from unstructured, semi-structured, and implied structure databases entirely depends on what the information is, what structure is available, and how that structure can be used (or ignored) accordingly.

Keeping the Fiction Flowing

As I announced in my last blog post, I’m starting to publish all of my fiction through Patreon.

There are two books I’m actively publishing right now:

  • NAPE – a sci-fi story featuring a missing artificial intelligence.
  • Kings Courier – a fantasy featuring a boy who is a courier and gets pulled into a deeper role than he ever expected.

In case you missed the previous instalments:

NAPE

Kings Courier

Of course, you can head to Patreon and sign up for the regular updates over there.