SQL to Hadoop and back again, Part 3: Direct transfer and live data exchange

The third and final article in my series on migrating data between Hadoop and SQL databases is now available:

Big data is a term that has been used regularly for almost a decade, and it, along with technologies like NoSQL, is seen as the replacement for the long-successful RDBMS solutions that use SQL. Today, DB2®, Oracle, Microsoft® SQL Server, MySQL, and PostgreSQL dominate the SQL space and still make up a considerable proportion of the overall market. In this final article of the series, we will look at more automated solutions for migrating data to and from Hadoop. In the previous articles, we concentrated on methods that take exports or otherwise formatted and extracted data from your SQL source, load it into Hadoop in some way, and then process or parse it. But if you want to analyze big data, you probably don’t want to wait while exporting the data. Here, we’re going to look at some methods and tools that enable a live transfer of data between your SQL and Hadoop environments.
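
One of the automated tools in this space is Apache Sqoop, which can pull only the rows added since its last run. As a flavour of the approach, here is a minimal sketch of an incremental import driven from Python; the hostnames, table, column, and credentials are all hypothetical and would need adjusting for your environment.

    # Sketch: incremental Sqoop import, driven from Python (details hypothetical).
    import subprocess

    subprocess.run([
        "sqoop", "import",
        "--connect", "jdbc:mysql://dbhost/sales",        # source MySQL database
        "--username", "hadoopuser",
        "--password-file", "/user/hadoopuser/.sqoop.pw", # keeps the password off the command line
        "--table", "orders",                             # table to transfer
        "--target-dir", "/data/sales/orders",            # HDFS destination
        "--incremental", "append",                       # pull only new rows...
        "--check-column", "order_id",                    # ...judged by this column
        "--last-value", "1000000",                       # highest value already imported
    ], check=True)                                       # raise if Sqoop fails

Run on a schedule, a job like this keeps the Hadoop copy of a table close to live without re-exporting everything each time.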

Read: SQL to Hadoop and back again, Part 3: Direct transfer and live data exchange

SQL to Hadoop and back again, Part 2: Leveraging HBase and Hive

The second article in my series covering big data and SQL interaction is now available:

“Big data” is a term that has been used regularly for almost a decade, and it, along with technologies like NoSQL, is seen as the replacement for the long-successful RDBMS solutions that use SQL. Today, DB2®, Oracle, Microsoft® SQL Server, MySQL, and PostgreSQL dominate the SQL space and still make up a considerable proportion of the overall market. Here in Part 2, we will concentrate on how to use HBase and Hive for exchanging data with your SQL data stores. From the outside, the two systems seem largely similar, but they have very different goals and aims. Let’s start by looking at how the two systems differ and how we can take advantage of that in our big data requirements.
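
To make the difference concrete: HBase offers low-latency reads and writes of individual rows by key, while Hive offers SQL-like, batch-oriented queries across whole tables. Here is a minimal sketch using two third-party Python clients, happybase for HBase and PyHive for Hive; the hosts, table, and columns are hypothetical, and neither library is part of the article itself.

    import happybase             # third-party, Thrift-based HBase client
    from pyhive import hive      # third-party HiveServer2 client

    # HBase: direct access to a single row by key (assumes the Thrift server is running).
    events = happybase.Connection('hbase-host').table('events')
    events.put(b'user1001', {b'd:last_login': b'2013-09-01'})
    print(events.row(b'user1001'))                     # one row, retrieved quickly

    # Hive: a SQL-like batch query that scans the whole table.
    cursor = hive.connect(host='hive-host', port=10000).cursor()
    cursor.execute('SELECT country, COUNT(*) FROM events GROUP BY country')
    for country, total in cursor.fetchall():
        print(country, total)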

Read: SQL to Hadoop and back again, Part 2: Leveraging HBase and Hive

SQL to Hadoop and back again, Part 1: Basic data interchange techniques

I’ve got a new article, the first in a new three-part series, on moving data between SQL and Hadoop, covering both the export of data to Hadoop and the import of processed content back into an SQL store.

In this first part, we look at the basic mechanics of, and considerations for, migrating your data, such as the data format, content, and export techniques.
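
To give a flavour of the export side, here is a minimal sketch that dumps a table to a tab-delimited file ready for loading into HDFS; the table, credentials, and delimiter choice are hypothetical.

    import csv
    import mysql.connector       # MySQL Connector/Python

    # Connect to a hypothetical source database and select the rows to export.
    db = mysql.connector.connect(host='dbhost', user='export',
                                 password='secret', database='sales')
    cursor = db.cursor()
    cursor.execute('SELECT order_id, customer, total FROM orders')

    # Tab-delimited output; tabs are a safer default than commas when
    # free-text fields may themselves contain commas.
    with open('orders.tsv', 'w', newline='') as out:
        writer = csv.writer(out, delimiter='\t')
        for row in cursor:
            writer.writerow(row)
    db.close()

The resulting file can then be copied into Hadoop with hadoop fs -put orders.tsv /data/orders/.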

Read: SQL to Hadoop and back again, Part 1: Basic data interchange techniques

Developing Applications for use with Continuent Tungsten and Tungsten Replicator in SDJ

I’ve just had a new article published in the Software Developers Journal about how you can write applications that take full advantage of Continuent Tungsten and Tungsten Replicator.

As an application developer, there really isn’t a better problem to have than finding that you need to scale up your application, and the database that supports it, to handle increased load. The main bottleneck to most expansion is the database server, and in many modern environments that replication is based around MySQL. Application servers, by contrast, are easy to add to the front end of your environment.
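
One technique for taking advantage of a replicated environment in your application code is read/write splitting: writes go to the master, reads go to a replica. Continuent Tungsten’s connector can take care of this routing for you, but the idea at the application level looks roughly like this sketch, with hypothetical hosts and schema and MySQL Connector/Python standing in for your database driver.

    import mysql.connector       # MySQL Connector/Python

    # Hypothetical hosts: one master for writes, one replica for reads.
    master = mysql.connector.connect(host='db-master', user='app',
                                     password='secret', database='app')
    replica = mysql.connector.connect(host='db-replica', user='app',
                                      password='secret', database='app')

    def run(sql, params=()):
        # Route each statement: reads to the replica, writes to the master.
        is_read = sql.lstrip().upper().startswith('SELECT')
        conn = replica if is_read else master
        cur = conn.cursor()
        cur.execute(sql, params)
        if is_read:
            return cur.fetchall()
        conn.commit()

    run('INSERT INTO visits (page) VALUES (%s)', ('/home',))
    print(run('SELECT COUNT(*) FROM visits'))

In practice you also need to allow for replication lag: a read issued immediately after a write may need to go to the master as well.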

Read: Qt5 – How to Become a Professional Developer – Software Developers Journal (registration/purchase required)

Data Mining in a Document World

As databases evolve, learning how to get the best out of the different solutions available is the key to understanding and extracting the data you need from your chosen data store. Document databases, like MongoDB, CouchDB, Couchbase Server, and many others, provide a completely different model, and a different set of problems for interfacing with and extracting data.

You need to understand your structure, how you can query the information, and how to perform different data mining techniques on what is obviously a completely different structure of information.

In this article, I try to take you through the basics of data mining when using a document database.
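
For instance, with MongoDB (one of the databases mentioned above), an aggregation pipeline does the kind of group-and-summarise work a GROUP BY would do in SQL. A minimal sketch using the pymongo client follows; the collection, fields, and host are hypothetical.

    from pymongo import MongoClient

    # A hypothetical collection of order documents with a flexible structure.
    orders = MongoClient('mongodb://dbhost')['shop']['orders']

    # Top ten customers by spend: match, group, sort, and limit stages.
    pipeline = [
        {'$match': {'status': 'complete'}},
        {'$group': {'_id': '$customer',
                    'spent': {'$sum': '$total'},
                    'orders': {'$sum': 1}}},
        {'$sort': {'spent': -1}},
        {'$limit': 10},
    ]
    for doc in orders.aggregate(pipeline):
        print(doc['_id'], doc['spent'], doc['orders'])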

Read: Data mining in a document world

Data Mining Techniques

I have a new article on the basics of data mining techniques, so that you can better understand some of the key principles behind the different methods of data mining.

From the abstract:

Many different data mining, query model, processing model, and data collection techniques are available. Which one do you use to mine your data, and which one can you use in combination with your existing software and infrastructure? Examine different data mining and analytics techniques and solutions, and learn how to build them using existing software and installations. Explore the different data mining tools that are available, and learn how to determine whether the size and complexity of your information might result in processing and storage complexities, and what to do about it.
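
To make one of these techniques concrete, here is a small, pure-Python sketch of k-means clustering, a common data mining method for grouping similar records; the example is my own illustration rather than code from the article.

    import random

    def kmeans(points, k, iterations=20):
        # Naive k-means: alternately assign points to the nearest centroid
        # and move each centroid to the mean of its assigned points.
        centroids = random.sample(points, k)
        for _ in range(iterations):
            clusters = [[] for _ in range(k)]
            for x, y in points:
                nearest = min(range(k),
                              key=lambda i: (x - centroids[i][0]) ** 2 +
                                            (y - centroids[i][1]) ** 2)
                clusters[nearest].append((x, y))
            for i, cluster in enumerate(clusters):
                if cluster:     # keep the old centroid if a cluster empties
                    centroids[i] = (sum(p[0] for p in cluster) / len(cluster),
                                    sum(p[1] for p in cluster) / len(cluster))
        return centroids, clusters

    # Two obvious groupings; the centroids should settle on the two groups.
    data = [(1, 1), (1.5, 2), (0.5, 1), (8, 8), (8.5, 7.5), (9, 8)]
    centroids, clusters = kmeans(data, 2)
    print(centroids)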

Read: Data Mining Techniques

Document Databases in Predictive Modeling

My latest article on performing predictive modeling using document databases is now available on IBM developerWorks. The abstract:

Predictive analytics relies on processing and analyzing data from many different sources, collating it, and then processing it through several stages into usable data. This involves recording and storing data in different formats, and may require translating information into PMML. Despite the complexity and structure of the information, and sources that often involve traditional RDBMS data, other solutions offer some advantages. We can use the recent range of document-based NoSQL databases to help collate the information in a structured format while coping with the flexible structure of the individual data points. Many NoSQL environments also provide support for extensive map/reduce-type queries and processing, which makes them ideal for processing large volumes of data into a summary format. In this article, we’ll look at the transfer, exchange, and formatting of information in NoSQL environments.
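
The collate-then-summarise step is easier to see with a small example. Here is a pure-Python sketch of the map/reduce pattern the abstract mentions, turning flexibly structured documents into the fixed summary records a predictive model can consume; the data and field names are hypothetical.

    from collections import defaultdict

    # Hypothetical data points from different sources; the fields vary,
    # which is exactly what a document store lets us keep as-is.
    readings = [
        {'sensor': 'A', 'temp': 21.5},
        {'sensor': 'A', 'temp': 22.1, 'humidity': 40},
        {'sensor': 'B', 'temp': 19.8, 'source': 'rdbms-import'},
    ]

    # Map phase: emit (key, value) pairs for the fields we care about.
    mapped = [(r['sensor'], r['temp']) for r in readings if 'temp' in r]

    # Reduce phase: collapse each key's values into one summary record.
    grouped = defaultdict(list)
    for key, value in mapped:
        grouped[key].append(value)
    summary = {k: {'count': len(v), 'mean': sum(v) / len(v)}
               for k, v in grouped.items()}
    print(summary)   # {'A': {'count': 2, 'mean': 21.8}, 'B': {'count': 1, 'mean': 19.8}}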

Read: Document databases in predictive modeling


Using Hadoop and Couchbase

My new article on using Hadoop with Couchbase is available now on the IBM developerWorks site. 

The article tells you how to integrate the massive map/reduce functionality offered by Hadoop with the query functionality offered in Couchbase.
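
One route for this integration is the Couchbase Hadoop connector, which plugs into Sqoop so that moving the data becomes a single Sqoop invocation. A rough sketch, driven from Python, follows; the cluster URL, target directory, and the DUMP table keyword reflect the connector's conventions as I understand them, so treat them as assumptions to verify against your setup.

    import subprocess

    # Sketch: stream everything from Couchbase into HDFS via the Sqoop plugin.
    subprocess.run([
        "sqoop", "import",
        "--connect", "http://couchbase-host:8091/pools",   # cluster REST URL
        "--table", "DUMP",                   # connector keyword: all current items
        "--target-dir", "/data/couchbase/dump",            # HDFS destination
    ], check=True)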

With this article, you also get a live demo of the process in action, and an intro video covering the problems we are trying to solve.

Read: Using Hadoop with Couchbase

Fortunately, the article was also chosen as a feature article for the entire developerWorks site, and came with a cool picture of an elephant sitting on a couch!