The Data Day: June 29, 2017

Some of the Fake News Media likes to say that I am not totally engaged in data and analytics. Wrong, I know the subject well.

And that’s the data day, today.

The Data Day: July 22, 2016

What happened in data and analytics this week will leave you speechless

And that’s the data day, today.

The Data Day, A few days: January 30-February 8, 2016

Investment funding for Hadoop and NoSQL in 2015. And more.

And that’s the data day, today.

Neither fish nor fowl: the rise of multi-model databases

One of the most complicated aspects of putting together our database landscape map was dealing with the growing number of (particularly NoSQL) databases that refuse to be pigeon-holed into any of the primary database categories.

I have begun to refer to these as “multi-model databases” in recognition of the fact that they are able to take on the characteristics of multiple databases. In truth, though, there are probably two different groups of products that could be considered “multi-model”:

True multi-model databases that have been designed specifically to serve multiple data models and use-cases

Examples include:
FoundationDB, which is being designed to support ACID and NoSQL, but more to the point in this instance, multiple layers including key-value, document, and object layers

Aerospike, which is planning to combine SQL, key value, and document and graph database technologies in a single database by bringing together its Citrusleaf NoSQL database with the acquired AlchemyDB NewSQL project

OrientDB, which is, at heart, a document database, but can also be used as a graph database; as an object database, making use of the Java persistence API; and as a hybrid database, taking advantage of multiple models to serve different application requirements

ArangoDB, which promises to deliver the benefits of key value and document and graph stores in a single database

Other products that could be considered true multi-model databases are:
Couchbase Server 2.0, which can be used as both a document store and a key value store, as well as a distributed cache

Riak, which is a key-value store, although it can be used as a document store since the value can be a JSON document

NuoDB, which will provide compatibility with other databases by taking on multiple ‘personalities’ – an Oracle personality via PL/SQL compatibility is in the development roadmap, as is a document store personality via JSON support.
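The Riak example above hints at how thin the line between models can be: if the values in a key-value store are JSON documents, document-style operations can be layered on top of the same storage. The following is a minimal illustrative sketch of that idea in Python; it is not any vendor’s actual API, and the class and method names are invented for illustration.

```python
import json

class KVDocumentStore:
    """A key-value store whose values are JSON documents.

    Key-value access (put/get) and document-style access
    (fetching a single field) share one storage back-end.
    """

    def __init__(self):
        self._data = {}  # key -> JSON string

    # --- key-value layer ---
    def put(self, key, value):
        self._data[key] = json.dumps(value)

    def get(self, key):
        return json.loads(self._data[key])

    # --- document layer, built on top of the key-value layer ---
    def get_field(self, key, field):
        return self.get(key).get(field)

store = KVDocumentStore()
store.put("user:42", {"name": "Ada", "city": "London"})
print(store.get("user:42"))                # whole document, key-value style
print(store.get_field("user:42", "city"))  # single field, document style
```

The point is that the “document layer” adds no storage machinery of its own; it simply interprets the values it finds, which is broadly how layered designs such as FoundationDB’s are described.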

General-purpose databases with multi-model options
What’s the difference between multi-model databases and existing general-purpose databases that have optional capabilities for serving multiple models? In my book it’s about being designed for purpose, but I’m sure that will be a debating point for the future. In the meantime, examples include:

Oracle MySQL 5.6, which can support both SQL-based access and key-value access via the Memcached API.

Oracle MySQL Cluster 7.2, which similarly supports concurrent NoSQL and SQL access to the database.

IBM DB2 10, which extends DB2’s hybrid relational and XML engine to enable the storage and management of graph triples, as well as support for the SPARQL 1.0 query language.

Akiban Server, which has the ability to treat groups of tables as objects and access them as JSON documents via SQL.

PostgreSQL hstore, which can be used for storing key-value pairs within a PostgreSQL data field, thereby enabling schema-less queries against data stored in PostgreSQL.

We are also aware of other NewSQL databases that plan to adopt support for popular NoSQL data models, while IBM has also talked about plans to integrate key-value store NoSQL access capabilities with its DB2 and Informix database software.

Other products that could be considered multi-model options include:
Oracle Spatial and Graph, an option for Oracle Database 11g.

One of the drivers of NoSQL database adoption has been polyglot persistence – using multiple databases depending on the specific requirements of individual applications. Multi-model databases contradict this trend, to some extent, so it will be interesting to see whether they begin to gain traction.

While we see the wisdom of selecting the best database for the job, we also recognise that it could sometimes be a matter of choosing the best data model for the job, while relying on a single storage back-end.
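That idea of choosing the data model rather than the database can be sketched concretely: one storage back-end (here just a Python dict) serving both a document view and a graph view. This is a conceptual illustration under invented names, not a real product’s design.

```python
class SingleBackend:
    """One storage back-end (a plain dict) serving two data models."""

    def __init__(self):
        self._kv = {}  # tuple keys namespace the two models

    # document model: store whole records under one key
    def put_doc(self, doc_id, doc):
        self._kv[("doc", doc_id)] = doc

    def get_doc(self, doc_id):
        return self._kv[("doc", doc_id)]

    # graph model: store edges as keys in the same back-end
    def add_edge(self, src, dst):
        self._kv[("edge", src, dst)] = True

    def neighbours(self, src):
        return [k[2] for k in self._kv if k[0] == "edge" and k[1] == src]

db = SingleBackend()
db.put_doc("alice", {"role": "analyst"})
db.add_edge("alice", "bob")
db.add_edge("alice", "carol")
print(sorted(db.neighbours("alice")))  # ['bob', 'carol']
```

Applications pick the model that fits the query, while operations teams still run a single storage engine, which is the trade-off multi-model databases offer against full polyglot persistence.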

Our 2013 Database survey is now live

451 Research’s 2013 Database survey is now live at http://bit.ly/451db13 investigating the current use of database technologies, including MySQL, NoSQL and NewSQL, as well as traditional relational and non-relational databases.

The aim of this survey is to identify trends in database usage, as well as changing attitudes to MySQL following its acquisition by Oracle, and the competitive dynamic between MySQL and other databases, including NoSQL and NewSQL technologies.

There are just 15 questions to answer, spread over five pages, and the entire survey should take less than ten minutes to complete.

All individual responses are of course confidential. The results will be published as part of a major research report due during Q2.

The full report will be available to 451 Research clients, while the results of the survey will also be made freely available via a presentation at the Percona Live MySQL Conference and Expo in April.

Last year’s results have been viewed nearly 55,000 times on SlideShare so we are hoping for a good response to this year’s survey.

One of the most interesting aspects of the 2012 survey results was the extent to which MySQL users were testing and adopting PostgreSQL. Will that trend continue or accelerate in 2013? And what of the adoption of cloud-based database services such as Amazon RDS and Google Cloud SQL?

Are the new breed of NewSQL vendors having any impact on the relational database incumbents such as Oracle, Microsoft and IBM? And how is SAP HANA adoption driving interest in other in-memory databases such as VoltDB and MemSQL?

We will also be interested to see how well NoSQL databases fare in this year’s survey results. Last year MongoDB was the most popular, followed by Apache Cassandra/DataStax and Redis. Are these now making a bigger impact on the wider market, and what of Basho’s Riak, CouchDB, Neo4j, Couchbase et al?

Additionally, we have been tracking attitudes to Oracle’s ownership of MySQL since the deal to acquire Sun was announced. Have MySQL users’ attitudes towards Oracle improved or declined in the last 12 months, and what impact will the formation of the MariaDB Foundation have on MariaDB adoption?

We’re looking forward to analyzing the results and providing answers to these and other questions. Please help us to get the most representative result set by taking part in the survey at http://bit.ly/451db13

Why SAP should march in the direction of ANTs

SAP faces a number of challenges to make the most of its proposed $5.8bn acquisition of Sybase, not the least of which being that the company’s core enterprise applications do not currently run on Sybase’s database software.

As we suggested last week, that should be pretty easy to fix technically, but even if SAP gets its applications, BI software and data warehousing products up and running on Sybase ASE and IQ in short order, it still faces a challenge to persuade the estimated two-thirds of SAP users that run on an Oracle database to deploy Sybase for new workloads, let alone migrate existing deployments.

Even if SAP were to bundle ASE and IQ at highly competitive rates (which we expect it to do) it will have a hard time convincing die-hard Oracle users to give up on their investments in Oracle database administration skills and tools. As Hasso Plattner noted yesterday, “they do not want to risk what they already have.”

Hasso was talking about the migration from disk-based to in-memory databases, and that is clearly SAP’s long-term goal, but even if we “assume for a minute that it really works”, as Hasso advised, there is going to be a long period during which SAP’s customers remain on disk-based databases, and SAP is going to need to move at least some of those to Sybase to prove the wisdom of the acquisition.

A solution may have appeared today from an unlikely source, with IBM’s release of DB2 SQL Skin for Sybase ASE, a new feature for its DB2 database product that provides compatibility with applications developed for Sybase’s Adaptive Server Enterprise (ASE) database. Most Sybase applications should be able to run on DB2 unchanged, according to the companies, while users are also able to retain their Sybase database tools, as well as their administration skills.

That may not sound like particularly good news for SAP or Sybase, but the underlying technology could be an answer to its problems. DB2 SQL Skin for Sybase ASE was developed with ANTs Software and is based on its ANTs Compatibility Server (ACS).

ACS is not specific to DB2. It is designed to support the API language of an application written for one database and translate it to the language of the new database – and ANTs maintains that re-purposing the technology to support other databases is a matter of metadata changes. In fact, the first version of ACS, released in 2008, targeted migration from Sybase to Oracle databases.
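To make the translation idea concrete, here is a deliberately simplified sketch of dialect rewriting, e.g. mapping Sybase’s `SELECT TOP n` and `getdate()` to standard-SQL equivalents. These two rules are invented for illustration; a real compatibility layer such as ACS operates at the wire-protocol and stored-procedure level, not via regular expressions.

```python
import re

# Illustrative rewrite rules only; not ANTs' actual rule set.
SYBASE_TO_STANDARD_RULES = [
    # Sybase getdate() -> standard CURRENT TIMESTAMP
    (re.compile(r"\bgetdate\(\)", re.IGNORECASE), "CURRENT TIMESTAMP"),
    # Sybase SELECT TOP n ... -> SELECT ... FETCH FIRST n ROWS ONLY
    (re.compile(r"\bSELECT\s+TOP\s+(\d+)\s+(.*)", re.IGNORECASE | re.DOTALL),
     r"SELECT \2 FETCH FIRST \1 ROWS ONLY"),
]

def translate(sybase_sql):
    """Apply rule-driven rewrites to a Sybase-flavoured statement."""
    sql = sybase_sql
    for pattern, replacement in SYBASE_TO_STANDARD_RULES:
        sql = pattern.sub(replacement, sql)
    return sql

print(translate("SELECT TOP 3 name FROM staff"))
# SELECT name FROM staff FETCH FIRST 3 ROWS ONLY
print(translate("SELECT getdate()"))
# SELECT CURRENT TIMESTAMP
```

The appeal of the approach, as ANTs describes it, is that retargeting a new source or destination dialect is largely a matter of swapping the rule metadata rather than rewriting the engine.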

Sybase should be pretty familiar with ANTs. In 2008 it licensed components of the company’s ANTs Data Server (ADS) real-time database product (now FourJ’s Genero db), while also entering into a partnership agreement to create a version of ACS that would enable migrations from Microsoft’s SQL Server to Sybase Adaptive Server Enterprise and Sybase IQ (451 Group coverage).

That agreement was put on hold when ANTs’ IBM opportunity arose, and while ANTs is likely to have its hands full dealing with IBM migration projects, we would not be surprised to see Sybase reviving its interest in a version that targets Oracle.

It might not reduce the time it takes to port SAP to Sybase – it would take time to create a version of ACS for Oracle-Sybase migrations (DB2 SQL Skin for Sybase was in development and testing for most of 2009) – but it would potentially enable SAP to deploy Sybase databases for new workloads without asking its users to retool and re-train.

IBM denies plan to open source DB2

ZDNet and its sister sites ran an interesting story yesterday indicating that IBM might be preparing to release its DB2 database under an open source license. If true, it would be a fascinating turn of events that would have a significant impact on the database industry. Unfortunately, it’s not. For more on the speculation and IBM’s denial, see this post over at our CAOS Theory blog.