August 6th, 2009 — Data management
Since the start of this year I’ve been covering data warehousing as part of The 451 Group’s information management practice, adding to my ongoing coverage of databases, data caching, and CEP, and contributing to the CAOS research practice.
I’ve covered data warehousing before but taking a fresh look at this space in recent months it’s been fascinating to see the variety of technologies and strategies that vendors are applying to the data warehousing problem. It’s also been interesting to compare the role that open source has played in the data warehousing market, compared to the database market.
I’m preparing a major report on the data warehousing sector, for publication in the next couple of months. In preparation for that I’ve published a rough outline of the role open source has played in the sector over on our CAOS Theory blog. Any comments or corrections much appreciated.
July 29th, 2009 — Data management
Interesting news from Ingres today that it is teaming up with VectorWise, a database engine spin-off from Amsterdam’s Centrum Wiskunde & Informatica (CWI) scientific research establishment, to collaborate on a new database kernel project.
The Ingres VectorWise project will create a new open source storage engine for the Ingres Database that will better enable it to be positioned as a platform for data warehouse and analytic workloads, although Ingres does not have detailed plans for the productization of the technology at this stage. The starting point for the project is the theory that modern multi-core parallel processors now look and behave like symmetric multiprocessing (SMP) servers, and that on-chip cache memory is taking the place of RAM, but that database software has not been updated to take advantage of these processor developments.
In order to do so, Ingres and VectorWise will be collaborating on vectorized execution, which sees multiple data values processed per instruction, and on in-cache processing, in which execution occurs within the CPU cache and main memory is effectively treated like disk. The result, according to Ingres, is to reduce the I/O bottleneck for query processing. Additionally, the VectorWise engine enables on-the-fly decompression and operation handling in memory, and includes a compressed column store.
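The actual VectorWise engine is a far more sophisticated native implementation, but the batch-at-a-time idea behind vectorized execution can be sketched in a few lines. The sketch below is purely illustrative and assumes nothing about the real engine: it contrasts tuple-at-a-time scanning, where per-call overhead is paid for every row, with vectorized scanning, where each operator call processes a cache-sized batch of values.

```python
# Illustrative sketch (not Ingres/VectorWise code): tuple-at-a-time versus
# vectorized (batch-at-a-time) query execution over a column of values.

rows = list(range(1_000_000))

def scan_tuple_at_a_time(rows, predicate):
    """Classic iterator model: one function call per tuple."""
    out = []
    for r in rows:  # per-tuple interpretation overhead on every row
        if predicate(r):
            out.append(r)
    return out

# Vectorized model: each operator call handles a whole vector of values,
# amortizing interpretation overhead across the batch; the batch size is
# chosen so a vector fits comfortably in the CPU cache.
VECTOR_SIZE = 1024

def scan_vectorized(rows, batch_op):
    out = []
    for i in range(0, len(rows), VECTOR_SIZE):
        out.extend(batch_op(rows[i:i + VECTOR_SIZE]))  # one call per vector
    return out

a = scan_tuple_at_a_time(rows, lambda r: r % 2 == 0)
b = scan_vectorized(rows, lambda batch: [r for r in batch if r % 2 == 0])
assert a == b  # same result; the difference is where the overhead is paid
```

In an interpreted language the amortization is modest, but in a compiled engine the same restructuring exposes loops that compilers can turn into SIMD instructions, which is where the claimed performance gains come from.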
It is claimed that the Ingres VectorWise project will deliver 10x performance increases over the current Ingres database.
VectorWise spun off from CWI in 2008 to commercialize the X100 system previously created by its database architecture research group. Development of X100, now also known as VectorWise, has been led by respected research scientists Peter Boncz and Marcin Zukowski.
Ingres maintains that by working with the CWI research scientists it has proven that their theories are technically feasible in a commercial product. Bringing such a commercial product to general availability is the next step, and history has proven that can be easier said than done. With that caveat we are impressed with the vision and ambition that Ingres is demonstrating.
June 8th, 2009 — Data management
At last year’s 451 Group client event I presented on the topic of database management trends and databases in the cloud.
At the time there was a lot of interest in cloud-based data management as Oracle and Microsoft had recently made their database management systems available on Amazon Web Services and Microsoft was about to launch the Azure platform.
In the presentation I made the distinction between online distributed databases (BigTable, HBase, Hypertable), simple data query services (SimpleDB, Microsoft SSDS as was), and relational databases in the cloud (Oracle, MySQL, SQL Server on AWS etc) and cautioned that although relational databases were being made available on cloud platforms, there were a number of issues to be overcome, such as licensing, pricing, provisioning and administration.
Since then we have seen very little activity from the major database players with regards to cloud computing (although Microsoft has evolved SQL Data Services to be a full-blown relational database as a service for the cloud, see the 451’s take on that here).
In comparison there has been a lot more activity in the data warehousing space with regards to cloud computing. On the one hand the data warehousing players came later to the cloud, but on the other they are more advanced, and for a couple of reasons I believe data warehousing is better suited to cloud deployments than the general purpose database.
For one thing most analytical databases are better suited to deployment in the cloud thanks to their massively parallel architectures being a better fit for clustered and virtualized cloud environments.
And for another, (some) analytics applications are perhaps better suited to cloud environments since they require large amounts of data to be stored for long periods but processed infrequently.
We have therefore seen more progress from analytical than transactional database vendors this year with regards to cloud computing. Vertica Systems launched its Vertica Analytic Database for the Cloud on EC2 in May 2008 (and is working on cloud computing services from Sun and Rackspace). Aster Data followed suit with the launch of Aster nCluster Cloud Edition for Amazon and AppNexus in February this year, and February also saw Netezza partner with AppNexus on a data warehouse cloud service. The likes of Teradata and illuminate are also thinking about, if not talking about, cloud deployments.
To be clear, the early interest in cloud-based data warehousing appears to be in development and test rather than mission critical analytics applications, although there are early adopters. ShareThis, the online information-sharing service, is up and running on Amazon Web Services’ EC2 with Aster Data; search marketing firm Didit is running nCluster Cloud Edition on AppNexus’ PrivateScale; and Sonian is using the Vertica Analytic Database for the Cloud on EC2.
Greenplum today launched its take on data warehousing in the cloud, focusing its attention initially on private cloud deployments with its Enterprise Data Cloud initiative and plans to deliver “a new vision for bringing the power of self-service to data warehousing and analytics”.
That may sound a bit woolly (and we do see the EDC as the first step towards private cloud deployments) but the plan to enable the Greenplum Database to act as a flexible pool of warehoused data from which business users will be able to provision data marts makes sense as enterprises look to replicate the potential benefits of cloud computing in their datacenters.
Functionality including self-service provisioning and elastic scalability is still to come, but version 3.3 does include online data-warehouse expansion capabilities and is available now. Greenplum also notes that it has customers using the Greenplum Database in private cloud environments, including Fox Interactive Media’s MySpace, Zions Bancorporation and Future Group.
The initiative will also focus on agile development methodologies and an ecosystem of partners, and while we were somewhat surprised by the lack of virtualization and cloud provisioning vendors involved in today’s announcement, we are told they are in the works.
In the meantime we are confident that Greenplum’s won’t be the last announcement from a data management vendor focused on enabling private cloud computing deployments. While much of the initial focus around cloud-based data management was naturally on the likes of SimpleDB, the ability to deliver flexible access to, and processing of, enterprise data is more likely to be taking place behind the firewall while users consider what data and which applications are suitable for the public cloud.
Also worth mentioning while we’re on the subject is RainStor, the new cloud archive service recently launched by Clearpace Software, which enables users to retire data from legacy applications to Amazon S3 while ensuring that the data is available for querying on an ad hoc basis using EC2. It’s an idea that resonates thanks to compliance-driven requirements for long-term data storage, combined with the cost of storing and accessing that data.
451 Group subscribers should stay tuned for our formal take on RainStor, which should be published any day now, while I think it’s probably fair to say you can expect more of this discussion at this year’s client event.
March 9th, 2009 — Data management, M&A
Covering the complex event specialists just got 25% easier. We noted in September last year that the complex event processing (CEP) specialists StreamBase Systems, Aleri and Coral8 were attractive acquisition targets and that it would only be a matter of time before we saw consolidation in the event processing sector. Consolidation among those vendors wasn’t exactly what we had in mind, but that is what has come to pass as Aleri has announced the acquisition of Coral8 for an undisclosed fee.
The combined entity, which continues to use the Aleri name, is now claiming to be the largest CEP specialist on the market, although that is debatable and we expect it to be strongly debated by StreamBase and Progress Software’s Apama division.
Here are the numbers to be debated: All of Coral8’s 45 employees are joining Aleri, which will have a combined headcount of 95 and will boast 80 paying customers, less than five of which are existing customers of both companies.
We will have a full assessment of the deal and its implications out later today, but our first impressions are as follows:
While the acquisition of Coral8 by Aleri may appear at first glance like a combination of near-equals the resulting business stands to benefit from complementary product and sales strategies that should bring about cost savings via reduced duplication of effort and enable further expansion outside financial services.
CEP is becoming a core enabling technology for data processing and analysis, and the new Aleri is well positioned to build on its established position in capital markets and exploit partnerships with business intelligence and data warehousing vendors for wider adoption.
February 20th, 2009 — Data management
Recent attempts to reach business event processing vendor Syndera by email proved unsuccessful, and just as I was about to reach out by more traditional means comes speculation that the company has shut down. Certainly www.syndera.com appears to no longer be operational.
We previously noted that Tibco acquired ‘certain assets’ of the real-time BI software vendor for $1m in July, and those continue to be available in the form of the TIBCO Syndera Operation Suite.
As Marco Seiriö notes in his speculation, it is somewhat surprising that the company, which had raised over $20m in VC funding, only managed a return in the region of $1m. A sign of the times or a special case?
February 18th, 2009 — Data management
Back in July last year we reported on the formation of a new open source cloud computing start-up called 10gen on our Cloud Cover and CAOS Theory blogs.
Seven months later and there have been a few changes at 10gen, such that this information management blog is arguably the most suitable venue for discussion of the implications of 10gen’s MongoDB, the cloud computing database which has now become its major focus.
A quick recap: 10gen launched as an open source platform-as-a-service play offering the MongoDB object database as well as an application server and file system. So far, so cloud stack.
However, the file system quickly became an interface layer to MongoDB while the company more recently decided that its application server runtime and MongoDB are better off apart and shifted its attention to the database, a standalone beta version of which was released last week.
As the two projects have diverged so will this post. To continue reading about the future of the Babble application server head for CAOS Theory, otherwise:
As this post from Geir Magnusson Jr, 10gen VP of Engineering & Co-Founder, at Codehaus describes, MongoDB is not your traditional database.
“As I argue when people give me the chance to speak about it, databases are changing – just look at what is available in the so-called “cloud” arena. It tends not to be a RDBMS if it’s scalable. The storage engine under AppEngine, or Amazon’s SimpleDB, or any of the Dynamo implementations, etc, all of which change your programming model to one that isn’t “tables and joins”. Or look at the excellent CouchDB, a JSON store. If the RDBMS isn’t being replaced outright (like it has to be in “the cloud”), it can to be augmented with other persistence technologies that are better suited for a portion of the data requirements of a system.”
This was one of the themes of my talk at our client event in Boston last year, and nothing has happened since then to change my mind. As Geir explains, the interesting thing about the new cloud databases (for want of a better term) is that they force users to think differently about what a database is for – and specifically to think beyond the realms of the relational.
We see similar forces at work in the data warehousing space driven by column-oriented architectures, and the end result is the same, as users increasingly think beyond what they already know to consider the best database management tools for the job at hand.
As Geir adds of MongoDB: “It works fine as a database, but you can’t think relational. If you want to just replace MySQL with something else, but don’t want to rethink your data model, MongoDB isn’t for you.”
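Geir’s point about not being able to “think relational” can be sketched without any MongoDB code at all. The example below uses plain Python dicts, with entirely hypothetical data and function names, to contrast the two data models: in the relational style related data is normalized into separate tables and joined at query time, while in the document style it is embedded in a single document, so one lookup replaces the join.

```python
# Illustrative sketch only (plain dicts, not MongoDB's API): relational
# versus document-oriented thinking for the same data.

# Relational thinking: normalized "tables" joined at query time.
authors = {1: {"name": "Geir"}}
posts = [{"author_id": 1, "title": "Not your traditional database"}]

def posts_by_author(name):
    ids = [k for k, a in authors.items() if a["name"] == name]
    return [p for p in posts if p["author_id"] in ids]  # the "join"

# Document thinking: related data embedded in one self-contained document,
# so a single scan answers the question -- at the cost of denormalization.
documents = [
    {
        "author": {"name": "Geir"},
        "title": "Not your traditional database",
        "tags": ["cloud", "non-relational"],
    },
]

def docs_by_author(name):
    return [d for d in documents if d["author"]["name"] == name]
```

The trade-off is exactly the one Geir describes: the document model scales out more naturally because each document is self-contained, but applications that assume tables and joins have to rethink their data model to use it.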