Entries from June 2009 ↓
June 25th, 2009 — 2.0
Another event, another post-event wind down. I had limited time at the Enterprise 2.0 conference this year so only got a limited view of what was happening. Some general takeaways anyway:
The SharePoint Factor. This was the title of a session I attended – a good one – put on by Amy Vickers, VP, Global Enterprise Solutions at Razorfish. The session abstract asked the big question, “How does the SharePoint competition stand a chance?” She asked at the beginning of the session how many in the audience were either using or planning to use SharePoint. I think I was one of about five people who didn’t raise a hand. Obviously, it was a session on SharePoint, but still.
In another session, there was a question about integration standards and whether audience members would like to see social software vendors support the JSR portal standards or OpenSocial or what. Silence. Then one attendee raised his hand to say he didn’t care about the standards but as he walked around the show floor looking at all the independent players, all he wants to see is – how does it integrate with SharePoint? The SharePoint factor indeed.
Are portals back? A related and surprisingly lively and interesting topic, especially for me as I spent years covering the portal market and working as a product manager on an enterprise portal product. I’ve heard repeatedly that portals were “dead.” Not so, apparently. It’s been obvious for a while that social software products are starting to look like portals, with UIs turning into somewhat configurable, personalized dashboards with data coming from different underlying tools (e.g., tag cloud, forum posts, wiki activity etc.). But it seems things are going one step further, with products from MindTouch, Telligent and Atlassian all heading more towards portal-like features even if they’re not calling them that (most stick with “platform”). Others, like Bluenog, are pushing the portal idea more explicitly. In any case, this mostly includes the ability to aggregate and/or integrate data or functions from tools and apps outside the purview of the social software vendor.
There was even a portal session with panelists Larry Bowden from IBM and Vince Casarez of Oracle. Here the point being made was that the raison d’etre of portals hasn’t changed — customers are still looking to aggregate services and info for different audience groups in a way that is secure and role-based (if not actually personalized). And that this can be a perfect delivery vehicle for newer social features via an environment users are already familiar with. I’m not sure portals in many cases have had the adoption to make that last statement as true as the portal vendors might like it to be. But there is something to their argument that these newer products don’t necessarily need to reinvent the aggregation, security and delivery mechanisms already found in portals. In any event, it was interesting to see a breath of new life in the portal market. And let’s not forget there’s a portal component in SharePoint…
Use cases not tools. This was another recurring theme I heard across meetings and sessions. We’re thankfully moving beyond the discussion of blogs, wikis and so on, to discuss customer support, sales team effectiveness, innovation management, brand development and the like. This shows some much needed maturity in the market, but also makes it perhaps even more difficult for vendors to differentiate; anyone can sell (or at least try to sell) a use case. There were still a lot of vendors on the show floor this year, though I’d bet fewer than last year (but I don’t have that data), and lots more discussion of profitability, viability and risk.
What will it look like next year? It seems to me part of the growing maturity is the realization that a lot of this social functionality needs to seep into other apps and business processes (that’s part of the portal discussion certainly). I think that makes it harder for discussions or events specifically on “E2.0,” as it is really so many different things depending on what exactly you’re trying to do and why. There will undoubtedly be a show next year, but I wonder for how many years after that? We should remember that this used to be called the Collaborative Technologies Conference, and still so many of the ideas discussed remind me of knowledge management conferences ten years ago. We’ll keep talking about these things, but I’m not sure how much longer under the “2.0” umbrella.
June 23rd, 2009 — Search
Enterprise search company Vivisimo has appointed a new CEO and president as the company aims to scale up. Co-founder and previous CEO Raul Valdez-Perez becomes chairman. John Kealey, the new CEO, has been on the company’s advisory board for the past 18 months and so knows the company well.
In terms of executive positions, he was most recently CEO of iDirect Technologies, which was acquired by Singapore Technologies in 2005, and he held previous positions at SBC Communications. Calderwood, the new president, has worked alongside Kealey for many years; he was a founder and managing partner of Amp Capital Partners, a VC firm in Virginia, and also worked at iDirect. He has many years of software management experience from his time at Baan.
The CEO search apparently started last year and Kealey was an early choice but wasn’t available until now.
We’ve noted before that Vivisimo was being bad-mouthed by its competitors over and above the usual level of FUD you see in this or any other tech market, though we couldn’t figure out why. However, we said then that it seemed misplaced to us, and this move would seem to indicate that the company is going in the right direction. We look forward to getting an update from John in the near future.
June 22nd, 2009 — 2.0, Content management
I want to revisit a few of the relevant questions that came via the webinar I did last week with Bryan House from Acquia on open source social publishing. We got to some of these on the call, but not all, and some of the more market-level questions seem worthy of sharing.
The webinar focused on both the coming together of social software and WCM, and on open source content management; these questions do too.
Q: Why is open source a disruptive force in the social web CMS space?
I started out my part of the preso talking a little bit about The 451 Group and our focus on disruption and innovation in IT. I mentioned this includes disruptive technologies, business models or larger market changes. Open source certainly fits into the disruptive business model category (though, I know, open source is not a business model). Open source can impact how technology in a particular sector is developed, distributed, procured, priced and supported. This isn’t new in content management; open source projects like Drupal have been around for quite some time.
But as more vendors are making a go of businesses tied to open source code in content management, the dynamic is changing. Open source is becoming a viable option in content management for even the largest of organizations, and that is something that is only going to get more pronounced. And some of the open source projects (like Drupal and WordPress) seem to do a particularly noteworthy job of tying CMS and social software capabilities (of varying types) together. That’s an interesting fact, I think, as it shows that when a community drives software development in this area, it combines these two areas, an indication of what the larger market may want.
Q: Tools like Interwoven or Vignette are often described as more “enterprise-ready” than open source alternatives. How big is the delta? How should I evaluate whether particular differences are important?
In general, Interwoven, Vignette et al. have had more of a focus on online marketing capabilities the last couple of years and so have more in the way of content targeting, analytics, multivariate testing and so forth to offer. But I don’t think this is what people usually mean when they say a CMS is “enterprise ready” — I think that’s more to do with things like LDAP/AD support, migration and upgrade tools, platform/commercial database support and so on. The reality is that a lot of commercial open source content management vendors do offer these capabilities but often only in an “enterprise” edition of the code that may only be available under a commercial license. The key is just to ensure that a particular distribution meets your requirements under a license that works for your project.
Q: What questions should I ask a vendor to understand how tightly integrated their social software and web content management capabilities are?
There are several models here. Some vendors have built some social capabilities directly into their WCM products, basically with the idea that most of this as it relates to content sites isn’t too much more than defining a content type (e.g., blog, comment, profile) and its attributes. Some mostly support plugging in third-party blogs, forums etc. Others have separate social software modules. In some cases these have come via acquisitions and others have been built from scratch and so integration levels vary. Some share a content repository and some don’t. So there’s quite a bit of variety and, as usual, it’s mostly just important to make sure however a vendor has done it works for your project. If you just want to add support for comments to an existing content-heavy site, using the integrated features from a WCM vendor probably works fine. If it’s a full-blown, forum-heavy customer support site, more of a stand-alone product (whether from a WCM or social software vendor) might work best.
Q: How will the recent transactions (Vignette & Interwoven) impact this market?
The consolidation at the high end of the market has a number of vendors scrambling to get some advantage. Competitor FatWire Software has a formal “rescue” program and others are certainly having similar discussions with customers. Customers looking to migrate or to evaluate a wider field of WCM options may well look at open source, as the broader availability of products from commercial vendors makes this a more viable option.
June 11th, 2009 — 2.0, Content management
In the midst of a busy month, working through some really intriguing stuff as part of our upcoming Special Report on Information Governance, but I’ll also be part of some interesting upcoming events.
On June 17th, I’ll be in NYC taking part in an event being put on by open source CMS provider Squiz, as part of its US launch. I’ll be presenting on trends in the WCM market with a specific focus on the growth of commercial open source content management. This ties in somewhat with a report I did earlier in the year (for 451 Group clients), “Open source content management: It’s coming to America.” This looked mostly at the trend of European open source CMS providers moving to the US market. Squiz started out in lovely Sydney, Australia but is part of the same trend nonetheless.
Also in the open source realm, on June 18th, I’ll be taking part in a webinar hosted by Acquia, the commercial entity looking to put commercial support and services for Drupal on the map. Here we’ll be discussing open source, certainly, but also the increasing overlap between WCM and social software. This will reprise to some degree the talk I gave on this topic at the AIIM event in Philly earlier this year.
Then of course there is the Enterprise 2.0 show here in Boston, June 23-25. I have limited time at the conference this year unfortunately (my information governance work beckons), but if you’ll be there drop me a line.
June 8th, 2009 — Text analysis
The 2009 Text Analytics Summit was a great time; congratulations to the organizers for once again putting on a terrific event. I heard from one of them that attendance was down 20% from last year, which sounds about right given the economic situation and travel budgets right now, but it didn’t put a damper on the festivities.
Voice of the customer was once again the application that got the most play, from vendors and speakers. However reputation analysis/opinion mining/buzz monitoring – or what was sometimes called social media analysis – was a close second this year, with an eye to the lower-cost offerings springing up in this area to mine blogs and internet forums. Some related points:
Twitter came up several times (it’s everywhere this year, of course), but the prevailing opinion was that it’s not a great resource for text mining: too many misspellings and abbreviations, and just plain not enough text per tweet to get a good read on the content.
Facebook’s Roddy Lindsay was back to offer an overview of some of the projects underway to mine popular topics on the site for insight into its users and how their age, gender and regional demographics affect their views. Unfortunately, as data on Facebook is private to its users and their networks of friends, this was kind of a tease for those of us who would love a bigger peek at it.
In non-social media, another sentiment analysis-focused site, the Financial Times’ recently launched meaning-based news search Newssift, also got some mentions (in part because two of the vendors present, Lexalytics and NStein, were involved in the project).
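The point about tweet length is easy to make concrete: with only a handful of tokens per message, many of them nonstandard spellings, there is very little per-message signal for dictionary- or frequency-based mining. A minimal sketch (hypothetical sample texts and a naive tokenizer; not any vendor’s approach):

```python
import re
from collections import Counter

def tokens(text):
    """Naive tokenizer: lowercase alphabetic runs only."""
    return re.findall(r"[a-z']+", text.lower())

# Hypothetical examples: a tweet-length message vs. a longer review.
tweet = "luv my new fone, batery life is gr8!!"
review = ("I have used this phone for three weeks now. Battery life is "
          "great, the screen is sharp, and call quality is excellent. "
          "My only complaint is that the camera struggles in low light.")

for name, text in [("tweet", tweet), ("review", review)]:
    counts = Counter(tokens(text))
    print(name, "tokens:", sum(counts.values()), "unique:", len(counts))

# The tweet yields only eight tokens, several of them misspellings
# ("luv", "fone", "batery"), so dictionary-based sentiment scoring
# has almost nothing reliable to work with per message.
```

The same naive pass over a few sentences of review text produces several times the vocabulary, which is the “not enough text per tweet” problem in miniature.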
End users were well-represented this year, and I was even fortunate enough to get to moderate the end-user panel, featuring former school superintendent Chris Bowman, Mike House of Maritz Research, Bryan Jeppsen of JetBlue, John Lehto of Monster and Rick Lewis of AOL. The gentlemen weighed in on everything from technical problems (they overwhelmingly chose SaaS to avoid them) to variations on the inevitable ROI question, and provided some much-needed perspective on what end users expect from vendors and their products. Response has been good, and for anyone wanting more, be aware that the ever-quotable Mr. Bowman is now on Twitter and may very well be watching your every move.
June 8th, 2009 — Data management
At last year’s 451 Group client event I presented on the topic of database management trends and databases in the cloud.
At the time there was a lot of interest in cloud-based data management as Oracle and Microsoft had recently made their database management systems available on Amazon Web Services and Microsoft was about to launch the Azure platform.
In the presentation I made the distinction between online distributed databases (BigTable, HBase, Hypertable), simple data query services (SimpleDB, Microsoft SSDS as was), and relational databases in the cloud (Oracle, MySQL, SQL Server on AWS etc) and cautioned that although relational databases were being made available on cloud platforms, there were a number of issues to be overcome, such as licensing, pricing, provisioning and administration.
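As a toy illustration of that distinction (using only Python’s standard library; SimpleDB’s and the vendors’ actual APIs differ), a simple data query service amounts to key/attribute lookups over schemaless items, while a relational database in the cloud is still a full SQL engine:

```python
import sqlite3

# A simple data query service (in the spirit of SimpleDB) treats items
# as schemaless bags of attributes; modeled here with a plain dict.
items = {
    "order-1": {"customer": "acme", "total": "120"},
    "order-2": {"customer": "acme", "total": "75"},
}
# Access is by key or simple attribute match: no joins, no aggregates.
acme_orders = [k for k, v in items.items() if v["customer"] == "acme"]

# A relational database in the cloud is still a full SQL engine with
# schemas, joins and aggregates (and the licensing, pricing and
# provisioning questions that come with it).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [("order-1", "acme", 120.0), ("order-2", "acme", 75.0)])
total = conn.execute(
    "SELECT SUM(total) FROM orders WHERE customer = 'acme'").fetchone()[0]
# total == 195.0
```

The operational issues noted above (licensing, pricing, provisioning, administration) attach mostly to the second model, since it carries the full engine with it.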
Since then we have seen very little activity from the major database players with regards to cloud computing (although Microsoft has evolved SQL Data Services to be a full-blown relational database as a service for the cloud, see the 451’s take on that here).
In comparison, there has been a lot more activity in the data warehousing space with regards to cloud computing. On the one hand, the data warehousing players are later to the cloud; on the other, they are more advanced. For a couple of reasons, I believe data warehousing is better suited to cloud deployments than the general-purpose database.
For one thing, most analytical databases are well suited to deployment in the cloud, their massively parallel architectures being a natural fit for clustered and virtualized cloud environments.
And for another, (some) analytics applications are perhaps better suited to cloud environments since they require large amounts of data to be stored for long periods but processed infrequently.
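The architectural point can be sketched with a toy example: in a massively parallel database each node scans and aggregates only its own partition of the data, and a coordinator merges the partial results, which is why adding commodity cloud nodes scales the workload. This is an illustrative sketch, not any vendor’s engine:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy shared-nothing aggregation: rows are hash-partitioned across
# "nodes", each node computes a local GROUP BY over its own partition,
# and a coordinator merges the per-node partials.
rows = [("widgets", 3), ("gadgets", 5), ("widgets", 2), ("gadgets", 1)] * 25

NODES = 4
partitions = [[] for _ in range(NODES)]
for product, qty in rows:
    partitions[hash(product) % NODES].append((product, qty))

def partial_sum(partition):
    """Per-node work: local GROUP BY product, SUM(qty)."""
    out = {}
    for product, qty in partition:
        out[product] = out.get(product, 0) + qty
    return out

def merge(partials):
    """Coordinator: combine per-node partial aggregates."""
    result = {}
    for partial in partials:
        for product, qty in partial.items():
            result[product] = result.get(product, 0) + qty
    return result

# Each partition could be scanned on a separate cloud node; threads
# stand in for nodes here.
with ThreadPoolExecutor(max_workers=NODES) as pool:
    totals = merge(pool.map(partial_sum, partitions))

# totals == {"widgets": 125, "gadgets": 150}
```

Because the per-node step touches only local data, the same pattern works whether the “nodes” are racked servers or provisioned cloud instances.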
We have therefore seen more progress from analytical than transactional database vendors this year with regards to cloud computing. Vertica Systems launched its Vertica Analytic Database for the Cloud on EC2 in May 2008 (and is working on cloud computing services from Sun and Rackspace). Aster Data followed suit with the launch of Aster nCluster Cloud Edition for Amazon and AppNexus in February this year, and that month also saw Netezza partner with AppNexus on a data warehouse cloud service. The likes of Teradata and illuminate are also thinking about, if not talking about, cloud deployments.
To be clear, the early interest in cloud-based data warehousing appears to be in development and test rather than mission-critical analytics applications, although there are early adopters: ShareThis, the online information-sharing service, is up and running on Amazon Web Services’ EC2 with Aster Data; search marketing firm Didit is running nCluster Cloud Edition on AppNexus’ PrivateScale; and Sonian is using the Vertica Analytic Database for the Cloud on EC2.
Greenplum today launched its take on data warehousing in the cloud, focusing its attention initially on private cloud deployments with its Enterprise Data Cloud initiative and plans to deliver “a new vision for bringing the power of self-service to data warehousing and analytics”.
That may sound a bit woolly (and we do see the EDC as the first step towards private cloud deployments) but the plan to enable the Greenplum Database to act as a flexible pool of warehoused data from which business users will be able to provision data marts makes sense as enterprises look to replicate the potential benefits of cloud computing in their datacenters.
Functionality including self-service provisioning and elastic scalability is still to come, but version 3.3 does include online data-warehouse expansion capabilities and is available now. Greenplum also notes that it has customers using the Greenplum Database in private cloud environments, including Fox Interactive Media’s MySpace, Zions Bancorporation and Future Group.
The initiative will also focus on agile development methodologies and an ecosystem of partners, and while we were somewhat surprised by the lack of virtualization and cloud provisioning vendors involved in today’s announcement, we are told they are in the works.
In the meantime, we are confident that Greenplum’s won’t be the last announcement from a data management vendor focused on enabling private cloud computing deployments. While much of the initial attention around cloud-based data management naturally went to the likes of SimpleDB, the ability to deliver flexible access to, and processing of, enterprise data is more likely to be taking place behind the firewall while users consider what data and which applications are suitable for the public cloud.
Also worth mentioning while we’re on the subject is RainStor, the new cloud archive service recently launched by Clearpace Software, which enables users to retire data from legacy applications to Amazon S3 while ensuring that the data remains available for ad hoc querying using EC2. It’s an idea that resonates thanks to compliance-driven requirements for long-term data storage, combined with the cost of storing and accessing that data.
451 Group subscribers should stay tuned for our formal take on RainStor, which should be published any day now, while I think it’s probably fair to say you can expect more of this discussion at this year’s client event.
June 4th, 2009 — Text analysis
With the 5th annual Text Analytics Summit now in the bag, here are my thoughts on the event.
My Sunday-night talk on which vendor options to choose was, I think at least, well received. There were probably only about 30 people in the room, but all bar about five of them were end users, which is good. The slides are available to anyone who drops me a note, and for those who were there on Sunday, I will get them to you very soon.
That end-user theme carried over to the main conference, where there was without a doubt a higher proportion of end users this year than last. Overall attendance was down slightly, and when I saw the list on Monday morning I was concerned, but more than a third of the attendees were users. That’s much better than last year, when there was often a feeling of vendors pitching to other vendors, which doesn’t help anybody.
A fair few of the end users present were at a very early stage of their assessment, too. Many were merely aware that text analytics can do something for them, but hadn’t engaged properly with any of the vendors. I will be following up with those and the other users I met during the conference as we look to help them evaluate their vendor options.
The end-user panel, moderated well by our own Katey Wood, was interesting as ever. Jon Lehto of Monster.com had some rich insight, and Bryan Jeppsen of JetBlue, now two years into its use of Attensity, explained how it had changed its customer surveys from one open-ended question in 40 (and 39 structured questions) to mostly open-ended, as it now has the power to analyze that text and get insight it would never have received had it had to work out in advance what sort of answers it wanted. Both AOL and JetBlue were able to bypass their IT departments and go with the SaaS versions of their vendors’ products.
The analyst panel, if I’m being honest, was probably a bit flat from the audience’s perspective, as we were agreeing too much. I tried to disagree at one point but didn’t quite clarify what I meant, so I did so in an earlier post. We had a question from the audience from someone at Whirlpool about ROI, which we all struggled with a bit. ROI on text analytics apps is tricky because:
- quite often you’re doing something completely new that you’ve never been able to think of doing before, such as automatically parsing customers’ comments on blogs
- many text analytics apps are quite small and thus don’t often require such an ROI measure
- they’re often part of some sort of competitive or customer intelligence effort that’s much larger and thus the text analytics element itself isn’t subject to ROI.
But clearly for a company with the size of investment Whirlpool has made with text analytics, it’s a valid question and made us all ponder the ROI question a bit more deeply.
Things I thought I’d hear more about but didn’t: cloud and eDiscovery. There were SaaS-based representatives there in the shape of Clarabridge and Attensity for sure, and Clarabridge in particular has some great reference customers willing to speak on its behalf, notably AOL and Intuit. But in terms of true cloud-based text analytics, it’s still too early, and may even be so next year.
I was more surprised not to hear much about eDiscovery. What little I did hear (apart from listening to the sound of my own voice, of course) was from Ernst & Young and its proactive fraud detection work, some of which has been parlayed from previous successful eDiscovery work with clients. That is exactly what we thought would be happening (it’s always good to hear end-user validation of predictions made in research).
Things I thought I’d hear about and did: sentiment analysis. Last year it was the undercurrent of the conference; this year it came very much to the surface. There wasn’t too much difference between a lot of the offerings, and some of the presentations (but by no means all) were a bit too down in the weeds. But there are tons of interesting implementations out there now, although a fair amount of work remains to be done.
Anyway, overall it was well worth it, and I recommend the conference next year to anyone interested in how to leverage text for insight into customers, competitors, risk exposure or all sorts of other business and organizational issues.
June 2nd, 2009 — Text analysis
I made a comment on the analyst panel at the end of day one about the emergence of startups in this space that I wanted to qualify, as it’s caused a bit of confusion here at the Text Analytics Summit. The other three panel members said they are seeing startups, while I said I’m not, and nor are VC customers asking about them in the way they did a few years back. I said that partly to shake up the panel a bit, as we were agreeing on everything until then, which isn’t that interesting for the audience ;), but I meant it in a specific way.
The main area where text analytics-based startups have emerged in the last few years is in sentiment analysis, in areas such as opinion mining, buzz, product/service reviews and advertising targeting. Many of these apps are being used by enterprises for sure.
But what I was referring to is that I’m not seeing companies offering text analytics tools (whether on-premise or on a SaaS or cloud basis) that can be used as the basis of text-aware or search-based applications. I am seeing a lot of demand and interest in those apps from enterprises (our main focus here at 451) but the tools to build them are not coming from startups.
Instead they’re coming mainly from more established search, content management and eDiscovery-focused companies (with one or two notable exceptions, such as Attivio and Digital Reef in the past two years). There is probably room for more startups in this space, that’s for sure.
More on what has been a great conference so far later.