Automation, devops drive open source deeper in the enterprise

Server provisioning, configuration management and automation are the latest areas where the tech industry is being driven forward, largely by open source software. The leading open source server and IT infrastructure automation frameworks, Opscode Chef and Puppet Labs’ Puppet, sit on the leading edge of significant trends under way in enterprise IT, particularly disruption from cloud computing and devops, where application development and IT operations come together for faster, smoother delivery of software and services.
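The core idea behind frameworks like Puppet and Chef is declarative, idempotent configuration: you describe the state a server should be in, and the tool converges the system to that state, making changes only where reality differs from the declaration. As a rough illustration of that principle (this is not Puppet's or Chef's actual DSL; the resource and function names here are invented for the sketch):

```python
# Minimal sketch of idempotent configuration management: a "resource"
# declares desired state, checks current state, and acts only if they differ.
import os

def ensure_file(path, content):
    """Converge a file to the desired content; do nothing if already correct."""
    current = None
    if os.path.exists(path):
        with open(path) as f:
            current = f.read()
    if current != content:
        with open(path, "w") as f:
            f.write(content)
        return "changed"
    return "unchanged"

# Running the same declaration twice is safe: the second run is a no-op,
# which is what lets these tools be applied repeatedly across a fleet.
```

The idempotency is the point: an admin can apply the same manifest to a thousand servers, or to the same server every half hour, without worrying about double-applied changes.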

I’ve discussed the importance of open source software in cloud computing and in trends such as devops and polyglot programming. Consistently across all of these trends and the technologies that go with them, there are prominent roles for Chef and Puppet.

Read the full article at LinuxInsider.

PuppetConf and the state of devops

It’s been some time now that we’ve been talking about devops, the pushing together of application development and application deployment via IT operations, in the enterprise. To keep up to speed on the trend, 451 CAOS attended PuppetConf, a conference for the Puppet Labs community of IT administrators, developers and industry leaders around the open source Puppet server configuration and automation software. One thing seems clear from the talk about agile development and operations, cloud computing, business and culture: our definition of devops continues to be accurate.

Another consistent part of devops that also emerged at PuppetConf last week was the way it tends to introduce additional stakeholders beyond software developers and IT administrators. This might be the web or mobile folks, sales and CRM people, security professionals or others, but it is typically about applying business operations methodology to applications and IT, thus bringing in more of the business minds as well. The introduction of additional stakeholders was also a theme we heard from Puppet Labs CEO Luke Kanies in his keynote address. Kanies then discussed how the community was working to make Puppet the ‘language of operations,’ which it basically is along with competitors Chef from Opscode and CFEngine when it comes to devops implementations.

There was another interesting point on the PuppetConf stage from DTO Solutions co-founder and President Damon Edwards, who said devops should not be sold as a way to achieve cost savings, but rather as something that will bring return on investment (ROI). This is similar to the shift in open source software drivers we’ve seen in the enterprise, which in some cases are changing from cost and time savings to performance, reliability and innovation.

Later in the conference during his keynote, Eucalyptus Systems CEO Marten Mickos also had some interesting observations concerning devops, which he described as managing the cloud from both sides. One of his points was that developers have the most to learn about operations. While I would agree to some extent, this statement is interesting when considered alongside my contention that most of the change in devops is happening on the IT administrator and operations side. Later in an interview, Mickos elaborated on his devops thinking, indicating the experts who orchestrate applications in cloud computing — both developers and admins — must understand the entire lifecycle and environment. Continuing our comparison of devops to open source, Mickos indicated the open source MySQL database that he helped usher into the enterprise was disrupting old technology, while devops is innovating new technology.

While it remains early days for devops in the case of many enterprise organizations, we continue to see and hear signs that devops practices, technologies, ideas and culture are making their way into more and more mainstream enterprise IT shops. While we expect devops practices to be implemented by many enterprises based on utility and need to leverage cloud computing, we see a higher level of awareness and engagement from leadership and executives than we did with open source software. This means we expect uptake of devops to happen more quickly and to generate more revenue and opportunity.

Time for your cloud gut check

It may be hard for Amazon, any of its users, critics or competitors to find a silver lining in the recent cloud outage that took major sites offline for significant periods over the last week (OK, the critics and competitors are finding plenty), but I see a real upside for all: this has been our latest cloud computing gut check.

Just as we have seen in the case of open source software forks, dissents and competition, these challenges all represent a form of open source discipline that keeps code, communities and vendors ‘honest’ in the sense they must respond to developer and user demands and must also steer a successful path both organizationally and commercially. So while there is no doubt pain and loss from the Amazon outage, it is also a reminder that what does not kill your cloud computing deployment will only make it stronger.

It’s true, the outage illustrates that users and providers are still figuring out cloud computing, and that there is still much learning to be done. It was interesting to see some companies actually sending out press releases regarding how well they and their teams were able to keep their cloud-based environments going through the outage. Indeed, as highlighted recently by our own Tier 1 analysts Jason Verge and Doug Toombs, a number of heavy Amazon cloud users were able to largely sustain the blow of the outage and keep their clouds aloft, including Netflix and Zynga. We can probably assume this kind of thing could happen with a private cloud, too, and if we don’t, we should. Still, the point is that having the right technology, and a team able to leverage it effectively, emerged as a critical differentiator during the Amazon cloud outage.
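The pattern that kept heavy users like Netflix aloft is redundancy across failure domains: when one zone's resources go down, requests fail over to replicas elsewhere rather than failing outright. A hedged sketch of that client-side failover logic (the endpoints and fetch functions here are hypothetical stand-ins, not any provider's actual API):

```python
# Sketch of client-side failover across redundant deployments.
# Each entry in `fetchers` represents a replica in a different failure domain.

def fetch_with_failover(fetchers):
    """Try each redundant endpoint in order; return the first success."""
    last_error = None
    for fetch in fetchers:
        try:
            return fetch()
        except ConnectionError as e:
            last_error = e  # this replica is down; try the next one
    raise last_error  # every failure domain is down
```

The design choice worth noting is that the resilience lives in the application and its deployment, not in the provider: the same logic helps whether the replicas sit in different availability zones of one public cloud or in a private cloud.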

I believe the technology, tasks, procedures and preparedness that separated the winners from the losers here center on ‘devops,’ a term we refer to often that involves the crossing of development, operations and other professionals in modern IT environments that both leverage and provide cloud computing services. Discussion of devops often centers on efficient use of cloud computing resources by both providers and users. Even when we consider ‘no-ops’ or, more accurately, ‘auto-ops’ — whereby systems and operations are abstracted away from developers and users — there is a definite need for knowledge, skill, experience and process when confronting cloud crashes, particularly on the operations side. Devops also represents a more holistic view of software in its environment(s), which is critical to crisis management and recovery for both Amazon and its users. Certainly Amazon and its partners are working hard to restore all of their cloud services to full functionality, but it is very interesting and encouraging to see customers and users applying their know-how and talent to offset downed servers and avoid downtime. It also makes clear why a large organization such as Facebook would benefit from opening up its own datacenters and practices.

From Amazon’s and other providers’ perspectives, this week’s cloud stubbed toe also highlights how communication and reaction are perhaps as critical as the technical aspects of diagnosing what’s wrong and fixing it. Open source software also provides lessons here, indicating vendors and providers are best served by transparency and openness. What the message boards and Twitterverse are telling us now is that users will accept some degree of downtime and difficulty, but they want straight information on how long and how severely they will be down. Just as vendors face a challenge in fairly yet effectively pricing and charging for cloud computing, it may be difficult to provide guidance on recovery from an outage, but the same rules of PR crisis management apply: don’t over-promise and don’t under-deliver.

So just like a fork, leadership crisis or large, proprietary competitor is supposed to wreck an open source project or vendor, the latest cloud crash will finally stifle this cloud hype, bluster and momentum, right? Not quite. I would argue that just like a good fork, feud or megavendor foray into open source software is actually a strengthening, disciplinary measure, the latest cloud coughing will serve as a necessary gut check on cloud computing, thus helping us avoid a cloud bubble.

CAOS Theory Podcast 2011.04.15

Topics for this podcast:

*New CAOS/IM Special Report on database alternatives
*Future of Open Source, Future of Cloud Computing surveys
*Database heavyweights and the new challengers
*VC funding for open source in Q1 2011
*Cloudera and Apache team on Hadoop
*VMware’s Cloud Foundry PaaS, latest on devops

iTunes or direct download (30:14, 5.2MB)

The future of cloud computing is the future for open source

I recently wrote a column about the lack of a cloud computing bubble, even though the hype and marketing levels around the cloud have risen along with innovative technologies and vendors. As we consider what’s next for cloud computing with a survey presented by 451 Group, North Bridge Venture Partners and GigaOm, we will also be able to get a good sense of what’s next for open source software, given the prominence and significance of open source in the clouds.

Given our most recent efforts to track open source software in the enterprise, it is relevant to note that we see a continued, symbiotic relationship between open source and cloud computing. In fact, in many ways, the future of open source depends on the future of cloud computing and vice-versa. One of the symbiotic relationships between open source software and cloud computing is also one of the main reasons I believe both will continue to be a big part of enterprise IT and a big opportunity for vendors and investors: customer enablement. The lessons, practices and community of today’s enterprise IT that have been ushered in by open source – more transparency on the plans for products and code, more flexibility in working with both legacy products and software as well as newer open components, add-ons and combinations, faster development and fewer dead ends via vendor death, acquisition or strategy shift — are being applied to cloud computing. We also see evidence of this customer enablement in the makeup of today’s communities, both open source and non, which include both developers and users/customers.

I continue to have some concern about how open will be open enough, and whether that will truly be open and collaborative enough for these new, customer-enabled cloud communities.

However, I remain convinced that cloud computing may be opening up and, just like open source, is much more than a catch-phrase or hyped-up marketing term. It is central to the continued success, growth and innovation of vendors and users in the key categories I cover, including open source and devops.

Day with DevOps delves into culture, technology and the movement

I’ve written before about all the things that go into a trend we and others are describing as ‘DevOps.’ The subject of a coming 451 Group report, DevOps at its heart represents the intersection and integration of enterprise software development and enterprise software deployment, a.k.a. IT operations. To more closely examine the topic and gather other perspectives on devops, I attended DevOps Days in Mountain View, Calif. last week. Here’s some of what I encountered:

Culture of DevOps

One of the most frequent words, discussions and topics was ‘culture,’ which, like devops itself, consists of many layers, including corporate culture, developer culture, admin and operations culture, management culture and open source culture. One individual cultural difference mentioned centered on how operations pros tend to be specialists in storage, network, Web operations, etc., while developers tend to be more generalized in their tasks and expertise. This was only one perspective, but I think it is one that is fairly common and true. Another cultural point was that sysadmins can be good developers, but aren’t always aware of the best tools and practices, which come more naturally to a software developer. A related point was that devops is the good stuff that good sysadmins have been doing for years. It is true that as we describe and discuss devops, many organizations are recognizing that they are already doing it and already capable of it. Some of the other discussion on culture centered on changing from the bottom-up, which is where we see parallels between devops and open source, or from the top-down, where effective leaders such as those at the conference could steer an organization in the right direction.

The event also provided an opportunity to learn about additional resources, and several acknowledgements were made to Visible Ops, a handbook on improving IT operations written by Tripwire’s Gene Kim, who was in attendance. There was also reference to Leading Geeks by Paul Glen.

On the panel I was on, ‘Making the Business Case,’ I made a point that devops seems to be creeping and seeping into enterprise organizations similar to the way open source did — through the developers and ops people in the trenches, without cost or procurement or policy or awareness from the executives. However, I was reminded by a devop in the audience of the importance of maintaining respect for all of the different stakeholders in devops, which I’ll discuss in a bit. Bottom line, there needs to be trust and respect for devops and opsdev to work, understanding that each role and step in the hopefully improving process is important.

Aside from being a major focus of the conference, culture (and proof you were truly in a big room filled with devs and ops geeks) was also readily apparent with Ignite sessions including a Chief Eff You Officer and a job offer: ‘If you like to write code and hate all crap you have to deal with …’

Stakeholders of DevOps

It doesn’t take long when discussing devops to realize that it involves a lot more than people from dev and people from ops. While these folks, and many in attendance, report bouncing between development and operations, transitioning from one to the other or being hired as an actual devop, there are a number of other people that come into play. Topping the list are the business requirements people who are increasingly shaping the application itself and when/where/how it is deployed. Those keeping track of company, product, team or division success, productivity, responsiveness and improvement are also involved in devops, and tracking and proving efficiency and performance are critically important. Of course, at some point, the leaders and executives of a company need to be involved, but it also goes the other way to users, who may be the first judges of devops as they give code and processes their first real-life tests. There are yet more stakeholders in devops, including security pros, which leads to another class of devops: secops.

Of course, like open source, devops probably wouldn’t amount to much if it didn’t have some champions, and one individual who seemed not only ready, but excited to carry the torch is Dan Nemec, who provides some of his thoughts on devops at Geeks Gone Mad. Canonical’s Clint Byrum also told a common story of working both sides – dev and ops – to understand the challenges and rewards of both groups and how they might both be better off in it together.

Another stakeholder that I hear about in devops is the single CIO, VP of Products or VP of Product Quality or other title who is in charge of an enterprise application from development to deployment, or as heard at the conference, ‘from code to cash.’ The many stakeholders that can have a hand in devops also highlight the need for a proper handoff, whether code and communication are going from dev to op, op to dev or back again.

Lastly on stakeholders, VMware’s Javier Soltero, formerly CEO of Hyperic before it was acquired by SpringSource before it was acquired by VMware, made the point that the different people of devops have different tools and responsibilities, and it often comes down to paying enough money for the best people and keeping them.

Technology of DevOps

While much of the discussion was about culture and people, this is still the software and IT industry, and there was no shortage of discussion regarding technology. Attendees seemed to agree that technology and tools are perhaps most effective at changing culture and improving both software development and deployment.

Hitting on a common theme of the need for predictability, OmniTI Founder and CEO Theo Schlossnagle stressed that a devop, or anyone for that matter, should never wonder what’s wrong with an application, but rather that the application should tell you what’s wrong. Automation also figured frequently into the discussion and serves as a critical underpinning for devops, at least successful devops. Along with automation comes its elder cousin, agile, which also brings web application development and lightweight application development, leading some to wonder whether devops is actually WebOps. What really got a reaction was when, late in the day, someone on the panel talked about giving developers root access to the systems, to which one enthused devop, or op-dev, or some guy replied: ‘Yeah!’
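Schlossnagle's point — that an application should report what's wrong rather than leave its operators guessing — is typically realized as self-instrumentation: the application runs its own named health checks and reports exactly which ones fail. A minimal sketch of the idea (the check names and report format are hypothetical, not any particular tool's):

```python
# Minimal sketch of application self-instrumentation: the app runs its own
# health checks and names exactly which ones fail, so nobody has to guess.

def run_health_checks(checks):
    """Run named check functions; return a report mapping name -> status."""
    report = {}
    for name, check in checks.items():
        try:
            check()  # a check raises an exception to signal a problem
            report[name] = "ok"
        except Exception as e:
            report[name] = "failing: %s" % e
    return report
```

An ops team can wire such a report into monitoring, so an alert says "the database check is failing: connection refused" rather than merely "the app is down."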

The bottom line for the technology of devops is that it must save both sides time while removing the hoops through which they must jump. Devops do need to be able to tackle issues and do the work of devops, but they must also be able to prove their improvement and justify it, thus emphasizing the need for monitoring and tracking, which has long been an integral part of devops.

Throughout the presentations, there were references to devops communities and tools groups, including devops-toolchain.

Future of DevOps

As we continue to prepare our report on the topic, we have little doubt about the current and coming impact of devops. There will be more sharing of culture, code and practices. Open source software is one thing both devs and ops seem to already have in common, and we’ll also be watching that part of the story. In addition to sharing open source software tools for development and operations, these groups are also sharing open source practices such as collaboration, transparency and speed, all of which can contribute to successful devops.

We also expect trends such as virtualization and cloud computing to both encourage devops by facilitating more communication, co-management and mutual consideration and to also force devops by putting time, quality, uptime and other pressures on both devs and ops.

It’s here and it’s spreading. Have you thought about devops?