HashiCorp was founded by Mitchell Hashimoto and Armon Dadgar in 2012 with the goal of revolutionizing datacenter management – application development, delivery, and maintenance. Since then, the company has been in hyperdrive, with a quick release cycle of in-demand open source developer tools and a rapidly growing community of users around the world. In fact, HashiCorp User Groups (HUGs) recently surpassed 10,000 members in 50+ cities worldwide.
We connected with Armon, co-founder and co-CTO, to learn more about HashiCorp and the products that are driving their growth and exciting their user fanbase.
Q: Tell us about HashiCorp and what led you and Mitchell to found the company.
A: Mitchell and I met at the University of Washington, working on a research project to figure out how we could build a scientific compute cloud, as it was called at the time. The idea was you donate 10 to 20 percent of your background CPU time on your laptop to solve scientific problems and it was a super popular concept in the mid-2000s. We spent a bunch of time trying to build a general-purpose scientific compute cloud and cutting our teeth on how you build cloud infrastructure—this was in the super-early days of what we consider cloud today.
Then we ended up working together at an ad network, where we found we were spending 30 to 50 percent of our engineering cycles building tooling that had nothing to do with an ad network. We were doing cloud provisioning, service discovery, and security, and thought: why are we building this for an ad network? Why doesn’t tooling exist that we can just use off the shelf?
Ultimately, these kinds of nagging questions led us to found HashiCorp. We felt that not only were we reinventing the wheel at that time, but also, when we talked to colleagues at companies of various sizes, the common theme was that everyone was rebuilding the same infrastructure. So, with HashiCorp, we saw an opportunity to build more general-purpose tooling that others could use, allowing them to focus on their core businesses rather than on lower-level infrastructure management.
Q: Did that inspire you to publish the ‘Tao’ that I’ve heard about on infrastructure management? What’s that about?
A: At HashiCorp, in general, our approach is grounded in a very UNIX philosophy. We don’t make one mega tool for doing all infrastructure management — we have many different open source projects that each focus on doing one thing and doing it well.
We always get questions from our community around our ideology and the thinking behind our designs, so we finally published a document called, “The Tao of HashiCorp,” which is basically our design philosophy. It includes several of our core principles. For example, focus on workflows over technology is an important one.
Our fundamental belief is that technology will continue to march forward and innovate and evolve. And yet, workflows mostly stay the same. What I mean by that is we still have to provision our application — at some point we were provisioning a mainframe, then we were provisioning bare metal, then we were provisioning VMs, now we might be provisioning containers in the cloud. So, the specific thing that we are provisioning has changed, but the fact that we have to provision and manage the lifecycle hasn’t. Core workflow is fundamental.
I think the other one that’s super-super important to us is infrastructure as code. How do we actually capture all the details and process about how our applications are packaged and built and delivered in a way that’s codified and version controlled? If we do that, then we get a change history, we know how this has evolved over time, we know who made what changes when, and we can automate it.
So, if you say, “Great, I’m happy with our production, but I want a staging environment,” well, it’s all codified. We can stamp out a staging environment in no time, versus what we jokingly refer to as the “oral tradition” process: there are a few people who know how it’s done, and as you hire new people, you pass it down through lore. But that’s not a scalable way of doing things. In summary, codification is a super important principle for us.
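As a concrete illustration of that codification idea, here is a minimal Terraform-style sketch (everything in it, from the variable name to the AMI ID, is hypothetical, not taken from the interview). Because the environment is just a variable, the same definition that describes production can stamp out staging:

```hcl
# Hypothetical sketch: the same codified definition is reused for
# production or staging by changing a single variable.
variable "environment" {
  description = "Which environment to provision (e.g. production, staging)"
  default     = "staging"
}

resource "aws_instance" "app" {
  ami           = "ami-0abcdef1234567890" # illustrative AMI ID
  instance_type = "t3.micro"

  tags = {
    Name        = "app-server-${var.environment}"
    Environment = var.environment
  }
}
```

Since the file lives in version control, the change history, authorship, and evolution of the environment come along for free, which is exactly the point being made above.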
And there’s a few others in there. So, for people who are interested in our design philosophy, I recommend that they give it a read.
Q: What are the biggest pain points that HashiCorp resolves for developers?
A: There are a few big ones. Going back to that original inspiration, which is: if you’re delivering an application, you have to solve all these problems. There’s no way to deliver an app without figuring out how to provision infrastructure, as an example. So, if you’re going to have to figure it out, how do we at least provide you a tool so that you’re not building your own Terraform? Developers can use a tool rather than waste time building a tool.
We also provide some amount of “opinion-ation” around how infrastructure is managed. Developers live in a complicated world, with lots of moving pieces and details meriting concern. We often hear from our customers that HashiCorp is flexible in that we can fit whatever technology and tools we need into it, but they like having a consistent and somewhat opinionated view of how things should be done to sort of nudge them in the right direction.
The last bit, and I think this goes back to our philosophy of focusing on workflow and not technology, is that we provide developers a standard way to think about tasks and then snap in the different technologies involved. They don’t need separate methods for provisioning a VMware cluster, an Azure cluster, or a Docker application. We try to say, “here’s a consistent way of thinking about provisioning,” and you can simply plug these different technologies into it. This simplifies the way many companies do application delivery and is a big benefit that we often hear about from customers.
Q: Is that a scenario where you see developers using your tools versus cloud providers’ native tools?
A: I think that goes back to the same workflow orientation. Our view is that we really want to make sure that the end user is as successful as possible, no matter what technology they’re choosing. So, we’re fortunate to get to partner with companies like Microsoft and ask, how do we make Azure a first-class citizen across all our tooling? And when a customer says, “I’m on VMware today and I’m moving to support Azure,” it’s not a shift of their entire workflow, their entire process. It’s simple to plug what they’re doing with Azure into Terraform without fundamentally changing anything about how the application is delivered.
And so that capability exists with Terraform and it’s the same story for other workflows, like Vault around secrets management. I don’t want to go from using hardware devices and things that are specific to my private datacenter to suddenly running in Azure and using a totally distinct way of managing my secrets tied to Azure’s tooling. Now think about trying to adopt another environment and then sort of reinventing the wheel for the third time. With Vault, on the other hand, we give you a consistent way of doing it—and then we can plug into Azure when you’re in that environment, and we can plug into other environments as you expand those as well. And so again, it’s a workflow orientation and not a technology orientation.
Q: Let’s take on Nomad as an example. How do you position Nomad with other orchestrators?
A: This ties in with what the value of an orchestrator is, in general. In our view, there are two main values. The first is decoupling the workflow of the developer, who cares about application lifecycle, from that of the operator, who cares about the lifecycle of the operating system, the VM, and the underlying infrastructure. Operators are responsible for security patches and making sure they’re running the latest version of RHEL and all that good stuff. So, that first-level value is decoupling these workstreams and letting developers manage the application lifecycle while operators manage the machine lifecycle.
The second value is bin packing: being able to improve resource utilization. In practice, what we see is very low resource utilization, with one app per VM. An orchestrator can automatically pack multiple applications onto each machine and increase that utilization, which ultimately reduces total cost of ownership.
When we think about Nomad in the context of these values, for application developers, Nomad is flexible whether you’re running a long-lived service, whether you’re running high-scale batch computing, or whether you’re running system-wide agents. We’re agnostic if you’re using Docker or Rocket, running a HyperV VM or a Java application. So, one of the big differentiators for us is the focus on flexibility of the workload and flexibility of the packaging. We have a lot of users who want to massively scale group compute and they’re running old-style C++ static binaries. They’re not looking to containerize. In summary, the flexibility is the appeal for application developers.
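That workload flexibility can be made concrete with a short Nomad-style job sketch (the job name, binary path, and resource figures are all illustrative, not from the interview). The point is that a plain static binary runs through the same job abstraction that a Docker container or a Java application would:

```hcl
# Hypothetical sketch: a batch job running a plain static binary with
# the exec driver, no containerization required.
job "batch-compute" {
  datacenters = ["dc1"]
  type        = "batch"

  group "workers" {
    count = 100 # scale out by changing one number

    task "crunch" {
      driver = "exec" # the same workflow applies to "docker" or "java"

      config {
        command = "/opt/bin/crunch" # illustrative path to a static binary
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}
```

Swapping the driver and config stanzas is all it takes to move between packaging technologies; the surrounding workflow of submitting, scheduling, and scaling the job stays the same.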
Now, for the operators, they want a system that’s simple to operate, with confidence in their ability to manage it at scale. That’s a big focus for us, making sure that Nomad is operationally simple and that it works at scale. We have customers that now run 10,000-node clusters in Nomad where they’re doing massive-scale batch workloads alongside services. And they’re doing that in a way that’s spanning multiple datacenters. All of this is built into the native experience of Nomad. They don’t have to figure out how it does multi-datacenter work, how to scale up to 10,000 nodes, or what happens when they really start putting a load on the system.
I think these end up being core differentiators for Nomad. And, I think other orchestrators focus more on long-running services or smaller-scale deployments than we do.
Q: Let’s talk about community. HashiCorp has over 50 user groups around the world. What do you hope to accomplish this year within these “HUG” [HashiCorp User Group] sessions?
A: I was just in Atlanta, kicking off our inaugural Atlanta HUG. What’s powerful about user groups is you get first-person perspectives about the technologies, which you just don’t get reading up online. You get to build a local community and interact with your friends and colleagues around real problems. “Okay, this is a thing that I’ve heard about on the internet and do I really want to use it? Do other people use it? Can I talk to someone I know about it?” Solutions become much more concrete and practitioners feel like they’re not going it alone. There are other people who are running into the same issues and there are people they can learn from and ask. Our goal with HUGs is to build local communities – so this isn’t an isolated thing that’s only happening within 10 miles of San Francisco. We just surpassed 10,000 members in February and have received a lot of feedback that people are seeing value.
Then, on a global stage, we have two big conferences a year. HashiDays offers product training sessions and deeply technical talks, and it moves between cities; this year it’s in Amsterdam in June. And then there’s HashiConf, our big global user conference, which this year will be in San Francisco in October. These are great opportunities to engage with the community on a bigger scale. If you’re thinking about using Nomad and you’re wondering, “You know, I’ve been running at big scale, is this going to work?”, then you see someone talking about how it goes when they run 5,000 to 10,000 nodes, and it becomes tangible. We try to create this physical community and presence to get people to share best practices and usage patterns in a way that’s much more authentic than if it comes from us.
Q: 2017 was a banner year: you grew from 60 to 160 employees, closed your Series C, and launched new partnerships. What’s fueling your rapid growth, and what achievements were you most proud of last year?
A: I think there’s a convergence of a few different things. Going back to our approach with building community – we take a very organic approach to it, which is: let’s build a tool that we think solves a problem and make it as good of a user experience as we can. And then those communities naturally just take time to grow. It’s a very organic word-of-mouth kind of spread. These tools were launched in 2014 and 2015 and in 2017 a lot of these communities started hitting critical mass, which was driving some of the growth of the company.
2017 was also a pivotal year for us to try and figure out the commercial nature of HashiCorp. What we wanted to avoid is a situation where people thought “HashiCorp built a lot of great open source tools, but too bad they never figured out a sustainable business model.” So, it was important for us to figure out how we could build a solvent business and continue investing in engineering and tooling. 2017 was a big year for figuring out the product-market fit that would allow us to grow and do more.
Q: What’s next? What’s 2018 going to look like?
A: I think 2018 really builds on a lot of the momentum of 2017. What we are feeling acutely is that the communities are growing tremendously fast and we really need to hire just to keep up. So, I expect that we’ll add a few hundred more people this year. There’s a lot of really exciting stuff in terms of the product roadmap and individual products. As we grow, we’re able to dramatically increase the staffing of each project, so it’s fun seeing how much more is getting released with every new version of each tool. There’s a lot of cool stuff in the pipeline…and then a few surprises that we’ll reveal at our conferences this year.
Follow @HashiCorp and @Armon.
HashiCorp with Azure on Channel 9