<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Helm &#8211; Microsoft Open Source Blog</title>
	<atom:link href="https://cloudblogs.microsoft.com/opensource/tag/helm/feed/" rel="self" type="application/rss+xml" />
	<link>https://cloudblogs.microsoft.com/opensource</link>
	<description>Open dialogue about openness at Microsoft – open source, standards, interoperability</description>
	<lastBuildDate>Wed, 01 Sep 2021 18:16:48 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
	<item>
		<title>Announcing Akri, an open source project for building a connected edge with Kubernetes</title>
		<link>https://cloudblogs.microsoft.com/opensource/2020/10/20/announcing-akri-open-source-project-building-connected-edge-kubernetes/</link>
					<comments>https://cloudblogs.microsoft.com/opensource/2020/10/20/announcing-akri-open-source-project-building-connected-edge-kubernetes/#comments</comments>
		
		<dc:creator><![CDATA[Kate Goldenring]]></dc:creator>
		<pubDate>Tue, 20 Oct 2020 15:00:26 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[Go]]></category>
		<category><![CDATA[Helm]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Rust]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2020/10/20/announcing-akri-open-source-project-building-connected-edge-kubernetes/</guid>

					<description><![CDATA[<p>Overview One of the hallmarks of "the edge" in computing is the array of sensors, controllers, and microcontroller unit (MCU) class devices that produce data and perform actions. For Kubernetes to be a versatile edge computing solution, a cluster needs to easily find these leaf devices. Most of these devices, however, are too small to<span><a class="read-more" aria-label="Read more about Announcing Akri, an open source project for building a connected edge with Kubernetes" href="https://cloudblogs.microsoft.com/opensource/2020/10/20/announcing-akri-open-source-project-building-connected-edge-kubernetes/" data-bi-cn="Read more about Announcing Akri, an open source project for building a connected edge with Kubernetes">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2020/10/20/announcing-akri-open-source-project-building-connected-edge-kubernetes/">Announcing Akri, an open source project for building a connected edge with Kubernetes</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>Overview</h2><p>One of the hallmarks of "the edge" in computing is the array of sensors, controllers, and microcontroller unit (MCU) class devices that produce data and perform actions. For Kubernetes to be a versatile edge computing solution, a cluster needs to easily find these leaf devices. Most of these devices, however, are too small to run Kubernetes themselves. How can they be leveraged by a Kubernetes workload? How can a Kubernetes Pod find or access their outputs? Akri is the answer!</p><p>Today, Microsoft is proud to share the Akri project, designed to electrify the edge for Kubernetes users. <a href="https://aka.ms/akri">Akri</a> is a new open source project that exposes these leaf devices as resources in a Kubernetes cluster. Akri, meaning "edge" in Greek, can also be read as an acronym, as Akri aims to be "A Kubernetes Resource Interface for the edge."</p><p>It provides an abstraction layer, similar to the Container Network Interface (CNI), but instead of abstracting the underlying network details, it removes the work of finding, utilizing, and monitoring the availability of leaf devices such as cameras and sensors.</p><p>Akri leverages and extends <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md">the Kubernetes device plugin framework</a>, which was originally created to focus on advertising static resources such as GPUs and other system hardware. Akri takes this framework and applies it to the edge, where there is a diverse set of leaf devices with unique communication protocols and intermittent availability. Akri continually detects nodes that have access to leaf devices and schedules workloads for them. 
Simply put: you name it, Akri finds it, and you use it.</p><h2>Architecture and functionality</h2><img loading="lazy" alt="Sonar Architecture single cluster devices diagram" width="1024" height="573" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2020/10/SonarArchitecture-single-cluster-devices-v2-1024x573.webp"><p>Akri is Kubernetes-native. Its architecture is made up of four key Kubernetes components: two custom resource definitions (CRDs), a device plugin implementation, and a custom controller. The first custom resource, the Akri Configuration, is where "you name it," telling Akri the kind of leaf device you want to discover. Then, "Akri finds it," as the Akri Agent, a Kubernetes device plugin framework implementation, searches for the leaf devices, checking for the availability of your desired ones. Once your device has been discovered, the Akri Controller helps "you use it." It sees each Akri Instance, which represents a leaf device, and deploys a "broker" pod that knows how to connect to the leaf device and utilize it.</p><h2>Built to make the edge come alive</h2><p>Akri is made for the edge, handling the dynamic appearance and disappearance of leaf devices. A user simply has to apply an Akri Configuration to a cluster, specifying the discovery protocol, for example, the Open Network Video Interface Forum (ONVIF) protocol, and the pod that should be deployed upon discovery, such as a video frame server.</p><p>Then, Akri does the rest. A user can also allow multiple nodes to utilize a leaf device, thereby providing high availability in the case where a node goes offline. Furthermore, Akri will automatically create a Kubernetes service for each type of leaf device (or Akri Configuration), removing the need for an application to track the state of pods or nodes.</p><p>Most importantly, Akri was built to be extensible. The project currently has ONVIF and udev discovery protocols, but more can be easily added by community members like you. 
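To make the "apply an Akri Configuration" step concrete, a Configuration for ONVIF cameras might look roughly like the sketch below. The apiVersion, field names, and image are illustrative, loosely based on the Akri documentation of the time; consult the project docs for the exact schema.

```yaml
apiVersion: akri.sh/v0
kind: Configuration
metadata:
  name: akri-onvif-video
spec:
  protocol:
    onvif: {}          # "you name it": discover ONVIF cameras on the local network
  capacity: 2          # how many nodes may use a single camera at once
  brokerPodSpec:       # "you use it": the broker pod deployed per discovered camera
    containers:
      - name: onvif-video-broker
        image: example.azurecr.io/onvif-video-broker:latest  # illustrative image
```

In practice the Akri Helm chart can generate a Configuration like this for you; the Agent then creates an Instance per discovered camera, and the Controller schedules a broker pod against it.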
The more protocols Akri can support, the wider an array of leaf devices Akri can discover.</p><h2>Learn more and contribute</h2><p>You can easily deploy and test Akri today on your edge cluster using <a href="https://k3s.io/" target="_blank" rel="noopener noreferrer">K3s</a>, <a href="https://microk8s.io/" target="_blank" rel="noopener noreferrer">MicroK8s</a>, <a href="https://docs.microsoft.com/en-us/azure-stack/aks-hci/" target="_blank" rel="noopener noreferrer">AKS-HCI</a>, or any other certified Kubernetes distribution. To jump into using Akri, try our <a href="https://aka.ms/akri/e2e">end-to-end demo</a>, which discovers mock video devices and ultimately displays footage from the cameras on a streaming application. Want to learn more? Check out our <a href="https://aka.ms/akri/docs">documentation</a> or hear from us at the <a href="https://edgeconference.io/">Edge Conference</a> on October 21, where we will be presenting this technology.</p><p>This project elevates the power of Kubernetes on the edge and is intended to reside in a community-governed foundation in the Kubernetes ecosystem. With your input and help, we can get there. Is there a device you want to use that Akri doesn't have a discovery protocol for? Join us on <a href="https://aka.ms/akri/channel">Slack</a>, create an issue on <a href="https://aka.ms/akri">GitHub</a>, or implement the new discovery protocol with the Akri community. We're excited to continue to refine and extend Akri so that everyone can benefit from this project.</p><p>We are eager to see what you discover with this project and how it evolves. 
Together, we can build a more connected edge with Kubernetes.</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2020/10/20/announcing-akri-open-source-project-building-connected-edge-kubernetes/">Announcing Akri, an open source project for building a connected edge with Kubernetes</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://cloudblogs.microsoft.com/opensource/2020/10/20/announcing-akri-open-source-project-building-connected-edge-kubernetes/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Is there a Helm and Operators showdown?</title>
		<link>https://cloudblogs.microsoft.com/opensource/2020/04/02/when-to-use-helm-operators-kubernetes-ops/</link>
					<comments>https://cloudblogs.microsoft.com/opensource/2020/04/02/when-to-use-helm-operators-kubernetes-ops/#comments</comments>
		
		<dc:creator><![CDATA[Matt Butcher]]></dc:creator>
		<pubDate>Thu, 02 Apr 2020 18:26:49 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Helm]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2020/04/02/when-to-use-helm-operators-kubernetes-ops/</guid>

					<description><![CDATA[<p>The questions started around KubeCon San Diego. Maybe because we had just released Helm 3. Or, maybe because a few operator tools had been put up for adoption by CNCF. Whatever the cause, I started receiving questions about Helm and operators. And most of the questions seemed to imply that these two technologies were engaged in<span><a class="read-more" aria-label="Read more about Is there a Helm and Operators showdown?" href="https://cloudblogs.microsoft.com/opensource/2020/04/02/when-to-use-helm-operators-kubernetes-ops/" data-bi-cn="Read more about Is there a Helm and Operators showdown?">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2020/04/02/when-to-use-helm-operators-kubernetes-ops/">Is there a Helm and Operators showdown?</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The questions started around KubeCon San Diego. Maybe because we had just released Helm 3. Or maybe because a few operator tools had been put up for adoption by CNCF. Whatever the cause, I started receiving questions about Helm and operators. And most of the questions seemed to imply that these two technologies were engaged in an epic duel.</p><p>At first, I was bewildered by this comparison. It was as if people were suggesting that this year's Super Bowl would feature a showdown between FC Barcelona and the New York Yankees. But a few months into the new year, I am still being asked variations of the same question: "Who is going to win: Helm or operators?" This is my answer.</p><p>To start with, let's dive into the purpose of each technology. What problems does Helm solve? What about operators? From there, we'll look at the areas of overlap. In the end, we'll turn back to the question and ask which technology "wins."</p><h2>Helm: the package manager for Kubernetes</h2><p>The very first version of Helm was <a href="https://github.com/helm/helm-classic/tree/0.0.1">released on Nov. 2, 2015</a>. Kubernetes was at version 1.1.0 and the <a href="https://ti.to/kubecon/kubecon-2015/en">very first KubeCon</a> was about to take place. But even in these early days, Helm <a href="https://github.com/helm/helm-classic/tree/0.0.1/docs">proclaimed its vision</a>:</p><blockquote><p>Helm provides package management for Kubernetes</p></blockquote><p>We published an <a href="https://github.com/helm/helm-classic/blob/0.0.1/docs/architecture.md">architecture document</a> that explained how Helm was like <a href="https://brew.sh/">Homebrew</a> for Kubernetes.</p><p>From the earliest days, Helm was intended to solve one big problem: <i>How do we share reusable recipes for installing (and upgrading and uninstalling) things on Kubernetes?</i> We looked at operating system package managers like Homebrew, Apt, RPM, and Chocolatey, and we saw parallels in Kubernetes. Even up through Helm 3, this has consistently been our vision.</p><p>Today, Helm has over a million downloads a month, and we are aware of dozens of Helm-based tools and thousands of publicly available charts. The conclusion I draw from this is that Helm, as a package manager, has been a success.</p><h2>Operators: a design pattern for operational knowledge</h2><p>Almost exactly one year after Helm 0.0.1, Brandon Philips, then CTO of CoreOS, posted one of the most brilliant blog posts of the Cloud Native era: <a href="https://coreos.com/blog/introducing-operators.html">Introducing Operators: Putting Operational Knowledge into Software</a>.</p><p>Philips pointed out that we often relied upon humans to manage the runtime needs of applications. But with a system like Kubernetes, much of the material once set down in run books and user manuals could now be transformed into code.</p><blockquote><p><em>An Operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex stateful applications on behalf of a Kubernetes user. It builds upon the basic Kubernetes resource and controller concepts but includes domain or application-specific knowledge to automate common tasks.</em></p></blockquote><p>CoreOS illustrated this new design pattern in an operator designed to manage an etcd cluster.</p><p>Operators contained the institutional knowledge necessary to manage some, if not all, of the operational aspects of a workload inside of Kubernetes. As Philips suggests:</p><blockquote><p><em>An Operator builds upon the basic Kubernetes resource and controller concepts and adds a set of knowledge or configuration that allows the Operator to execute common application tasks. For example, when scaling an etcd cluster manually, a user has to perform a number of steps: create a DNS name for the new etcd member, launch the new etcd instance, and then use the etcd administrative tools (etcdctl member add) to tell the existing cluster about this new member. Instead with the etcd Operator a user can simply increase the etcd cluster size field by 1.</em></p></blockquote><p>Philips is absolutely correct: SREs and DevOps engineers spend far too much time manually re-running sequences of commands. The operator design pattern provides a compelling solution: write per-application tooling that codifies common management tasks.</p><p><span>Over the last few years, the notion of an operator has matured, partly due to the completion of Custom Resource Definitions (CRDs) and partly due to the Kubernetes API maturing. 
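Philips' etcd example can be sketched as a custom resource: with the etcd operator running, scaling the cluster is a one-field edit rather than a run book of manual steps. The shape below follows the CoreOS etcd operator examples of that era; treat the exact apiVersion and field names as illustrative.

```yaml
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3          # raise to 4 and the operator handles DNS, launch, and member add
  version: 3.2.13  # the operator can also drive rolling version upgrades
```

Applying this with kubectl is all the user does; the operator's controller watches the resource and reconciles the running cluster toward it.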
These days, authors of operators are likely to talk about how they wrote CRDs and custom controllers to manage applications. While the terminology has changed slightly, Philips' vision is very much still at the heart of today's operators.</span></p><p>Today, there are more than 100 operators available, at varying degrees of stability. Leaders in the cloud-native space, including Red Hat, IBM, and Microsoft, have released operators. The operator pattern is clearly a successful part of the cloud-native ecosystem.</p><h2>So, what's the problem?</h2><p>Helm is a package manager for Kubernetes. Operators are design-pattern-driven pieces of code that encapsulate knowledge for running an application. Yet as I noted at the article's outset, there are questions floating around about which one is "the winner."</p><p>With two markedly different technologies, why are we attempting to pit them against one another? Is this not like asking which sports team is better, the New York Yankees or FC Barcelona? In many discussions with a variety of people, I began to understand why people have arrived at the conclusion that Helm and operators are competitors. There are two causes:</p><ul><li>The two technologies share a set of common terms, such as "installation," "management," and "lifecycle."</li><li>If stretched to its boundaries, each tool can be made to do some semblance of the other tool's job.</li></ul><h2>Same terms, different meaning</h2><p><span>For starters, let's cover the terminology issue. A skim through the documentation for each project will turn up a number of common terms. For example, both will talk about installing or creating resources inside of a Kubernetes cluster. 
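On the Helm side, that shared vocabulary maps onto the Helm 3 CLI. A minimal sketch of the find-configure-install-manage workflow (the repository, chart, and release names are placeholders, and these commands need a reachable cluster to actually run):

```shell
# Locate an off-the-shelf package
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo nginx

# "Install" in the Helm sense: pass configuration, never hand-edit Kubernetes YAML
helm install my-site bitnami/nginx --set replicaCount=2

# "Management" in the Helm sense: inspect, upgrade, roll back, delete releases
helm list
helm upgrade my-site bitnami/nginx --set replicaCount=3
helm rollback my-site 1
helm uninstall my-site
```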
While the terms are the same, though, the meaning is subtly different.</span></p><p>When Helm users talk about installing a thing, we mainly mean something like this:</p><p><em>I want to find a chart and be able to pass in some configuration and have that chart installed into my cluster without ever having to edit Kubernetes YAML.</em></p><p>To that end, Helm has focused on a standard packaging format, a template language for parameterization, and a system designed to easily locate and install off-the-shelf packages.</p><p>When the user of an operator talks about installing, what they mean is more like this:</p><p><em>I want to create a resource or resources as Kubernetes YAML and have those things spin up and maintain an application for me.</em></p><p>Thus, if you look at the <a href="https://github.com/coreos/etcd-operator">instructions</a> for installing the etcd operator, you will see that you are given guidelines and examples for creating your own Kubernetes YAML file, which will (when installed into the cluster by another tool) create and maintain an etcd cluster.</p><p>When Helm users talk about "management," they are thinking of being able to use a simple tool to see what applications are running and which resources belong to whom, and then perhaps be able to upgrade, roll back, or delete individual applications.</p><p>In contrast, when it comes to operators, "management" is often used to talk about the "day two ops" of an application: managing data integrity, scaling an application up or down, or automatically recovering from a failure.</p><p>We could go on to other terms, but the core message is clear: We have a limited (and frequently overloaded) vocabulary that sometimes causes us conceptual headaches. But when we talk it out, we realize that we're dealing with different ideas.</p><h2>Push to the limits</h2><p>While both Helm and the operator pattern have their sweet spots, we can definitely push our tools beyond their intended use cases. In fact, we might be able to push the Helm chart metaphor into performing some operator-like tasks, or write an operator that also does some of Helm's package management tasks for itself. In so doing, we might be able to better compare the two technologies. But this is somewhat like asking the soccer player and the baseball player to engage in a kicking contest to see who wins. One is definitely better prepared.</p><p>I have seen some truly remarkable Helm charts. One, in particular, was over one megabyte of YAML and could orchestrate hundreds of components configured in an innumerable variety of ways. The chart could not only be used to install things, but also to repair broken clusters and keep all of these systems in sync. (It also used a special chart installer to encapsulate some extra logic.) As amazed as I am by this chart, I do see it as pushing beyond the bounds of what Helm is designed to do.</p><p>I know of some operators as well that push up against Helm on the installation and upgrade story, with custom installers that execute in-cluster (sort of like <a href="https://cnab.io/">CNAB packages</a>) and provide a workflow similar to Helm's install/upgrade/delete story. Again, there is nothing wrong with this. But it stretches well beyond Brandon Philips' definition of an operator.</p><p>In both cases, though, we must ask ourselves: are we stretching our tools because it is the best overall strategy? Or are we stretching our tools because we have gotten dogmatic about them (or, perhaps, because we fear learning new tools)? We do the Kubernetes community a disservice when we become so entrenched in our chosen tools that we start rewriting them to repel other tools.</p><p>Operators and Helm charts have been working together since the early days of these technologies. For example, there are currently around <a href="https://hub.helm.sh/charts?q=operator">sixty operators installable via the Helm Hub</a>. There are even a few Helm operators (notably WeaveWorks' <a href="https://github.com/fluxcd/helm-operator">Flux operator for Helm</a>) in which Helm functionality is provided using an operator that links directly to the Helm APIs. These are clear indications that the two technologies can work well in concert without having to push either past its limits.</p><h2>Do we have a winner?</h2><p>One might be tempted to look at the numbers and draw conclusions. Helm has more charts, more available tooling, and more users. But this would lead to an incorrect conclusion. After all, Helm is designed to make it easy to write charts and distribute them. Operators, in contrast, are tremendously difficult to write, because by design they encapsulate complex operational knowledge. While the typical Helm chart is a few hundred lines of YAML, the typical operator is thousands of lines of code. Thus we would expect operators to evolve slowly, with emphasis on covering the most widely used services.</p><p>Conversely, one might be tempted to look for a win based on a notion of "architectural purity." On more than one occasion, I have heard fans of operators tout them as <i>superior</i> to Helm charts because they are based on CRDs and controllers (two central architectural features of Kubernetes). Helm is, in this estimation, "just templated YAML." This argument also isn't decisive. It ignores whether a problem is solved and begins with an argument by fiat that the solution to the problem <i>must</i> use controllers and CRDs. The problem Helm tries to solve is not better solved with CRDs and controllers (though there are available tools that <a href="https://github.com/fluxcd/helm-operator-get-started">use controllers to install Helm charts</a>). Neither CRDs nor operators are necessary pieces for telling the package management story. Adding them simply increases the complexity (and the attack surface) of a package management system.</p><p>Again, it feels like we're back to the question of which sports team is best, which leads us to a concluding analogy.</p><p>In the early 2000s, artist Howard Schatz published a book of photographs of top athletes from a multitude of sports. From gymnastics to basketball to sumo wrestling, Schatz posed them standing side by side. Each of these athletes had honed their bodies to achieve prowess in their chosen sport. Yet the contrast in size and shape could not have been more pronounced. Could we have looked at those athletes, picked one, and said, "you are the best athlete"? Not in any meaningful way. The best we could do is say, "you are the best <i>in your sport</i>."</p><p>Operators have a different focus than Helm. Each has strengths and weaknesses. And there is definitely some overlap between the two. But we should be unsurprised to see the two tools work well together. After all, they are complementary rather than opposed.</p><p>Collectively, we would be better served by forgoing the mentality of dueling technologies. Instead, we should focus on using the combination of these technologies to truly make Kubernetes an easier platform to operate.</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2020/04/02/when-to-use-helm-operators-kubernetes-ops/">Is there a Helm and Operators showdown?</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://cloudblogs.microsoft.com/opensource/2020/04/02/when-to-use-helm-operators-kubernetes-ops/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>Empowering cloud-native developers on Kubernetes anywhere</title>
		<link>https://cloudblogs.microsoft.com/opensource/2019/11/19/microsoft-kubecon-2019-announcements/</link>
		
		<dc:creator><![CDATA[Brendan Burns]]></dc:creator>
		<pubDate>Tue, 19 Nov 2019 18:16:03 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Azure Kubernetes Service]]></category>
		<category><![CDATA[Helm]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2019/11/19/microsoft-kubecon-2019-announcements/</guid>

					<description><![CDATA[<p>Hello KubeCon and welcome to San Diego! It&#8217;s fantastic to have the chance to get some warm California sun, as well as the warmth of the broader Kubernetes community. From the very first community meeting, through the first KubeCon and on to today, it&#8217;s been truly amazing to have been able to watch and help<span><a class="read-more" aria-label="Read more about Empowering cloud-native developers on Kubernetes anywhere" href="https://cloudblogs.microsoft.com/opensource/2019/11/19/microsoft-kubecon-2019-announcements/" data-bi-cn="Read more about Empowering cloud-native developers on Kubernetes anywhere">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/11/19/microsoft-kubecon-2019-announcements/">Empowering cloud-native developers on Kubernetes anywhere</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Hello KubeCon and welcome to San Diego! It&rsquo;s fantastic to have the chance to get some warm California sun, as well as the warmth of the broader Kubernetes community. From the very first community meeting, through the first KubeCon and on to today, it&rsquo;s been truly amazing to watch and help the Kubernetes community grow. As KubeCon arrives, I&rsquo;m excited to note how we are continuing to innovate and empower cloud-native developers on Kubernetes anywhere.</p><p>In the spirit of innovation, I&rsquo;m thrilled to announce our new open source effort to <a href="https://aka.ms/KubeCon/CC4K">enable trusted execution environments for Kubernetes</a>. Trusted execution environments, or &ldquo;enclaves,&rdquo; are hardware-backed secure execution environments that ensure processes and their memory remain secure while they execute. Today, we&rsquo;re enabling trusted computing on Kubernetes anywhere via the Open Enclave SDK.</p><p>We&rsquo;re also releasing a resource plugin that makes Encrypted Page Cache RAM a resource that the Kubernetes scheduler can use to make scheduling decisions. The number of enclaves on a CPU is limited, and this plugin ensures that Pods that need enclaves will be guaranteed to land on a node with an enclave available. This scheduler support is critical to running trusted compute environments in cloud-native applications via Pods.</p><p>Beyond these innovations for secure computing, I&rsquo;m incredibly proud of the work that the Helm community has done to build and <a href="https://cloudblogs.microsoft.com/opensource/2019/11/13/helm-3-available-simpler-more-secure/">release Helm 3.0 last week</a>. The vast majority of workloads deployed to Kubernetes are deployed via Helm, and Helm 3 is the next step in this journey. 
Over the past few years, the Helm team has carefully listened to user feedback about what was working and where changes were needed.</p><p>Of the many fixes and improvements, the most popular is probably the removal of Tiller from the cluster, making charts more Kubernetes-native and more secure by default. Speaking of security, the recent glowing <a href="https://helm.sh/blog/2019-11-04-helm-security-audit-results/">independent security review of the Helm code base</a> shows how dedicated and careful the Helm community has been in building a tool that is not just incredibly useful but also secure. Many congratulations to the Helm community on this important milestone.</p><p>Just like the Helm team, in Azure our open source work begins by listening to our customers, in particular our customers in IoT and telecommunications. This feedback led us to understand how important it was for <a href="https://kubernetes.io/docs/concepts/services-networking/dual-stack/">Kubernetes to support both IPv4 and IPv6 addresses</a> for the same Pods. Major kudos are due to Kal Henidak for his dedicated and tireless work in engineering both the code and design changes necessary to support multiple addresses per Pod. As you might imagine, this change required careful work and coordination across the entire Kubernetes code base and community. Kal&rsquo;s hard work in collaboration with the SIG-Networking community is being recognized with a shared keynote with Tim Hockin. Plan on attending the <a href="https://kccncna19.sched.com/event/UdIA/keynote-the-long-road-to-ipv4ipv6-dual-stack-kubernetes-tim-hockin-principal-software-engineer-google-khaled-kal-henidak-principal-software-engineer-microsoft-azure">keynote to learn more about IPv4 and IPv6 in Kubernetes</a>!</p><p>Finally, by combining both open source community and innovation, we have a remarkable collection of open source projects reaching important milestones at KubeCon. 
The newly announced Buck (Brigade Universal Controller for Kubernetes) project shows how <a href="https://cloudblogs.microsoft.com/opensource/2019/11/15/in-cluster-cnab-management-brigade/">Cloud Native Application Bundles (CNAB) with Brigade</a> radically simplify the development of new operators. The Kubernetes-based Event-driven Autoscaling (KEDA) project has shown incredible community interest; it&rsquo;s a great collaboration between Azure Functions, Red Hat, and others. Here at KubeCon, <a href="https://cloudblogs.microsoft.com/opensource/2019/11/19/keda-1-0-release-kubernetes-based-event-driven-autoscaling">the KEDA community is hitting the 1.0 milestone</a> and is stable and ready for production use. I also want to congratulate the Cloud Events community on their recent 1.0 release, and I&rsquo;m excited that <a href="https://aka.ms/aeg-cloudeventsv1-blog">Azure Event Grid has correspondingly added support for the 1.0 version of Cloud Events</a>. Cloud Events is a CNCF project for an open and portable API for event-driven programming, and it&rsquo;s awesome that it is available in a managed environment in Azure.</p><p>Of course, containers and DevOps are a year-round focus for my teams beyond KubeCon. We&rsquo;ve been busy this fall.</p><p>In the four weeks since we launched the <a href="https://cloudblogs.microsoft.com/opensource/2019/10/16/announcing-dapr-open-source-project-build-microservice-applications/">Distributed Application Runtime (Dapr) project</a>, we have seen strong interest from the community and have been listening to the many stories of how people are using Dapr in their projects, including modernizing Java code, building games, and integrating with IoT solutions. The breadth across different industries is amazing to see. The interest in the <a href="https://github.com/dapr/dapr/">Dapr runtime repo</a> has grown beyond our expectations. It&rsquo;s been awesome to see the community come together and continue the momentum. 
We are excited to announce the release of Dapr v0.2.0, focusing on community-driven components, fixes across the Dapr runtime and CLI, updates to documentation and samples, and the addition of an end-to-end testing framework. You can find out more about the <a href="https://github.com/dapr/dapr/blob/master/docs/release_notes/v0.2.0.md">v0.2.0 release at the Dapr repo</a>.</p><p>Just building distributed systems isn't enough; you need to be able to observe how they run in production, and the CNCF Prometheus project has emerged as a de facto standard for exposing metrics on all sorts of servers. Still, it&rsquo;s often easier to integrate with cloud-based monitoring than to run your own metrics server. To enable this, Azure Monitor for containers <a href="https://azure.microsoft.com/en-au/blog/improving-observability-of-your-kubernetes-deployments-with-azure-monitor-for-containers/">can scrape the metrics exposed from Prometheus endpoints</a> so you can quickly gather failure rates, requests per second, and latency. From Log Analytics, you can easily run a Kusto Query Language (KQL) query and create a custom dashboard in the Azure portal. If you use Grafana for your dashboard requirements, you can visualize the container and Prometheus metrics in a Grafana dashboard. Azure monitoring combines the best of open technology with the reliability of a cloud service.</p><p>In the last few years, KubeCon has grown from a single track to many tracks and thousands of people. For me personally, and for the community in general, it&rsquo;s been an incredible journey. 
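The Prometheus endpoints mentioned above serve metrics in a plain-text exposition format, which is what scrapers like Azure Monitor for containers read. A minimal sketch of emitting a counter in that format (the metric name and labels here are invented for illustration):

```python
def render_prometheus_metrics(name, help_text, samples):
    """Render counter samples in the Prometheus text exposition format."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} counter"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

text = render_prometheus_metrics(
    "http_requests_total",
    "Total HTTP requests served.",
    [({"code": "200"}, 1027), ({"code": "500"}, 3)],
)
print(text)
```

Anything that serves text like this on an HTTP endpoint can be scraped, which is why the format has become such a common denominator.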
I&rsquo;m excited to see everyone in San Diego. Please stop by the Azure booth and say hello!</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/11/19/microsoft-kubecon-2019-announcements/">Empowering cloud-native developers on Kubernetes anywhere</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Helm 3: Simpler and more secure</title>
		<link>https://cloudblogs.microsoft.com/opensource/2019/11/13/helm-3-available-simpler-more-secure/</link>
		
		<dc:creator><![CDATA[Bridget Kromhout]]></dc:creator>
		<pubDate>Wed, 13 Nov 2019 15:40:26 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Helm]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2019/11/13/helm-3-available-simpler-more-secure/</guid>

					<description><![CDATA[<p>Ecosystem complexity increases every time we look around, our dizzying panoply of choices multiplies by the day, and (now, as always) we need a way to find, share, and operate applications reliably, in production, and at scale. What's a busy Kubernetes user to do? Helm is the well-known and much-used package manager for Kubernetes. If<span><a class="read-more" aria-label="Read more about Helm 3: Simpler and more secure" href="https://cloudblogs.microsoft.com/opensource/2019/11/13/helm-3-available-simpler-more-secure/" data-bi-cn="Read more about Helm 3: Simpler and more secure">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/11/13/helm-3-available-simpler-more-secure/">Helm 3: Simpler and more secure</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Ecosystem complexity increases every time we look around, our dizzying panoply of choices multiplies by the day, and (now, as always) we need a way to find, share, and operate applications reliably, in production, and at scale. What's a busy Kubernetes user to do?</p><p>Helm is the well-known and much-used package manager for Kubernetes. If Helm is new to you, you're in luck! Join in using Helm, one of the most popular tools in the cloud native toolbox; it's great for managing the complexity in your applications, sharing configurations, and easily versioning your updates for maintainability.</p><p><a href="https://helm.sh/blog/helm-3-released" target="_blank" rel="noopener noreferrer">Helm 3 (now available!)</a> is an evolutionary improvement, bringing enterprise-grade security and increased usability. Enhancements in Helm's security and stability are in direct response to and in coordination with community needs, as Helm is a widely-used Incubating <a href="https://www.cncf.io/" target="_blank" rel="noopener noreferrer">Cloud Native Computing Foundation</a> (CNCF) project, with an open source community of hundreds of contributors shaping its decisions.</p><p>Being almost as old as Kubernetes itself, Helm originally evolved with some configuration options which were parallel but not entirely congruent with Kubernetes. Rethinking Helm 3 from the ground up let the community bring Helm's permissions model, command-line switches, RBAC, and more in line with current Kubernetes implementations. The Helm team edited the Helm 3 architecture carefully and removed the server-side component known as Tiller, which was obviated by improvements to Kubernetes in the years since Helm 2's design. Helm 3 is simultaneously simpler and better aligned with the modern security, identity, and authorization features of Kubernetes.</p><p>"Sounds exciting!" you may say. "How do I get started?" 
You can use Helm 3 immediately whether or not you've used Helm before. If you have Helm charts you've been using with Helm 2, you will want to read about essential changes, but most charts will work unmodified. A <a href="https://v3.helm.sh/docs/topics/v2_v3_migration/">migration guide</a> and the <a href="https://github.com/helm/helm-2to3">Helm 2to3 plugin</a> will help you make the move. And you may find that while you don't need to rewrite your charts right away, you'll delight in the new library charts, which will help you de-duplicate configs you use across many charts for consistency and security.</p><p>Release information is now stored in the namespace with the release. This enables you to reuse a release name on a per-namespace basis, instead of being limited by all releases needing to share the same Tiller namespace. (One side effect of the removal of Tiller!) And the re-imagined three-way strategic merge patches allow the old and new state on disk to be examined in the context of the live state in the running cluster. This prevents unexpected incidents caused by uncommitted production updates.</p><p>&ldquo;But wait&hellip;&rdquo; comes the worried exclamation. "I don't have time to change anything right now!" Good news: Helm 2 will get bug fixes backported for six months and security patches for a year, so you can update at the pace that fits your organization's needs. And backward compatibility is emphasized: if you want to keep using previous command-line flags, in many cases that option is available. 
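The three-way merge described above compares the old desired state, the new desired state, and what is actually live in the cluster. As a much-simplified sketch of the idea over flat dictionaries (Helm's real strategic merge patch is considerably richer):

```python
def three_way_merge(old, new, live):
    """Toy three-way merge: old desired state, new desired state, live state.

    Keys the chart no longer sets are dropped; values changed out-of-band in
    the live state survive unless the new chart explicitly changes that key.
    """
    result = dict(live)  # start from what is actually running
    for key in set(old) - set(new):
        result.pop(key, None)  # removed from the chart: delete it
    for key, value in new.items():
        if new.get(key) != old.get(key) or key not in live:
            result[key] = value  # chart changed (or re-added) this key
    return result

old = {"replicas": 2, "image": "nginx:1.16"}
new = {"replicas": 3, "image": "nginx:1.16"}
live = {"replicas": 2, "image": "nginx:1.16", "annotation": "added-by-ops"}
print(three_way_merge(old, new, live))
```

Note how the out-of-band `annotation` survives the upgrade while the chart's own `replicas` change wins; a two-way diff of just old and new manifests would have discarded it.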
The <a href="https://helm.sh/docs/faq/">Helm FAQ</a> covers the breaking changes, and the <a href="https://helm.sh/blog/helm-3-released">Helm blog post</a> on this major version release dives into the technical details you'll want to know.</p><p>Of course, any major shift in an open source project comes with changes, and the upstream chart repository of yore will be one of those. We're looking at pushing charts to <a href="https://www.opencontainers.org/">Open Container Initiative</a> (OCI) registries to ameliorate limitations of the Chart Repository API. Experimental features like OCI support are a great place to look if you'd like to get involved with the direction of the project; testing and feedback are valuable contributions.</p><p>Open source means giving back to the community, and the <a href="https://github.com/helm/helm">Helm Go SDK</a> has proven so handy for our needs that we've refactored it to allow for broader use. If you've integrated Helm into other projects, we'd love to hear from you!</p><p>Whether in person at <a href="https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2019/">KubeCon + CloudNativeCon</a> next week or on our <a href="https://github.com/helm/community/blob/master/communication.md#meetings">weekly community calls</a>, <a href="https://github.com/helm/community/blob/master/communication.md#slack">Slack</a>, <a href="https://github.com/helm/community/blob/master/communication.md#meetings">mailing lists</a>, <a href="https://github.com/helm/helm" target="_blank" rel="noopener noreferrer">GitHub</a>, and more, the Helm maintainers are delighted to hear what's working, what needs improvement, and how you're using Helm to make package management in your Kubernetes clusters simpler and more secure.</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/11/13/helm-3-available-simpler-more-secure/">Helm 3: Simpler and more secure</a> appeared first on <a rel="nofollow" 
href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Helm 3: simpler to use, more secure to operate</title>
		<link>https://cloudblogs.microsoft.com/opensource/2019/05/21/helm-3-release/</link>
		
		<dc:creator><![CDATA[Bridget Kromhout]]></dc:creator>
		<pubDate>Tue, 21 May 2019 14:45:53 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Azure Kubernetes Service]]></category>
		<category><![CDATA[Helm]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2019/05/21/helm-3-release/</guid>

					<description><![CDATA[<p>Helm is the best way to find, share, and use software built for Kubernetes, and the eagerly anticipated Helm 3 alpha is now available for testing. Try it out, give feedback, and help the Helm community get it ready for you to depend upon. Why Helm? Many teams already rely on Helm 2 to deploy<span><a class="read-more" aria-label="Read more about Helm 3: simpler to use, more secure to operate" href="https://cloudblogs.microsoft.com/opensource/2019/05/21/helm-3-release/" data-bi-cn="Read more about Helm 3: simpler to use, more secure to operate">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/05/21/helm-3-release/">Helm 3: simpler to use, more secure to operate</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Helm is the best way to find, share, and use software built for Kubernetes, and the eagerly anticipated Helm 3 alpha is now available for testing. Try it out, give feedback, and help the <a href="https://helm.sh/">Helm</a> community get it ready for you to depend upon.</p><h2>Why Helm?</h2><p>Many teams already rely on Helm 2 to deploy and manage their applications on <a href="https://kubernetes.io/">Kubernetes</a>, the open source project that has become the de facto distributed systems kernel. Kubernetes usage is reportedly <a href="https://blog.heptio.com/the-results-are-in-the-state-of-k8s-2018-d25e54819416">above 70% in large organizations as of 2018</a>. At Microsoft, we see customer uptake of the <a href="https://azure.microsoft.com/services/kubernetes-service/">managed Azure Kubernetes Service</a> growing rapidly, and a great deal of our industry is focused on this space.</p><p>Kubernetes orchestrates containers, typically as a collection of services that together enable a microservice application in which various services work together to provide a larger experience. To host these workloads, many different Kubernetes components must be configured. In addition, Kubernetes has no built-in concept of an application as a logical, manageable unit, which makes application operations more difficult unless an organization dedicates staff to focusing on those primitives.</p><p>Typically, we prefer to make it simpler for cluster users to deploy and manage their applications as logical units in a self-service fashion. That's where Helm adds value!</p><h2>Helm is the package manager for Kubernetes applications</h2><p>Last year, the <a href="https://www.cncf.io/blog/2018/08/29/cncf-survey-use-of-cloud-native-technologies-in-production-has-grown-over-200-percent/">CNCF's cloud native survey</a> made it clear that the "preferred method for packaging is Helm (68%) followed by managed Kubernetes offerings (19%)." 
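A chart pairs templated Kubernetes manifests with user-supplied values. As a loose standalone illustration of that idea (Helm actually uses Go templates and a values.yaml file, not Python string substitution; the manifest and value names here are invented):

```python
from string import Template

# A toy "chart": a manifest template with placeholders for values.
manifest_template = Template(
    "apiVersion: apps/v1\n"
    "kind: Deployment\n"
    "metadata:\n"
    "  name: $name\n"
    "spec:\n"
    "  replicas: $replicas\n"
)

# Default values, overridable per install -- the idea behind values.yaml.
defaults = {"name": "web", "replicas": 1}

def render(overrides=None):
    """Merge install-time overrides over chart defaults, then render."""
    values = {**defaults, **(overrides or {})}
    return manifest_template.substitute(values)

print(render({"replicas": 3}))
```

The point of the pattern is that one shared template serves many installs, each differing only in its small dictionary of values.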
Users find that Helm is a great way to:</p><ul><li>Manage complexity: describe complex Kubernetes applications in a "chart."</li><li>Share charts: search for shared charts on <a href="https://hub.helm.sh/">public</a> and private chart repositories.</li><li>Easily update Kubernetes applications: in-place upgrades and rollbacks (which are actually roll-forwards; Helm doesn't include time travel!) to past versions, using the release history of charts.</li></ul><p>Making complex things easier to manage is the hallmark of a good tool, and the strong adoption of Helm shows us that many people are looking for this kind of tooling.</p><h2>Helm 3 is built with production scenarios in mind</h2><p>Adventure. Excitement. An on-call engineer craves none of these things; operational surprises don't help us sleep at night. Predictable and repeatable production-ready software we can operate at scale is delightful, and that's why we create tools like Helm.</p><p>If you're using Helm already, try Helm 3 today to help the community ensure there are no surprises for your use cases. Helm 3 is the result of <em>years</em> of community contributions and conversations that clearly show how organizations are using Helm and how they need it to evolve for their production use cases.</p><p>Even if Helm is new for you, your input is welcome! Take a look at Helm 3 and find out how Helm charts help you bring operational simplicity and enterprise-ready stability to your Kubernetes environments.</p><h2>Simpler to use, more secure to operate</h2><p>Wasn't Helm 2 already simple? If we install the same Helm chart with Helm 2 and Helm 3, the application installed will be precisely the same, just as we would expect! So, what's the motivation behind Helm 3?</p><p>Tiller, the server-side component of Helm 2, requires additional security steps; Helm 2 was created for developers to install applications when Kubernetes did not yet have role-based access control (RBAC). 
This complexity isn't needed in recent releases of Kubernetes, so for Helm 3 we were able to remove Tiller entirely.</p><p>As a result, Helm 3 is significantly simpler, while also supporting all the modern security, identity, and authorization features of today's Kubernetes. The growing maturity of Kubernetes identity and security features, like role-based access control (RBAC), and advanced features, such as custom resource definitions (CRDs), allowed us to revisit and simplify Helm's architecture.</p><h2>Join the cloud-native fun!</h2><p>We're so excited for what Helm 3 will allow you to do! You can read about all the new features, such as chart reuse in <em>chart libraries</em>, in the blog series <a href="https://helm.sh/blog/helm-3-preview-pt1/">Helm 3: Charting Our Future</a> and join the discussion <a href="https://github.com/helm/helm">on GitHub</a> to make this major release the best Helm for you. We're looking forward to connecting during <a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/schedule/">sessions and deep-dives at KubeCon EU</a> this week and can't wait to hear your feedback on the alpha version.</p><p>Your feedback is invaluable, as the Helm community intends to make Helm 3 generally available (GA) at <a href="https://helm.sh/blog/helm-summit-eu-2019/">Helm Summit EU</a> in September. Come share your stories and continue the conversation!</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/05/21/helm-3-release/">Helm 3: simpler to use, more secure to operate</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Extending Kubernetes in the open</title>
		<link>https://cloudblogs.microsoft.com/opensource/2019/05/21/kubecon-microsoft-updates-helm-3-virtual-kubelet-1-visual-studio-code-service-mesh-interface/</link>
		
		<dc:creator><![CDATA[Brendan Burns]]></dc:creator>
		<pubDate>Tue, 21 May 2019 14:45:42 +0000</pubDate>
				<category><![CDATA[English]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Azure Kubernetes Service]]></category>
		<category><![CDATA[Helm]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[Visual Studio Code]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2019/05/21/kubecon-microsoft-updates-helm-3-virtual-kubelet-1-visual-studio-code-service-mesh-interface/</guid>

					<description><![CDATA[<p>Greetings and welcome to KubeCon EU in Barcelona! As always, it is wonderful to see the community come together to celebrate how Kubernetes has made cloud-native ubiquitous and changed the way that we build and manage our software. These conferences are fantastic because they represent an opportunity to meet with our users and learn about<span><a class="read-more" aria-label="Read more about Extending Kubernetes in the open" href="https://cloudblogs.microsoft.com/opensource/2019/05/21/kubecon-microsoft-updates-helm-3-virtual-kubelet-1-visual-studio-code-service-mesh-interface/" data-bi-cn="Read more about Extending Kubernetes in the open">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/05/21/kubecon-microsoft-updates-helm-3-virtual-kubelet-1-visual-studio-code-service-mesh-interface/">Extending Kubernetes in the open</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Greetings and welcome to <a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/">KubeCon EU</a> in Barcelona! As always, it is wonderful to see the community come together to celebrate how <a href="https://aka.ms/k8slearning">Kubernetes</a> has made cloud-native ubiquitous and changed the way that we build and manage our software.</p><p>These conferences are fantastic because they represent an opportunity to meet with our users and learn about how our software is both making them more successful and also hindering their success. While learning about how we empower developers is awesome, it's even more important (and significantly more humbling) to learn about the areas where they have unmet needs, have hit bugs, or are otherwise blocked by choices we've made or bugs that we've written.</p><p>Listening to our users and customers and focusing our energy on solving their real-world problems is the #1 (and #2 and #3) focus for my teams. In that spirit, I'm excited to announce: the Helm 3 release, Kubernetes integration with Visual Studio Code (VS Code), the Virtual Kubelet project 1.0 release, and the Service Mesh Interface (SMI), which is our new community project for collaboration around Service Mesh infrastructure.</p><h2>Helm 3</h2><p>The first, and arguably most significant for the Kubernetes community, is the first Alpha release of <a href="https://aka.ms/k8s/helm3">Helm 3</a>. Helm 3 represents a nearly complete re-factoring of the Helm package manager to evolve it from its origins to a modern application package manager. The Helm project is nearly as old as Kubernetes itself. 
As a result, its original design pre-dated many advancements in Kubernetes like CustomResourceDefinitions and even Kubernetes RBAC.</p><p>Because of this, the Helm 2 architecture was forced to implement a number of features itself, which made it less tightly integrated with Kubernetes, and meant that managing things like RBAC of Charts and Resources was complicated and disconnected from Kubernetes itself. Helm 3 eliminates this impedance mismatch.</p><p>By replacing custom APIs for charts and deployments with CustomResourceDefinitions, things like Kubernetes RBAC directly apply to Helm and the whole system feels significantly more tightly-integrated and Kubernetes-native. You can now use the 'kubectl' command line to interact with your Helm charts and Kubernetes native RBAC to limit access and resources that users can create.</p><p>Helm has become the de facto standard for packaging and deploying Kubernetes applications by focusing on the end-user and enabling those users to be successful. The advances and improvements in Helm 3 continue this trend and make it even more useful to both existing users and others who may have previously tried other solutions. Many congratulations to the Helm project and community on reaching this milestone!</p><h2>Kubernetes extension for Visual Studio Code</h2><p>Making Kubernetes accessible to everyone is a challenge that I think about every day. It's great that we've built a system that makes it easier to deploy distributed systems, but if we haven't made it possible for everyone to use Kubernetes, then we've actually failed. In that vein, I have spent a bunch of time recently working with the team that builds the open source <a href="https://github.com/Azure/vscode-kubernetes-tools">Kubernetes extension for Visual Studio Code</a>.</p><p>This extension brings native Kubernetes integration to VS Code. 
You can easily view the contents of your cluster, see the state of pods at a glance, right-click to get a terminal in a Pod or port-forward network traffic, and easily filter your way through logs to identify problems, all within the same environment where your code lives.</p><p>Furthermore, in keeping with our open principles, the extension is open source on GitHub and works with Kubernetes anywhere. No matter where you are running Kubernetes, the VS Code integration makes it easier to work with your applications and clusters with fewer windows and context switches.</p><p>Today at KubeCon EU in Barcelona, I'm excited that our VS Code integration has reached the 1.0 milestone and is fully supported for production management of your Kubernetes clusters. Even more importantly, we've also added an extensibility API that makes it possible for others, like Red Hat OpenShift, to build their own integration experiences on top of our baseline Kubernetes integration. ('yo dawg, I heard you like extensions, so I added extensibility to your extension.') Much like Custom Resources, adding extensibility enables collaboration at the core while enabling others to build rich experiences targeted to specific environments. It's a testament to the value of an open and extensible approach to tooling.</p><h2>Virtual Kubelet 1.0</h2><p>Speaking of community, I'm also incredibly excited to see the Virtual Kubelet hit the 1.0 milestone. The Virtual Kubelet represents a unique integration of Kubernetes and serverless container technologies, like Azure Container Instances. Enabling people to free themselves from the toil of managing an operating system, while still using Kubernetes for orchestration, is a powerful value proposition to startups and enterprises alike.</p><p>I'm really excited that this community is healthy and reaching this important milestone, and that Azure was able to play a role. 
Earlier this month at //build we also announced the general availability of <a href="https://docs.microsoft.com/en-us/azure/aks/virtual-nodes-cli">AKS virtual nodes</a> powered by the open source community code in the Virtual Kubelet project. Again, showing the value of open source even when delivering tightly integrated features for Azure.</p><h2>Service Mesh Interface (SMI)</h2><p>Finally, I'm thrilled to see the community that is developing around the <a href="https://aka.ms/k8s/smi">Service Mesh Interface</a> specification. It has been clear to us for a while that users and customers are excited by the promise of Service Mesh to provide the next advances in cloud-native application development. However, it is also clear that the monolithic integration of interface and implementation that previously existed for Service Mesh has limited its adoption.</p><p>The world of Service Mesh is fast-evolving, and users are concerned that if they are locked to any particular implementation, they may be locked into a complex surface area. By providing a generic API interface that is then implemented by various service mesh providers, like Istio, Linkerd, and Consul Connect, the Service Mesh Interface frees users to use service mesh capabilities without being bound to any particular implementation.</p><p>This means that they are free to experiment, and even change implementations, without having to change their applications. This brings Service Mesh to parity with other parts of Kubernetes, like Ingress, Container Runtime (CRI), and Networking (CNI), which all have generic interfaces with pluggable implementations. We've really enjoyed working with our great partners on the SMI specification and we look forward to building an even larger community as we move forward and iterate on the specification.</p><p>Enjoy KubeCon and Barcelona (and make sure to check out the Sagrada Familia). 
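The pattern SMI follows, a generic interface with swappable implementations, can be sketched in a few lines (the mesh names are real projects, but the class shapes below are invented purely for illustration):

```python
from abc import ABC, abstractmethod

class TrafficSplitter(ABC):
    """A generic interface; each mesh supplies its own implementation."""

    @abstractmethod
    def split(self, service: str, weights: dict) -> str: ...

class IstioSplitter(TrafficSplitter):
    def split(self, service, weights):
        return f"istio: routing {service} with weights {weights}"

class LinkerdSplitter(TrafficSplitter):
    def split(self, service, weights):
        return f"linkerd: routing {service} with weights {weights}"

def canary_rollout(mesh: TrafficSplitter):
    # Application code depends only on the interface, never on the mesh.
    return mesh.split("checkout", {"v1": 90, "v2": 10})

print(canary_rollout(IstioSplitter()))
print(canary_rollout(LinkerdSplitter()))
```

Swapping `IstioSplitter` for `LinkerdSplitter` changes nothing in `canary_rollout`, which is exactly the decoupling a generic interface like SMI (or CRI and CNI) buys you.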
The Azure team will be busy learning from you, our community, so we can build the next set of great products and features that make you even more productive and happy!</p><p>Questions or feedback? Let us know in the comments below.</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/05/21/kubecon-microsoft-updates-helm-3-virtual-kubelet-1-visual-studio-code-service-mesh-interface/">Extending Kubernetes in the open</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Open Source Weekly: VSCode Python updates, Helm Summit learnings, and more</title>
		<link>https://cloudblogs.microsoft.com/opensource/2018/03/09/open-source-weekly-vscode-python-updates/</link>
		
		<dc:creator><![CDATA[Microsoft + Open Source]]></dc:creator>
		<pubDate>Fri, 09 Mar 2018 19:00:06 +0000</pubDate>
				<category><![CDATA[Application Development]]></category>
		<category><![CDATA[Argentina]]></category>
		<category><![CDATA[Australia]]></category>
		<category><![CDATA[Bahrain]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Brazil]]></category>
		<category><![CDATA[Canada]]></category>
		<category><![CDATA[Chile]]></category>
		<category><![CDATA[China]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Colombia]]></category>
		<category><![CDATA[Costa Rica]]></category>
		<category><![CDATA[Dominican Republic]]></category>
		<category><![CDATA[English]]></category>
		<category><![CDATA[France]]></category>
		<category><![CDATA[Germany]]></category>
		<category><![CDATA[India]]></category>
		<category><![CDATA[Indonesia]]></category>
		<category><![CDATA[Israel]]></category>
		<category><![CDATA[IT Trends]]></category>
		<category><![CDATA[Italy]]></category>
		<category><![CDATA[Japan]]></category>
		<category><![CDATA[Jordan]]></category>
		<category><![CDATA[Korea]]></category>
		<category><![CDATA[Lithuania]]></category>
		<category><![CDATA[Malaysia]]></category>
		<category><![CDATA[Mexico]]></category>
		<category><![CDATA[Netherlands]]></category>
		<category><![CDATA[New Zealand]]></category>
		<category><![CDATA[Nicaragua]]></category>
		<category><![CDATA[Norway]]></category>
		<category><![CDATA[Pakistan]]></category>
		<category><![CDATA[Peru]]></category>
		<category><![CDATA[Poland]]></category>
		<category><![CDATA[Russia]]></category>
		<category><![CDATA[Serbia]]></category>
		<category><![CDATA[Singapore]]></category>
		<category><![CDATA[Switzerland]]></category>
		<category><![CDATA[UAE]]></category>
		<category><![CDATA[United Kingdom]]></category>
		<category><![CDATA[United States]]></category>
		<category><![CDATA[Django]]></category>
		<category><![CDATA[Helm]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[Open Source Weekly]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[Visual Studio Code]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2018/03/09/open-source-weekly-vscode-python-updates/</guid>

					<description><![CDATA[<p>4-minute read + demos The February release of Visual Studio Code, our free and open source code editor, includes significant updates for Python devs, including added support for pipenv/pyenv and a preview of the next version of our debugger that offers faster Django debugging for Python. Learn more about what's new in this release, plus<span><a class="read-more" aria-label="Read more about Open Source Weekly: VSCode Python updates, Helm Summit learnings, and more" href="https://cloudblogs.microsoft.com/opensource/2018/03/09/open-source-weekly-vscode-python-updates/" data-bi-cn="Read more about Open Source Weekly: VSCode Python updates, Helm Summit learnings, and more">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2018/03/09/open-source-weekly-vscode-python-updates/">Open Source Weekly: VSCode Python updates, Helm Summit learnings, and more</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><strong><em>4-minute read + demos</em></strong><br>The February release of Visual Studio Code, our free and open source code editor, includes significant updates for Python devs, including added support for pipenv/pyenv and a preview of the next version of our debugger that offers faster Django debugging for Python. Learn more about what's new in this release, plus new docs, demos, and more, in this edition of the Open Source Weekly.<br></p><a href="https://marketplace.visualstudio.com/items?itemName=ms-python.python"><img loading="lazy" width="960" height="150" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/10/Microsoft-Open-Source_Blog-Section-Headers_FY18__Roadmap.png"></a><p><strong>Build Spring Boot 2.0 apps with Azure Starters and new VSCode extensions</strong>: Spring is an open source application framework developed by Pivotal that provides a simplified, modular approach for creating Java applications. This week, the Azure team rolled out additional support for Spring, including <a href="https://docs.microsoft.com/en-us/java/azure/spring-framework/spring-boot-starters-for-azure">Spring Boot Starters for Azure</a> and new Java and Spring extensions for Visual Studio Code, where you can build production-ready apps and easily deploy them to the cloud. See the complete announcement on the <a href="https://azure.microsoft.com/en-us/blog/build-spring-boot-2-0-apps-with-azure-starters-and-new-vscode-extensions/">Azure blog</a> or check out the <a href="https://github.com/Microsoft/azure-spring-boot">Azure Spring Boot Starters</a> page on GitHub to get started.<br><strong>Public preview of Java on App Service, built-in support for Tomcat and OpenJDK</strong>: This week, the Azure team <a href="https://azure.microsoft.com/en-us/blog/general-availability-of-app-service-on-linux-and-web-app-for-containers/">announced the public preview of Java apps on App Service</a>. 
This release includes <a href="https://aka.ms/Glbs6q">built-in support for Apache Tomcat</a> 8.5/9.0 and OpenJDK 8, making it easy for Java developers to deploy web or API apps to Azure. Bring your .jar or .war file to <a href="https://azure.microsoft.com/en-us/services/app-service/">Azure App Service</a> and Azure will take care of the capacity provisioning, server maintenance, and load balancing.<br></p><a href="https://open.microsoft.com/2018/03/06/first-helm-summit-notes-recordings/"><img loading="lazy" width="960" height="150" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/10/Microsoft-Open-Source_Blog-Section-Headers_FY18_Community.png"></a><p><strong>OSI's new ClearlyDefined project</strong>: <a href="https://clearlydefined.io/">ClearlyDefined</a> is a brand new incubator project from the <a href="https://opensource.org/">Open Source Initiative (OSI)</a> that focuses on crowd-sourcing critical licensing and security data for open source projects. The project aims to increase clarity around open source projects, making it easier to build a community and gain contributors. Learn about how to get involved <a href="https://open.microsoft.com/2018/03/06/clearlydefined-osi/">here</a>.<br><strong>SCaLE 16x: </strong>Southern California Linux Expo (SCaLE), the largest community-run open-source and free software conference in North America, expects to host 150 exhibitors this year, along with nearly 130 sessions, tutorials, and special events. The team is proud to support the sixteenth annual event, SCaLE 16x. If you're attending the show, swing by our booth for a pro photo by @JulianCashPhoto and connect with experts in open source software with cloud app development, DevOps, serverless, containers, machine learning, AI, and more. 
Learn more about our talks <a href="https://open.microsoft.com/2018/03/09/microsoft-scale-16x/">here</a>.<br><img loading="lazy" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/10/Microsoft-Open-Source_Blog-Section-Headers_FY18_Demos.png" alt="" width="960" height="150" srcset="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/10/Microsoft-Open-Source_Blog-Section-Headers_FY18_Demos-300x47.png 300w, https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/10/Microsoft-Open-Source_Blog-Section-Headers_FY18_Demos-768x120.png 768w, https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/10/Microsoft-Open-Source_Blog-Section-Headers_FY18_Demos-330x52.png 330w, https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/10/Microsoft-Open-Source_Blog-Section-Headers_FY18_Demos-800x125.png 800w, https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/10/Microsoft-Open-Source_Blog-Section-Headers_FY18_Demos-400x63.png 400w, https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/10/Microsoft-Open-Source_Blog-Section-Headers_FY18_Demos.png 960w"><strong>New Open Source Show series</strong>: The new <a href="https://channel9.msdn.com/Shows/The-Open-Source-Show">Open Source Show</a> debuted this week, featuring Microsoft Cloud Developer Advocates Lena Hall and Suz Hinton as they discussed a data ingestion, stream processing, and sentiment analysis pipeline. In their demos, you'll learn how to set up a data ingestion and processing system, using a Twitter client, Event Hubs, and Spark on Azure Databricks as an example. Lena and Suz also discuss alternative options for stream processing, how it can be used in various scenarios such as IoT, and how to apply machine learning to streaming data, with an example of sentiment analysis on tweets arriving in real time. 
You'll start seeing code around minute 2:40.</p><p><strong>CosmosDB and .NET Core</strong>: Jeremy Likness discusses CosmosDB and what makes it a unique and flexible NoSQL cloud offering. Learn about the CosmosDB support for multiple APIs, including table storage and MongoDB, all accessible from .NET and .NET Core applications.</p><img loading="lazy" width="960" height="150" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/10/Microsoft-Open-Source_Blog-Section-Headers_FY18_Docs-1.png"><p>Here is some of the documentation supporting the VS Code and Spring updates this week.<br><strong>Python support in Visual Studio Code</strong>: The Python extension for VSCode works with different Python interpreters as well as Anaconda. It leverages all of VS Code&rsquo;s power to provide auto-complete and IntelliSense, linting, debugging, and unit testing, along with the ability to easily switch between Python environments, including virtual and conda environments. Lots of documentation <a href="https://code.visualstudio.com/docs/python/python-tutorial">here</a>.<br><strong>Spring on Azure</strong>: These <a href="https://docs.microsoft.com/en-us/java/azure/spring-framework/">quickstarts and tutorials</a> will show you how to develop and deploy Spring apps to the cloud. 
Documentation includes <a href="https://docs.microsoft.com/en-us/java/azure/spring-framework/spring-boot-starters-for-azure">Spring Boot Starters</a> announced earlier this week.<br><em>If you like these weekly updates, follow us </em><a href="https://twitter.com/OpenAtMicrosoft">@OpenAtMicrosoft</a><em> to stay in touch daily. </em></p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2018/03/09/open-source-weekly-vscode-python-updates/">Open Source Weekly: VSCode Python updates, Helm Summit learnings, and more</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>What we learned at the first Helm Summit</title>
		<link>https://cloudblogs.microsoft.com/opensource/2018/03/06/first-helm-summit-notes-recordings/</link>
		
		<dc:creator><![CDATA[Taylor Thomas]]></dc:creator>
		<pubDate>Tue, 06 Mar 2018 15:24:12 +0000</pubDate>
				<category><![CDATA[Argentina]]></category>
		<category><![CDATA[Australia]]></category>
		<category><![CDATA[Bahrain]]></category>
		<category><![CDATA[Brazil]]></category>
		<category><![CDATA[Canada]]></category>
		<category><![CDATA[Chile]]></category>
		<category><![CDATA[China]]></category>
		<category><![CDATA[Colombia]]></category>
		<category><![CDATA[Containers]]></category>
		<category><![CDATA[Costa Rica]]></category>
		<category><![CDATA[Dominican Republic]]></category>
		<category><![CDATA[English]]></category>
		<category><![CDATA[France]]></category>
		<category><![CDATA[Germany]]></category>
		<category><![CDATA[India]]></category>
		<category><![CDATA[Indonesia]]></category>
		<category><![CDATA[Israel]]></category>
		<category><![CDATA[Italy]]></category>
		<category><![CDATA[Japan]]></category>
		<category><![CDATA[Jordan]]></category>
		<category><![CDATA[Korea]]></category>
		<category><![CDATA[Lithuania]]></category>
		<category><![CDATA[Malaysia]]></category>
		<category><![CDATA[Mexico]]></category>
		<category><![CDATA[Netherlands]]></category>
		<category><![CDATA[New Zealand]]></category>
		<category><![CDATA[Nicaragua]]></category>
		<category><![CDATA[Norway]]></category>
		<category><![CDATA[Pakistan]]></category>
		<category><![CDATA[Peru]]></category>
		<category><![CDATA[Poland]]></category>
		<category><![CDATA[Russia]]></category>
		<category><![CDATA[Serbia]]></category>
		<category><![CDATA[Singapore]]></category>
		<category><![CDATA[Switzerland]]></category>
		<category><![CDATA[UAE]]></category>
		<category><![CDATA[United Kingdom]]></category>
		<category><![CDATA[United States]]></category>
		<category><![CDATA[Helm]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2018/03/06/first-helm-summit-notes-recordings/</guid>

					<description><![CDATA[<p>What a great inaugural Helm Summit! This was a momentous occasion for the community. What started as a hackathon project just under three years ago now has its own community-driven summit. We had close to 200 people gather in an uncharacteristically snowy and cold Portland, Oregon, to talk about all things Helm. Over the 2<span><a class="read-more" aria-label="Read more about What we learned at the first Helm Summit" href="https://cloudblogs.microsoft.com/opensource/2018/03/06/first-helm-summit-notes-recordings/" data-bi-cn="Read more about What we learned at the first Helm Summit">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2018/03/06/first-helm-summit-notes-recordings/">What we learned at the first Helm Summit</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>What a great inaugural Helm Summit! This was a momentous occasion for the community. What started as a hackathon project just under three years ago now has its own community-driven summit.<br>We had close to 200 people gather in an uncharacteristically snowy and cold Portland, Oregon, to talk about all things Helm. Over the 2 days we were together, we learned about how members of the community are using Helm in a CI/CD pipeline and using it to manage their environments. We heard some valuable lessons learned, pain points, tips and tricks, and solutions, and had successful discussions about the future of Helm.<br>Let's review what we learned and the next steps for Helm 3.0. If you are looking for the recordings of the talks, you can find them <a href="https://www.youtube.com/playlist?list=PLVt9l4b66d5EjjJ_VBe_5tEiJrAGLsDb-">here</a>.</p><h2>Where we came from and the current focus of the Helm community</h2><p>We kicked off Helm Summit with a history of the Helm project and how it got to where it is today. The rest of the morning was spent seeing how community members push Helm to its limits with CI/CD using Helm and managing thousands of releases with an operator like <a href="https://github.com/wpengine/lostromos">Lostromos</a>. Several speakers also showed their work with the popular new tool <a href="https://github.com/kubernetes-helm/chartmuseum">ChartMuseum</a> and how they used it in testing and deployment.<br>The second half of the day contained some deep dives on deploying multiple environments with Helm and how to test large chart repositories. Bitnami and Microsoft detailed the latest iteration of KubeApps with its support for Open Service Brokers and one-click installs of applications. 
Finally, we heard about the hot topic of how to secure Tiller, what parts of Kubernetes Helm interacts with, and how to secure those endpoints.<br>We closed out the day with lightning talks from across the entire community spectrum. <a href="https://twitter.com/rajashree_28">Rajashree Mandaogane</a> showed us a handy trick for rolling deployments when a ConfigMap changes, while <a href="https://twitter.com/jimmyzelinskie">Jimmy Zelinskie</a> and <a href="https://twitter.com/slicknik">Nikhil Manchanda</a> both proposed ideas for a public App Registry or Chart Repository for Helm users, much like DockerHub and Quay are for users of Docker.<br>What stood out most to us was the diverse set of companies and use cases presented. All of these talks were enlightening for the community as we learned about the different ways in which people used Helm. These were a perfect way to set the stage for the needs of the community as we went into Day 2 talking about the future of Helm.</p><h2>The Future of Helm</h2><p>The second day was all about the future of Helm, and it was exciting to see so many people invested and looking forward to participating in the discussions. Matt Butcher and Matt Fisher (2 of the 3 core maintainer Matts) set the guidelines for the day on how to approach planning Helm 3 as a community.<br>Throughout the morning, several people from the community presented well-thought-out proposals for Helm 3. Others gave 1-2 pitches for topics they wanted to discuss about the future of Helm. 
These included:</p><ul><li><a href="https://youtu.be/VQYH3fjrJQc">Unified aliases, dependency, and value mapping functionality</a></li><li><a href="https://youtu.be/EH3WAOYN_-0">Tiller implementation using CRDs and a custom controller</a></li><li><a href="https://youtu.be/stzGrf5dBDo">Better chart searching</a></li><li><a href="https://youtu.be/LMi3GNJCF90">Measuring Helm usage</a></li><li><a href="https://youtu.be/7NDU7KkWolE">Improvements for managing charts</a></li></ul><p>Brian Grant presented an <a href="https://youtu.be/F-TlC8nIz8s">interesting idea</a> to separate Helm and Tiller into composable tools for the Kubernetes ecosystem. This is an idea that many people expressed interest in implementing.<br>We finished off the morning with a Q&amp;A panel with the Helm and Charts core maintainers. What struck us as most interesting was the number of questions related to the management of charts and how to handle the growing number of them in the repository.<br>The rest of the conference was spent in unconference-style working groups, where we discussed the various proposals and ideas pitched by others in attendance. We used this time to gather requirements and debate different ideas and needs.</p><h3>Requirements</h3><p>So then, what are the requirements we found? The full notes from the working groups can be found on <a href="https://github.com/kubernetes-helm/helm-summit-notes">GitHub</a>, but we have broken up these requirements into 3 categories below for Helm, Tiller, and Charts. These are what we have identified as suggested requirements from the Helm community.</p><h4><b>Helm</b></h4><ul><li>Users should be able to push/upload a chart to a chart repository using the Helm CLI</li><li>There should be a way to specify release dependencies (e.g. 
a release named "my-release" must exist in order to deploy the release they are installing)</li><li>Users should be able to dynamically override non-templated values in an object</li><li>Plugging in alternative template engines should be easier</li><li>Templating should use an overlay to add on additional data</li><li>Helm linter should allow for flexibility and customization</li><li>Helm should have full YAML support (for things like anchors)</li><li>There should be a curated list of plugins</li><li>Plugins should have a starter or library to use so people don't have to write it all from scratch</li><li>Re-work the existing `helm test` framework so releases can be more thoroughly tested (this is a very large requirement, see the <a href="https://github.com/kubernetes-helm/helm-summit-notes/blob/master/helmtest.md">notes</a> for more details)</li></ul><h4><b>Tiller</b></h4><ul><li>Tiller will become a custom controller that leverages some sort of CRD</li><li>Remove Tiller entirely in favour of a pure client-side implementation</li><li>Tiller should have a way to interface with application lifecycle management (the current idea being to have a way for Operators to report back to Tiller on the status of the application lifecycle)</li><li>Tiller must be able to enforce RBAC rules for the user making the request</li><li>Support external storage engines for Tiller</li></ul><h4><b>Charts</b></h4><ul><li>Backwards compatibility must be maintained with Charts (all changes will need to be a new chart API version)</li><li>Better measurement of chart usage metrics</li><li>More performant chart index (whether through a new v2 spec or a distributed index)</li><li>Create a chart adoption process for bringing charts into the main charts repository</li></ul><p>We will be discussing these requirements in <a href="https://github.com/kubernetes-helm/community/blob/master/communication.md#weekly-meeting">upcoming dev calls</a> to solidify the actual requirements, and then we will move them to the Helm 
repository.</p><h3>Open Questions</h3><p>Based on the discussion, there are several open discussion points that need to be resolved.</p><ul><li>How much of Brian Grant's proposal to separate the parts of Tiller should be implemented? There is a good case from the Helm developer side that this would make the project more modular. However, from the user perspective, this may not be desirable, as it could make things harder to use (more assembly required)</li><li>Should template rendering be moved client-side again?</li></ul><h2>Thanks</h2><p>From all of the Charts and Core maintainers, thank you so much for all of the support to hel(m) make this summit a success. The talks and energy at the conference were nothing short of spectacular, and we are truly proud to be a part of this community. We are looking forward to scheduling other Helm Summits in the future in Europe and in North America again so we can reach as much of the community as possible. The support from the Helm community is what made this event great and we look forward to continuing to work with all of you.<br>Any questions or feedback? Please let us know in the comments.<br></p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2018/03/06/first-helm-summit-notes-recordings/">What we learned at the first Helm Summit</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
