<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Azure Kubernetes Service &#8211; Microsoft Open Source Blog</title>
	<atom:link href="https://cloudblogs.microsoft.com/opensource/tag/azure-kubernetes-service/feed/" rel="self" type="application/rss+xml" />
	<link>https://cloudblogs.microsoft.com/opensource</link>
	<description>Open dialogue about openness at Microsoft – open source, standards, interoperability</description>
	<lastBuildDate>Tue, 15 Dec 2020 15:11:35 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
	<item>
		<title>Introducing the Cluster API Provider for Azure (CAPZ) for Kubernetes cluster management</title>
		<link>https://cloudblogs.microsoft.com/opensource/2020/12/15/introducing-cluster-api-provider-azure-capz-kubernetes-cluster-management/</link>
		
		<dc:creator><![CDATA[Craig Peters and Jack Francis]]></dc:creator>
		<pubDate>Tue, 15 Dec 2020 17:00:23 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Azure Kubernetes Service]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2020/12/15/introducing-cluster-api-provider-azure-capz-kubernetes-cluster-management/</guid>

					<description><![CDATA[<p>Managing Kubernetes clusters is hard. Managing Kubernetes clusters at scale across a variety of infrastructures is, well, even harder. The Kubernetes community project Cluster API (CAPI) enables users to manage fleets of clusters across multiple infrastructure providers. The Cluster API Provider for Azure (CAPZ) is the solution for users who need to manage Kubernetes clusters on Azure<span><a class="read-more" aria-label="Read more about Introducing the Cluster API Provider for Azure (CAPZ) for Kubernetes cluster management" href="https://cloudblogs.microsoft.com/opensource/2020/12/15/introducing-cluster-api-provider-azure-capz-kubernetes-cluster-management/">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2020/12/15/introducing-cluster-api-provider-azure-capz-kubernetes-cluster-management/">Introducing the Cluster API Provider for Azure (CAPZ) for Kubernetes cluster management</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Managing Kubernetes clusters is hard.</p><p>Managing Kubernetes clusters at scale across a variety of infrastructures is, well, even harder.</p><p>The Kubernetes community project <a href="https://cluster-api.sigs.k8s.io/" target="_blank" rel="noopener noreferrer">Cluster API</a> (CAPI) enables users to manage fleets of clusters across multiple infrastructure providers. The <a href="https://capz.sigs.k8s.io/" target="_blank" rel="noopener noreferrer">Cluster API Provider for Azure</a> (CAPZ) is the solution for users who need to manage Kubernetes clusters on Azure IaaS. In the past, we have recommended AKS Engine for this common scenario. While we will continue to provide regular, stable releases for AKS Engine, the Azure team is excited to share that CAPZ is now ready for users and will be our primary tool for enabling customers to operate self-managed Kubernetes clusters on Azure IaaS.</p><h2>Do you manage your own Kubernetes clusters?</h2><p>Kubernetes is the dominant cross-platform tool for managing containerized applications. <a href="https://azure.microsoft.com/en-us/services/kubernetes-service/" target="_blank" rel="noopener noreferrer">Azure Kubernetes Service</a> (AKS) is the managed service that makes it easy for users to run Kubernetes on Azure. AKS is mature, scalable, secure, and backed by Azure&rsquo;s excellent support. But some users need to run clusters themselves and can't take advantage of AKS. Some need functionality that is not yet available in AKS, or that may never be, because it requires user access to the control plane.</p><p>Some are running a service themselves on Azure that leverages Kubernetes and need complete control, and others might need to run their own clusters for compliance or regulatory reasons (for example, financial services companies who can&rsquo;t delegate management to another organization). 
Still other users are developing new integrations with Kubernetes or Kubernetes features themselves, and need to be able to tweak, control, and test anything and everything. We call these clusters that users run themselves "self-managed" clusters.</p><p>If you need to run self-managed clusters on Azure, whatever your reason, you've come to the right place.</p><h2>Cluster API powers self-managed clusters on Azure</h2><p>The Kubernetes community has long recognized the need for tooling to provide standardized lifecycle management of clusters independent of the infrastructure on which they run. In response, SIG Cluster Lifecycle created the Cluster API sub-project:</p><blockquote><p>Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters. &ndash; <a href="https://cluster-api.sigs.k8s.io/" target="_blank" rel="noopener noreferrer">The Cluster API Book</a></p></blockquote><p>Cluster API provides our team with a natural place to innovate in open source for users and expand community participation in solving Azure user problems at the same time. Thus, it made sense for us to spend the past 18 months investing in the <a href="https://github.com/kubernetes-sigs/cluster-api-provider-azure" target="_blank" rel="noopener noreferrer">Azure Provider for Cluster API</a> (CAPZ) to make it a fully functional project ready to realize the vision of Cluster API for every user.</p><img loading="lazy" alt="Cluster API Diagram" width="1024" height="781" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2020/12/Cluster-API-Diagram-1024x781.webp"><p>The most recent CAPZ release, <a href="https://github.com/kubernetes-sigs/cluster-api-provider-azure/releases/tag/v0.4.10" target="_blank" rel="noopener noreferrer">v0.4.10</a>, includes new capabilities such as GPU support, private clusters, and Azure API call tracing. 
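To make the declarative model in the quote above concrete, here is a minimal, purely illustrative sketch of the reconcile pattern such controllers use: compare a desired spec against observed state and converge toward it. This is not Cluster API code; every name below is invented, and the create/delete callables stand in for real infrastructure calls.

```python
# Toy reconcile loop in the style of declarative cluster controllers.
# Real CAPI controllers watch CRDs and call cloud provider APIs; here
# "create"/"delete" are stand-ins for those infrastructure operations.
def reconcile(desired_count, nodes, create, delete):
    """One reconciliation pass: converge the node list toward desired_count."""
    nodes = list(nodes)
    while len(nodes) < desired_count:
        nodes.append(create(len(nodes)))   # e.g. provision a VM
    while len(nodes) > desired_count:
        delete(nodes.pop())                # e.g. deallocate a VM
    return nodes

actual = reconcile(3, ["node-0"],
                   create=lambda i: f"node-{i}",
                   delete=lambda n: None)
print(actual)  # ['node-0', 'node-1', 'node-2']
```

The point of the pattern is that the caller states only the desired end state; repeated passes are idempotent, which is what makes upgrades and scaling operations safe to retry.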
Some of you may be reluctant to adopt a tool whose API is labeled alpha (v1alpha3, to be exact). You should take comfort in the knowledge that CAPI enables forward and backward compatibility of API versions, so that when the project moves to v1alpha4, and then v1beta1, you'll be able to upgrade and then use the API to output your objects with the new API version.</p><p>Our team is thrilled with the CAPZ work because more of you will be able to effectively manage your cluster's entire lifecycle on Azure. It has also been fulfilling to drive innovations in the Cluster API community, like <a href="https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/proposals/20190919-machinepool-api.md" target="_blank" rel="noopener noreferrer">CAPI MachinePool</a>, which enables users to take advantage of each infrastructure provider's native VM scaling group capability. CAPI brings Kubernetes-native cluster management, and CAPZ enables it naturally on Azure infrastructure. Together in the community, we can deliver better capabilities for users more quickly.</p><p>Users are already taking advantage of CAPI and CAPZ on Azure. The Azure provider community consists of amazing people from Azure, VMware, Red Hat, Weaveworks, and more. Community members are realizing the power of the Cluster API by using CAPZ for use cases that span from building new platforms and products, like <a href="https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.2/vmware-tanzu-kubernetes-grid-12/GUID-mgmt-clusters-azure.html">Tanzu Kubernetes Grid</a>, to testing new hardware on multiple infrastructures.</p><p>Users are also discovering new use cases for CAPI. One <a href="https://medium.com/cooking-with-azure/create-aks-clusters-with-a-helm-chart-its-possible-with-clusterapi-74f991ee5713" target="_blank" rel="noopener noreferrer">recent example</a> uses CAPI and Helm to operate managed clusters. 
And our team is using CAPZ to validate new versions of, and features in, Kubernetes on Azure. Soon our upstream tests will move from using AKS Engine to CAPZ.</p><h2>But what about AKS Engine?</h2><p>Our team, Azure Container Compute Upstream, has the following mission:</p><ul><li>Enable Azure to efficiently consume innovations from the Kubernetes ecosystem</li><li>Contribute innovations from Azure to the Kubernetes ecosystem</li></ul><p>We maintain AKS Engine as an open source tool for Azure customers, but the narrow focus on Azure-specific APIs is inconsistent with our mission in the Kubernetes ecosystem.</p><p>AKS Engine works by creating ARM templates from a cluster model. ARM templates are a great Azure-specific solution for cluster creation, but this design falls short of empowering ongoing operational needs such as scaling, in-place upgrading, and extension management. And it isn't useful for users who are focused on multi-cloud scenarios like managing fleets of Kubernetes clusters across cloud infrastructures that do not support ARM.</p><p><span>AKS Engine users will continue to receive excellent community support. As more maintainers have joined the AKS Engine community, the Upstream team has shifted its focus to CAPZ for new Kubernetes features. The community is committed to integrating and validating new versions of Kubernetes into AKS Engine. AKS Engine will remain the tool for creating Kubernetes clusters on Azure Stack Hub. We encourage other AKS Engine users to evaluate moving to CAPZ, as it already provides stronger support for managing the cluster lifecycle compared to AKS Engine, and new investments from the Upstream team will be focused there. 
If you are committed to using AKS Engine longer term and would like to become a project maintainer, please reach out to us!</span></p><h2>Cluster API CAPZ: Getting started, getting help, and getting involved</h2><p>To get started building Kubernetes clusters on Azure with CAPZ, try the <a href="https://capz.sigs.k8s.io/" target="_blank" rel="noopener noreferrer">amazing CAPZ documentation</a>. When you have issues, please look at the <a href="https://github.com/kubernetes-sigs/cluster-api-provider-azure/issues" target="_blank" rel="noopener noreferrer">CAPZ issues</a> and create new ones if needed. If you want to get more involved in developing CAPZ, our team is active during <a href="https://github.com/kubernetes-sigs/cluster-api-provider-azure#office-hours" target="_blank" rel="noopener noreferrer">office hours</a> and we invite your participation. Many also find the <a href="https://github.com/kubernetes-sigs/cluster-api-provider-azure#other-ways-to-communicate-with-the-contributors" target="_blank" rel="noopener noreferrer">#cluster-api-azure</a> Slack channel to be a great source of advice, help, and collaboration.</p><p>In our next blog, we'll discuss in more detail how you can customize your CAPZ deployment to tune startup time for your application by baking your chosen operating system and patch level, and/or your application binaries and configurations, into the virtual machine images. We plan to follow that with a discussion about how to leverage GitOps principles by synchronizing a git repo with your management cluster. 
Reach out to us in the Kubernetes Slack (@craiglpeters and @jackfrancis) or on Twitter (<a href="https://twitter.com/peterscraig" target="_blank" rel="noopener noreferrer">@peterscraig</a> and <a href="https://twitter.com/jackfrancis_esq" target="_blank" rel="noopener noreferrer">@jackfrancis_esq</a>) with any other topic you'd like to see us dig into.</p><p>&nbsp;</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2020/12/15/introducing-cluster-api-provider-azure-capz-kubernetes-cluster-management/">Introducing the Cluster API Provider for Azure (CAPZ) for Kubernetes cluster management</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Empowering cloud-native developers on Kubernetes anywhere</title>
		<link>https://cloudblogs.microsoft.com/opensource/2019/11/19/microsoft-kubecon-2019-announcements/</link>
		
		<dc:creator><![CDATA[Brendan Burns]]></dc:creator>
		<pubDate>Tue, 19 Nov 2019 18:16:03 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Azure Kubernetes Service]]></category>
		<category><![CDATA[Helm]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2019/11/19/microsoft-kubecon-2019-announcements/</guid>

					<description><![CDATA[<p>Hello KubeCon and welcome to San Diego! It&#8217;s fantastic to have the chance to get some warm California sun, as well as the warmth of the broader Kubernetes community. From the very first community meeting, through the first KubeCon and on to today, it&#8217;s been truly amazing to have been able to watch and help<span><a class="read-more" aria-label="Read more about Empowering cloud-native developers on Kubernetes anywhere" href="https://cloudblogs.microsoft.com/opensource/2019/11/19/microsoft-kubecon-2019-announcements/">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/11/19/microsoft-kubecon-2019-announcements/">Empowering cloud-native developers on Kubernetes anywhere</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Hello KubeCon and welcome to San Diego! It&rsquo;s fantastic to have the chance to get some warm California sun, as well as the warmth of the broader Kubernetes community. From the very first community meeting, through the first KubeCon and on to today, it&rsquo;s been truly amazing to watch and help the Kubernetes community grow. As KubeCon arrives, I&rsquo;m excited to note how we are continuing to innovate and empower cloud-native developers on Kubernetes anywhere.</p><p>In the spirit of innovation, I&rsquo;m thrilled to announce our new open source effort to <a href="https://aka.ms/KubeCon/CC4K">enable trusted execution environments for Kubernetes</a>. Trusted execution environments, or &ldquo;enclaves,&rdquo; are hardware-backed secure execution environments that ensure processes and their memory remain secure while they execute. Today, we&rsquo;re enabling trusted computing on Kubernetes anywhere via the Open Enclave SDK.</p><p>We&rsquo;re also releasing a resource plugin that makes Encrypted Page Cache RAM a resource that the Kubernetes scheduler can use to make scheduling decisions. The number of enclaves on a CPU is limited, and this plugin ensures that Pods that need enclaves will be guaranteed to land on a node with an enclave available. This scheduler support is critical to running trusted compute environments in cloud-native applications via Pods.</p><p>Beyond these innovations for secure computing, I&rsquo;m incredibly proud of the work that the Helm community has done to build and <a href="https://cloudblogs.microsoft.com/opensource/2019/11/13/helm-3-available-simpler-more-secure/">release Helm 3.0 last week</a>. The vast majority of workloads deployed to Kubernetes are deployed via Helm, and Helm 3 is the next step in this journey. 
Over the past few years, the Helm team has carefully listened to user feedback about what was working and where changes were needed.</p><p>Of the many fixes and improvements, the most popular is probably the removal of Tiller from the cluster, making Charts more Kubernetes native and more secure by default. Speaking of security, the recent glowing <a href="https://helm.sh/blog/2019-11-04-helm-security-audit-results/">independent security review of the Helm code base</a> shows how dedicated and careful the Helm community has been in building a tool that is not just incredibly useful but also secure. Many congratulations to the Helm community on this important milestone.</p><p>Just like the Helm team, in Azure, our open source work begins by listening to our customers, in particular those in IoT and telecommunications. Their feedback led us to understand how important it was for <a href="https://kubernetes.io/docs/concepts/services-networking/dual-stack/">Kubernetes to support both IPv4 and IPv6 addresses</a> for the same Pods. Major kudos are due to Kal Henidak for his dedicated and tireless work in engineering both the code and design changes necessary to support multiple addresses per Pod. As you might imagine, this change required careful work and coordination across the entire Kubernetes code base and community. Kal&rsquo;s hard work in collaboration with the SIG-Networking community is being recognized with a shared keynote with Tim Hockin. Plan on attending the <a href="https://kccncna19.sched.com/event/UdIA/keynote-the-long-road-to-ipv4ipv6-dual-stack-kubernetes-tim-hockin-principal-software-engineer-google-khaled-kal-henidak-principal-software-engineer-microsoft-azure">keynote to learn more about IPv4 and IPv6 in Kubernetes</a>!</p><p>Finally, by combining open source community and innovation, we have a remarkable collection of open source projects reaching important milestones at KubeCon. 
The newly announced Buck (Brigade Universal Controller for Kubernetes) project shows how <a href="https://cloudblogs.microsoft.com/opensource/2019/11/15/in-cluster-cnab-management-brigade/">Cloud Native Application Bundles (CNAB) with Brigade</a> radically simplify the development of new operators. The Kubernetes-based Event-driven Autoscaling (KEDA) project has shown incredible community interest; it's a great collaboration between Azure Functions, Red Hat, and others. Here at KubeCon, <a href="https://cloudblogs.microsoft.com/opensource/2019/11/19/keda-1-0-release-kubernetes-based-event-driven-autoscaling">the KEDA community is hitting the 1.0 milestone</a> and is stable and ready for production use. I also want to congratulate the Cloud Events community on their recent 1.0 release, and I&rsquo;m excited that <a href="https://aka.ms/aeg-cloudeventsv1-blog">Azure Event Grid has correspondingly added support for the 1.0 version of Cloud Events</a>. Cloud Events is a CNCF project for an open and portable API for event-driven programming, and it&rsquo;s awesome that it is available in a managed environment in Azure.</p><p>Of course, containers and DevOps are a year-round focus for my teams beyond KubeCon. We've been busy this fall.</p><p>In the four weeks since we launched the <a href="https://cloudblogs.microsoft.com/opensource/2019/10/16/announcing-dapr-open-source-project-build-microservice-applications/">Distributed Application Runtime (Dapr) project</a>, we have seen strong interest from the community and have been listening to the many stories of how people are using Dapr in their projects, including modernizing Java code, building games, and integrating with IoT solutions. The breadth across different industries is amazing to see. The interest in the <a href="https://github.com/dapr/dapr/">Dapr runtime repo</a> has grown beyond our expectations. It's been awesome to see the community come together and continue the momentum. 
We are excited to announce the release of Dapr v0.2.0, focusing on community-driven components, fixes across the Dapr runtime and CLI, updates to documentation, samples, and the addition of an end-to-end testing framework. You can find out more about the <a href="https://github.com/dapr/dapr/blob/master/docs/release_notes/v0.2.0.md">v0.2.0 release at the Dapr repo</a>.</p><p>Just building distributed systems isn't enough; you need to be able to observe how they are running in production, and the CNCF Prometheus project has emerged as a de facto standard for exposing metrics on all sorts of servers. But it&rsquo;s still easier to integrate with cloud-based monitoring than to run your own metrics server. To enable this, Azure Monitor for containers <a href="https://azure.microsoft.com/en-au/blog/improving-observability-of-your-kubernetes-deployments-with-azure-monitor-for-containers/">can scrape the metrics exposed from Prometheus endpoints</a> so you can quickly gather failure rates, responses per second, and latency. From Log Analytics, you can easily run a Kusto Query Language (KQL) query and create a custom dashboard in the Azure portal. If you use Grafana to support your dashboard requirements, you can visualize the container and Prometheus metrics in a Grafana dashboard. Azure monitoring combines the best of open technology with the reliability of a cloud service.</p><p>In the last few years, KubeCon has grown from a single track to many tracks and thousands of people. For me personally, and the community in general, it&rsquo;s been an incredible journey. 
I&rsquo;m excited to see everyone in San Diego. Please stop by the Azure booth and say hello!</p><p>&nbsp;</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/11/19/microsoft-kubecon-2019-announcements/">Empowering cloud-native developers on Kubernetes anywhere</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Kubernetes-based event-driven autoscaling (KEDA) 1.0 release</title>
		<link>https://cloudblogs.microsoft.com/opensource/2019/11/19/keda-1-0-release-kubernetes-based-event-driven-autoscaling/</link>
		
		<dc:creator><![CDATA[Jeff Hollan]]></dc:creator>
		<pubDate>Tue, 19 Nov 2019 18:15:20 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Azure Functions]]></category>
		<category><![CDATA[Azure Kubernetes Service]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2019/11/19/keda-1-0-release-kubernetes-based-event-driven-autoscaling/</guid>

					<description><![CDATA[<p>Event-driven applications are a key pattern for cloud-native applications. Event-driven is at the core of many growing trends, including serverless compute like Azure Functions. Event-driven means your application responds and reacts to different events: business or system events. For example, a "new subscriber" event may trigger some code to send a welcome email and<span><a class="read-more" aria-label="Read more about Kubernetes-based event-driven autoscaling (KEDA) 1.0 release" href="https://cloudblogs.microsoft.com/opensource/2019/11/19/keda-1-0-release-kubernetes-based-event-driven-autoscaling/">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/11/19/keda-1-0-release-kubernetes-based-event-driven-autoscaling/">Kubernetes-based event-driven autoscaling (KEDA) 1.0 release</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Event-driven applications are a key pattern for cloud-native applications. Event-driven is at the core of many growing trends, including serverless compute like Azure Functions. Event-driven means your application responds and reacts to different events: business or system events. For example, a "new subscriber" event may trigger some code to send a welcome email and create an entry in a CRM system.</p><p>Just as important as the code responding to events is the scaling of that code and compute. Kubernetes-based event-driven autoscaling (KEDA) is an <a href="https://github.com/kedacore/keda">open source component</a> that can run in a Kubernetes cluster to provide event-driven autoscaling for every container. Today we are thrilled to announce a 1.0 version of KEDA that is ready for use in production.</p><h2>Serverless scale within Kubernetes</h2><p>Kubernetes provides a powerful container orchestration platform, but by default it will only scale based on system metrics like CPU and memory. This means there can be a significant delay in a system&rsquo;s ability to respond to events. Take an event like a message landing on a message queue. Thousands of queue messages may be sent and awaiting processing, but Kubernetes knows nothing about how many events are happening. Kubernetes will see the CPU and memory of the containers start to rise, but it will take some time for the system to react. KEDA greatly improves this by enriching Kubernetes autoscaling with metrics on the rate of events.</p><p>With KEDA installed, Kubernetes can now know how many messages are arriving, and use that information to start to scale an app even <em>before</em> the CPU starts to rise. KEDA can assist Kubernetes to scale apps to and from zero as well. 
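As a rough sketch of the arithmetic involved (hedged: this mirrors the HPA-style calculation that event metrics feed into, not KEDA's actual source code, and the parameter names are invented), the replica count for a queue-driven app might be computed like this:

```python
import math

def desired_replicas(queue_length, target_per_replica, max_replicas):
    """HPA-style external-metric math: ceil(metric / target) replicas,
    capped at max_replicas, with scale-to-zero when no events are waiting."""
    if queue_length == 0:
        return 0  # idle: the workload can be deactivated entirely
    return min(max_replicas, math.ceil(queue_length / target_per_replica))

print(desired_replicas(0, 5, 10))     # 0  (scaled to zero while idle)
print(desired_replicas(12, 5, 10))    # 3
print(desired_replicas(1000, 5, 10))  # 10 (capped at the maximum)
```

The key difference from CPU-based scaling is the input: queue length reflects pending work directly, so the replica count rises before any container is saturated.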
If an app that responds to events has been idle, KEDA lets Kubernetes know it can scale the app to zero to prevent it from consuming any CPU when not needed.</p><img loading="lazy" alt="GIF of KEDA scaling" width="1728" height="1080" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2019/11/kedascale.gif"><p>During the preview, we've worked closely with users and organizations who have been using KEDA for event-driven Kubernetes. Two notable examples are Swiss Re Asset Management and CycloMedia. Tom van de Meent, lead architect for CycloMedia, shared, &ldquo;At CycloMedia we've been running Azure Functions with Azure Storage Queues in Kubernetes. KEDA has been helpful in adding dynamic scaling to handle these workloads.&rdquo;</p><h2>Developed in the open</h2><p>We announced KEDA in collaboration with Red Hat back in April and have been delighted with the engagement from users and the community. With hundreds of merged pull requests, dozens of contributors, and weekly community calls, KEDA has only gotten better. KEDA now supports 13 event sources, including Azure Queues, Azure Event Hubs, AWS SQS, Google Cloud PubSub, RabbitMQ, NATS Streaming, Kafka, and more. KEDA also has an extensible gRPC contract so other scalers can be dynamically discovered to add additional capabilities. The community is also helping to build support on top of the popular Operator SDK for ease of installation and management in many environments, including Red Hat OpenShift 4.</p><p>&ldquo;Red Hat is working with the cloud-native community to enable portability of serverless applications in hybrid environments. Part of this effort is contributing to KEDA &mdash; both via the upstream project and by bringing its utility to customers using enterprise Kubernetes and containers with Red Hat OpenShift. 
We congratulate the community on reaching this milestone, and look forward to continued collaboration with Microsoft and others to help extend choice for developers that want to be able to build and deploy applications anywhere," said William Markito Oliveira, Senior Manager of Product Management, Red Hat.</p><p>We couldn't have gotten to this 1.0 milestone without help from the community and look forward to continued effort and engagement. We recently presented KEDA as part of the serverless working group in CNCF and are in the process of nominating and donating KEDA to the CNCF as a sandbox project. We believe the best products are made in an open and inclusive way.</p><h2>Serverless functions and serverless scale</h2><p>While KEDA works with any container, you can pair KEDA with the open source Azure Functions runtime to enable additional serverless capabilities within Kubernetes. Azure Functions provides a programming model that can run anywhere: in a container running on-premises, fully managed in Azure, or in any Kubernetes cluster.</p><p>With Azure Functions, application developers don't need to worry about writing the code to connect, trigger, and pull from an event source like RabbitMQ, Kafka, or Azure Event Hubs. Instead, they focus only on the code and business logic. With the integrated Azure Functions tooling, you can take any Azure Function app and deploy it to Kubernetes alongside KEDA for event-driven scale and event-driven apps.</p><p>KEDA can also run alongside <a href="https://github.com/virtual-kubelet/virtual-kubelet">Virtual Kubelet</a> and <a href="https://docs.microsoft.com/azure/aks/virtual-nodes-portal">Azure Kubernetes Service Virtual Nodes</a>. With Virtual Nodes, you can spin up containers outside of the dedicated nodes for a cluster and run them on serverless containers. 
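The trigger/binding split described above, where the runtime owns the event-source plumbing and the developer owns only the business logic, can be sketched conceptually like this. This is not the Azure Functions SDK; every name below is invented for illustration.

```python
# Conceptual sketch of a trigger/binding split: a stand-in "runtime"
# drains the event source, and the handler contains only business logic.
# Not real Azure Functions APIs; all names here are illustrative.
def handle_new_subscriber(event: dict) -> str:
    # Business logic only: no queue clients, no connection strings.
    return f"welcome email queued for {event['email']}"

def runtime_dispatch(queue, handler):
    """Stand-in for the runtime: pull each event and invoke the handler."""
    return [handler(event) for event in queue]

results = runtime_dispatch([{"email": "dev@example.com"}], handle_new_subscriber)
print(results)  # ['welcome email queued for dev@example.com']
```

Because the handler never touches the event source, the same function body can run against RabbitMQ, Kafka, or Azure Event Hubs simply by swapping the trigger configuration.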
The combination of KEDA, Azure Functions, and Virtual Nodes brings event-driven serverless functions on top of on-demand compute for a powerful serverless platform.</p><p>I want to share my sincere appreciation for the response and involvement of the community on KEDA. If you haven't used KEDA, you can learn more at <a href="https://keda.sh/">KEDA.sh</a> or try a <a href="https://github.com/kedacore/keda#getting-started">step-by-step QuickStart</a>. We are excited with how KEDA fits into cloud-native and serverless applications and look forward to more to come in this space as we continue to partner with the community.</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/11/19/keda-1-0-release-kubernetes-based-event-driven-autoscaling/">Kubernetes-based event-driven autoscaling (KEDA) 1.0 release</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Join us for the first Windows Containers in Kubernetes &#8220;Unconference&#8221;</title>
		<link>https://cloudblogs.microsoft.com/opensource/2019/07/16/first-windows-containers-kubernetes-unconference-2019/</link>
		
		<dc:creator><![CDATA[Craig Peters]]></dc:creator>
		<pubDate>Tue, 16 Jul 2019 18:16:41 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Azure Kubernetes Service]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2019/07/16/first-windows-containers-kubernetes-unconference-2019/</guid>

					<description><![CDATA[<p>Since Windows containers became a stable feature in Kubernetes earlier this year, we've seen exciting growth in use of Windows container technology. The fact that most cloud providers now have managed services supporting Windows containers through the Kubernetes API is a reflection of the demand.  During conversations among the community at KubeCon + CloudNativeCon in Barcelona a few months ago, it became apparent that there needs to be more effort<span><a class="read-more" aria-label="Read more about Join us for the first Windows Containers in Kubernetes &#8220;Unconference&#8221;" href="https://cloudblogs.microsoft.com/opensource/2019/07/16/first-windows-containers-kubernetes-unconference-2019/">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/07/16/first-windows-containers-kubernetes-unconference-2019/">Join us for the first Windows Containers in Kubernetes &#8220;Unconference&#8221;</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Since Windows containers became a stable feature in Kubernetes <a href="https://kubernetes.io/blog/2019/04/01/kubernetes-v1.14-delivers-production-level-support-for-windows-nodes-and-windows-containers/">earlier this year</a>, we've seen exciting growth in the use of Windows container technology. The fact that most cloud providers now have managed services supporting Windows containers through the Kubernetes API is a reflection of the demand.</p><p>During conversations among the community at KubeCon + CloudNativeCon in Barcelona a few months ago, it became apparent that more effort needs to be put into sharing knowledge across organizations. Windows containers are new relative to Linux containers. The rapid advancement of both Kubernetes and Windows containers means that many questions arise when planning for the future. Adding to the challenge, the developers who work on the core technology are mostly employed by a handful of companies.</p><p>With this in mind, I'm excited to share that <a href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR5-_q_vEyFVNofZJNZLyGUBUMkNNWk9LWjhTMVRVSlowRVZRUjM3N0FNSi4u">we're hosting the community in the Puget Sound area on July 26th</a> for a meeting of the minds at the <a href="https://binged.it/2gsiFqu">Microsoft Reactor in Redmond</a>. Please join engineers from Microsoft, Google, AWS, Docker, VMware, and more to talk about everything from why you might want to use Kubernetes to schedule Windows containers, to the inner workings of the container runtime, networking, and storage interfaces and how they're evolving as Windows containers evolve.</p><p>Come join your colleagues and fellow community members to share your knowledge and get your questions answered. We'll first meet and greet over coffee and snacks at 9:30 AM, then Taylor Brown will kick us off with a short talk about the state of Windows containers and how they fit into the Kubernetes ecosystem. Next, we'll figure out which topics float to the top and break into tracks as necessary. Finally, at the end of the day we'll recap, and anyone interested can grab a refreshment and continue the community-building conversations. For those who can't join us in person, we will do what we can to answer any <a href="https://docs.google.com/document/d/1WalzZO3kn2EEsLrvnG8gs3nTCJJBN3Dh0Vb86dL64nU/edit?usp=sharing">questions in this document</a>.</p><p>If you can join us in person, please <a href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR5-_q_vEyFVNofZJNZLyGUBUMkNNWk9LWjhTMVRVSlowRVZRUjM3N0FNSi4u">sign up and share your thoughts on topics in this form</a>.</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/07/16/first-windows-containers-kubernetes-unconference-2019/">Join us for the first Windows Containers in Kubernetes &ldquo;Unconference&rdquo;</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Kubernetes: What it is and what it isn’t</title>
		<link>https://cloudblogs.microsoft.com/opensource/2019/06/03/what-is-kubernetes/</link>
					<comments>https://cloudblogs.microsoft.com/opensource/2019/06/03/what-is-kubernetes/#comments</comments>
		
		<dc:creator><![CDATA[Jason Haley]]></dc:creator>
		<pubDate>Mon, 03 Jun 2019 18:00:15 +0000</pubDate>
				<category><![CDATA[English]]></category>
		<category><![CDATA[Azure Kubernetes Service]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2019/06/03/what-is-kubernetes/</guid>

					<description><![CDATA[<p>I'm a developer and I'll admit it, I'm learning Kubernetes. I've been developing web applications now for more than 20 years; however, the past two years I've moved to working with microservices applications. Originally the microservices were web sites on multiple virtual machines. Last year we started moving towards containers to achieve a higher density<span><a class="read-more" aria-label="Read more about Kubernetes: What it is and what it isn't" href="https://cloudblogs.microsoft.com/opensource/2019/06/03/what-is-kubernetes/">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/06/03/what-is-kubernetes/">Kubernetes: What it is and what it isn’t</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>I'm a developer and I'll admit it, I'm learning Kubernetes. I've been developing web applications now for more than 20 years; however, for the past two years I've moved to working with microservices applications. Originally the microservices were web sites on multiple virtual machines. Last year we started moving towards containers to achieve a higher density on our machines and to standardize the package our deployments shipped in. Once we had a handful of containers running, it was obvious we needed a platform to manage them.</p><p>Kubernetes is a container management platform. Real-world applications and services typically contain multiple containers that get deployed across multiple servers. Kubernetes allows you to treat those servers as a pool of compute resources. It monitors resource usage and requirements to determine where and when containers should run, and whether new instances need to be added. It has many features that make it easier for me to build systems made of many containers and keep my sanity.</p><p>As I'm learning to build systems using Kubernetes, I have found that it solves these problems well for me:</p><ul><li><strong>Compute scheduling</strong>. By looking at the resource needs of a pod (which is one or more containers), the scheduler will find the right place to run it.</li><li><strong>Self-healing</strong>. If a pod crashes, a new one will be created to replace it.</li><li><strong>Horizontal scaling</strong>. When a pod's resource metrics indicate that the number of instances should increase, Kubernetes adds new instances; it can likewise reduce the instance count when demand drops.</li><li><strong>Volume management</strong>. It manages the persistent storage needed by my applications.</li><li><strong>Service discovery &amp; load balancing</strong>. Pods get their own IP addresses and DNS names, and traffic to multiple instances is load balanced.</li><li><strong>Automated rollouts &amp; rollbacks</strong>. The health of the pods is monitored during updates, and if a failure occurs, the deployment can roll back to the previous version.</li><li><strong>Secret &amp; configuration management</strong>. Configuration and secrets are managed at the platform level and made available to the containers that use them.</li></ul><p>Of course, there are some things needed that are not part of Kubernetes. Kubernetes itself needs to run on a platform that handles security, governance, identity and access control, networking, data, and storage. We have chosen Azure for our hosted Kubernetes provider, but all the large cloud vendors offer a hosted version these days.</p><p>Choosing a hosted Kubernetes provider has shrunk my learning curve by allowing me to focus on managing my application (and not learning <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way">Kubernetes The Hard Way</a>).</p><p>Besides hosting these multiple-container systems, developing them has its own challenges. However, I've found the latest round of tools has features for native container development that make it easier.</p><p>I must admit, I am most familiar with Microsoft's tools, so I&rsquo;ll focus on those here:</p><ul><li><a href="https://visualstudio.microsoft.com/vs/">Visual Studio </a>and <a href="https://code.visualstudio.com/?wt.mc_id=DX_841432">Visual Studio Code</a> both have extensions that help me work with Docker and Kubernetes without needing to jump in and out of my IDE or editor.</li><li><a href="https://docs.microsoft.com/en-us/azure/container-registry/container-registry-intro">Azure Container Registry</a> gives me a private registry to keep my container images and the Helm charts I use for deploying to Kubernetes.</li><li><a href="https://azure.microsoft.com/en-us/services/devops/pipelines/">Azure DevOps Pipelines</a> is super flexible and works great with my source code repository to build my container images and store them in my registry. It's also great for deploying to my different Kubernetes environments.</li><li><a href="https://docs.microsoft.com/en-us/azure/dev-spaces/">Azure Dev Spaces</a> helps me debug containers deployed in <a href="https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes">Azure Kubernetes Service</a>, as opposed to having to duplicate a local setup in order to reproduce the problem.</li></ul><p>If you are also learning Kubernetes, here are a couple of resources that I&rsquo;ve found useful: <a href="https://aka.ms/k8slearning">k8slearning</a> and <a href="https://aka.ms/LearnKubernetes">https://aka.ms/LearnKubernetes</a>.</p><p>Questions or comments? Let me know in the comments below.</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/06/03/what-is-kubernetes/">Kubernetes:&nbsp;What it is and what it isn&rsquo;t</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://cloudblogs.microsoft.com/opensource/2019/06/03/what-is-kubernetes/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Helm 3: simpler to use, more secure to operate</title>
		<link>https://cloudblogs.microsoft.com/opensource/2019/05/21/helm-3-release/</link>
		
		<dc:creator><![CDATA[Bridget Kromhout]]></dc:creator>
		<pubDate>Tue, 21 May 2019 14:45:53 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Azure Kubernetes Service]]></category>
		<category><![CDATA[Helm]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2019/05/21/helm-3-release/</guid>

					<description><![CDATA[<p>Helm is the best way to find, share, and use software built for Kubernetes, and the eagerly anticipated Helm 3 alpha is now available for testing. Try it out, give feedback, and help the Helm community get it ready for you to depend upon. Why Helm? Many teams already rely on Helm 2 to deploy<span><a class="read-more" aria-label="Read more about Helm 3: simpler to use, more secure to operate" href="https://cloudblogs.microsoft.com/opensource/2019/05/21/helm-3-release/">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/05/21/helm-3-release/">Helm 3: simpler to use, more secure to operate</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Helm is the best way to find, share, and use software built for Kubernetes, and the eagerly anticipated Helm 3 alpha is now available for testing. Try it out, give feedback, and help the <a href="https://helm.sh/">Helm</a> community get it ready for you to depend upon.</p><h2>Why Helm?</h2><p>Many teams already rely on Helm 2 to deploy and manage their applications on <a href="https://kubernetes.io/">Kubernetes</a>, the open source project that has become the de facto open source distributed systems kernel. Kubernetes usage is reportedly <a href="https://blog.heptio.com/the-results-are-in-the-state-of-k8s-2018-d25e54819416">above 70% in large organizations as of 2018</a>. At Microsoft, we see customer uptake of the <a href="https://azure.microsoft.com/services/kubernetes-service/">managed Azure Kubernetes Service</a> growing rapidly, and a great deal of our industry is focused on this space.</p><p>Kubernetes orchestrates containers, typically as a collection of services that together enable a microservice application in which various services work together to provide a larger experience. To host these workloads, many different Kubernetes components must be configured. In addition, Kubernetes has no built-in concept of an application as a logical, manageable unit, which makes application operations more difficult unless an organization dedicates staff to focusing on those primitives.</p><p>Typically, we prefer to make it simpler for cluster users to deploy and manage their applications as logical units in a self-service fashion. That's where Helm adds value!</p><h2>Helm is the package manager for Kubernetes applications</h2><p>Last year, the <a href="https://www.cncf.io/blog/2018/08/29/cncf-survey-use-of-cloud-native-technologies-in-production-has-grown-over-200-percent/">CNCF's cloud native survey</a> made it clear that the "preferred method for packaging is Helm (68%) followed by managed Kubernetes offerings (19%)." 
Users find that Helm is a great way to:</p><ul><li>Manage complexity: describe complex Kubernetes applications in a "chart."</li><li>Share charts: search for shared charts on <a href="https://hub.helm.sh/">public</a> and private chart repositories.</li><li>Easily update Kubernetes applications: in-place upgrades and rollbacks (which are actually roll-forwards; Helm doesn't include time travel!) to past versions, using the release history of charts.</li></ul><p>Making complex things easier to manage is the hallmark of a good tool, and the strong adoption of Helm shows us that many people are looking for this kind of tooling.</p><h2>Helm 3 is built with production scenarios in mind</h2><p>Adventure. Excitement. An on-call engineer craves none of these things; operational surprises don't help us sleep at night. Predictable, repeatable, production-ready software we can operate at scale is delightful, and that's why we create tools like Helm.</p><p>If you're using Helm already, try Helm 3 today to help the community ensure there are no surprises for your use cases. Helm 3 is the result of <em>years</em> of community contributions and conversations that clearly show how organizations are using Helm and how they need it to evolve for their production use cases.</p><p>Even if Helm is new for you, your input is welcome! Take a look at Helm 3 and find out how Helm charts help you bring operational simplicity and enterprise-ready stability to your Kubernetes environments.</p><h2>Simpler to use, more secure to operate</h2><p>Wasn't Helm 2 already simple? If we install the same Helm chart with Helm 2 and Helm 3, the application installed will be precisely the same, just as we would expect! So, what's the motivation behind Helm 3?</p><p>Tiller, the server-side component of Helm 2, requires additional security steps, and Helm 2 was created so developers could install applications at a time when Kubernetes did not yet have role-based access control (RBAC). 
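</p><p>As a toy illustration of the release model described earlier (charts installed as named releases, with a revision history that makes rollback possible), here is a short sketch. It is purely illustrative pseudologic, not Helm's actual implementation or API:</p>

```python
# Toy model of Helm-style release bookkeeping (illustrative only).
# Each release keeps a revision history, so a "rollback" is really a
# roll-forward: a new revision that reuses an earlier chart version.

class ReleaseStore:
    def __init__(self):
        self.releases = {}  # release name -> list of (revision, chart_version)

    def install(self, name, chart_version):
        self.releases[name] = [(1, chart_version)]

    def upgrade(self, name, chart_version):
        history = self.releases[name]
        history.append((history[-1][0] + 1, chart_version))

    def rollback(self, name, revision):
        history = self.releases[name]
        _, old_chart = next(h for h in history if h[0] == revision)
        # Rollback creates a *new* revision pointing at the old chart.
        history.append((history[-1][0] + 1, old_chart))

    def current(self, name):
        return self.releases[name][-1]

store = ReleaseStore()
store.install("myapp", "1.0.0")
store.upgrade("myapp", "1.1.0")
store.rollback("myapp", revision=1)
print(store.current("myapp"))  # -> (3, '1.0.0')
```

<p>Note how rolling back to revision 1 produces revision 3 rather than rewriting history, mirroring the "roll-forward" behavior mentioned above.</p><p>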
This complexity isn't needed in recent releases of Kubernetes, so for Helm 3 we were able to remove Tiller entirely.</p><p>As a result, Helm 3 is significantly simpler, while also supporting all the modern security, identity, and authorization features of today's Kubernetes. The growing maturity of Kubernetes identity and security features, like role-based access control (RBAC), and of advanced features, such as custom resource definitions (CRDs), allowed us to revisit and simplify Helm's architecture.</p><h2>Join the cloud-native fun!</h2><p>We're so excited for what Helm 3 will allow you to do! You can read about all the new features, such as chart reuse in <em>chart libraries</em>, in the blog series <a href="https://helm.sh/blog/helm-3-preview-pt1/">Helm 3: Charting Our Future</a> and join the discussion <a href="https://github.com/helm/helm">on GitHub</a> to make this major release the best Helm for you. We're looking forward to connecting during <a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/schedule/">sessions and deep-dives at KubeCon EU</a> this week and can't wait to hear your feedback on the alpha version.</p><p>Your feedback is invaluable, as the Helm community intends to make Helm 3 generally available (GA) at <a href="https://helm.sh/blog/helm-summit-eu-2019/">Helm Summit EU</a> in September. Come share your stories and continue the conversation!</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/05/21/helm-3-release/">Helm 3: simpler to use, more secure to operate</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Extending Kubernetes in the open</title>
		<link>https://cloudblogs.microsoft.com/opensource/2019/05/21/kubecon-microsoft-updates-helm-3-virtual-kubelet-1-visual-studio-code-service-mesh-interface/</link>
		
		<dc:creator><![CDATA[Brendan Burns]]></dc:creator>
		<pubDate>Tue, 21 May 2019 14:45:42 +0000</pubDate>
				<category><![CDATA[English]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Azure Kubernetes Service]]></category>
		<category><![CDATA[Helm]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[Visual Studio Code]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2019/05/21/kubecon-microsoft-updates-helm-3-virtual-kubelet-1-visual-studio-code-service-mesh-interface/</guid>

					<description><![CDATA[<p>Greetings and welcome to KubeCon EU in Barcelona! As always, it is wonderful to see the community come together to celebrate how Kubernetes has made cloud-native ubiquitous and changed the way that we build and manage our software. These conferences are fantastic because they represent an opportunity to meet with our users and learn about<span><a class="read-more" aria-label="Read more about Extending Kubernetes in the open" href="https://cloudblogs.microsoft.com/opensource/2019/05/21/kubecon-microsoft-updates-helm-3-virtual-kubelet-1-visual-studio-code-service-mesh-interface/">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/05/21/kubecon-microsoft-updates-helm-3-virtual-kubelet-1-visual-studio-code-service-mesh-interface/">Extending Kubernetes in the open</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Greetings and welcome to <a href="https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/">KubeCon EU</a> in Barcelona! As always, it is wonderful to see the community come together to celebrate how <a href="https://aka.ms/k8slearning">Kubernetes</a> has made cloud-native ubiquitous and changed the way that we build and manage our software.</p><p>These conferences are fantastic because they represent an opportunity to meet with our users and learn about how our software is both making them more successful and also hindering their success. While learning about how we empower developers is awesome, it's even more important (and significantly more humbling) to learn about the areas where they have unmet needs, have hit bugs, or are otherwise blocked by choices we've made or bugs that we've written.</p><p>Listening to our users and customers and focusing our energy on solving their real-world problems is the #1 (and #2 and #3) focus for my teams. In that spirit, I'm excited to announce: the Helm 3 release, Kubernetes integration with Visual Studio Code (VS Code), the Virtual Kubelet project 1.0 release, and the Service Mesh Interface (SMI), which is our new community project for collaboration around Service Mesh infrastructure.</p><h2>Helm 3</h2><p>The first, and arguably most significant for the Kubernetes community, is the first Alpha release of <a href="https://aka.ms/k8s/helm3">Helm 3</a>. Helm 3 represents a nearly complete re-factoring of the Helm package manager to evolve it from its origins to a modern application package manager. The Helm project is nearly as old as Kubernetes itself. 
As a result, its original design pre-dated many advancements in Kubernetes like CustomResourceDefinitions and even Kubernetes RBAC.</p><p>Because of this, the Helm 2 architecture was forced to implement a number of features itself, which made it less tightly integrated with Kubernetes, and meant that managing things like RBAC of Charts and Resources was complicated and disconnected from Kubernetes itself. Helm 3 eliminates this impedance mismatch.</p><p>By replacing custom APIs for charts and deployments with CustomResourceDefinitions, things like Kubernetes RBAC directly apply to Helm, and the whole system feels significantly more tightly integrated and Kubernetes-native. You can now use the 'kubectl' command line to interact with your Helm charts and Kubernetes-native RBAC to limit access and resources that users can create.</p><p>Helm has become the de facto standard for packaging and deploying Kubernetes applications by focusing on the end-user and enabling those users to be successful. The advances and improvements in Helm 3 continue this trend and make it even more useful to existing users as well as to others who may have previously tried other solutions. Many congratulations to the Helm project and community on reaching this milestone!</p><h2>Kubernetes extension for Visual Studio Code</h2><p>Making Kubernetes accessible to everyone is a challenge that I think about every day. It's great that we've built a system that makes it easier to deploy distributed systems, but if we haven't made it possible for everyone to use Kubernetes then we've actually failed. In that vein I have spent a bunch of time recently working with the team that builds the open source <a href="https://github.com/Azure/vscode-kubernetes-tools">Kubernetes extension for Visual Studio Code</a>.</p><p>This extension brings native Kubernetes integration to VS Code. 
You can easily view the contents of your cluster, see the state of pods at a glance, right-click to get a terminal in a Pod or port-forward network traffic, and easily filter your way through logs to identify problems all within the exact same environment where your code lives.</p><p>Furthermore, in keeping with our open principles, the extension is open source on GitHub and works with Kubernetes anywhere. No matter where you are running Kubernetes, the VS Code integration makes it easier to work with your applications and clusters with fewer windows and context switches.</p><p>Today at KubeCon EU in Barcelona, I'm excited our VS Code integration has reached the 1.0 milestone and is fully supported for production management of your Kubernetes clusters. Even more importantly, we've also added an extensibility API that makes it possible for others, like Red Hat OpenShift, to build their own integration experiences on top of our baseline Kubernetes integration. ('yo dawg, I heard you like extensions, so I added extensibility to your extension.') Much like Custom Resources, adding extensibility enables collaboration at the core while enabling others to build rich experiences targeted to specific environments. It's a testament to the value of an open and extensible approach to tooling.</p><h2>Virtual Kubelet 1.0</h2><p>Speaking of community, I'm also incredibly excited to see the Virtual Kubelet hit the 1.0 milestone. The Virtual Kubelet represents a unique integration of Kubernetes and serverless container technologies, like Azure Container Instances. Enabling people to free themselves from the toil of managing an operating system, while still using Kubernetes for orchestration, is a powerful value proposition to startups and enterprises alike.</p><p>I'm really excited that this community is healthy and reaching this important milestone, and that Azure was able to play a role. 
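</p><p>To make the Virtual Kubelet idea concrete, here is a hypothetical sketch of its provider pattern: a virtual node registers with the cluster and delegates pod lifecycle calls to a pluggable backend. The real project defines this interface in Go; all names below are invented for illustration:</p>

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the Virtual Kubelet idea: the "node" is virtual,
# and pod lifecycle operations are delegated to a pluggable provider
# (for example, a serverless container service) instead of a real machine.

class PodProvider(ABC):
    @abstractmethod
    def create_pod(self, pod: dict) -> None: ...

    @abstractmethod
    def delete_pod(self, name: str) -> None: ...

    @abstractmethod
    def get_pods(self) -> list: ...

class ServerlessContainerProvider(PodProvider):
    """Stands in for a service like Azure Container Instances."""
    def __init__(self):
        self._pods = {}

    def create_pod(self, pod):
        self._pods[pod["name"]] = pod  # no VM or OS to manage

    def delete_pod(self, name):
        self._pods.pop(name, None)

    def get_pods(self):
        return list(self._pods.values())

class VirtualNode:
    """Looks like a node to the scheduler; forwards work to the provider."""
    def __init__(self, provider: PodProvider):
        self.provider = provider

    def schedule(self, pod):
        self.provider.create_pod(pod)

node = VirtualNode(ServerlessContainerProvider())
node.schedule({"name": "web", "image": "nginx"})
print([p["name"] for p in node.provider.get_pods()])  # ['web']
```

<p>The scheduler only ever talks to the node abstraction, so swapping the provider changes where pods actually run without changing the orchestration layer.</p><p>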
Earlier this month at //build we also announced the general availability of <a href="https://docs.microsoft.com/en-us/azure/aks/virtual-nodes-cli">AKS virtual nodes</a> powered by the open source community code in the Virtual Kubelet project. Again, this shows the value of open source even when delivering tightly integrated features for Azure.</p><h2>Service Mesh Interface (SMI)</h2><p>Finally, I'm thrilled to see the community that is developing around the <a href="https://aka.ms/k8s/smi">Service Mesh Interface</a> specification. It has been clear to us for a while that users and customers are excited by the promise of Service Mesh to provide the next advances in cloud-native application development. However, it is also clear that the monolithic integration of interface and implementation that previously existed for Service Mesh has limited adoption.</p><p>The world of Service Mesh is fast-evolving, and users are concerned that if they are locked to any particular implementation, they may be locked into a complex surface area. By providing a generic API interface that is then implemented by various service mesh providers, like Istio, Linkerd, and Consul Connect, the Service Mesh Interface frees users to use service mesh capabilities without being bound to any particular implementation.</p><p>This means that they are free to experiment, and even change implementations, without having to change their applications. This brings Service Mesh to parity with other parts of Kubernetes, like Ingress, Container Runtime (CRI), and Networking (CNI), which all have generic interfaces with pluggable implementations. We've really enjoyed working with our great partners on the SMI specification and we look forward to building an even larger community as we move forward and iterate on the specification.</p><p>Enjoy KubeCon and Barcelona (and make sure to check out the Sagrada Familia). 
The Azure team will be busy learning from you, our community, so we can build the next set of great products and features that make you even more productive and happy!</p><p>Questions or feedback? Let us know in the comments below.</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/05/21/kubecon-microsoft-updates-helm-3-virtual-kubelet-1-visual-studio-code-service-mesh-interface/">Extending Kubernetes in the open</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Announcing KEDA: bringing event-driven containers and functions to Kubernetes</title>
		<link>https://cloudblogs.microsoft.com/opensource/2019/05/06/announcing-keda-kubernetes-event-driven-autoscaling-containers/</link>
					<comments>https://cloudblogs.microsoft.com/opensource/2019/05/06/announcing-keda-kubernetes-event-driven-autoscaling-containers/#comments</comments>
		
		<dc:creator><![CDATA[Jeff Hollan]]></dc:creator>
		<pubDate>Mon, 06 May 2019 15:18:56 +0000</pubDate>
				<category><![CDATA[English]]></category>
		<category><![CDATA[Azure Kubernetes Service]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2019/05/06/announcing-keda-kubernetes-event-driven-autoscaling-containers/</guid>

					<description><![CDATA[<p>Event-driven architectures are a natural evolution of microservices, enabling a flexible and decoupled design, and are increasingly being adopted by enterprise customers. Fully managed serverless offerings like Azure Functions are event-driven by design, but we have been hearing from customers about gaps in these capabilities for solutions based on Kubernetes. Scaling in Kubernetes is reactive, based on the CPU and memory consumption of a container. In contrast, services like Azure Functions are acutely<span><a class="read-more" aria-label="Read more about Announcing KEDA: bringing event-driven containers and functions to Kubernetes" href="https://cloudblogs.microsoft.com/opensource/2019/05/06/announcing-keda-kubernetes-event-driven-autoscaling-containers/">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/05/06/announcing-keda-kubernetes-event-driven-autoscaling-containers/">Announcing KEDA: bringing event-driven containers and functions to Kubernetes</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Event-driven architectures are a natural evolution of microservices, enabling a flexible and decoupled design, and are increasingly being adopted by enterprise customers. Fully managed serverless offerings like Azure Functions are event-driven by design, but we have been hearing from customers about gaps in these capabilities for solutions based on <a href="https://kubernetes.io/">Kubernetes</a>. Scaling in Kubernetes is reactive, based on the CPU and memory consumption of a container.</p><p>In contrast, services like <a href="https://azure.microsoft.com/en-us/services/functions/">Azure Functions</a> are acutely aware of event sources and therefore able to scale based on signals coming directly from the event source, even before the CPU or memory are impacted. We set out to bring the benefits of event-driven architectures and the productivity of functions to Kubernetes. The result is <a href="https://aka.ms/keda"><strong>KEDA: Kubernetes-based event-driven autoscaling</strong></a>.</p><p>Microsoft and <a href="https://www.redhat.com/">Red Hat</a> partnered to build this open source component to provide event-driven capabilities for any Kubernetes workload. KEDA enables any container to scale from zero to potentially thousands of instances based on event metrics like the length of a Kafka stream or an Azure Queue. It also enables containers to consume events directly from the event source instead of routing them over HTTP. KEDA can drive the scale of any container and is extensible to add new event sources.</p><p>Because Azure Functions can be containerized, you can now deploy functions to any Kubernetes cluster while maintaining the same scaling behavior you would have in the Azure Functions service. For workloads that may span the cloud and on-premises, you can now easily choose to publish across the Azure Functions service, in a cloud-hosted Kubernetes environment, or on-premises. <span>Our partnership with Red Hat enables Azure Functions to run integrated within <a href="https://www.redhat.com/en/technologies/cloud-computing/openshift">OpenShift</a>, providing the productivity and power of serverless functions with the flexibility and control to choose where and how to host it. 
The same application can move seamlessly between environments without any changes to development, code, or scaling.</span></p><h2>Bringing rich events into Kubernetes workloads</h2><p>KEDA provides a FaaS-like model of event-aware scaling, where deployments can dynamically scale to and from zero based on demand. KEDA also brings more event sources to Kubernetes. When we think about event-driven scenarios, the first event source we often consider is an HTTP event. However, we&rsquo;ve learned from running Azure Functions that only 30% of our executions come from HTTP events, while the majority come from other event sources. We wanted to make sure that the other 70% of event executions were handled in a natural way in Kubernetes with KEDA.</p><p><span>While other Kubernetes offerings have attempted to bring in events from other sources, they usually involve converting the event to an HTTP request, which means data and context are lost in the process and direct communication with the event source is limited. KEDA unlocks rich connectivity directly to the event sources, enabling rapid and proactive scaling while preserving direct interaction with the event. KEDA provides a key component for event-driven and cloud-native apps, and it also works seamlessly alongside <a href="https://virtual-kubelet.io/">Virtual Kubelet</a> and <a href="https://aka.ms/aksbook/virtualnode">AKS Virtual Nodes</a>, letting you run your event-driven apps on serverless containers. 
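</span></p><p>The event-aware, scale-to-zero behavior described above can be sketched as a toy calculation. This is purely illustrative; KEDA's actual scalers and the Kubernetes Horizontal Pod Autoscaler do considerably more, and the parameter names below are invented:</p>

```python
import math

# Toy sketch of KEDA-style event-driven scaling (illustrative only):
# desired replicas are derived from an event-source metric such as
# queue length, which allows scaling to and from zero.

def desired_replicas(queue_length: int,
                     target_per_replica: int = 5,
                     max_replicas: int = 100) -> int:
    if queue_length == 0:
        return 0  # scale to zero when the event source is idle
    return min(max_replicas,
               math.ceil(queue_length / target_per_replica))

print(desired_replicas(0))       # 0   (idle: no pods at all)
print(desired_replicas(12))      # 3   (ceil(12 / 5))
print(desired_replicas(10_000))  # 100 (capped at max_replicas)
```

<p>The key contrast with CPU-based scaling is that the signal comes straight from the event source, so scaling can begin before any container is under load.</p><p><span>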
</span></p><img loading="lazy" alt="KEDA technical diagram" width="1024" height="879" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2019/05/Build-KEDA-1-1024x879.png"><h2>Running Azure Functions in any environment</h2><p>Thousands of customers have already been using Azure Functions as the serverless compute service to process billions of events every day. Azure Functions provides an unmatched development experience, allowing you to develop and debug functions locally on any platform using editors like Visual Studio, <a href="https://code.visualstudio.com/">Visual Studio Code</a>, and more. While Azure Functions provides a fully managed serverless service, many customers want the freedom to run serverless in an open environment they can manage.</p><p>With the release of KEDA, any team can now deploy function apps created using those same tools directly to Kubernetes. This allows you to run Azure Functions on-premises or alongside your other Kubernetes investments without compromising on the productive serverless development experience. The open source Azure Functions runtime is available to every team and organization, and brings a world-class developer experience and programming model to Kubernetes.</p><p>The combination of flexible hosting options and an open source toolset gives teams more freedom and choice. If you choose to take advantage of the full benefits of a managed serverless service, you can shift responsibility and publish your apps to Azure Functions. If it makes sense for your organization or scenario, you can choose to run those same apps in Kubernetes with KEDA. 
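</p><p>In practice, publishing a containerized function app to a cluster is a short sequence with Azure Functions Core Tools. The commands below are a sketch under a few assumptions: the tools and KEDA are installed, the current directory contains a function app, and &ldquo;myregistry&rdquo; is a placeholder for your container registry:</p><pre># add a Dockerfile to an existing function app
func init --docker-only

# build and push the image, then deploy to the current Kubernetes context
func kubernetes deploy --name my-function-app --registry myregistry</pre><p>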
While the hosting may vary, the developer experience and scaling remain constant.</p><img loading="lazy" alt="KEDA in action - GIF" width="1728" height="1080" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2019/05/20190506-Build-KEDA-gif.gif"><p>You can publish your first Azure Function to Kubernetes by <a href="https://aka.ms/kedaquickstart">following this simple tutorial</a>.</p><h2>Compatible with every event source</h2><p>KEDA will automatically detect new deployments and start monitoring event sources, leveraging real-time metrics to drive scaling decisions. Today KEDA and its supporting tooling work with Kafka, Azure Queues, Azure Service Bus, RabbitMQ, HTTP, and Azure Event Grid / Cloud Events. More triggers will continue to be added in the future, including Azure Event Hubs, Storage, Cosmos DB, and Durable Functions. KEDA event sources are extensible, and we look forward to many additions from the community.</p><p><a href="https://github.com/kedacore/keda">KEDA is open source</a> on GitHub and open to community contributions. We look forward to hearing feedback from the community and working together to make event-driven container scale available to every developer and for every event type. If you want to learn more about KEDA, register today for <a href="https://aka.ms/keda-webinar">our webinar</a> later in May, where we will share more on the benefits of event-driven and serverless architectures.</p><p>Questions or feedback on this news? Please let us know in the comments below.</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2019/05/06/announcing-keda-kubernetes-event-driven-autoscaling-containers/">Announcing KEDA: bringing event-driven containers and functions to Kubernetes</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://cloudblogs.microsoft.com/opensource/2019/05/06/announcing-keda-kubernetes-event-driven-autoscaling-containers/feed/</wfw:commentRss>
			<slash:comments>12</slash:comments>
		
		
			</item>
		<item>
		<title>What is a service mesh anyways?</title>
		<link>https://cloudblogs.microsoft.com/opensource/2018/12/18/what-is-service-mesh/</link>
		
		<dc:creator><![CDATA[William Morgan]]></dc:creator>
		<pubDate>Tue, 18 Dec 2018 16:10:39 +0000</pubDate>
				<category><![CDATA[English]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Azure Kubernetes Service]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Linkerd]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2018/12/18/what-is-service-mesh/</guid>

					<description><![CDATA[<p>The service mesh may sound complex, but at its heart, it's a very simple idea: a set of network proxies that transparently run alongside microservices, implementing reliability, observability, and security features by measuring and manipulating inter-service ("east-west") traffic. Led by open source projects such as Linkerd, the service mesh model is increasingly popular because it<span><a class="read-more" aria-label="Read more about What is a service mesh anyways?" href="https://cloudblogs.microsoft.com/opensource/2018/12/18/what-is-service-mesh/">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2018/12/18/what-is-service-mesh/">What is a service mesh anyways?</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The service mesh may sound complex, but at its heart, it's a very simple idea: a set of network proxies that transparently run alongside microservices, implementing reliability, observability, and security features by measuring and manipulating inter-service ("east-west") traffic. Led by open source projects such as <a href="https://linkerd.io/">Linkerd</a>, the service mesh model is increasingly popular because it addresses challenges, both computational and organizational, that occur with the adoption of microservices.</p><p>Over the past few years, the service mesh has risen to become a critical element of the cloud native stack. The first service mesh project, Linkerd, joined the Cloud Native Computing Foundation just last year, and has since grown to power the production infrastructure of companies around the globe, ranging from startups, like Monzo and OfferUp, to well-established companies, such as Comcast and Salesforce. Meanwhile, a host of other projects have followed hot on its heels, from cloud providers and startups alike. (See <u><a href="https://www.youtube.com/watch?v=nhOY2PAJHio&amp;feature=youtu.be">this video</a></u> by Azure's <u><a href="https://twitter.com/LachlanEvenson">Lachlan Evenson</a></u> on installing and running Linkerd 2.0 on <u><a href="https://azure.microsoft.com/en-us/services/kubernetes-service/">Azure Kubernetes Service</a></u>.)</p><p>In this article, we'll define the service mesh and explain the various factors that are enabling the rise of service mesh adoption. Finally, we'll provide a vision of where the service mesh is heading.</p><h2>What is a service mesh?</h2><p>At its heart, a service mesh is a distributed set of proxies that are deployed alongside microservices. Following common network terminology, these proxies are often referred to as the <em>data plane</em>, and are typically coordinated by a centralized component called the <em>control plane</em>. 
Critically, the data plane proxies handle both incoming and outgoing traffic for each microservice, typically without the application code even being aware.</p><p>The data plane proxies act as highly instrumented, out-of-process network stacks, and handle all traffic to and from a microservice. Because they're centrally controlled, the service mesh can institute a variety of traffic control techniques globally across the application that provide reliability, observability, security, and more.</p><img loading="lazy" width="902" height="518" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2018/12/Linkerd-Control-diagram.png"><p>For example, the proxies can instrument all calls to a service, and provide "golden metrics," such as success rates, latency distributions, and request volumes to that service, or even to individual paths. They can handle request retries and timeouts. They can encrypt communication transparently with TLS. And they can accomplish all these things without any involvement from the application itself.</p><h2>Why use a service mesh?</h2><p>For an industry that's focused on speed and performance, the idea of adding user-space proxies in between every inter-service call can seem counterintuitive. Why introduce this new layer of complexity and latency?</p><p>The reason is more than the individual features. The service mesh provides something vital to operators: the ability to <em>shift responsibility</em> for much of the reliability, visibility, and security requirements out of the application code itself, down onto the underlying infrastructure. 
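</p><p>Concretely, meshing a service and reading its golden metrics takes two commands with the Linkerd 2.x CLI. This is a sketch: &ldquo;webapp&rdquo; is a placeholder deployment, and the output shown is illustrative rather than real:</p><pre># inject the Linkerd data plane proxy into an existing deployment
kubectl get deploy/webapp -o yaml | linkerd inject - | kubectl apply -f -

# success rate, request volume, and latency, with no application changes
linkerd stat deployments

# NAME     MESHED   SUCCESS   RPS   LATENCY_P50   LATENCY_P95
# webapp   1/1      98.73%    8.2   25ms          91ms</pre><p>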
This shift means the platform itself can provide these features without developer involvement.</p><p>This decoupling of responsibility, combined with the ability to provide reliability, visibility, and security globally across the application, is the fundamental value proposition of the service mesh.</p><h2>Why are service meshes becoming so popular?</h2><p>The rise of service meshes, such as Linkerd, is tied to a big shift in the industry: the adoption of cloud native architectures, in which an application is built as microservices, deployed as containers, and run on a container orchestration system, such as Azure Kubernetes Service (AKS).</p><p>The cloud native approach, in turn, is rapidly gaining popularity because it is uniquely suited to the increasingly strict demands that we place on our software systems. In the modern world, we expect our software applications to be resilient to failures in hardware, software, and network, as well as handle massive scale and maintain a breakneck pace of feature development. This is a far cry from the expectations we placed on software a decade ago, when "sorry, we're down for maintenance" was acceptable!</p><p>The adoption of microservices especially allows us to meet those demands not only at the technological level, but also at the <em>organizational</em> level. By breaking our applications into loosely coupled microservices that can be developed and released independently of each other, we isolate our failure domains and provide ways for the application to remain functional even if individual components are failing. More importantly, we also remove organizational bottlenecks and allow our developers to iterate with a minimum of coordination required.</p><p>These changes in organization and software architecture require new types of tooling to provide visibility into what is happening; new ways to manage partial failures and prevent them from escalating into full-blown ones; and new ways to address policy for security and compliance. 
The service mesh is part and parcel of these changes.</p><h2>The service mesh today</h2><p>While many companies have adopted the service mesh as a critical component of their application infrastructure, the service mesh landscape today is still nascent and continues to evolve rapidly. A heady mix of assorted service mesh projects from cloud providers and startups alike has served to validate the approach, but the bewildering array of options can also be confusing.</p><p>As the <u><a href="https://cncf.io">CNCF</a></u> service mesh project, and the project that "started it all," Linkerd has had the best opportunity to learn from its community of adopters and contributors around the world. Based on these lessons, much of the recent focus has been on dramatically reducing the amount of complexity introduced by the service mesh, following the mantra of "less is more." The recent release of <u><a href="https://linkerd.io/2/getting-started/">Linkerd 2.0</a></u>, focused on a zero-config installation process and ultralight, Rust-based proxies, has demonstrated that adding a service mesh to an existing system doesn't require inordinate amounts of configuration, conceptual overhead, or system resources.</p><p>As with all early technology patterns, the service mesh will continue to evolve rapidly over the next few years. It is, in the words of Kubernetes maintainer Tim Hockin, &ldquo;an exciting time for boring infrastructure.&rdquo;</p><p>Linkerd has a thriving community of adopters and contributors, and we'd love for YOU to be a part of it. 
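</p><p>Getting started reflects that &ldquo;less is more&rdquo; philosophy. The sequence below is a sketch based on the Linkerd 2.x getting-started guide; it assumes kubectl is already configured against a cluster:</p><pre># install the CLI, then install the control plane into the cluster
curl -sL https://run.linkerd.io/install | sh
linkerd install | kubectl apply -f -

# verify that the control plane is healthy
linkerd check</pre><p>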
For more, check out the <u><a href="https://linkerd.io/2/overview/">docs</a></u> and <u><a href="https://github.com/linkerd/linkerd">GitHub</a></u> repo, join the <u><a href="https://slack.linkerd.io/">Linkerd Slack</a></u> and mailing lists (<u><a href="https://lists.cncf.io/g/cncf-linkerd-users">users</a></u>, <u><a href="https://lists.cncf.io/g/cncf-linkerd-dev">developers</a></u>, <u><a href="https://lists.cncf.io/g/cncf-linkerd-announce">announce</a></u>), and, of course, follow <u><a href="https://twitter.com/linkerd">@linkerd</a></u> on Twitter. We can't wait to have you aboard.</p><p>Questions or feedback? Let us know in the comments.</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2018/12/18/what-is-service-mesh/">What is a service mesh anyways?</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Securing Kubernetes workloads in hybrid settings with Aporeto</title>
		<link>https://cloudblogs.microsoft.com/opensource/2018/08/31/securing-kubernetes-workloads-hybrid-cloud-aporeto/</link>
		
		<dc:creator><![CDATA[The Aporeto Team]]></dc:creator>
		<pubDate>Fri, 31 Aug 2018 14:00:28 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Containers]]></category>
		<category><![CDATA[English]]></category>
		<category><![CDATA[Azure Kubernetes Service]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2018/08/31/securing-kubernetes-workloads-hybrid-cloud-aporeto/</guid>

					<description><![CDATA[<p>Centralized visibility and security for applications distributed on Azure Kubernetes Service (AKS) and private clouds Azure Kubernetes Service (AKS) makes it simple to deploy a managed Kubernetes cluster in Azure.  AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure and handling critical tasks like health monitoring<span><a class="read-more" aria-label="Read more about Securing Kubernetes workloads in hybrid settings with Aporeto" href="https://cloudblogs.microsoft.com/opensource/2018/08/31/securing-kubernetes-workloads-hybrid-cloud-aporeto/">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2018/08/31/securing-kubernetes-workloads-hybrid-cloud-aporeto/">Securing Kubernetes workloads in hybrid settings with Aporeto</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h4>Centralized visibility and security for applications distributed on Azure Kubernetes Service (AKS) and private clouds</h4><p>Azure Kubernetes Service (AKS) makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure and handling critical tasks like health monitoring and maintenance. However, your operational needs may require you to deploy your Kubernetes cluster in a hybrid setting. For instance, your data services may be running in your private cloud while application logic services could be running in AKS.</p><p>Without the proper toolset and configuration, visibility and security for your distributed environment can be hard to establish and maintain. In this blog, we focus on providing centralized visibility and monitoring for these types of distributed workloads in a manner that is easy to deploy and manage. By following step 5 in the blog below, you will learn how to easily enforce network and service layer access policies in your AKS cluster with Aporeto. You may extend these policies in a hybrid environment without any network configuration or code modification.</p><h4>About Aporeto</h4><p>Aporeto is a Zero Trust security solution for microservices, containers, and cloud. Fundamental to Aporeto's approach is the principle that everything in an application is accessible to everyone and could be compromised at any time. Aporeto uses vulnerability data, identity context, threat monitoring, and behavior analysis to build and enforce authentication, authorization, and encryption policies for applications. 
With Aporeto, enterprises implement a uniform security policy decoupled from the underlying infrastructure, enabling workload isolation, API access control, and application identity management across public, private, or hybrid cloud.</p><p>Because Aporeto transparently binds to application components to provide them with identity, the result is security that is independent of infrastructure and network, and a reduction of complexity at any scale on any cloud.</p><img loading="lazy" width="840" height="380" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2018/08/Aporeto_Graphic.png"><h4>How Aporeto works</h4><ul><li>Pick an application and visualize it;</li><li>Generate and simulate security policy;</li><li>Enforce the security policy.</li></ul><p>You can visualize the application of your choice by deploying Aporeto as an AKS DaemonSet (see #A in the diagram above). If you control the virtual machines on which your application components run, you may also deploy Aporeto as a Docker container or a userland process (see #B in the diagram above).</p><p>Aporeto auto-generates application security policy by ingesting Kubernetes Network Policies and RBAC. You also have the option of leveraging the application dependency graph that Aporeto creates to describe the application's behavioral intent as policies. In every case, you may audit and edit auto-generated policies and inject human wisdom when necessary.</p><p>Once you have policies, you may simulate their enforcement at runtime to evaluate the effects of your security policies without interrupting operations. When satisfied that your security policies are solid, you may lock down your application and protect it with a Zero Trust approach.</p><p>Because Aporeto untethers application security from the network and infrastructure, one key benefit of Aporeto's approach for protecting your containers, microservices, and cloud applications is that you can have a consistent security approach even in a hybrid or multi-cloud setting. 
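</p><p>For reference, the kind of Kubernetes Network Policy that Aporeto can ingest looks like the following. This is a generic sketch with placeholder labels, not Aporeto-specific configuration; it admits traffic to the &ldquo;data&rdquo; tier only from pods labeled app=api:</p><pre>cat &lt;&lt;'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-data
spec:
  podSelector:
    matchLabels:
      tier: data        # pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api      # only these pods may connect
EOF</pre><p>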
As you gain experience with Aporeto in a single cluster setting, you will quickly realize how easy it is to have a consistent security posture in multi-cluster and multi-cloud settings without any infrastructure or operational complexity.</p><h4>Five Steps to Enforce Network and Service Layer Access Policies in AKS Clusters</h4><h5>Step 1: Prepare environment</h5><p>You will need the following binaries installed in your path.</p><ol><li>az (see <a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest">https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest</a>)</li><li>kubectl (see <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/">https://kubernetes.io/docs/tasks/tools/install-kubectl/</a>)</li></ol><h5>Step 2: Setup Aporeto</h5><p>Using a browser, log in to <a href="https://console.aporeto.com/">https://console.aporeto.com/</a> and select the desired namespace where the cluster will be placed. Then select and expand &ldquo;System&rdquo; and select &ldquo;Kubernetes Clusters&rdquo;. Click on the &ldquo;+&rdquo; icon (top right). Give the cluster the name &ldquo;aks1&rdquo; and leave all defaults as they are. Click on create. This will create the cluster and cause a file with the name aks1.tar.gz (assuming you named the cluster aks1) to be downloaded to your browser download directory. 
Take note of this file as we will need it later.</p><h5>Step 3: Create AKS (Kubernetes Cluster on AKS)</h5><p>If you have not already done so, log into Azure with the following PowerShell or Bash command:</p><pre>az login</pre><p>Then create a working directory with the following PowerShell or Bash commands:</p><pre>mkdir -p aks; cd aks</pre><p>Move the file downloaded in the previous step into the working directory.</p><p>Create the Kubernetes cluster on AKS with the following PowerShell or Bash commands.</p><pre>az group create --name aporeto_lab --location eastus
az aks create --resource-group aporeto_lab --name aks1 --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group aporeto_lab --name aks1 -f kube.cfg</pre><p>Set the kubectl config file in the environment.</p><p><em>With PowerShell</em></p><pre>$loc = Get-Location
Set-Variable -Name "KUBECONFIG" -Value "$loc/kube.cfg"</pre><p><em>With Bash</em></p><pre>export KUBECONFIG=$PWD/kube.cfg</pre><p>Then verify that the nodes are operational with the following PowerShell or Bash command:</p><pre>kubectl --kubeconfig kube.cfg get nodes</pre><p>You should see something like this:</p><pre>-&gt;kubectl get nodes
NAME                       STATUS   ROLES   AGE   VERSION
aks-nodepool1-82983338-0   Ready    agent   3m    v1.9.6
aks-nodepool1-82983338-1   Ready    agent   3m    v1.9.6</pre><h5>Step 4: Join the AKS Cluster to Aporeto</h5><p>Extract the contents of the file aks1.tar.gz and create the Kubernetes resources with the following Bash commands (or the PowerShell equivalent; extracting the archive may require a utility such as 7zip):</p><pre>mkdir -p kube_aporeto
tar xfv aks1.tar.gz -C kube_aporeto
kubectl create -f kube_aporeto</pre><p>Then check the status with the command:</p><pre>kubectl get pods -n kube-system</pre><p>You should see something like:</p><pre>-&gt;kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
aporeto-enforcer-fkf46                  1/1     Running   0          23s
aporeto-enforcer-v4k5r                  1/1     Running   0          23s
aporeto-kubesquall-h4m5d                1/1     Running   0          21s
azureproxy-79c5db744-t2654              1/1     Running   2          4m
heapster-55f855b47-drbb2                2/2     Running   0          3m
kube-dns-v20-7c556f89c5-mcg6z           3/3     Running   0          4m
kube-dns-v20-7c556f89c5-xhts7           3/3     Running   0          4m
kube-proxy-h5rqq                        1/1     Running   0          4m
kube-proxy-s7rkq                        1/1     Running   0          4m
kube-svc-redirect-92tvv                 1/1     Running   0          4m
kube-svc-redirect-h2dmp                 1/1     Running   0          4m
kubernetes-dashboard-546f987686-7gzln   1/1     Running   2          4m
tunnelfront-66fd996c74-dlpdm            1/1     Running   0          4m</pre><h5>Step 5: Roll up your sleeves and dig in with a demo app</h5><p>Clone the GitHub repo <a href="https://github.com/aporeto-inc/apowine.git">https://github.com/aporeto-inc/apowine.git</a> and then follow the instructions in the README.md file. By following this tutorial, you will learn how to enforce network and service layer access policies in your AKS cluster.</p><h4>Enjoy your AKS Cluster with Aporeto Security!</h4><p>Now that you have connected your AKS Kubernetes cluster to Aporeto, you can visualize it in real time and on a historical basis using the Aporeto UI (<a href="https://console.aporeto.com/">https://console.aporeto.com/</a>). You can also connect your private cloud workload to your Aporeto account and view your distributed application's end-to-end operations centrally.</p><p>You can find instructions for connecting non-AKS workloads to Aporeto by perusing the document set in <a href="https://console.aporeto.com/accounts/welcome">https://console.aporeto.com/accounts/welcome</a> (click on &ldquo;Switch to Accounts&rdquo;, the top-right corner user icon, immediately to the right of the "?" 
mark icon). As always, you can request support directly in Aporeto's Console or via <a href="https://www.aporeto.com/support/">this link</a>.</p><p>Aporeto's powerful security capabilities unlock the following use cases, among others:</p><ul><li>Network segmentation and workload isolation, reducing compliance scope</li><li>Protection against malicious application discovery</li><li>Transparent encryption without code or network modification</li><li>Uniform API access control policy across services in public or private cloud</li><li>Continuous vulnerability analysis of container images</li><li>Runtime threat detection and protection based on behavioral analysis</li></ul><p>To learn more, please visit <a href="https://www.aporeto.com/demo/">https://www.aporeto.com/demo/</a>.</p><p>Enjoy!</p><p>The Aporeto Team</p><h5><em>Questions or feedback? Let us know in the comments below.</em></h5><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2018/08/31/securing-kubernetes-workloads-hybrid-cloud-aporeto/">Securing Kubernetes workloads in hybrid settings with Aporeto</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
