<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Péter Márton &#8211; Microsoft Open Source Blog</title>
	<atom:link href="https://cloudblogs.microsoft.com/opensource/author/petermarton/feed/" rel="self" type="application/rss+xml" />
	<link>https://cloudblogs.microsoft.com/opensource</link>
	<description>Open dialogue about openness at Microsoft – open source, standards, interoperability</description>
	<lastBuildDate>Tue, 25 Jul 2017 20:56:17 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>Node.js performance monitoring with Prometheus</title>
		<link>https://cloudblogs.microsoft.com/opensource/2017/07/25/node-js-performance-monitoring-prometheus/</link>
		
		<dc:creator><![CDATA[Péter Márton]]></dc:creator>
		<pubDate>Tue, 25 Jul 2017 20:56:17 +0000</pubDate>
				<category><![CDATA[Application Development]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[English]]></category>
		<category><![CDATA[Management]]></category>
		<category><![CDATA[United States]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[Node.js]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2017/07/25/node-js-performance-monitoring-prometheus/</guid>

					<description><![CDATA[<p>This article helps you to understand what to monitor if you have a Node.js application in production, and how to use Prometheus, an open source solution that provides powerful data compression and fast querying for time series data, for Node.js monitoring. If you want to learn more about Node.js and reliability engineering,<span><a class="read-more" aria-label="Read more about Node.js performance monitoring with Prometheus" href="https://cloudblogs.microsoft.com/opensource/2017/07/25/node-js-performance-monitoring-prometheus/" data-bi-cn="Read more about Node.js performance monitoring with Prometheus">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2017/07/25/node-js-performance-monitoring-prometheus/">Node.js performance monitoring with Prometheus</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This article helps you to understand what to monitor if you have a Node.js application in production, and how to use Prometheus, an open source solution that provides powerful data compression and fast querying for time series data, for Node.js monitoring.<br>If you want to learn more about Node.js and reliability engineering, check out this <a href="https://trace.risingstack.com/monitoring-ebook">free whitepaper from RisingStack</a>.</p><h2>What is Node.js Monitoring?</h2><blockquote><p><em>The term &ldquo;service monitoring&rdquo; means the tasks of collecting, processing, aggregating, and displaying real-time quantitative data about a system.</em></p></blockquote><p>Monitoring gives us the ability to observe our system&rsquo;s state and address issues before they impact our business. Monitoring can also help to optimize our users&rsquo; experience.<br>To analyze the data, you first need to extract metrics from your system &mdash; like the memory usage of a particular application instance. We call this extraction instrumentation.<br>We use the term white box monitoring when metrics are provided by the running system itself. This is the kind of Node.js monitoring we&rsquo;ll be diving into.</p><h2>The four metrics to watch</h2><p>Every service is different, and you can monitor many aspects of them. 
Metrics can range from low-level resources like <em>memory usage</em> to high-level business metrics like the <em>number of signups</em>.<br>We recommend watching these metrics for all of your services:</p><ul><li><strong>Error rate: </strong>Errors are user-facing and immediately affect your customers.</li><li><strong>Response time: </strong>Latency directly affects your customers and business.</li><li><strong>Throughput: </strong>Traffic volume gives you the context for increased error rates and latency.</li><li><strong>Saturation:</strong> It tells how <em>&ldquo;full&rdquo;</em> your service is. If the CPU usage is at 90%, can your system handle more traffic?</li></ul><h2>Instrumentation</h2><p>You can instrument your system manually, but most commercial monitoring solutions provide out-of-the-box instrumentation.<br>In many cases, instrumentation means adding extra logic and code pieces that come with a performance overhead.<br>With Node.js monitoring and instrumentation, you should aim for low overhead, but a bigger performance impact can be justified by better system visibility.</p><h3>The risk of instrumenting your code</h3><p>Instrumentation can be very specific and usually needs expertise and extra development time. Bad instrumentation can also introduce bugs into your system or generate unreasonable performance overhead.<br>Instrumenting your code can also produce a lot of extra lines and bloat your application&rsquo;s codebase.</p><h2>Picking your Node.js Monitoring Tool</h2><p>When your team picks a monitoring tool, you should consider the following factors:</p><ul><li><strong>Expertise</strong>: Do you have the expertise? 
Building a monitoring tool, writing high-quality instrumentation, and extracting the right metrics is not easy.</li><li><strong>Build or buy</strong>: Building a proper monitoring solution requires lots of expertise, time, and money, while buying an existing solution can be easier and cheaper.</li><li><strong>SaaS or on-premises</strong>: Do you want to host your monitoring solution? Can you use a SaaS solution &mdash; what&rsquo;s your data compliance and protection policy? Using a SaaS solution can be a good pick, for example, when you want to focus on your product instead of tooling. Both open source and commercial solutions are usually available as hosted or on-premises setups.</li><li><strong>Licensing</strong>: Do you want to ship your monitoring toolset with your product? Can you use a commercial solution? You should always check licensing.</li><li><strong>Integrations</strong>: Does it support your external dependencies like databases, orchestration systems, and npm libraries?</li><li><strong>Instrumentation</strong>: Does it provide automatic instrumentation? Do you need to instrument your code manually? How much time would it take to do it on your own?</li><li><strong>Microservices</strong>: Are you building a monolith or a distributed system? Microservices need specific tools and a specific philosophy to be debugged and monitored effectively. Do you need distributed tracing or security checks?</li></ul><p>Based on our experience, in most cases an out-of-the-box SaaS or on-premises monitoring solution like <a href="https://trace.risingstack.com/">Trace</a> gives the right amount of visibility and toolset to monitor and debug your Node.js applications.<br>But what can you do when you cannot choose a commercial solution for some reason, and you want to build your own monitoring suite? 
In this case, Prometheus comes into the picture!</p><h2>Node Monitoring with Prometheus</h2><p><a href="https://prometheus.io/">Prometheus</a> is an open source solution for Node.js monitoring and alerting. It provides powerful data compression and fast querying for time series data.<br>The core concept of Prometheus is that it stores all data in a time series format. A time series is a stream of immutable time-stamped values that belong to the same metric and the same labels. These labels make the metrics multi-dimensional.</p><h3><strong>Data collection and metrics types</strong></h3><p>Prometheus uses the HTTP pull model, which means that every application needs to expose a <em>GET /metrics</em> endpoint that can be periodically fetched by the Prometheus instance.<br>Prometheus has four metric types:</p><ul><li><strong>Counter</strong>: A cumulative metric that represents a single numerical value that only ever goes up.</li><li><strong>Gauge</strong>: Represents a single numerical value that can arbitrarily go up and down.</li><li><strong>Histogram</strong>: Samples observations and counts them in configurable buckets.</li><li><strong>Summary</strong>: Similar to a histogram, it samples observations and calculates configurable quantiles over a sliding time window.</li></ul><p>In the following snippet, you can see an example response for the <em>/metrics</em> endpoint. 
It contains both the gauge <em>(nodejs_heap_space_size_total_bytes)</em> and histogram <em>(http_request_duration_ms_bucket)</em> types of metrics:</p><pre># HELP nodejs_heap_space_size_total_bytes Process heap space size total from node.js in bytes.
# TYPE nodejs_heap_space_size_total_bytes gauge
nodejs_heap_space_size_total_bytes{space="new"} 1048576 1497945862862
nodejs_heap_space_size_total_bytes{space="old"} 9818112 1497945862862
nodejs_heap_space_size_total_bytes{space="code"} 3784704 1497945862862
nodejs_heap_space_size_total_bytes{space="map"} 1069056 1497945862862
nodejs_heap_space_size_total_bytes{space="large_object"} 0 1497945862862

# HELP http_request_duration_ms Duration of HTTP requests in ms
# TYPE http_request_duration_ms histogram
http_request_duration_ms_bucket{le="10",code="200",route="/",method="GET"} 58
http_request_duration_ms_bucket{le="100",code="200",route="/",method="GET"} 1476
http_request_duration_ms_bucket{le="250",code="200",route="/",method="GET"} 3001
http_request_duration_ms_bucket{le="500",code="200",route="/",method="GET"} 3001
http_request_duration_ms_bucket{le="+Inf",code="200",route="/",method="GET"} 3001</pre><p>Prometheus offers an alternative, called Pushgateway, to monitor components that cannot be scraped because they live behind a firewall or are short-lived jobs.<br>Before a job gets terminated, it can push metrics to this gateway, and Prometheus can scrape the metrics from there later on.<br>To set up Prometheus to periodically collect metrics from your application, check out the following <a href="https://github.com/RisingStack/example-prometheus-nodejs/tree/master/prometheus-data">example configuration</a>.</p><h3><strong>Monitoring a Node.js application</strong></h3><p>When we want to monitor our Node.js application with Prometheus, we need to solve the following challenges:</p><ul><li><strong>Instrumentation</strong>: Safely instrumenting our code with minimal performance 
overhead.</li><li><strong>Metrics exposition: </strong>Exposing our metrics for Prometheus with an HTTP endpoint.</li><li><strong>Hosting Prometheus</strong>: Having a well-configured Prometheus running.</li><li><strong>Extracting value</strong>: Writing queries that are statistically correct.</li><li><strong>Visualizing</strong>: Building dashboards and visualizing our queries.</li><li><strong>Alerting</strong>: Setting up efficient alerts.</li><li><strong>Paging</strong>: Getting notified about alerts, with escalation policies for paging.</li></ul><h3><strong>Node.js Metrics Exporter</strong></h3><p>To collect metrics from our Node.js application and expose them to Prometheus, we can use the <a href="https://github.com/siimon/prom-client">prom-client</a> npm library.<br>In the following example, we create a histogram metric to collect our APIs&rsquo; response time per route. Take a look at the pre-defined bucket sizes and our route label:</p><pre>// Init
const Prometheus = require('prom-client')

const httpRequestDurationMicroseconds = new Prometheus.Histogram({
  name: 'http_request_duration_ms',
  help: 'Duration of HTTP requests in ms',
  labelNames: ['route'],
  // buckets for response time from 0.1ms to 500ms
  buckets: [0.10, 5, 15, 50, 100, 200, 300, 400, 500]
})</pre><p>We need to collect the response time after each request and report it with the route label.</p><pre>// After each response
httpRequestDurationMicroseconds
  .labels(req.route.path)
  .observe(responseTimeInMs)</pre><p>We can then register a <em>GET /metrics</em> route to expose our metrics in the right format for Prometheus.</p><pre>// Metrics endpoint
app.get('/metrics', (req, res) =&gt; {
  res.set('Content-Type', Prometheus.register.contentType)
  res.end(Prometheus.register.metrics())
})</pre><p>After we have collected our metrics, we want to extract some value from them and visualize it.<br>Prometheus provides a functional expression language that lets the user select and aggregate time 
series data in real time.<br>The Prometheus dashboard has a built-in query and visualization tool.</p><p>Let&rsquo;s look at some example queries for response time and memory usage.</p><h3>Query: 95th Percentile Response Time</h3><p>We can determine the 95th percentile of our response time from our histogram metrics. With the 95th percentile response time, we can filter out peaks, and it usually gives a better understanding of the average user experience.</p><pre>histogram_quantile(0.95, sum(rate(http_request_duration_ms_bucket[1m])) by (le, service, route, method))</pre><h3>Query: Average Response Time</h3><p>As the histogram type in Prometheus also collects the count and sum values for the observed metrics, we can divide them to get the average response time for our application.</p><pre>avg(rate(http_request_duration_ms_sum[1m]) / rate(http_request_duration_ms_count[1m])) by (service, route, method, code)</pre><p>For more advanced queries like error rate and Apdex score, check out our <a href="https://github.com/RisingStack/example-prometheus-nodejs">Prometheus with Node.js</a> example repository.</p><h3><strong>Alerting</strong></h3><p>Prometheus comes with a built-in alerting feature where you can use your queries to define your expectations. However, Prometheus alerting doesn&rsquo;t come with a notification system. To set up one, you need to use the <a href="https://prometheus.io/docs/alerting/alertmanager/">Alertmanager</a> or another external process.<br>Let&rsquo;s see an example of how you can set up an alert for your application&rsquo;s median response time. 
In this case, we want to fire an alert when the median response time goes above 100ms.</p><pre># APIHighMedianResponseTime
ALERT APIHighMedianResponseTime
  IF histogram_quantile(0.5, sum(rate(http_request_duration_ms_bucket[1m])) by (le, service, route, method)) &gt; 100
  FOR 60s
  ANNOTATIONS {
    summary = "High median response time on {{ $labels.service }} and {{ $labels.method }} {{ $labels.route }}",
    description = "{{ $labels.service }}, {{ $labels.method }} {{ $labels.route }} has a median response time above 100ms (current value: {{ $value }}ms)",
  }</pre><h2>Kubernetes integration</h2><p>Prometheus offers a built-in Kubernetes integration, supported inside Azure Container Service as well. It&rsquo;s capable of discovering Kubernetes resources like Nodes, Services, and Pods while scraping metrics from them.<br>It&rsquo;s an extremely powerful feature in a containerized system, where instances are born and die all the time. With a use case like this, HTTP endpoint-based scraping would be hard to achieve through manual configuration.<br>You can also provision Prometheus easily with Kubernetes and <a href="https://helm.sh/">Helm</a>. It only needs a couple of steps.<br>To start, we need a running Kubernetes cluster! 
As <a href="https://azure.microsoft.com/en-us/services/container-service">Azure Container Service</a> provides a hosted Kubernetes, I can provision one quickly:</p><pre># Provision a new Kubernetes cluster
az acs create -n myClusterName -d myDNSPrefix -g myResourceGroup --generate-ssh-keys --orchestrator-type kubernetes

# Configure kubectl with the new cluster
az acs kubernetes get-credentials --resource-group=myResourceGroup --name=myClusterName</pre><p>After a couple of minutes, when our Kubernetes cluster is ready, we can initialize Helm and install Prometheus:</p><pre>helm init
helm install stable/prometheus</pre><p>For more information on provisioning Prometheus with Kubernetes, check out the Prometheus Helm chart.</p><h2>Grafana</h2><p>As you can see, the built-in visualization method of Prometheus is great for inspecting our queries' output, but it's not configurable enough for dashboards.<br>As Prometheus has an API to run queries and get data, you can use many external solutions to build dashboards. At RisingStack, one of our favorites is Grafana.<br>Grafana is an open source, pluggable visualization platform. It can process metrics from many types of systems, and it has built-in Prometheus data source support. In Grafana, you can import an existing dashboard or build your own.<br></p><img loading="lazy" alt="Dashboard with Grafana image" width="600" height="323" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/07/Dashboard-with-Grafana.png"><p>Dashboard with Grafana</p><h2><strong>Next steps</strong></h2><p>If you want to learn more about Node.js and reliability engineering, make sure to check out the <a href="https://trace.risingstack.com/monitoring-ebook">free whitepaper from RisingStack</a>.<br>At the Node Summit this week? 
Check out this <a href="https://open.microsoft.com/2017/07/24/microsoft-node-summit/">blog post</a> for ways to connect with the Microsoft team there.</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2017/07/25/node-js-performance-monitoring-prometheus/">Node.js performance monitoring with Prometheus</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Packing a Kubernetes Microservices App with Helm on Azure Container Service</title>
		<link>https://cloudblogs.microsoft.com/opensource/2017/05/23/kubernetes-helm-microsoft-azure-container-service/</link>
		
		<dc:creator><![CDATA[Péter Márton]]></dc:creator>
		<pubDate>Tue, 23 May 2017 14:45:12 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Containers]]></category>
		<category><![CDATA[English]]></category>
		<category><![CDATA[United States]]></category>
		<category><![CDATA[Azure Container Service]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<guid isPermaLink="false">https://cloudblogs.microsoft.com/opensource/2017/05/23/kubernetes-helm-microsoft-azure-container-service/</guid>

					<description><![CDATA[<p>In this blog post, I&#8217;ll show how we packed our Kubernetes microservices app with Helm, running on Azure Container Service, and made them easy to reproduce in various environments. Shipping microservices as a single block: At RisingStack we use Kubernetes with tens of microservices to provide our Node.js monitoring solution Trace for our<span><a class="read-more" aria-label="Read more about Packing a Kubernetes Microservices App with Helm on Azure Container Service" href="https://cloudblogs.microsoft.com/opensource/2017/05/23/kubernetes-helm-microsoft-azure-container-service/" data-bi-cn="Read more about Packing a Kubernetes Microservices App with Helm on Azure Container Service">Read more</a></span></p>
<p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2017/05/23/kubernetes-helm-microsoft-azure-container-service/">Packing a Kubernetes Microservices App with Helm on Azure Container Service</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><em>In this blog post, I&rsquo;ll show how we packed our Kubernetes microservices app with Helm, running on Azure Container Service, and made them easy to reproduce in various environments.</em></p><p><article><h2 id="shipping-microservices-as-a-single-piece-of-block">Shipping microservices as a single block</h2></article></p><p>At RisingStack we use Kubernetes with tens of microservices to provide our <a href="https://trace.risingstack.com/">Node.js monitoring solution Trace</a> for our SaaS customers.<br>During the last couple of months, we were asked by many enterprises with strict data compliance requirements to make our product available as a self-hosted service. So we had to find a solution that makes it easy for them to install Trace as a single piece of software and hides the complexity of our infrastructure. It&rsquo;s challenging because Trace contains many small applications, databases, and settings. We wanted to find a solution that is not only easy to ship but also highly configurable. As Kubernetes is configuration-based, we started to look for templating solutions, which brought up new challenges. That&rsquo;s how we found Helm, which provides a powerful templating and package management solution for Kubernetes. Thanks to this process, Trace is now available as a <a href="https://trace.risingstack.com/on-premises.html">self-hosted Node.js monitoring solution</a>, and you can have the same experience in your own Azure cloud as our SaaS customers.</p><p><article><h2>Kubernetes resource definitions</h2><p>One of the best features of Kubernetes is its configuration-based nature, which allows you to create or modify your resources. 
You can easily set up and manage your components, from running containers to load balancers, through YAML or JSON files. Kubernetes makes it super easy to reproduce the same thing, but it can be challenging to modify and manage different Docker image tags, secrets, and resource limits across different environments. Take a look at the following YAML snippet that creates three running replicas from the <code>metrics-processor</code> container with the same <code>DB_URI</code> environment variable:</p><pre>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: metrics-processor
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: metrics-processor
        image: myco/metrics-processor:1.7.9
        env:
        - name: DB_URI
          value: postgres://my-uri</pre><p>What would happen if we wanted to ship a different version of our application that connects to a separate database? How about introducing some templating?</p><blockquote><p>For a production application, you would probably use the Kubernetes Secret resource, which expects Base64-encoded strings and makes dynamic configuration even more challenging.</p></blockquote><h3>Templating challenges</h3><p>I think we all feel that we need to introduce some kind of templating solution here, but why can it be challenging? First of all, in Kubernetes, some resources depend on each other. For example, a Deployment can use various Secrets, or we may want to run some migration jobs before we kick off our applications. This means that we need a solution that is capable of managing these dependency graphs and can run our templates in the correct order. Another big challenge is managing our configurations and the different versions of our templates and variables when we update our resources. We really want to avoid the situation where we need to re-create everything just to update our Docker image tag. This is where Helm comes to save the day.</p><h2>Templating with Helm</h2><blockquote><p>Helm is a tool for managing Kubernetes charts. 
Charts are packages of pre-configured Kubernetes resources.</p></blockquote><p><a href="https://github.com/kubernetes/helm">Helm</a> is an open source project maintained by the Kubernetes organization. It makes it easy to pack, ship, and update Kubernetes resources together as a single package.<br><img loading="lazy" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/05/helm-diagram.png" alt="Helm diagram" width="703" height="700" srcset="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/05/helm-diagram-300x300.png 300w, https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/05/helm-diagram-150x150.png 150w, https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/05/helm-diagram-251x250.png 251w, https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/05/helm-diagram-330x329.png 330w, https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/05/helm-diagram-400x398.png 400w, https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/05/helm-diagram.png 703w"></p></article></p><p>One of the best parts of Helm is that it comes with an open-source <a href="https://github.com/kubernetes/charts">repository</a> maintained by the community, where you can find hundreds of different pre-packed solutions, from databases like MongoDB and Redis to applications like WordPress and OpenVPN.<br>With Helm, you can install complex solutions like a <a href="https://github.com/kubernetes/charts/tree/master/stable/jenkins">Jenkins</a> master-slave architecture in minutes.</p><pre>helm install --name my-jenkins stable/jenkins</pre><p>Helm doesn&rsquo;t just provision your Kubernetes resources in the correct order. It also comes with lifecycle hooks, advanced templating, and the concept of sub-charts. 
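To give a feel for the templating part, here is a minimal sketch of how the earlier metrics-processor Deployment could be parameterized inside a chart. The chart layout and value names below are illustrative, not taken from our actual charts:

```yaml
# templates/deployment.yaml (illustrative)
# Placeholders are filled in from the chart's values.yaml,
# or overridden at install/upgrade time with --set or -f.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-metrics-processor
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: metrics-processor
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        env:
        - name: DB_URI
          value: {{ .Values.dbUri | quote }}
```

Shipping a new version then becomes a matter of overriding a single value, for example `helm upgrade my-release ./my-chart --set image.tag=1.7.10`, instead of editing and re-applying raw YAML.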
For the complete list, I recommend checking out their <a href="https://github.com/kubernetes/helm/tree/master/docs">documentation</a>.</p><p><article><article><article><article><article><h2 id="how-does-helm-work">How does Helm work?</h2><p>Helm works in a client-server architecture, where the <em>Tiller Server</em> is an in-cluster server that interacts with the Helm client and interfaces with the Kubernetes API server. It is responsible for combining charts and installing the Kubernetes resources requested by the client.<br>The <em>Helm Client</em> is a command-line client for end users, and it is responsible for communicating with the Tiller server.</p><h3>Helm example on Azure Container Service</h3><p>In this example, I&rsquo;ll show how you can install Jenkins with master-slave settings to Kubernetes with <a href="https://azure.microsoft.com/en-us/services/container-service/">Azure Container Service</a> in minutes. First of all, we need a running Kubernetes cluster. Luckily, Azure Container Service provides a hosted Kubernetes, so I can provision one quickly:</p><article><pre># Provision a new Kubernetes cluster
az acs create -n myClusterName -d myDNSPrefix -g myResourceGroup --generate-ssh-keys --orchestrator-type kubernetes

# Configure kubectl with the new cluster
az acs kubernetes get-credentials --resource-group=myResourceGroup --name=myClusterName</pre><blockquote><p>If you don&rsquo;t have kubectl, run: <code>az acs kubernetes install-cli</code></p></blockquote><p>After a couple of minutes, when our Kubernetes cluster is ready, we can initialize the Helm Tiller:</p><pre>helm init</pre><p>The <code>helm init</code> command installs Helm Tiller into the current Kubernetes cluster.</p><blockquote><p>On macOS you can install Helm with brew: <code>brew install kubernetes-helm</code>; for other platforms, check out their <a href="https://github.com/kubernetes/helm/blob/master/docs/install.md">installation</a> docs.</p></blockquote><p>Once Helm is ready to accept 
charts, I can install Jenkins from the official Helm repository:</p><pre>helm install --name my-ci --set Master.ServiceType=NodePort,Persistence.Enabled=false stable/jenkins</pre></article></article></article></article></article></article></p><blockquote><p>For the sake of simplicity and security, I disabled persistent volumes and service exposure in this example.</p></blockquote><p>That&rsquo;s it! To visit our freshly installed Jenkins, follow the instructions in the Helm install output or use the <code>kubectl port-forward &lt;pod-name&gt; 8080</code> terminal command.<br></p><img loading="lazy" alt="Jenkins on Kubernetes" width="840" height="372" src="https://cloudblogs.microsoft.com/uploads/prod/sites/37/2017/05/jenkins-snapshot-1024x453.png"><p>In a really short amount of time, we provisioned a Jenkins master into our cluster, which also runs its slaves in Kubernetes. It is also able to manage our other Kubernetes resources, so we can immediately start to build CI pipelines.</p><p><article><h2 id="trace-as-a-helm-chart">Trace as a Helm chart</h2><p>With Helm, we were able to turn our applications, configurations, autoscaling settings, and load balancers into a Helm chart that contains smaller sub-charts and ship it as a single chart. This makes it possible to easily reproduce our whole infrastructure in a couple of minutes.<br>We&rsquo;re not only using this to ship the <a href="https://trace.risingstack.com/on-premises.html">self-hosted version of Trace</a>, but we can also easily run multiple test environments or even move or copy our entire SaaS infrastructure between multiple cloud providers. We only need a running Kubernetes cluster.</p><h3 id="keeping-helm-charts-in-sync">Keeping Helm charts in sync</h3><p>To keep our charts in sync with our infrastructure, we changed our release process to update our Helm repository and modify the chart&rsquo;s Docker image tag. 
For this, we created a small service that uses the GitHub API; it is triggered by our CI.</p><h2 id="outro">Outro</h2><p>The popularity of Kubernetes is increasing rapidly, while hosted cluster solutions are becoming available from cloud providers like Azure. With Helm, you can ship and install complex microservices applications or databases into your Kubernetes cluster.<br>It has never been easier to try out new technologies and ship awesome features.<br></p></article></p><p>Questions? Let me know in the comments.</p><p>The post <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource/2017/05/23/kubernetes-helm-microsoft-azure-container-service/">Packing a Kubernetes Microservices App with Helm on Azure Container Service</a> appeared first on <a rel="nofollow" href="https://cloudblogs.microsoft.com/opensource">Microsoft Open Source Blog</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
