Streaming thoughts

Current projects & general musings


  1. Set up a Kubernetes/OpenShift cluster with DNS enabled.
  2. Create your application image that performs a DNS lookup to find cluster nodes.
  3. Create a service to access your cluster: target it only at the nodes that should be accessible by clients (e.g. Elasticsearch client nodes).
  4. Create a headless service that uses a common label subset - use the service name as the DNS entry that your application image looks up to find cluster nodes.
  5. Create replication controllers for your pods. If you have multiple pod types that should form part of the same cluster remember to use a common subset for your labels.

The details

One of the big promises of Kubernetes & OpenShift is really easy management of your containerised applications. For standalone or load-balanced stateless applications, Kubernetes works brilliantly, but one thing that I had a bit of trouble figuring out was how to perform cluster discovery for my applications. Say one of my applications needs to know about at least one other node (a seed node) that it should join a cluster with.

There is an example in the Kubernetes repo for Cassandra that requests existing service endpoints from the Kubernetes API server & uses those as the seed servers. You can see the code for it here. That works great for a cluster that allows unauthenticated/unauthorized access to the API server, but hopefully most people are going to lock down their API server (OpenShift comes with auth baked in by the way & is secure by default). If you're going to secure your API server then you're going to have to distribute credentials via secrets to every container that wants to call the API server. Personally I'd rather only distribute secrets when absolutely necessary: if there's a way to achieve what we need without distributing secrets then I would prefer to do that.
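That endpoints-based approach boils down to one GET against the API server plus a bit of JSON parsing. Here's a minimal sketch of the parsing step, with a hypothetical payload; the field names follow the current v1 Endpoints schema rather than the v1beta1 API used elsewhere in this post:

```python
import json

def seed_ips(endpoints_json):
    """Extract pod IPs from a Kubernetes Endpoints object (v1 schema)."""
    obj = json.loads(endpoints_json)
    ips = []
    for subset in obj.get("subsets", []):
        for addr in subset.get("addresses", []):
            ips.append(addr["ip"])
    return ips

# Hypothetical payload shaped like a v1 Endpoints object:
payload = '''
{
  "kind": "Endpoints",
  "subsets": [
    {"addresses": [{"ip": "10.1.0.5"}, {"ip": "10.1.0.6"}],
     "ports": [{"port": 9300}]}
  ]
}
'''
print(seed_ips(payload))  # ['10.1.0.5', '10.1.0.6']
```

The catch, as described above, is that actually fetching that payload needs credentials once the API server is locked down.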

It would be cool if Kubernetes had the concept of cluster seeds baked in & could provide seeds to pods through configuration, but right now it can’t so we’re going to take advantage of a couple of things that Kubernetes provides to do that: headless services & DNS.

Before we go any further, a quick recap of 3 Kubernetes concepts we’ll be using in this post (taken from the excellent Kubernetes documentation):

  1. Pods are the smallest deployable units that can be created, scheduled, and managed. Pods are a colocated group of containers (run on the same node) & share stuff like network space (e.g. IP address) & disk (via volumes). Read more
  2. Replication controllers (RCs) ensure that a specified number of pod “replicas” are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. Read more
  3. Services are an abstraction which defines a logical set of Pods and a policy by which to access them. The set of Pods targeted by a Service is determined by a Label Selector. A service is used to access a group of pods through a consistent IP address without knowing the exact pods, especially important considering the ephemeral nature of pods. Read more
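Since label selectors do all the routing work in what follows, it's worth pinning down the semantics: a selector matches a pod when the selector's key/value pairs are a subset of the pod's labels. A quick illustrative sketch (the label dicts mirror the ones used later in this post):

```python
def selector_matches(selector, pod_labels):
    """A service selects a pod when every key/value pair in the
    selector is also present in the pod's labels (a subset match)."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

client_pod = {"component": "elasticsearch", "type": "client"}
data_pod = {"component": "elasticsearch", "type": "data"}

# A selector with only the common subset matches both pod types...
print(selector_matches({"component": "elasticsearch"}, client_pod))  # True
print(selector_matches({"component": "elasticsearch"}, data_pod))    # True
# ...while a more specific selector matches only the client pods.
print(selector_matches({"component": "elasticsearch", "type": "client"}, data_pod))  # False
```

This subset behaviour is what lets one service target a whole cluster while another targets just one node type.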

A headless service is a service that has no IP address (& therefore no service environment variables, load-balancing or proxying). It is simply used to track what endpoints (pods) would be part of the service. Perfect for simple discovery.

I’m going to assume you have a working Kubernetes cluster up & running. If you don’t then you can really easily set one up on any Docker-enabled host via a script that Fabric8 provides (see here if you’re interested). Note that the script will actually spin up OpenShift3 rather than vanilla Kubernetes as Fabric8 uses some of the extensions that OpenShift provides, like builds, deployment pipelines, etc for other things. Everything below will work on vanilla Kubernetes of course.

You’re also going to need to have the DNS cluster add-on. Btw, this is another capability that OpenShift provides by default.

For a working (hopefully!) example, I'm going to use Elasticsearch as I'm pretty familiar with it & its awesome horizontal scalability lends itself very well to explaining this clearly. We provide this application for one-click installation as part of Fabric8 to build up Elasticsearch clusters. To make this a little more interesting, we're actually going to create a cluster of the 3 different types of Elasticsearch node: Master, Data & Client. If you're building a large cluster, this is probably what you would want to do. We're going to make it so that each type can be scaled individually by resizing the respective replication controller, & each node is going to discover other nodes in the cluster via a headless service.

So to action…

Before we actually create anything, let’s prepare our Kubernetes manifests. We’ll send the create requests to the API server at the end - don’t jump the gun!

First let’s create our replication controllers. All 3 look pretty similar - this one’s for the client nodes:

  "apiVersion" : "v1beta1",
  "id" : "elasticsearch-client-rc",
  "kind" : "ReplicationController",
  "labels" : {
    "component" : "elasticsearch",
    "type": "client"
  "desiredState" : {
    "podTemplate" : {
      "desiredState" : {
        "manifest" : {
          "containers" : [ {
            "env" : [
              { "name" : "SERVICE_DNS", "value": "elasticsearch-cluster" },
              { "name": "NODE_DATA",    "value": "false" },
              { "name": "NODE_MASTER",  "value": "false" }
            "image" : "fabric8/elasticsearch-k8s:1.5.0",
            "imagePullPolicy" : "PullIfNotPresent",
            "name" : "elasticsearch-container",
            "ports" : [
              { "containerPort" : 9200 },
              { "containerPort" : 9300 }
          } ],
          "id" : "elasticsearchPod",
          "version" : "v1beta1"
      "labels" : {
        "component" : "elasticsearch",
        "type": "client"
    "replicaSelector" : {
      "component" : "elasticsearch",
      "type": "client"
    "replicas" : 1

A few things of importance here: notice the labels on the pod template:

"labels" : {
  "component" : "elasticsearch",
  "type": "client"

For the replication controllers for data & master nodes, you will need to update the type in the label - leave the component part alone: having a common subset in the labels for all the node types is what we will use when we create our headless service.

The environment variables are something that the fabric8/elasticsearch-k8s image uses to configure Elasticsearch, so you will need to update the NODE_DATA & NODE_MASTER environment variables appropriately for the replication controllers for the other two node types.
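To keep the three variants straight, here's an illustrative sketch of the env settings per node type. The client values come from the manifest above; the data & master values follow the standard Elasticsearch node roles (master-eligible vs data-holding), so treat them as an assumption to check against your own setup:

```python
# Node type -> env vars consumed by the fabric8/elasticsearch-k8s image.
# Client values are from the manifest above; data/master values are the
# assumed settings for the standard Elasticsearch node roles.
NODE_ENV = {
    "client": {"NODE_MASTER": "false", "NODE_DATA": "false"},
    "data":   {"NODE_MASTER": "false", "NODE_DATA": "true"},
    "master": {"NODE_MASTER": "true",  "NODE_DATA": "false"},
}

def env_entries(node_type):
    """Render the env list for a pod manifest of the given node type."""
    env = {"SERVICE_DNS": "elasticsearch-cluster", **NODE_ENV[node_type]}
    return [{"name": k, "value": v} for k, v in env.items()]

print(env_entries("data"))
```

Every node type shares the same SERVICE_DNS value, which is what ties them all into one cluster.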

We want all access to the Elasticsearch cluster to go through the client nodes so let’s create a service to do just that:

  "id": "elasticsearch",
  "apiVersion": "v1beta1",
  "kind": "Service",
  "containerPort": 9200,
  "port": 9200,
  "selector": {
    "component": "elasticsearch",
    "type": "client"

Notice that the selector matches the labels of the client nodes replication controller only.

Finally we create a headless service that we’re going to use to discover our cluster nodes:

  "id": "elasticsearch-cluster",
  "apiVersion": "v1beta1",
  "PortalIP": "None",
  "kind": "Service",
  "containerPort": 9300,
  "port": 9300,
  "selector": {
    "component": "elasticsearch"

Notice the PortalIP is set to None - that means no IP address will be allocated to the service. Sadly we still have to specify the containerPort & port although these are not used at all.

The final thing to take note of is the id: elasticsearch-cluster. This is the DNS name that the service can be discovered under. With a normal service, the DNS entry that is registered is an A record with the IP address that is allocated to the service. With a headless service, however, an A record is created under the service name for each service endpoint (pod targeted by the specified selector).
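This is exactly what the application image does at startup: resolve the headless service name & treat every A record that comes back as a seed node. A rough sketch of that lookup (demonstrated against localhost, since the cluster name only resolves inside the cluster):

```python
import socket

def lookup_seeds(service_dns, port=9300):
    """Resolve every IPv4 A record behind a DNS name. For a headless
    service, each pod targeted by the selector shows up as one address."""
    infos = socket.getaddrinfo(service_dns, port,
                               socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

# Inside the cluster you would call something like
# lookup_seeds("elasticsearch-cluster"); here we demonstrate with
# localhost, which resolves to the loopback address.
print(lookup_seeds("localhost"))
```

Each node can then hand that list to Elasticsearch as its unicast discovery hosts.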

Go ahead & create your resources - create the services first so that the cluster discovery service is available when the nodes first come up.

Once your resources are created & the pods are up, let's check that the DNS entries have been created properly with a quick dig. For the headless service, you should see one A record per cluster node pod:

$ dig @localhost elasticsearch-cluster.default.local. +short

And for the Elasticsearch client service, you should see a single A record with the allocated service IP:

$ dig @localhost elasticsearch.default.local. +short

Now let's use the Elasticsearch client service to check the health of the cluster (substituting the service IP from the dig above):

$ curl http://<service-ip>:9200/_cluster/health?pretty

  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "number_of_pending_tasks" : 0

Yay - it worked! 3 nodes in our cluster discovered via DNS, without any need to distribute credentials.

Now play with resizing each of the 3 node types & see what happens - easy cluster resizing FTW.

If you haven’t heard of it yet, hawtio is pretty cool. It’s an open source web console for managing your Java stuff, with loads of plugins already available, & it’s really simple to write your own (& hopefully contribute back). In one of my recent projects, I was using OpenShift for deployment & wanted to use hawtio to manage my application. It’s pretty easy to get it going because, as PaaSes go, OpenShift is fantastically configurable.

First step, if you haven’t done so already, is to sign up to OpenShift here. You get a reasonable amount of resources for free, so you can just play around if you want. Then follow the getting started guide - it only takes a few minutes.

Now let’s get installing & configuring hawtio. First thing to do is to create an application. We’re going to create a Tomcat instance so we can deploy standard Java webapps. As OpenShift is from Red Hat, they provide their “enterprise” version of Tomcat, which they call Red Hat JBoss EWS. OpenShift applications are deployed through what they call cartridges - we’re going to use the jbossews-2.0 cartridge to create our application, which I’m going to call hawtiodemo, by running the following command (you can of course also do this on the OpenShift website):

rhc app create hawtiodemo jbossews-2.0

There will be a load of output on the screen & it will hopefully finish with some details for your new application, something like this:

Your application 'hawtiodemo' is now available.

  SSH to:
  Git remote: ssh://
  Cloned to:  /home/jdyson/projects/hawtiodemo

Run 'rhc show-app hawtiodemo' for more details about your app.

Hitting the URL as specified will bring you to the default application that is included in the OpenShift cartridge - a quick start to get something deployed for you to see working.

Notice the last line saying it’s cloned the git repository that OpenShift uses for your deployments into a local directory - very convenient. If you change into that local directory, you’ll see the normal contents for a Maven webapp project: a pom.xml, README, src/. This is where you would develop your web application, just like a normal Maven web application. I’m not going to do that here as this is about how to get hawtio deployed & configured on OpenShift, but it’s good to see nonetheless.

So now we have a deployed starter webapp, let’s get hawtio ready to deploy. In the cloned git repo, you’ll see a directory called webapps. Deploying hawtio is as simple as downloading the war file from the hawtio website & placing it in the webapps directory. If you’re on Linux you can simply run:

curl -o webapps/hawtio.war <hawtio-war-url>

Now you have to add it to the git repo - this feels a bit weird to me, adding a binary archive to a git repo, but this is how OpenShift works so let’s go with it:

git add webapps/hawtio.war
git commit -m "Added hawtio.war"

Deploying this is as simple as any other OpenShift deployment:

git push

There will be loads of output as OpenShift deploys your updated application, finishing with something like

remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success

You can also check the Tomcat logs to ensure it’s started correctly:

rhc tail

Congratulations - you’ve just installed & deployed hawtio. You can hit it at /hawtio under your OpenShift application URL. You’ll see that you can access hawtio without logging in - not great, as it’s pretty powerful - it can access anything through JMX so you could even stop your application. So let’s secure it.

Hawtio uses JAAS for security. It discovers what app server it’s deployed on to & uses the native app server JAAS configuration so as not to reinvent the wheel. For Tomcat, that means configuring the tomcat-users.xml file. Luckily, OpenShift makes configuration files available in your git repo for you to configure. Edit .openshift/config/tomcat-users.xml & add in a user & a role, so your tomcat-users.xml file will look something like this:

<?xml version='1.0' encoding='utf-8'?>
<tomcat-users>
  <role rolename="admin"/>
  <user username="admin" password="xxxxxxxx" roles="admin"/>
</tomcat-users>

Set the password to something secure - you can even encrypt the password by following something like this if you want.

Next thing to do is to enable authentication for hawtio. This again is very simple - all you need to do is to add some Java system properties for Tomcat. You can read more about hawtio configuration here. On OpenShift, you can add hooks to run at different stages of your application lifecycle. We’re going to add a script that runs before our application starts to set up CATALINA_OPTS, the standard way to add system properties to Tomcat. Edit a file called .openshift/action_hooks/pre_start_jbossews-2.0 & add the following content:

export CATALINA_OPTS='-Dhawtio.authenticationEnabled=true -Dhawtio.role=admin'

That’s all there is to it - enable authentication & limit access to hawtio to those users with the admin role as we specified in tomcat-users.xml. You can of course change that to whatever you want.

Let’s commit those files & push our updated application:

git add .openshift/action_hooks/pre_start_jbossews-2.0 \
        .openshift/config/tomcat-users.xml
git commit -m "Added hawtio authentication"
git push

Again, you’ll get the wall of text as OpenShift deploys your application - wait for the success at the end. If you run rhc tail again, you’ll see confirmation that hawtio has indeed picked up the system properties & enabled authentication:

INFO  | localhost-startStop-1 | Discovered container Apache Tomcat to use with hawtio authentication filter
INFO  | localhost-startStop-1 | Starting hawtio authentication filter, JAAS realm: "*" authorized role(s): "admin" role principal classes: "io.hawt.web.tomcat.TomcatPrincipal"

Now refresh hawtio in your browser & this time you will be redirected to the hawtio login page. Log in with the credentials you specified earlier & you are now authenticated in hawtio. Browse around, see what you can do - it really is such a useful application.

I hope that seemed pretty easy to you - a lot of writing here for something that literally takes 5 minutes from beginning to end. Hawtio is similarly easy to deploy in any of your applications, from standalone Java apps to apps deployed on J2EE application servers - super flexible, super useful.