Story Estimation

We were having an internal chat at Bitnami about whether we should standardise T-shirt sizes across the team, and whether we ought to have a mapping between user story points and implementation hours.

This is a discussion that keeps coming up, both internally and externally, so I thought it would be worth summarising my thoughts about it.

Story Points vs Task Hours

Normally you will find Scrum coaches recommending the use of T-shirt sizes or Points for Story sizing, and hours for estimating Tasks. Stories are the unit of product/user value that is delivered (tasks are just work).

Story sizing (T-shirt or Points) is not an estimate of time but rather a way to express complexity. Yes, there is a correlation between complexity, effort and hours. However, complexity tends to be less variable across team members than hours.

A Story is complex due to the problems to be solved while implementing it, but the time it takes to solve the same problem might vary depending on the skills and experience of the person solving it.

OK, but how do we establish consistency in estimating complexity? Human beings are pretty good at comparing similar problems: Problem A is similar to Problem B. So I normally start by getting a team to agree on the sizes of stories/problems that they have completed/solved together in the past, and then use those as reference points going forward. This common understanding can only be shared by people in the same squad, since it requires a shared experience.

Story sizes can be, and are, used to measure velocity and burn-down progress. However, IMHO they are most useful for identifying misalignment (Foo thinks it is XXL but Bar thinks it is L; how come? Maybe Bar has some context that Foo is missing, or the other way around?). This conversation between team members about complexity is where the real value of Story sizing resides.

I think we can all agree that we cannot predict the future. Hence, estimating in hours is always highly inaccurate. Inaccuracy grows as you extend the period of time you are estimating over (i.e. something that will take months or is months away).

So why is it OK to estimate tasks in hours? Because tasks are small breakdowns of stories that are assigned to an individual to implement. Basically, the estimation is done over the shortest possible time span, taking the skills and experience of that person into account.

Tracking and managing cost

A different (and valuable) problem to solve is tracking the cost of implementing a feature/product/release for financial reasons. Most of us work in either for-profit or cost-driven (not-for-profit) organisations, so having a record and an indication of where money (and hence hours) is going can be critical for our organisations.

While estimating time is highly inaccurate, tracking actual time spent on tasks doesn't have to be. Clearly, there is a (human) cost associated with the granularity of this tracking. The right level of tracking will depend heavily on your business and accounting model.

Enhancing Helm Charts with Operators

I was interested to see if I could blend a Helm Chart (packaging and deployment) and an Operator (day-2 operations) into a single solution, so I developed a proof of concept for ChartMuseum.

I have been wanting to play with the new Operator SDK from Red Hat for a while. I like the concept of having an API for life-cycle management. IMHO, operators are a development pattern that maps declarative Kubernetes API objects to lifecycle events like backup, restore and other types of maintenance tasks.

In general, most operator implementations that I have played with also take care of the deployment of the application. In some cases, like the etcd operator, they manage the life-cycle of pods directly rather than using standard objects like Deployments or ReplicaSets.

I have been doing some Helm Chart development and I really like the flexibility that Helm gives you to parametrise deployments. It seemed to me that I could still use a Helm Chart for packaging and deployment, and enhance it with an operator, included in the release, to manage the application beyond deployment.

As a proof of concept, I decided to try to extend the existing upstream chartmuseum chart with an operator that took care of adding and packaging charts from a github repo.

The basic operations that I set out to automate were:

  • Pulling a new git repository and helm packaging its contents
  • Regularly pulling updates and repackaging the repo
  • Removing a git repository and its contents

I needed to be able to ask ChartMuseum to perform these activities. Using the adapter/sidecar pattern, I developed a new container that exposes these operations as HTTP endpoints and packages the dependencies (git and helm), to be bolted onto ChartMuseum's container within the same pod.
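
As a rough sketch of what that pod spec ends up looking like (the container names, images and sidecar port below are illustrative, not the chart's actual values):

spec:
  containers:
  - name: chartmuseum
    image: chartmuseum/chartmuseum:latest
    ports:
    - containerPort: 8080          # ChartMuseum API
  - name: chart-sync               # hypothetical sidecar name
    image: example/chartmuseum-sync:latest
    ports:
    - containerPort: 8081          # HTTP endpoints: /repo/new, /repo/dependency, ...
    volumeMounts:
    - name: charts
      mountPath: /charts           # storage shared with the ChartMuseum container
  volumes:
  - name: charts
    emptyDir: {}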

So a Custom Resource for this operator will look like this:

apiVersion: "cm.bitnami.com/v1alpha1"
kind: "Chartmuseum"
metadata:
  name: "myrepo"
spec:
  git: "https://github.com/foo/mycharts"
  updateEveryMinutes: 15
  dependencies:
  - name: bitnami
    url: "https://charts.bitnami.com/bitnami"

The git repository has to be publicly available. The helm charts might be pulling dependencies from other repos, so I also added the ability to define these.

updateEveryMinutes is an integer value that indicates how often, in minutes, the git repo should be updated. Instead of having this functionality baked into the operator, it creates CronJob objects to trigger the updates. These objects are gettable and visible to the user.

On creation of a new custom resource of type Chartmuseum, the operator will:

  • Add dependencies as required –  POST /repo/dependency
  • Then add a new repository, which will trigger a git clone, and a packaging of all folders in the repo that contain a “Chart.yaml” file –  POST /repo/new
  • A new CronJob will be created with the same name and namespace as the Custom Resource, which will hit GET /repo/name/update every updateEveryMinutes minutes (sketched below)
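
For illustration, the CronJob generated for the myrepo resource above might look roughly like this; the schedule is derived from updateEveryMinutes, while the container image and service address are assumptions made for this sketch:

apiVersion: batch/v1beta1            # CronJob API version of this era
kind: CronJob
metadata:
  name: myrepo                       # same name and namespace as the Custom Resource
spec:
  schedule: "*/15 * * * *"           # every updateEveryMinutes minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: repo-update
            image: curlimages/curl   # illustrative image
            args: ["http://chartmuseum:8080/repo/myrepo/update"]   # GET /repo/name/update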

On deletion of the custom resource, the operator will:

  • Delete the repo and its artifacts  – DELETE /repo/name
  • Delete the Cronjob object

Incorporating this into an existing Helm Chart was relatively simple. I created a new value flag called operator.enabled, which, if set to true, will add additional manifests to the release by:

  • modifying the main deployment to add the sidecar container,
  • modifying the main service to expose a new port for sync operations,
  • adding CRD for Chartmuseum and a deployment for the Operator,
  • adding RBAC support for the operator to read CRDs and create CronJobs
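
If you want to try the operator-enabled flavour, the flag lives in values.yaml and would look something like this (a minimal sketch; only the operator.enabled name comes from the chart described above):

operator:
  enabled: true    # adds the sidecar, CRD, operator deployment and RBAC manifests

Equivalently, you can pass --set operator.enabled=true to the helm install command shown below.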

In summary, I found the operator-sdk really simple to use and work with. It really takes away the fiddly parts of creating an operator and lets you focus on the logic. I also found that the operator and the Helm Chart worked nicely together and provided good transparency and control to the user.

If you want to try it, just clone this repo: https://github.com/vtuson/chartmuseum-chart and run:

helm install . -n mychartmuseum

Scaling WordPress In Kubernetes

Cloud-native applications are designed to run in a microservices architecture where individual components can be scaled separately, data is persisted and synced across replicas, and node failures can easily be survived. This is trickier with traditional web applications that have not been designed this way, such as WordPress.

Bitnami produces production-ready Docker images and Kubernetes Helm Charts that you can deploy into your Kubernetes Cluster. This blog will make reference to concepts that have been previously covered by our Kubernetes Get Started Guide.

WordPress Is A Stateful App

Although WordPress uses a database backend to persist and maintain user-created content, administration changes are persisted to the local drive. These include the plugins installed, the site configuration and the CSS/media used by the web front-end. Multiple replicas of the same WordPress site would expect to have access to a shared drive. Basically, WordPress likes to keep its state close by.

Our Kubernetes Helm Chart for WordPress uses a Deployment to manage and scale the WordPress pods. In order to share this admin-created content across pods, the deployment mounts a volume provisioned by a Persistent Volume Claim (PVC).

When scaling a web app it is important to understand the different types of Access Modes available for persistent storage in Kubernetes: ReadWriteOnce (RWO), ReadOnlyMany (ROX) and ReadWriteMany (RWX).

If you want to administer your WordPress site separately, you could expose a read-only configuration volume to your pods. In this case, ROX would be your best choice, since it can be mounted as read-only by many pods across multiple nodes.

If you want to continue using the WordPress admin interface in all its glory, all your pods will need read-write access to a common volume. It is also likely that you would like your pods to run on different nodes (for better site availability and scalability). Since a RWO volume can only be mounted on one node at a time, you really need RWX.
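
For reference, the access mode is requested on the PersistentVolumeClaim. A minimal sketch of a RWX claim (the name and size here are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-data        # illustrative name
spec:
  accessModes:
  - ReadWriteMany             # RWX: mountable read-write by pods on many nodes
  storageClassName: nfs       # the RWX class created later in this post
  resources:
    requests:
      storage: 8Gi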

Great, Where Do I Get RWX Volume?!

Unfortunately, RWX is not a commonly supported access mode among the current volume plugins (see the official documentation). So what can you do if you don't have access to one in your cluster?

Since WordPress write access does not require a highly performant solution, we are going to share a RWO volume across multiple pods by remapping it as a RWX NFS volume.

Hold on, this is going to be complicated, right? Nope, it is going to be a single command.

There Is A New Chart In Town

A few days back, a new Helm Chart landed on the official Kubernetes repository – Introducing the NFS Server Provisioner.

What this chart does is deploy all the bits you need to dynamically serve NFS persistent volumes (PVs) from any other volume type. Most Kubernetes deployments will have a Storage Class that provides RWO volumes. We are going to take a single RWO persistent volume from this existing Storage Class and share it as a RWX Storage Class called 'nfs'.

The following diagram shows a summary of the Kubernetes objects involved in this process.

Please note that this solution has low fault tolerance: an outage of the node that has the RWO volume mounted will affect the availability of the whole deployment (though it should not lose data).

As promised, deploying the NFS-Server provisioner is a simple helm command:

$ helm install stable/nfs-server-provisioner --set persistence.enabled=true,persistence.size=10Gi

Deploying WordPress To Work With NFS

Once you have deployed the Helm release, you can check that a new Storage Class called ‘nfs’ is now available in your cluster:

$ kubectl get storageclass

The RWO volume claim has also been created. In this case, it is using the standard storage class in minikube (hostpath) and it is bound to the NFS-server StatefulSet.

Next we can deploy a WordPress release, using the NFS Storage Class:

$ helm install  --set persistence.accessMode=ReadWriteMany,persistence.storageClass=nfs stable/wordpress  -n scalable

We can see how we now have 3 PVCs in our cluster:

  • a RWO volume for the MariaDB backend database
  • a RWO volume for the NFS server
  • a RWX volume for the WordPress pods
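
To check, you can list the claims (the exact names will depend on your release names):

$ kubectl get pvc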

Inspecting the new PV, we can see that it is served over NFS.

Are You Ready To Scale Horizontally?

Almost. The WordPress admin console requires a user session to always be served by the same pod. If you are using an Ingress resource (also configurable by the chart) with an Nginx ingress controller, you can define session affinity via annotations. Just add them to your values.yaml file, under ingress.hosts[0].annotations, as sketched below.
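
A sketch of what that could look like, assuming the chart's ingress.hosts[0] layout and the cookie-affinity annotation keys used by the kubernetes/ingress-nginx controller of this era (the host name is a placeholder):

ingress:
  enabled: true
  hosts:
  - name: blog.example.com
    annotations:
      nginx.ingress.kubernetes.io/affinity: "cookie"
      nginx.ingress.kubernetes.io/session-cookie-name: "route"
      nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"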

Now you just need to scale your WordPress deployment:

$ kubectl scale deploy scalable-wordpress --replicas=3

Happy Highly Available Blogging!

Introducing TUI – a library for building simple text interfaces

Why Tui?

At Bitnami, we have a bunch of useful command-line tools that can be used to operate our stacks. I wanted to build a basic menu for the most used ones, so I didn't have to remember each command and each argument. Then I thought it would be great to have a library that works a bit like the Go CLI libraries: you define the inputs and the function to handle the command, and the user experience 'just happens'. That is why I created TUI.

What is Tui?

This is a basic implementation of a DOS-style text UI that can be used to run CLI commands, or you can define your own HandlerCommand to run Go functions. You can find it in my GitHub repo: https://github.com/vtuson/tui

The basic structure is:

  • Menu
  • Commands – Each menu can have a set of commands to present
  • Args – Each command can have a set of arguments that get passed

The idea behind TUI is to provide a simple library, similar to CLI libs like https://github.com/urfave/cli, that allows you to build a menu-based application for terminal users.

By default, a CLI command handler is provided that is able to pass parameters as flags, options or environment variables. You can find an example app in the /sampleapp folder.

Sample app

The project comes with a sample app under the folder /sampleapp

 cd sampleapp
 godep get
 go run app.go

Getting started with tui

A basic shell

Let's write a small application. You need to import the package and create a basic menu with no options:

package main

import (
	"github.com/vtuson/tui"
)

func main() {
	menu := tui.NewMenu(tui.DefaultStyle())
	menu.Title = "Test"
	menu.Description = "Test app"
	menu.Show()

	// This function handles input events like key strokes
	go menu.EventManager()

	// Menu contains a channel that lets you know when the user has exited it
	<-menu.Wait

	// Quits the menu and cleans the screen
	menu.Quit()
}

Adding a command without arguments

Now let's add a small OS command that will run without the need for arguments.

package main

import (
	"github.com/vtuson/tui"
)

func main() {
	menu := tui.NewMenu(tui.DefaultStyle())
	menu.Title = "Test"
	menu.Description = "Test app"

	menu.Commands = []tui.Command{
		tui.Command{
			Title:       "No Args",
			Cli:         "echo hello!",
			Description: "just being polite",
			Success:     "Yey it works",
			PrintOut:    true,
		},
	}

	menu.Show()
	go menu.EventManager()
	<-menu.Wait
	menu.Quit()
}

The PrintOut option allows the output of the command to be shown to the user. Description and Success are strings that will be used to add more context to the command execution. They are optional and don't need to be set if you don't want to.

Commands with arguments

You can also add arguments to a command, which a user can then input:

package main

import (
	"github.com/vtuson/tui"
)

func main() {
	menu := tui.NewMenu(tui.DefaultStyle())
	menu.Title = "Test"
	menu.Description = "Test app"

	menu.Commands = []tui.Command{
		tui.Command{
			Title:       "Args CLI",
			Cli:         "echo",
			PrintOut:    true,
			Description: "test of running a tui.Command with arguments",
			Args: []tui.Argument{
				tui.Argument{
					Title: "Please say hi",
				},
			},
		},
	}

	menu.Show()
	go menu.EventManager()
	<-menu.Wait
	menu.Quit()
}

Arguments can also be boolean flags; they can have a name (which can be passed as input to the command) or can be set as environment variables.

Adding your own handler

The library assumes that the command is an OS command to be executed using the provided OSCmdHandler function if Execute is nil. If another handler is passed by setting the Execute value, then it will be called when the user selects that command.

Here is a simple echo example:

package main

import (
	"github.com/vtuson/tui"
	"errors"
)

// CustomHandler is the custom handler. c is the Command object for the
// command to be executed. ch is the channel that can be used to return text
// to be displayed back to the user. The handler is executed asynchronously
// from the main program, so you must close the channel when completed or the
// menu will hang waiting for your command to complete.
func CustomHandler(c *tui.Command, ch chan string) {
	// defer close to make sure that, when you exit, the menu knows to continue
	// and give the user the output
	defer close(ch)
	for _, a := range c.Args {
		ch <- a.Value
	}
	// if you want the command to fail, set Error in the command variable
	c.Error = errors.New("It failed!")
}

func main() {
	menu := tui.NewMenu(tui.DefaultStyle())
	menu.Title = "Test"
	menu.Description = "Test app"

	menu.Commands = []tui.Command{
		tui.Command{
			Execute:     CustomHandler,
			Title:       "Args CLI",
			Cli:         "echo",
			PrintOut:    true,
			Description: "test of running a tui.Command with arguments",
			Args: []tui.Argument{
				tui.Argument{
					Title: "Please say hi",
				},
			},
		},
	}

	menu.Show()
	go menu.EventManager()
	<-menu.Wait
	menu.Quit()
}

I took a circular saw to the Nextcloud box and you won’t believe what happened next!

OK, OK... sorry for the click-bait headline, but it is mainly true. I recently got a Nextcloud box; it was pretty easy to set up, and here are some great instructions.

But this box is not just a Nextcloud box; it is a box of unlimited possibilities. In just a few hours I added a WiFi access point and a chat server to my personal cloud. So here are some amazing facts you should know about Ubuntu and snaps:

Amazing fact #1 – One box, many apps

With snaps you can transform your single-function device into a box of tricks. You can add software to extend its functionality after you have made it. In this case, I created a WiFi access point and added a Rocket.Chat server to it.

You can release a drone without autonomous capabilities, and once you are sure that you have nailed it, you can publish a new app for it... or even sell a pro-version autopilot snap.

You can add an inexpensive Zigbee and Bluetooth module to your home router, and partner with a security firm to provide home surveillance services. The possibilities are endless.

Amazing fact #2 – Many boxes, One heart

Maybe an infinite box of tricks is attractive to a geek like me, but what is interesting to product makers is this: make one piece of hardware, ship many products.

Compute parts (CPU, memory, storage) make up a large part of the bill of materials of any smart device. So do validation and integration of these components with your software base... and then you need to provide updates for the OS and the kernel for years to come.

What if I told you that you could build (or buy) a single multi-function core, pre-integrated with a Linux OS, and use it to make drones, home routers, digital advertising signs, industrial and home automation hubs, base stations, DSLAMs, top-of-rack switches...

This is the real power of Ubuntu Core. With the OS and kernel being their own snaps, you can be sure that nothing has changed in them across these devices, and that you can reliably update them. Not only are you able to share validation and maintenance costs across multiple projects, you can also increase the volume of your parts order and get a better price.

How the box of tricks was made:

Ingredients for the WiFi access point:

I also installed the Rocket.Chat server snap from the store.

Single-node Kubernetes deployment

[Edited]

In order to test k8s, you can always deploy a single-node setup locally using minikube; however, it is a bit limited if you want to test interactions that require your services to be externally accessible from a mobile or web front-end.

While I wrote this post a long time ago, this is still a need that I come across when doing training or testing small projects.

I have updated my script to work with kubeadm and Ubuntu 16.04+, which should also help you deploy the latest k8s version. While the old method should still work, this one is simpler.

Also, we took this concept to production at Bitnami. We have created a Kubernetes sandbox that comes with all the bells and whistles. You can run it in most major public cloud providers.

Look forward to your feedback!

--- old post ---

For this reason, I created a basic k8s setup for a CoreOS single node in Azure using https://coreos.com/kubernetes/docs/latest/getting-started.html. Once I did this, I decided to automate its deployment via a script.

It requires a running CoreOS instance; connect to it and run:

git clone https://github.com/vtuson/k8single.git k8
cd k8
./kubeform.sh [myip-address]    # the IP associated with eth; you can find it using ifconfig

This will deploy k8s onto a single node, set up kubectl on the node and deploy the SkyDNS add-on.

It also includes a busybox pod file that can be deployed with:

kubectl create -f files/busybox

This might come in useful for debugging issues with the setup. To execute commands in busybox, run:
kubectl exec busybox -- [command]
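
For example, a quick way to check that the SkyDNS add-on is resolving cluster services (assuming the default kubernetes service name) is:

kubectl exec busybox -- nslookup kubernetes.default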

The script and config files can be accessed at https://github.com/vtuson/k8single

If you hit any issues while deploying k8s on a single node, a few things worth checking are:


sudo systemctl status etcd
sudo systemctl status flanneld
sudo systemctl status docker

Also, it is worth checking which Docker containers are running and, if necessary, checking the logs:

docker ps -a
docker logs [container-id]

Deploying Heapster to Kubernetes

I recently blogged about deploying kubernetes in Azure.  After doing so, I wanted to keep an eye on usage of the instances and pods.

Kubernetes recommends Heapster as a cluster aggregator to monitor usage of nodes and pods. It is very handy if you are deploying in Google Compute Engine (GCE), as it has a pre-built dashboard to hook it up to.

Heapster gathers statistics about the system and pods from each node and pipes them to a storage backend of your choice. A very handy part of Heapster is that it exports user labels as part of the metadata, which I believe can be used to create custom reports on services across nodes.

If you are not using GCE or just don't want to use their dashboard, you can deploy a combo of InfluxDB and Grafana as a DIY solution. While this seems promising, the documentation, as usual, is pretty short on details.

Start by using the "detailed" guide to deploy the add-on, which basically consists of:

**Wait! Don't run this yet; finish reading the article first.**

git clone https://github.com/kubernetes/heapster.git
cd heapster
kubectl create -f deploy/kube-config/influxdb/

These steps expose Grafana and InfluxDB via the API proxy; you can see them in your deployment by running:

kubectl cluster-info

This didn’t quite work for me, and while rummaging in the yamls, I found out that this is not really the recommended configuration for live deployments anyway…

So here is what I did:

  1. Remove the env variables from influxdb-grafana-controller.yaml
  2. Expose the service as NodePort or LoadBalancer, depending on your preference, in grafana-service.yaml. E.g. under the spec section add: type: NodePort (see the sketch below)
  3. Now run kubectl create -f deploy/kube-config/influxdb/
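
A minimal sketch of the edited grafana-service.yaml; apart from type: NodePort, the port numbers and selector here are illustrative and you should keep whatever the upstream yaml already uses:

apiVersion: v1
kind: Service
metadata:
  name: grafana-service
  namespace: kube-system
spec:
  type: NodePort          # expose Grafana on a port of every node
  ports:
  - port: 80
    targetPort: 3000      # Grafana's default container port
  selector:
    name: influxGrafana   # illustrative; keep the upstream selector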

You can see the exposed port for Grafana by running:
kubectl --namespace=kube-system describe service grafana-service

In this deployment, all the services, replication controllers and pods are added under the kube-system namespace, so remember to add the --namespace flag to your kubectl commands.

Now you should be able to access Grafana on any external IP or DNS name on the port listed under NodePort. However, I was not able to see any data.

Log in to Grafana as admin (admin:admin by default), select DataSources > influxdb-datasource and test the connection. The connection is set up as http://monitoring-influxdb:8086; this failed for me.

Since InfluxDB and Grafana are both in the same pod, you can use localhost to access the service. So change the URL to http://localhost:8086, save and test the connection again. This worked for me, and a minute later I was getting real-time data from nodes and pods.

Proxying Grafana

I run an nginx proxy that terminates HTTPS requests for my domain, and I created an https://mydomain/monitoring/ endpoint as part of it.

For some reason, Grafana needs to know the root URL format it is being accessed from in order to work properly. This is defined in a config file. While you could change it and rebuild the image, I preferred to override it via an environment variable in the influxdb-grafana-controller.yaml Kubernetes file. Just add this to the Grafana container section:

env:
- name: GF_SERVER_ROOT_URL
  value: "%(protocol)s://%(domain)s:%(http_port)s/monitoring"

You can do this with any of the Grafana config values, which allows you to reuse the official Grafana docker image straight from the main registry.

Deploying Kubernetes on Azure

I recently looked into doing my first live deployment of Kubernetes, after having played successfully with minikube.

When trying to deploy Kubernetes in a public cloud, there are a couple of base options: you could start from scratch or use one of the turnkey solutions.

You have two turnkey solutions for Azure: Flannel-based or Weave-based. Basically, these are two different networking solutions, but the actual turnkey solutions differ in more than just the networking layer. I tried both and had issues with both, yay!! However, I liked the Flannel solution over Weave's straight away. Flannel's seems to configure and use Azure better. For example, it uses VM scale sets for the slave nodes, and configures external IPs and security groups. This might be because the Flannel solution is sponsored by Microsoft, so I ended up focusing on it over Weave's.

The documentation is not bad, but a bit short on some basic details. I did the deployment on both Ubuntu 14.04 and OS X 10 and it worked on both. The documentation lists jq and docker as the main dependencies. I found issues with the older versions of jq that are part of the Ubuntu 14.04 archive, so make sure to install the latest version from the jq website.

Ultimately, kube-up.sh seems to be a basic configuration wrapper around azkube; a link to it is buried at the end of the Kubernetes doc. Cole Mickens is the main developer of azkube and the turnkey solution. While looking around his GitHub profile, I found this very useful link on the status of support for Kubernetes in Azure. I hope this eventually lands on the main Kubernetes doc site.

As part of the first install instructions, you will need to provide the subscription and tenant IDs. I found the subscription ID easily enough in the web console, but the tenant ID was a bit more elusive. Although the tenant ID is not required for installations of 1.3, the script failed to execute without it. It seems the best way to find it is the Azure CLI tool, which you can get via node.js:


npm install -g azure-cli
azure login
azure account show

This will give you all the details that you need to set it up. You can then just go ahead, or you can edit the details in cluster/azure/config-default.sh.

You might want to edit the number of VMs that the operation will create. Once you run kube-up.sh, you should hopefully get a working kubernetes deployment.

If, for any reason, you would like to change the version to be installed, you will need to edit the file called "version" under the kubernetes folder set up by the first installation step.

The deployment comes with a 'utils' script that makes it very simple to do a few things. One is to copy the SSH key that will give you access to the slaves onto the master.

$ ./util.sh copykey

From the master, you just need to access the internal IP using the "kube" username and specify your private key for authentication.

Next, I would suggest configuring your local kubectl and deploying the SkyDNS add-on. You will really need this to easily access services.

$ ./util.sh configure-kubectl
$ kubectl create -f https://raw.githubusercontent.com/colemickens/azkube/v0.0.5/templates/coreos/addons/skydns.yaml

And that is it: if you run kubectl get nodes, you will be able to see the master and the slaves.

Since Azure does not have direct load-balancer integration, any services that you expose will need to be configured with a self-deployed solution. But it seems that version 1.4 of Kubernetes is coming with support for Azure equivalent to what the current versions boast for AWS and co.

Standing up a private Docker Registry

First of all, I wanted to recommend the following recipe from DigitalOcean on how to roll out your own Docker Registry on Ubuntu 14.04. As with most of their stuff, it is super easy to follow.

I also wanted to share a small improvement on the recipe to include a UI front-end to the registry.

Once you have completed the recipe and have a repository secured and running, you can extend your docker-compose file to look like this:

nginx:
  image: "nginx:1.9"
  ports:
    - 443:443
    - 8080:8080
  links:
    - registry:registry
    - web:web
  volumes:
    - ./nginx/:/etc/nginx/conf.d:ro

web:
  image: hyper/docker-registry-web
  ports:
    - 8000:8080
  links:
    - registry
  environment:
    REGISTRY_HOST: registry

registry:
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
  volumes:
    - ./data:/data

You will also need to include a configuration file for web in the nginx folder.

file: ~/docker-registry/nginx/web.conf

upstream docker-registry-web {
  server web:8080;
}

server {
  listen 8080;
  server_name [YOUR DOMAIN];

  # SSL
  ssl on;
  ssl_certificate /etc/nginx/conf.d/domain.crt;
  ssl_certificate_key /etc/nginx/conf.d/domain.key;

  location / {
    # To add basic authentication to v2 use auth_basic setting plus add_header
    auth_basic "registry.localhost";
    auth_basic_user_file /etc/nginx/conf.d/registry.password;

    proxy_pass http://docker-registry-web;
    proxy_set_header Host $http_host;                            # required for docker client's sake
    proxy_set_header X-Real-IP $remote_addr;                     # pass on real client's IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 900;
  }
}

Run docker-compose up and you should have an SSL-secured UI front-end on port 8080 (https://yourdomain:8080/).
If you have any improvement tips I am all ears!

PiGlow API: one small snap for humanity…

As my first step into snappifying, I have published a REST API for PiGlow (glowapi 0.1.2). I thought it might be a good first step and mildly useful for people wanting to set up build notifications, Twitter mentions, whatever you fancy!

You can find it in the webdm store…
Code is here: https://code.launchpad.net/~vtuson/+junk/glowapi

And here is how it works:
The PiGlow API exposes PiGlow on port 8000 of your board, so you can access it easily with POST requests.

Remember to do the hardware assignment, something like: sudo snappy hw-assign glowapi.vtuson /dev/i2c-1

API calls (method POST):

v1/flare
turns all the LEDs on at max brightness
v1/on
turns all the LEDs on at medium brightness
v1/clear
turns off all the LEDs
v1/legs/:id
turns all the LEDs in a leg (:id) on at a given brightness
(if not specified, a default setting is used)
params: intensity, range 0 to 1
e.g. http://localhost:8000/v1/legs/1?intensity=0.3
v1/legs/:id/colors/:colid
turns on one LED (:colid) in a leg (:id) at a given brightness
(if not specified, a default setting is used)
params: intensity, range 0 to 1
e.g. http://localhost:8000/v1/legs/1/colors/green?intensity=0.3
v1/colors/:colid
turns on all LEDs of a color across all legs
(if not specified, a default setting is used)
params: intensity, range 0 to 1
e.g. http://localhost:8000/v1/colors/green?intensity=0.3

ID ranges:
legs: 0 to 2
colors: green, white, blue, yellow, orange, red
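
As a quick smoke test from the board itself (assuming the service is listening on localhost:8000, as in the examples above), you could run:

curl -X POST "http://localhost:8000/v1/flare"
curl -X POST "http://localhost:8000/v1/legs/1/colors/green?intensity=0.3"
curl -X POST "http://localhost:8000/v1/clear"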