I <3 Playbooks

In my previous post, I discussed learning Linux.  Now I want to talk about one of the cool things I’ve done with it.  One thing that sucks about getting a new computer is going through all the program installation and configuration you need to do to really make it yours: installing your favorite browser, setting up your password manager, choosing a desktop background, etc.  This is especially true for developers, who have to manage multiple toolchains of applications in order to do their work.  Not to mention, we tend to be very picky about our text editors and IDEs.  I’ve also been in the unusual situation over the last year of having set up my Ubuntu desktop environment on five different computers.  That’s a lot of setup.

And when I’m setting up my computer, there is almost always something I forget until I actually need it, and then I have to both set it up and use it.

There is a better way: automation.  I can write scripts that do that for me, so when I get a new computer to set up, I can log in, type a few commands, and be good to go.  This is difficult in the Windows world, though projects like Boxstarter are making it better.  On UNIX-like systems such as Linux, this sort of thing is richly supported.  So last spring, I started working on automating my desktop setup using a technology called Ansible.  All the code is available on GitHub.

Ansible is a pretty cool technology: a way of declaratively describing your computer.  For example, normally if you want a piece of software, you get the installer and tell the installer to put the software on the computer.  With Ansible, you simply declare that the software should be installed on the computer, and Ansible makes it happen.  It tests whether the software is already installed and, if so, does nothing; if it’s not installed, Ansible installs it.
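To make that concrete, here is a sketch of what a task in a playbook can look like.  The package names and the file layout here are illustrative, not taken from my actual playbook:

```yaml
# Illustrative playbook fragment: declare that some packages are present.
# Ansible's apt module checks the current state and only installs what's missing.
- hosts: localhost
  become: true
  tasks:
    - name: Ensure everyday tools are installed
      apt:
        name:
          - git
          - zsh
          - cowsay
        state: present
```

Because every task is idempotent, re-running `ansible-playbook` against a file like this is a no-op for anything already in its desired state.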

It’s not perfect; there are certain things I haven’t automated yet.  These are mainly security-related, like setting up my SSH key and PGP key, connecting my password manager, and things like that.  I also haven’t focused (for the most part) on specifying how the computer should be set up outside of my own user account.  For example, I don’t do anything with partitioning or video card configuration.  I don’t think it would be useful to add those things, since the boxes all tend to have different hardware.

Here’s what it will do:

  • Installs all the software I use.
  • Copies my configuration files for applications like my shell (oh-my-zsh), git, and various editors
  • Sets my desktop theme and background
  • Sets up LaTeX
  • Sets up my creative workflow for some vector editing when needed
  • And most importantly: sets up cowsay

So when I need to set up my computer, I just type four commands to kick off the automation, and I’m pretty sure I can get it down to one.  Then I wait for it to complete, usually 10-15 minutes, and my computer is ready to be productive.

Any time I find a useful new piece of software, I update my playbook to include it and re-run the playbook.  It installs the software, and I’m ready to go.  Even better, I have three Ubuntu desktops I use almost daily.  I can use my automation to keep them synchronized, so my tools are available across all of them.  This makes me very happy.

.NET Open Sourced!

So the big news in the tech world today is that the .NET Framework has been open sourced. I was hoping this would happen at some point, and it’s great to finally see it realized. Microsoft has been headed in this direction for quite some time. I think the first projects to be developed out in the open were IronPython and IronRuby, which haven’t seen a lot of love recently. More recently, ASP.NET vNext has been out on GitHub, as have other core technologies like Entity Framework. Just recently, the new .NET compilers were open sourced. You could kind of see that this was in the works, but you couldn’t be sure. It’s just really great news to see it today.

The other big announcement is Visual Studio Community Edition. VS had been available for free in the Express editions, but they were limited: you couldn’t have multiple projects in one solution, and you couldn’t add extensions like ReSharper.

The Community Edition removes these restrictions. As far as I can tell, this is basically the Professional Edition of Visual Studio, now available to teams of fewer than ten, including, of course, individual developers.

This is great news for hobbyists, or even just programmers who want to hone their skills outside of their job. You can have complex projects and the entire VS experience without shelling out several hundred dollars (well, several hundred additional dollars after buying ReSharper).

It’s a great day to be a .NET Developer!

Falsehoods Programmers Believe

A very informative search to conduct is to start with the phrase “Falsehoods Programmers Believe” and see what Google suggests to finish the phrase.  Here are a few that I’ve gathered over time:

All of these articles are aimed at programmers who have to deal with systematizing life in such a way that it can be handled by a computer.  I really appreciate these articles because they highlight the complexities of life that most people have the privilege of ignoring.

How we’re using AmplifyJs

AmplifyJs is a very nice library for wrapping and handling AJAX calls and other data operations. It handles requests, stores objects locally, and provides a very nice pub-sub system. We abstracted all of our AJAX requests to use Amplify, and it has paid off hugely for us. This post is not meant to be a tutorial on how to use Amplify, but rather covers a few things we stumbled upon that are very useful.

Replace the Default Decoder

This was one of the first things we did.  We wanted to handle requests that errored in one place, so by replacing the default decoder, we were able to display a nice message to our users while still passing through our success data (this code is in TypeScript):

amplify.request.decoders._default = function (data, status, xhr, success, error) {
  if (data.status == "success") {
    success(data); // pass the payload through to the success handler
  } else {
    try {
      if (data.message == 'ValidationErrors') {
        messageHub.showValidationErrors(data.errors); // assumed helper on our messageHub
      } else {
        messageHub.showError(data.message);
      }
      error(data, status);
    } catch (err) {
      messageHub.showError("There was an unspecified problem with the request");
    }
  }
};

Subscribe to Ajax Events

Another thing we wanted to handle in only one place was the display of a loading animation or message.  Amplify made this incredibly easy through its pub/sub functionality.  I subscribed to two events to track how many requests were currently happening, showing or hiding a dialogue box accordingly:

var requestsHappening = 0;
amplify.subscribe("request.before.ajax", () => {
  if (requestsHappening == 0) { loadingDialog.show(); } // assumed dialogue helper
  requestsHappening++;
});
amplify.subscribe("request.complete", () => {
  requestsHappening--;
  if (requestsHappening == 0) { loadingDialog.hide(); }
});

This greatly simplified our request code.  We took both of these pieces of code and put them in a file called amplify_config.js.  That got included in our bundled scripts, and it also provided a nice central location to define our requests.  I highly recommend this as a way to simplify your AJAX code, even if you’re not using the more advanced features of the library.


A little over a month ago, I sat down and started working on something I’m calling Netherpad.  The idea is loosely based on Etherpad, though I’m implementing it on .NET using ASP.NET MVC 4, SignalR, and some TypeScript.

The main technology that allows this live, shared collaboration is called Operational Transformation.  It’s been on my list of things to learn for several years now, and I’m finally starting to wrap my mind around it.  What I’m finding is that it’s not as difficult as I thought it would be.  I found a simple JavaScript implementation of the algorithm called jinfinote.  Going through this code made it pretty easy to understand what was going on.  I had to modify it to use SignalR instead of straight WebSockets, and that required a pretty good understanding of the code, especially since SignalR uses GUIDs to identify users instead of straight integers.  This has been causing some pain, but I’m working around it.
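For readers new to Operational Transformation, here is a toy sketch of the core idea (my own illustration, not jinfinote’s actual code): when two users edit concurrently, each site applies the remote operation after transforming it against its own local operation, and both sites converge.

```typescript
// Toy illustration of OT for concurrent inserts (not jinfinote's API).
interface Insert { pos: number; text: string; }

// Adjust `a` to account for `b` having been applied first. Real systems
// also break ties (equal positions) deterministically, e.g. by user id.
function transformInsert(a: Insert, b: Insert): Insert {
  return b.pos <= a.pos ? { pos: a.pos + b.text.length, text: a.text } : a;
}

function apply(doc: string, op: Insert): string {
  return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}

// Two users edit "abc" concurrently; each site applies its own op first,
// then the transformed remote op.
const opA: Insert = { pos: 1, text: "X" };
const opB: Insert = { pos: 2, text: "Y" };
const site1 = apply(apply("abc", opA), transformInsert(opB, opA));
const site2 = apply(apply("abc", opB), transformInsert(opA, opB));
// both sites end up with "aXbYc"
```

The real algorithm also has to handle deletes and chains of concurrent operations, but this is the kernel that makes typing in one browser show up correctly in another.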

After nearly all day of working on it, I got the basic live collaboration working, as can be seen in this video:

It’s an ugly hack, but it works.

As the caption says, it’s ugly and requires some trickery in the underlying JavaScript, but it’s working.  It was a truly magical moment for me to type a key in one browser and see it show up in the other browser.

Moving forward, I’m working on porting the original JavaScript to TypeScript and will be mixing in some linq.js to make up for the fact that JavaScript doesn’t have a native dictionary object.  So far, I really like working with TypeScript; it’s a huge improvement over plain JavaScript.  Porting the code has also forced me to gain a good understanding of the algorithms involved.  I’m almost at the point where I can throw out the original implementation and rewrite it in a way that makes more sense from a .NET / TypeScript standpoint.

There is still a lot of work to do on the project, but so far, it’s pretty exciting.  Be sure to check out the code if you wish, but beware, it doesn’t really work right out of the box at the moment, though I will be working to make that happen soon.

Introducing Nenetics

This weekend I spent a little bit of time working on the C# genetic algorithm library I’ve created called ‘Nenetics.’ I created an introductory video that demonstrates where I’m at right now. Still have a long ways to go until I get where I want to be:

The Artificial Selection civilization seems to be behaving pretty well at this point, though as the genome size increases, it’s becoming difficult to get past the roughly 89% fitness hump. I think the problem there is in the breeding code, so I’ll be testing a few things with that. After that, it will be on to open simulations, which I’m really looking forward to.
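For readers unfamiliar with what a breeding step looks like, here is a sketch of the kind of operation involved: single-point crossover over bit-string genomes. This is a hypothetical illustration in TypeScript, not Nenetics’ actual C# code.

```typescript
// Single-point crossover: swap everything after the cut point to
// produce two children from two parent genomes.
function crossover(a: string, b: string, cut: number): [string, string] {
  return [a.slice(0, cut) + b.slice(cut), b.slice(0, cut) + a.slice(cut)];
}

const [child1, child2] = crossover("11111111", "00000000", 3);
// child1 === "11100000", child2 === "00011111"
```

Subtle bugs in code like this (off-by-one cut points, always reusing one parent’s tail) are exactly the sort of thing that can stall fitness around a plateau, which is why the breeding code is the first suspect.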

As always, the code is up on GitHub.

Trick for guessing hex color values

I was recently doing a bit of design work on Grease when I was reminded of an old trick I figured out that some people aren’t aware of.  As a young developer, hex codes for colors in HTML / CSS seemed completely strange to me.  I knew that #ffffff was white and #000000 was black, but for anything else, I needed to look the value up on the web, or fire up Photoshop or Paint.NET, which could give me the values I wanted (now there are also browser extensions for that sort of thing).  Then one day, while working on something, I started noticing a pattern.  That’s when I realized the ‘trick.’  Here it is:

Each group of two digits is an R, G, or B value, in that order, with digits running from 0 to F

Pretty simple, right?  So if you want something red, you make the ‘R’ digits F and leave the rest as zeros:

#FF0000
If you plug that into your style, you get the result: red.

You can do the same thing with green (G) and blue (B).  If you want gray, make all six digits the same.  As the numbers go higher, the color gets progressively lighter; the same holds for R, G, and B individually.  To use the example above, if you use, say, ‘5’ instead of ‘F’ (i.e., #550000), you’ll get a darker red.

In order to make different colors, you first need to forget what you learned in kindergarten.  You think blue and yellow make green?  Well, yeah, but here you’re starting with red, green, and blue.  You don’t have yellow, and you already have green.  So how do you make yellow?  You combine red and green: #FFFF00.   Here is the full set of simple combinations:

  • Red: #FF0000
  • Green: #00FF00
  • Blue: #0000FF
  • Yellow (red + green): #FFFF00
  • Magenta (red + blue): #FF00FF
  • Cyan (green + blue): #00FFFF
With those simple combinations, you are now on your way to easily getting colors ‘close enough’ to keep coding with a rough idea of where you’re at color-wise.  More importantly, once hex codes become more than magical incantations, you can start experimenting without needing to go to another source.  Want orange?  Maybe that’s some combination of red and green.  You can start experimenting and discover you were right: a good orange is #FF9900.
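The trick runs just as well in reverse.  A tiny helper (my own illustration, in TypeScript) that assembles a hex color from decimal R, G, B values makes the structure explicit:

```typescript
// Build a CSS hex color from R, G, B components in the 0-255 range.
// Each component becomes one two-digit hex group, in R, G, B order.
function toHexColor(r: number, g: number, b: number): string {
  const hex = (n: number) => n.toString(16).padStart(2, "0");
  return "#" + hex(r) + hex(g) + hex(b);
}

const red = toHexColor(255, 0, 0);      // "#ff0000" - pure red
const orange = toHexColor(255, 153, 0); // "#ff9900" - the orange from above
```

Note that 153 in decimal is 99 in hex, which is why #FF9900 reads as “full red, a bit more than half green, no blue.”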

If you run into trouble, all those other tools that you’ve used in the past are still available, but what I’ve found is that as I use this trick, I’ve needed to rely on those tools less and less.  Good luck!

Grease released

It’s been a year and a half since I’ve mentioned it here, but I’ve just made a general release of Grease. As you can see, it now comes with its own nifty website. In the next few weeks, I hope to release a Chocolatey package for it as well. I’m going to wait until Google stops saying the site distributes malware before I release that, though. (As a side note: aren’t the Google Webmaster Tools awesome?)

This release has been a long time coming. There hasn’t been a lot of development on the software, but it hasn’t needed it. I’ve made a few performance tweaks, and there are a few more things I have in mind that should help the app start faster. I’ll probably be posting on those as I get them implemented.

I did have to go through a philosophical transition. Grease has always been built to be simple. When I first started writing it, I did so with the specific purpose of not introducing features that I didn’t need and would just create bloat. I think I achieved exactly what I was looking for. A few months ago, though, I received a pull request from someone on GitHub. They had added a number of things, like the ability to change from random to ordered, and had made a few UI tweaks. I accepted the pull request, but when using the app, discovered that it had really lost what I was aiming for. Tonight I removed those additions and returned the project to its original aims. In the future, I will probably only be aiming for performance tweaks, especially when dealing with large libraries.

If this kind of app interests you and you find it useful, please let me know! You can reach me on twitter @charlesj.

Adventures in building better Authorizations

Warning: This post contains ugly code and half-ideas.

The problem

I have a confession: I’ve never used the built-in ASP.NET Membership framework.  Early on, it was a pain trying to figure out how to integrate it with your own database, and later on, I was so used to rolling my own that I didn’t worry about it.  A little more than a year ago, though, I realized this was pretty foolish, so I decided to take another look.

Unfortunately, what I found didn’t suit me.  It may work for some applications, but not for what I’m building.  I needed something with finer control.  Here are a few things I wanted that it didn’t have:

  1. Ability to combine webforms and windows authentication (some users from each)
  2. Ability to store passwords using the bcrypt hashing algorithm (for future safety).
  3. Finely tuned permissions that only give access to a subset of resources in a resource type.

The last one is especially important.  In the basic applications you see in demos, it’s not important at all.  If you have a blog, most people have read access and few people have write access.  In blog demos, users that have read-write access to only a subset of blog posts don’t come up very often.  But this is a very common issue in every single web app I’ve ever developed.   Having an “EditBlogEntry” role doesn’t cut it; that role needs to be able to check that a given user can edit a specific blog entry.

Recently, I’ve set out to tackle that problem.

What I’m working on

It became apparent to me pretty quickly that to get something like this to work, each role would need to be able to execute some unique code to run the logic ensuring that a user can access what they’re trying to access.  I call these “Permissions.”  Permissions will eventually be assignable to both individual users and groups of users, but to keep it simple, I’m starting with users only.

Here are some of the requirements:

  1. Users (and groups) can have many permissions (many-to-many in the db)
  2. Permissions will have unique checking code (therefore, each permission is a different class).
  3. A filter will be applied per-controller or per-method (in MVC) and will know which permission that method or controller requires (most likely as a string).
  4. The filter will create an instance of the permission, pass in the values needed, and then execute some function to find out whether the user is authorized.

I started writing some code.  One issue I realized right away: EF expects one table per class.  With each permission being a different class, and an arbitrary number of classes needed, this would be a problem.  Instead, I made all permissions implement a common interface and store only certain information from each class in the database.

Currently, here is the website permission:

When the application starts up, it reflects over itself to find all classes that implement IWebsitePermission, and it inserts a record into the permissions database for each one (if it doesn’t already exist). The database stores the permission’s name, option, whether it has an option, description, and System.Type.

To give you an idea how this permission is implemented, here is a partial implementation that checks to see if a user can edit a specific webpage:

Okay, there is a lot of bad code in this file. In my defense, for now I’m just trying to get the idea working; then I’ll work on fixing the rest.  Above all, the Entity Framework code in the snippet should be outlawed.  And there’s no exception checking!

So far, I’m pleased with how this method is turning out.  It seems like I’m on the right path, but I’ve got some major architectural issues:

  1. Where should the permissions be hosted?  In the Models namespace?  In the Website namespace?
  2. Is there a better way to pass in the request information and logged in user information?
  3. What is the best way to break up permission access?  Does there need to be a permission for add/edit/delete and every sub-modification (e.g., adding a version to a webpage)?

I have a long way to go here, but this idea has been floating around in my head for a few years now, so it’s nice to start making some real progress.