
Tools

If you want to start a war between coders, just ask them to start describing their tools. Text editors, programming languages, and even platforms form entrenched camps of dispute. Even so, like the great religions of the world, we cannot help but proselytize about the virtues of our one true way. This post is no different.

I’ve been coding a long time and I’ve had all that time to become embittered and crotchety about my development environments. I won’t lie, when I see someone coding in Textmate, I judge. Sure, there’s an element of jest, but I do it all the same.

Not all tools are created equally. This is my set up. It is the one true way. If you’re doing something else, may locusts descend upon your backups, and may your keyboards be sticky.

On a serious note, though: Using a good set of tools, whether hardware or software, can make an enormous difference in a developer's speed, accuracy, and even happiness. I've come to these through a rather convoluted past, and they work really well for me. I will happily promote them to others, but I think it's more important that you have tools than that you have my tools.

If you're reading this and you're a coder and you haven't spent time really customizing your environment, setting things in just the right way and just the right place, and using just the right brushes, then you need to stop what you're doing and get to it. I don't care how brilliant your mind is; if you're writing in Notepad or TextWrangler, then you're not at your best.

The list below is a snapshot of what I'm doing now. I'll give the reasons for each and try to highlight some of the benefits. I hope some of you will find this interesting and useful. Perhaps you'll even find something here to add to your own collection. If you have questions or comments, let's Disqus.

Dvorak Simplified Keyboard

The Dvorak Simplified Keyboard is the first fundamental difference in my computer interface. For those unfamiliar, the Dvorak key-map arose in response to the outdated keyboard layout of the time, QWERTY. The history in short is this: QWERTY was designed in the age of typewriters, when speed led to jams. The letters used most often were spaced apart to avoid these mechanical problems. Unfortunately, while we've outgrown those problems, their patch-work solution has remained. QWERTY became the de facto standard, and its proponents have carried it forward through the ages.

Dvorak was created a bit more scientifically, with a focus on speed and conservation of movement. The most commonly used keys (in English) were placed on the home row. The left-hand home row contains all the vowels, for instance. Again, unfortunately, time was not kind to Dvorak. Like Betamax, it has lost out. Still, there are users, and quite a number of them. It is available on all major operating systems, and those of us who know it cannot live without it.

My typing speed is now much greater than it was in QWERTY, but the real reason for my adoption was health rather than eking out those last few WPM. I had begun to develop repetitive stress injuries in my wrists from prolonged typing. Dvorak has all but cured that issue.

Now I won't lie: switching from QWERTY to Dvorak was not easy. It took me at least a month to make the switch, during which time I was not working and constant typing was not necessary. I also lost my ability to type in QWERTY as I learned the new layout. I know some folks have managed to hold on to both, but not me. I can type my name, common passwords, and that's about it. Anything else requires me to hunt and peck.

Mac/UNIX

At the heart of my operating systems is the UNIX philosophy, a tiny piece of wisdom:

Even though the UNIX system introduces a number of innovative programs and techniques, no single program or idea makes it work well. Instead, what makes it effective is the approach to programming, a philosophy of using the computer. Although that philosophy can’t be written down in a single sentence, at its heart is the idea that the power of a system comes more from the relationships among programs than from the programs themselves. Many UNIX programs do quite trivial things in isolation, but, combined with other programs, become general and useful tools.

In short, “do one thing, and do it well.”

With a UNIX based operating system, you gain the power of composition. You no longer just do a task, but have the power to chain them together, feeding the output of one thing into the input of another. Before you know it, little commands like sed, awk, grep, and so on become instruments of magic.

find "${SRC}" -type f -exec grep -H 'TODO:' {} \; 2> /dev/null | grep -v -e TODO.md -e README.md -e pre-commit | awk '{for (i=1; i<=NF-1; i++) $i = $(i+1); NF-=1; print}' | sed -e "s/.*TODO:[ ${TAB}]*//" | sed -e "s/^/- /" >> $TODO 2> /dev/null

Take the line above as an example. It's one line of a script I'm using in a pre-commit hook for my latest front-end web boilerplate. Whenever I commit code to my repo, this spiders my source directory and finds any TODO comments I've littered throughout the code. It parses them and returns a markdown-formatted list, which the script then outputs to a README file inside the repository. With some basic use of built-in utilities I can compose a sophisticated script that keeps an up-to-date TODO list for my active projects. How neat is that?

Windows is getting better at this sort of thing through projects like Cygwin. Apple means nothing to me, but their decision to buy out NeXT and use it to create OSX was fantastic. That was the game changer that saved the operating system, and it's the only reason I use their products now. Build a consumer-friendly UI on top of the UNIX philosophy and you combine ease of use with true power.

dotfiles

Running OSX or Debian or Ubuntu or whatever is great, but there’s a lot of customization that can be done to make things more personal. The first step of that is your dotfiles.

These are my dotfiles.

I define my environments, common aliases, and even some helper functions, along with Git settings and shortcuts and my vim customization (coming soon). I'm very proud of my dotfiles, from the organization and installation to my prompt.

bin

The other half of my working OS is my collection of binfiles that I carry with me from machine to machine.

This is my bin repo. They tie in to my dotfiles quite closely. Some of these are handy things I use all the time, and others are extremely specific tasks I do for work that should never, ever be run unless you know what you’re doing. It’s like a fun minefield. Enjoy!

tmux

I work predominantly at the command line. I build my projects there, use source control, and—as you’ll see in a moment—do my development there. Sometimes it’s necessary to do more than one thing at a time. I could make a new tab, but there are better options. The best option I’ve found for session management is tmux. It’s the inheritor of the old screen program and it enables you to create sessions, windows and panes, jump around, re-size, and dance across your system with ease.

Right now I am in tmux writing this post. I am in the second window, first pane, of the session called “personal”. The pane to my right is running make devserver, a script that runs a development webserver and also watches the file system for changes to this blog, re-compiling it as they happen. It is a part of Pelican, my blog platform, which I've written about in the past.

I have context to my activities, whether it be work or play. This is thanks to tmux. Like everything else, tmux customization is key.

Here is my tmux configuration. It’s a part of my dotfiles repo.

vim

Finally we come to the most important part of my toolbox, vim. If arguing developer tools can start a war, arguing with a vim (or emacs) user must signal the end of days.

If you don’t know what vim is, shame on you. Also, go read this explanation in six kilobytes. I couldn’t possibly do a better job than that.

Suffice it to say, vim is what makes my system work. The reason I can develop entirely in the console is because I have a fully featured IDE right there at the command line. I have more power at my fingertips without a mouse than pretty much anyone I’ve encountered in my career. Sublime Text 3 is a great editor. PHPStorm is a great editor. And yet they’re worthless next to vim (or emacs. Seriously… not gonna fight you guys).

I code in vim. I author in vim. I take notes in vim. I’ve done presentations in vim. I rebind my keys in first person shooters based on the HJKL navigation in vim. I play vimgolf. I’ve gotten on the high score board for it too.

The best thing I can say for vim is that it makes my desires transparent. I want to move this block of code to another area, done. I want to mark this particular word so I can jump back to it later, even from another file… done. I want to reverse every line of the file (why? no idea): That’s as easy as typing :g/^/m0.

Vim isn’t easy. It’s a power tool. If you haven’t bothered to learn a real editor yet, or if you’re just starting out your career, then do yourself a favor and master vim. I’m serious, it will change your life.

You can do pretty much anything in vim out of the box, but if you want to simplify some things or don’t want to code it yourself, there’s probably a great plugin that someone has made already to help you. I’d recommend hitting up VimAwesome to see what’s popular. I’ll call out a few of my favorites below as well.

:wq


Regular Expressions

This post is a HOWTO guide I wrote for my development team. I thought it would be worth sharing here as well.


Regular expressions, or regex, are a symbolic language that can define or identify a sequence of characters. This language can then be used to test, match, or replace a given body of text.

  • By test we mean it can evaluate whether text matches the regex we defined. This is typically used for validation of things like email addresses, zip codes, phone numbers, and so on.

  • By match we mean it will evaluate parts of a body of text and return back the portion that matches our regex. We use this to parse text, grabbing the bits we want and discarding the rest.

  • By replace we refer to a combination of match and substitution. We match something, then replace the matched portion with new content, updating the original string.
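
Here is a minimal JavaScript illustration of all three operations (the sample string is mine):

var regex = /cat/;
regex.test("concatenate");            // true: "cat" appears in the string
"concatenate".match(regex);           // the matched portion, "cat"
"concatenate".replace(regex, "dog");  // "condogenate"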

Getting Started

In the examples below I will attempt to show you sample regular expressions in three line groupings. The top line represents the string we are attempting to test/match/replace. Our regular expressions will appear below it between the /.../ characters. Finally, the output of the operation will appear on lines starting with >. For example:

"Sample String"
/SampleRegex/
> Output

Note: The > character is just there to show you the result. It isn’t part of the result itself.

You can go to this page to try out any of these regular expression examples or make up your own. Simply copy the string we are trying to match to the big section on the bottom.

Note: Don’t copy the string’s surrounding quotes. It may break later examples.

Then, copy your regex, or retype it onto the top line.

Note: When copying the regex, the surrounding slashes will disappear in the testing tool.

Basic Structure

Regular expressions vary by program, platform, and language, but not by very much. In the examples I’m going to teach you, you will see almost no difference if you are using these in Unix’s sed command or using them in a Windows copy of Excel.

A typical regex looks something like this:

/#?[0-9A-Fa-f]{6}/

Most of you are probably looking at that and seeing:

/ABunchOfStuffSmashedTogether/

That’s where most people stop when it comes to regular expressions. They see the gibberish and say, “That’s way over my head.” I’m here to tell you that despite it looking complex, regex is actually extremely simple.

Regex, like most symbolic languages, treats each character as if it were a whole word. When you learn the words (and there are not many), the giant string of gibberish becomes an elegantly simple sentence. In the example above, the sentence would read in English:

Look for an optional pound sign followed by exactly six characters that can be lowercase or capital A through F, or a digit 0 through 9.

This is a regular expression that matches a 24-bit hexadecimal color value (think RGB). As you can see, writing the rules in regex was much simpler than doing so in English.
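
Run against a CSS-ish sample string (one I made up), it behaves like this:

"color: #1A2B3C;"
/#?[0-9A-Fa-f]{6}/
> #1A2B3C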

Now, let's take our example and break it apart into its components to see what each one does.

  • /.../ - The outer forward slashes denote the start and end of a regular expression. This is the format you'll see in JavaScript, ActionScript, sed, vi, Sublime Text, and many more. VBScript in Excel often uses "..." instead.
  • #? - In this case, the # sign means a literal pound sign. The question mark after it means that it is optional. If it’s not there, that’s ok too.
  • [...] - Everything between square brackets is summed up as a single character. This enables us to say, “the next character will be…” then lay out all the rules for it inside the brackets.
  • 0-9 - A number between 0 and 9, inclusive
  • A-F - A character between capital A and capital F
  • a-f - Lowercase is ok too
  • {6} - Whatever the last character rule was, repeat it exactly 6 times

That is a fairly complex regular expression. If you didn’t follow along for everything, that’s ok. We’ll go over each rule in sequence in a moment. For now just try to understand that regex isn’t made up of sorcery. Each character has a rule, and if you learn them, that’s all there is to it.

Matching Literals

The simplest way to match something using Regex is to use a literal string.

For example:

"The quick brown fox jumps over the lazy dog."
/quick/
> quick

"The quick brown fox jumps over the lazy dog."
/jum/
> jum

"The quick brown fox jumps over the lazy dog."
/moo/
>

When searching a string for a literal, we get back either the literal we just searched for (if found) or nothing (if not found). Normal alphanumeric characters are automatically literals in most versions of regex. The Perl programming language is one exception, but none of you are using that, so I'll just move on.

You can probably see that using regular expressions with literals isn’t very helpful. It works, but it doesn’t get you anything a normal find wouldn’t get. Still, it’s important to know because we can use literals with any of the other techniques you’re about to learn.

Optional, Zero or More, One or More

Wildcards are very common in search parameters. There are three types of wildcards in regex:

? - Optional. You've seen this before. It means that whatever character preceded it is optional. We will match the string whether it is there or not.

"Spam"
/p?am/
> pam

"Sam"
/p?am/
> am

* - Zero or more. This will match the string whether there are 0 of the preceding character or 5000. Any number is ok.

"aaaaah"
/a*h/
> aaaaah

"Booh"
/a*h/
> h

+ - One or more. Just like the asterisk, but we require at least one character to match.

"aaaaah"
/a+h/
> aaaaah

"Booh"
/a+h/
>

See the subtle difference between * and +? Good!

Any character

Sometimes vagueness is a good thing. What if you want to match any character at all? In that case the . character is your friend. Unlike most literals, the . doesn’t just match a period, it matches any character at all.

"Superman"
/.../
> Sup

"Superman"
/S.p.r.a./
> Superman

"Supercalifragilisticexpialadocius"
/.*/
> Supercalifragilisticexpialadocius

In our second example I’ve mixed the . character with the * wildcard we learned in the last section. This regular expression matches zero or more of any character!

Sets

Ok, now we’re getting to the core of regex. Remember those square brackets from our example in the beginning? Those were a set, and sets are what make regex amazing. They let us define a bunch of rules that all apply to a single character.

While we are in between brackets, all letters or special characters are interpreted as valid options for the single character that the bracket represents. For instance, if we wanted to match only even numbers we could write [02468]. If we wanted to match any lowercase letter we can cheat a bit and use a range like [a-z]. Or maybe a combination of the two like [a-z02468]. Let's see how they match in some examples:

"Pennsylvania 6-5000"
/[0-9]/
> 6

"Pennsylvania 6-5000"
/[0-9]+/
> 6

"Pennsylvania 6-5000"
/[0-9].[0-9]+/
> 6-5000

"Pennsylvania 6-5000"
/[A-Za-z0-9]+/
> Pennsylvania

Notice how in the second example we still only match a single six despite looking for bigger numbers. Regular expressions only find the first occurrence of a match (by default), and the - prevented our regex from extending to the 5000. In the third example we accounted for the - character and were able to find the whole number section. Finally, in the last example, note how the space after Pennsylvania ended our match.

If we want to match special characters, spaces, and the like in our brackets, we need to escape them. That's a term that means prefixing them with a backslash (\). By prefixing those characters, we tell the regex to treat them as literals and not use them for their special purpose. To match all of our sample string using this method, we might write something like:

/[A-Za-z0-9\ \-]+/

Notice the first \ has a space after it. We can even escape empty spaces! Now all of these characters are considered valid matches, and we are looking for one or more of them.

Negative Sets

What if we wanted to match all characters except for one? That would be an enormous bracket, wouldn’t it? What if there were a shortcut? Regex solves this for us as well.

Introducing the ^ character! The caret serves two purposes in regular expressions depending on whether it is inside a square bracket or not. In this section we’ll just cover what happens when it is inside the brackets.

"abcdefghijklmnopqrstuvwxyz"
/[^g]+/
> abcdef

By putting the caret inside the square brackets as the very first character it means that anything in those brackets does NOT match. It is the complete opposite of a normal set. Handy!


Break

Let's take a little break and review what you've learned.

  • Literals
    • Do this by just typing the chars, and using \ to escape regex symbols you want to match.
  • Any Character
    • Use the . (dot) to match any one char.
  • Sets
    • Use […] to make a set, including ranges of characters to match like [0-9]
  • Negative Sets
    • Put a ^ inside a set and it inverts: [^a-z].
  • Optional Modifier
    • Put a ? after a regex symbol, character, or set and it will make that thing optional.
  • One or More
    • Put a + after a regex symbol, character, or set and it will match one or more of them.
  • Zero or More
    • Put a * after a regex symbol, character, or set and it will match zero or more of them.

Beginnings and Endings

Sometimes you want to match something at the very beginning or very end of a string. This happens a lot when testing for validation, but also when trying to grab the first or last word with a match. There are two characters responsible for this behavior, ^ and $.

Remember when I mentioned that the caret worked differently outside of a bracket? Here it is! The caret marks a search as applying only to the beginning of a string.

"Somewhere I have never travelled, gladly beyond"
/^Some/
> Some

"Somewhere I have never travelled, gladly beyond"
/^travelled/
>

Despite travelled being a valid match, it is not at the beginning of the string. Therefore this regex returns nothing.

We can test the end of the string by putting the $ character at the end of our regular expression.

"Somewhere I have never travelled, gladly beyond"
/beyond$/
> beyond

"Somewhere I have never travelled, gladly beyond"
/travelled$/
>

Since travelled isn’t at the end of the string, it doesn’t match either.

You can use both of these together to test an entire string in its entirety. This is very common with validation tests. Here is a simple zip-code validation example. We're almost ready to build something like this ourselves!

/^[0-9]{5}([- \/]?[0-9]{4})?$/
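
In JavaScript, for instance, that test might look like this (the sample inputs are mine):

var zip = /^[0-9]{5}([- \/]?[0-9]{4})?$/;
zip.test("12345");       // true
zip.test("12345-6789");  // true
zip.test("1234");        // false, only four digits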

This or That

Sometimes you need to match one thing or another. Maybe your string is valid if it ends in .com or .org. The | operator will do that for you.

In this example, we want to write a regular expression that will match either strings that are all alphabetical or all numeric, but not ones that do both. See if you can pick this regex apart into its pieces and follow along.

/^[A-Za-z]+$|^[0-9]+$/

Here’s an example that will match either .com or .org:

/\.com|\.org/

Some special characters

You have quite a library of tools at your disposal and you can now accomplish most simple tasks with regex. This section aims to simplify some of those tasks by introducing a few special characters to make your lives easier.

\w - Word character. Matches any character that is alphanumeric or an underscore.

\d - Digit character. Matches any digit 0-9.

\s - Whitespace character. Matches spaces, tabs, or line breaks.

\W - NOT word character. Matches anything that isn’t a word character.

\D - NOT digit character. Matches anything not a digit character.

\S - NOT whitespace character. Matches anything not a whitespace character.
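
For example, combining \w, \s, and \d (the sample string is mine):

"Room 101"
/\w+\s\d+/
> Room 101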

In addition to these shortcut characters, there are a few special characters that don’t have a specific keyboard representation. For these we use the special character syntax as well.

\t - Tab.

\n - New line.

\r - Carriage Return.

\xFF - A hexadecimal character represented by its code.

\\ - A backslash. You need to escape a backslash in order to test it because a single backslash is reserved for indicating the start of a special character. This rule follows for other special regex characters, such as: .+*?^$[]{}()|/

Specifying a number of characters

Occasionally you need an exact number of characters. In the case of zip codes, you either need 5 or 9 digits. If you need to specify the number of characters, curly braces will denote this.

/.{4}/ - will match any 4 characters

/\d{5}|\d{3}/ - will match either 5 or 3 digits (curly braces take a single count, so alternation handles the two lengths, with the longer option listed first)

Grouping

The last piece of regular expressions I want to cover is grouping. By wrapping all or part of your expressions in parentheses you can match not only the entire string, but smaller portions as well.

In a real world example from JavaScript, we have grabbed the css class names off of a button. The string we have looks like this:

button button_2 draggable index_15

We want to get the number off of the button_2 portion of this string. Let's start by searching for the first occurrence of one or more digits.

/\d+/
> 2

But what if those classes might not be in that order? What if index_15 happened to be first? We need to look more carefully for the right class name.

/button_\d+/
> button_2

This gets us the right class no matter what, but we have too much information. We only want the number, not the whole word.

/button_(\d+)/
> button_2, 2

By wrapping part of our regular expression in parentheses, that portion is returned as an additional match. In all of our previous examples we were getting back matches that were a list with only one item. Once we start adding grouping to our regex, those lists will grow. In JavaScript, this list is an Array, and we can easily grab the second item from it. Your various programs may have different ways of getting at these lists.
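
In JavaScript, for example, that might look like this (the variable names are mine):

var classes = "button button_2 draggable index_15";
var result = classes.match(/button_(\d+)/);
result[0];  // "button_2", the full match
result[1];  // "2", the capture group we want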

Global Searches

Regular expressions will capture only the first match by default. They can be configured to act globally, though, and return all matches in a string. This feature is supported by almost every implementation of regex, but often in different ways. The most common way is to append a g after the end of the regex.

/.../g
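
In JavaScript, for instance, the g flag changes what match returns (the sample string is mine):

"a1 b2 c3".match(/\d/);   // only the first match, "1"
"a1 b2 c3".match(/\d/g);  // every match: ["1", "2", "3"]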

Ignore Case

If you want your regex to ignore the case of characters it is matching, you can usually dictate that in a similar way to how you make the search global. Instead of appending a g to the regex, you should append an i.

/.../i
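
For example:

"Hello, World"
/hello/i
> Hello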

Greedy vs Lazy Searches

By default, * and + are greedy: they match as many characters as possible. Following them with a ? makes the match lazy instead.

*? - Matches 0 or more of the preceding token. This is a lazy match, and will match as few characters as possible before satisfying the next token.

+? - Matches 1 or more of the preceding token. This is a lazy match, and will match as few characters as possible before satisfying the next token.
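
For example, against an HTML-ish sample string of mine:

"<b>bold</b>"
/<.+>/
> <b>bold</b>

"<b>bold</b>"
/<.+?>/
> <b>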

Group without creating Capture Group

At times you may want to use the grouping feature of regular expressions but do not want it to return a new capture group. For instance, if you want to look for either the word “center” or “centre”, you might do something like this:

"center"
/cent(re|er)/
> center, er

But by using a special syntax in the group, you can omit the second capture group.

"center"
/cent(?:re|er)/
> center

Advanced Techniques

Positive Lookahead

Matches a group after your main expression without including it in the result.

/(?=ABC)/
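
For example, matching foo only when bar follows (the sample string is mine):

"foobar foobaz"
/foo(?=bar)/
> foo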

Negative Lookahead

Specifies a group that cannot match after your main expression (i.e., if it matches, the result is discarded).

/(?!ABC)/

Positive Lookbehind

Matches a group before your main expression without including it in the result.

Note: JavaScript cannot perform lookbehinds.

/(?<=ABC)/
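
In engines that support lookbehind (per the note above, not JavaScript), for example:

"It costs $100"
/(?<=\$)\d+/
> 100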

Negative Lookbehind

Specifies a group that cannot match before your main expression (i.e., if it matches, the result is discarded).

Note: JavaScript cannot perform lookbehinds.

/(?<!ABC)/

Pelican

This past weekend I finally shed my WordPress blogs and moved into the world of static site publishing. This site and my personal blog are now built using Pelican, a Python-based static site generator.

What does that mean, exactly? Well, for one thing, it means I no longer have to worry about someone exploiting a vulnerability in my server-side code to run malicious code or take over my website. My blogs may not be very popular or appealing targets from that perspective, but it's best to be safe anyway. Additionally, since the server no longer has the burden of compiling my web pages whenever they are requested, the site content serves much faster and with fewer CPU cycles. Since I use cloud hosting and pay based upon my traffic and CPU usage, this actually saves me money! (Not a lot, but some.)

The experience of migrating content from WordPress to Pelican wasn't very difficult. Setting up Disqus for comments was a breeze as well, though it did take a bit of vim work to convert the URLs to their new locations. All in all, it was about 4 hours of work for both blogs, much of which was spent cleaning up small formatting errors in the generated files.

I'm really looking forward to building this blog from the command line from now on. Now that I've written this post (in Markdown, mind you) I can build, test, and publish it by typing:

make html
make serve
make rsync_upload

Open Source Science

Mark this one down in the list of cool ideas I’ll never follow through with.

“GitHub meets Academic Publishing.”

Here’s the full idea: We create an open system for people to share their scientific studies by providing them with all the tools, visualizations, and data warehousing necessary to truly host the science. Then, the community can rate the project, duplicate the results, grow from it, or reference it in another work. The interconnectedness that’s already inherent in academic publishing becomes a network in itself.

There will obviously be studies that don’t measure up to the rigorous evaluation of peers and those that are above the heads of many folks. To solve this, we first invite a group of verified scientists. Who are these folks? They’re people that have been published in academic peer reviewed journals in the past. This status gives their opinions on their peers extra weight. What they say has vastly more influence than the average Joe. It’s not hopeless for the rest of the world, though. When a verified scientist rates another project highly, the authors of that work gain reputation. They in turn can raise the reputation of others they approve of. As you move farther from the verified folks, the effect is lessened.

As the system grows, so too can the list of verified scientists and their spheres of influence. Everyone can benefit from what we all recognize as good science, and all the results are free and open to the public.

Iteration ideas: teams, university/college connections, certificate or degrees to add to reputation, invite system for colleagues, bounties on challenging tasks/experiments, bounties on verification through independent duplication of results. Science-on-demand.


git changelist

Today I needed to get a list of all the files that had changed in a git repository over the last two weeks. I played around with some great git commands, awk, and sort to make the following git alias (toss it in the [alias] block of your .gitconfig):

changelist = "!git whatchanged --since='\$1' --oneline | awk '/\^:/
{print \$6}' | sort -u; \#"

To use:

git changelist "2 weeks ago"

You can use a lot of different human-readable date formats in there.

Enjoy!


Chess Ratings

Chess Rating

See it in action

The development team where I work will soon be celebrating the launch of our new company website with a good old-fashioned chess tournament. Now, like any good development team, we have our fair share of geeks; geeks with interests in a wide variety of geekery. One such geek is a big fan of fantasy sports, so we tasked him with organizing said tournament. As a result, we will be doing a round-robin tournament to establish a relative ELO rating for each player, then use these ratings to seed a double-elimination bracket tournament (I'm about 60% sure I got the names right for all that stuff). Anyway, the key component for the round robin is having a method to establish our ratings.

The ELO rating system is the most widely used in the chess world, and with good reason. When you have a sport played by some of the greatest minds in the world, it only makes sense to have an overly complex and highly accurate way of showing relative strength. In fact, it's so impressive that just about nobody outside of official chess organizations actually does it properly. The rest of the world kind of estimates an ELO, or approximates it. I am happy to be one of those folks.

I have neither the time nor the care to implement a 100% accurate chess rating system. All I need for the tournament is something that works decently well. So, I built it!

My chess ratings page lets you enter the starting rating for each player and pick the outcome of the game, and it will show you the new ratings. How do I do this? Well, I use a formula I lifted from RedHotPawn.com! I didn't steal their code or anything. I just followed the instructions on their FAQ (mostly).
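
For reference, the standard ELO update looks something like this JavaScript sketch (the K-factor of 32 is my assumption here, and the page itself follows RedHotPawn's FAQ, which may differ in the details):

// Expected score for player A against player B, based on their ratings
function expectedScore(ratingA, ratingB) {
  return 1 / (1 + Math.pow(10, (ratingB - ratingA) / 400));
}

// New rating after a game: actual is 1 for a win, 0.5 for a draw, 0 for a loss
function updateRating(rating, expected, actual, k) {
  k = k || 32; // assumed K-factor
  return Math.round(rating + k * (actual - expected));
}

// Example: a 1500-rated player upsets a 1700-rated player
var expected = expectedScore(1500, 1700);         // roughly 0.24
var newRating = updateRating(1500, expected, 1);  // 1524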

Why don’t you go try it out! And if you’re interested in my algorithms and junk, here’s the main code.


Boilerplates

I’ve been doing a lot of work this past week developing my own custom boilerplate for developing HTML5/CSS/JS projects. There’s a ton out there already (e.g., HTML5 Boilerplate) but I wanted to brew up my own environment just the way I like it. More than that, I wanted it to use some of the cool cutting-edge development tools out there, like SASS, Compass, l10n.js, LiveReload, and Sprockets. The result has been extremely gratifying and has already proven itself effective.

Sass-Boilerplate (a name I gave it before it blew up with a bunch of other cool tools) can be found over on my github account. Pull requests and issues welcome!

There is an extensive README file with installation instructions and a few helpful usage guides. It has a few dependencies: ruby, rubygems, bundler, command-line-tools (OSX only), LiveReload browser plugin (if you want to use LiveReload). All of that is written up and linked over on github. Try it out and let me know what you think!

Oh, and as an example, tomasino.org was recently rebuilt using the boilerplate. Feel free to dive in to the source over on github for that site as well!


Reactive Design

Responsive Web Design is a huge movement in the web development world right now. Having one site that will automatically adjust to your device based on its display size is quickly becoming the norm. When I find myself on a website that hasn't thought about its mobile experience, I automatically make a frowny face. Thankfully, a lot of great HTML frameworks are giving us easy options to implement this type of development quickly and easily without adding unnecessary overhead to our clients' budgets. The world is looking pretty swell for responsive.

Color Wheel

But what about other aspects of automatic design change? Responsive design gives great thought to changes in layout based on size, but what about changes to design based on content? What if your website could react to the content it was loading and adjust its entire color palette to properly keep the color relationships as designed?

This lab project is a proof-of-concept of just that idea. This is a reactive design based around, in this case, an image. See this link for a working example. Click on the images to adjust the background color and text color automatically. The background color is generated by simply pulling out the average color of the image. The text color, in this case, is an automatically generated complementary color with a complementary luminosity.

If you were to look at a color wheel, the complementary color would be the one directly across the circle from the color you start with. This is one of many forms of color relationships that generally work well in designs. There are many great tools online where you can discover all sorts of color relationships.

Luminosity is a way of talking about a color's brightness. If you think about an old black and white film, you know everything is actually in color, but you can't see it. You can, however, see different shades of brightness. These shades (and tints) are part of a color all the time, but you don't think about them unless you see the color's saturation removed. On the right is an example of two colors with their respective luminosities revealed below. For design purposes, changing luminosity is very important. In my sample page, I am offsetting the luminosity of the text from the background to make sure there's always a good contrast.

That's enough about color theory. Let's talk about code!

Here’s the class I am using to do the majority of the work in my sample page.  The color class handles converting between different color modes (RGB, HSL, HSV, HCY) automatically so we don’t need to handle all that math ourselves. You can probably skip past most of this unless you’re into color math. It’s pretty neat stuff, but looks a lot harder than it is.

Color.class.js

Now that I have the helper methods in place, let's look at the HTML.

index.html

And finally the JavaScript that will actually do the test logic.

scripts.js

The first function, getAverageRGB, uses HTML5's canvas object to pull out color information from the image. I'm really not confident in doing this in a production environment. There are two main problems with it. First, although I love the canvas object, it really isn't everywhere just yet. You'll end up writing backups for this to work in older browsers, and I've never been a fan of writing code twice. ~~Second, there is some really strange behavior with browser cache. You see, the canvas object won't load images cross-domain, which means if you try to use this with an external image from, say, Flickr, it will throw errors. In my initial test, I had an image in the same folder as my HTML thinking that would be just fine (it is on the same domain, after all). When I tested the page, it worked the first time, but after reloading it threw a security error. I believe, though I haven't fully tested this, that when the page was trying to load the image from cache, it was acting as if it were a different domain.~~

Edit: It turns out my mysterious cache bug wasn’t cache at all. I was trying to process the images too quickly, before they had fully loaded. I wrapped my code in a $(window).load function and everything is fine again.

In any case, the canvas method I’m using is just one of many. There are plenty of server-side methods of getting color information from an image.
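
For the curious, the canvas approach boils down to something like this (a simplified sketch, not the actual getAverageRGB, and it assumes a same-origin, fully loaded image):

function averageColor(img) {
  var canvas = document.createElement('canvas');
  var context = canvas.getContext('2d');
  canvas.width = img.width;
  canvas.height = img.height;
  context.drawImage(img, 0, 0);

  // Sum up every pixel's red, green, and blue channels
  var data = context.getImageData(0, 0, canvas.width, canvas.height).data;
  var r = 0, g = 0, b = 0, count = data.length / 4;
  for (var i = 0; i < data.length; i += 4) {
    r += data[i];
    g += data[i + 1];
    b += data[i + 2];
  }

  return {
    r: Math.round(r / count),
    g: Math.round(g / count),
    b: Math.round(b / count)
  };
}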

At the bottom of scripts.js, I am setting up click handlers on each image. When the images are clicked, the clicked image gets passed to the updateColors function, which does all the updating for the colors on the page. Inside this function we get the base color of the image, then apply a variety of transformations to it to derive the color palette below.

Obviously there's a lot more that we could do here. If this were a full site, one could reasonably build the entire CSS color palette in a relational way to a single base color. For example, you could create a compound color relationship with shades and tints for highlight areas. All of these things would be coded as mathematical relationships. Then, when a new base color is set (via a new background image, perhaps), the entire site will update but maintain its excellent color relationships.
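
As a rough illustration of what those mathematical relationships could look like, here is a sketch of my own in HSL terms (not the Color.class.js used above):

// Hypothetical helpers: h in degrees, s and l in the range 0..1
function complement(hsl) {
  return { h: (hsl.h + 180) % 360, s: hsl.s, l: hsl.l };
}

// Push the lightness away from the background's to keep the contrast readable
function offsetLightness(hsl, amount) {
  var l = hsl.l > 0.5 ? hsl.l - amount : hsl.l + amount;
  return { h: hsl.h, s: hsl.s, l: Math.min(1, Math.max(0, l)) };
}

var background = { h: 210, s: 0.5, l: 0.3 };
var text = offsetLightness(complement(background), 0.4);  // { h: 30, s: 0.5, l: 0.7 }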

I think there’s a lot of room for this experiment to grow into something really cool. Can you think of any other ideas of ways this could be used?

As always, the source can be found over on github.


High Performance JavaScript Class Template

This is a post more for my own future reference than the rest of the world:

I posted earlier with my standard JavaScript class template. While I favor that one for reasons of encapsulation, its performance is really not very optimized. Here is an alternate class template I use in cases where performance (or the quantity of objects) is more important.

Most of this layout was lifted from iScroll. I’ve kept a lot of iScroll’s configuration settings as well. You’ll see those up near the top.
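
The general shape, stripped down to a sketch (the names here are placeholders, not the template's actual ones):

function Widget(el, options) {
  this.el = el;

  // Default configuration, iScroll-style, overridden by anything passed in
  this.options = {
    snap: false,
    bounce: true
  };
  for (var key in options) {
    if (options.hasOwnProperty(key)) {
      this.options[key] = options[key];
    }
  }
}

// Methods live on the prototype, so every instance shares a single copy
Widget.prototype.render = function () {
  this.el.className += ' rendered';
};

var widget = new Widget(document.getElementById('demo'), { snap: true });
widget.render();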

You can grab the latest version of this template from GitHub.


AS3 Asynchronous ExternalInterface

As I may have mentioned before, AS3 performs its ExternalInterface operations synchronously. For the most part, this isn't a big issue, but what happens when the JavaScript you need to execute grows extensive, or what if it is a slow operation? In these cases, it's helpful to be able to make an ExternalInterface call that is asynchronous so your ActionScript code can continue.

The following example does exactly that. It provides a layer between your JS and your Flash for converting ExternalInterface.call methods to an asynchronous method.

The first part of this method is the AS3 class, AsyncExternalInterface:

AsyncExternalInterface

This class first tests whether ExternalInterface is available at all, and then whether the necessary JavaScript class is on the page. If the JavaScript is not loaded, it will degrade and use ExternalInterface normally. If everything is in place, then your calls will be passed down to the CallStack JavaScript class.
Here’s CallStack.js:

CallStack

This class acts like a Singleton where you get the instance via the CallStack method made available in window. This means that when you include the JS in the page, you don’t have to do any instantiating yourself, or pass any variable names back to Flash. It’ll figure all that out itself. Of course, if you’re using this with the AS3 class, you don’t need to know about any of that because it’s all done behind the scenes.

What’s happening here is that every function call you make to ExternalInterface is being stored in a stack. This quick storing of the calls means your AS3 can continue on its merry way while JavaScript delays, then makes calls to that stack later. The speed at which the class works through the CallStack is set in the variable INTERVAL_TIME. For our test I’ve set that to 1 second. Before you use this in deployment, you’ll probably want to speed that up dramatically.

The class also has some cleanup methods so that while the stack is empty, it doesn’t waste any time polling an interval.
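
The mechanics look roughly like this sketch (simplified, and not the actual CallStack.js):

var CallStack = (function () {
  var INTERVAL_TIME = 1000; // 1 second for the test; speed this up for real use
  var stack = [];
  var timer = null;

  function process() {
    if (stack.length === 0) {
      // Cleanup: stop polling while there is nothing left to do
      clearInterval(timer);
      timer = null;
      return;
    }
    var call = stack.shift();
    window[call.name].apply(null, call.args);
  }

  return {
    push: function (name, args) {
      stack.push({ name: name, args: args || [] });
      if (!timer) {
        timer = setInterval(process, INTERVAL_TIME);
      }
    }
  };
}());

// Flash pushes calls here instead of blocking on ExternalInterface.call
CallStack.push('testFunction', [1]);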

Finally, it’s time to put the files together into an HTML wrapper.

index.html

For our test, Flash is calling “testFunction” four times. The first three have sequential parameters, and the last has no parameters. If you have a FlashLog.txt file set up, you can also note the appearance of the traces vs the alerts. In the AsyncExample.as file, the traces come after the ExternalInterface calls. Were we using a normal ExternalInterface, you wouldn’t see any of those traces until after the alerts completed. Because of the AsyncExternalInterface solution, though, the traces will appear instantly, before the first alert has time to fire.
The full example code can be found over on github. Enjoy!
