Sunday, November 25, 2007

Loving to_proc

Any time I'm pairing with a developer who isn't familiar with ActiveSupport's Symbol#to_proc extension, it's fun to watch their reaction when they see how it works. It's the most radical example I've seen of a simple (and pure-Ruby) extension to a core class allowing for awesome improvements in readability.

[If you're not familiar with the method, read about it here or here. For a bizarre step beyond (that I'm not personally fond of) take a look at this old Dr. Nic post.]

The one frustrating thing about it is that it's so limited: you can only send one method with no arguments to each yielded object. If you need to pass arguments or do any more complex calculation, you're back to passing an associated block the old-fashioned way.
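
For readers who haven't seen it, the whole trick fits in a few lines. Here's a minimal sketch of the idea (not ActiveSupport's exact code):

```ruby
# A minimal sketch of Symbol#to_proc; assumes nothing about ActiveSupport's
# actual implementation beyond its observable behavior.
class Symbol
  def to_proc
    lambda { |target| target.send(self) }
  end
end

# The & operator calls to_proc on the symbol, turning it into a block:
%w[foo bar].map(&:upcase)  # => ["FOO", "BAR"]
```

With that in place, `collection.collect(&:name)` reads like a declaration instead of a loop body.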

A couple of weeks ago Patrick and I were pairing and noticed a beneficial side-effect of the limitations of Symbol#to_proc: sometimes when you can't use it right off the bat, it's because the behavior you were going to put in the block would be better off living in the objects you're working with.

Here's a simple example. Imagine you want to expose the area codes represented in a collection of phone numbers. Your initial implementation might look like this.

def area_codes
  self.phone_numbers.collect do |phone_number|
    phone_number.number[0, 3]  # knows about the guts of phone numbers
  end
end

"Shame that block's so ugly," you might think. Well you're right! It is a shame, and it doesn't have to be that way. It's ugly because it knows about the guts of phone numbers. If phone numbers knew more about themselves, using them could be prettier.

def area_codes
  self.phone_numbers.collect(&:area_code)
end

Of course sometimes you really need to pass arguments. We toyed around with introducing an Array#to_proc that looked like this.

Array.class_eval do
  def to_proc
    lambda { |target| target.send(*self) }
  end
end

[1,2,3,4,5].select &[:between?, 2, 4]  # => [2, 3, 4]

In the end we decided that, while nicely brief, it wasn't pretty enough to put into our code base -- too much punctuation -- and stuck with the old-fashioned way.

Are there any other core extensions that have really floated your boat? Do share.

Wednesday, November 14, 2007

Mix It Up

Mix, Oracle's new social networking site, is up and running. It's pretty nice looking.

If you love working on a Mac, why don't you register and vote for Toby's idea? Oracle's lack of a native client for Intel Macs is crazy, and this is a good opportunity to let Oracle know how many of us are suffering from it. (The idea is actually about providing XE for the Mac, but it's still a good rallying point.)

Personally, I've spent about a year on a project where we deploy on Oracle in production but develop on Postgres because of the lack of Mac support. Our Cruise build runs against Oracle, so when there are issues we see them pretty quickly, but not quickly enough. The turnaround time is something like ten minutes instead of something like ten seconds, which often means a context switch. It's lame. Oracle can fix it.

Friday, November 02, 2007

Renum 0.0.3

This morning at RubyConf I updated Renum so that when the enum declaration appears inside a class or module, the enum class gets nested appropriately. So this works the way you'd expect.

module MyNamespace
  enum :FooValue, [ :Bar, :Baz, :Bat ]
end

I also made EnumeratedValue comparable, where the natural ordering matches the array of value names given in the declaration.

Those features are in renum-0.0.3. (renum-0.0.2 was a botched release where I left a new class out of the gem manifest. Apparently I need to improve my pre-release package verification.)

Renum -- a Ruby enum gem

A while back I blogged through trying to implement something in Ruby similar to what you get from Java and C#'s enum keyword (and before there was a keyword, from the type-safe enumeration pattern in Java). It went pretty well.

I mentioned there that I might go ahead and package the thing (or something like it) up as a gem. Tonight I did. It's called Renum.

Renum lets you create enums like this:


enum :Color, [ :RED, :GREEN, :BLUE ] do
  def abbr
    name[0, 1]
  end
end

As of version 0.0.1, released just minutes ago, those enums work like this:

describe "enum" do
  it "creates a class for the value type" do
    Status.class.should == Class
  end
  it "makes each value an instance of the value type" do
    Status::NOT_STARTED.class.should == Status
  end
  it "exposes array of values" do
    Status.values.should == [Status::NOT_STARTED, Status::IN_PROGRESS, Status::COMPLETE]
  end
  it "enumerates over values" do { |s| }.should == %w[NOT_STARTED IN_PROGRESS COMPLETE]
  end
  it "indexes values" do
    Status[2].should == Status::COMPLETE
    Color[0].should == Color::RED
  end
  it "provides index lookup on values" do
    Status::IN_PROGRESS.index.should == 1
    Color::GREEN.index.should == 1
  end
  it "allows an associated block to define instance methods" do
    Color::RED.abbr.should == "R"
  end
  it "provides a reasonable to_s for values" do
    Status::NOT_STARTED.to_s.should == "Status::NOT_STARTED"
  end
end

Hopefully someone will find this useful. There are some obvious features to add, like value lookup by name. There are also a lot of things that could be locked down a little, but the library's useful without that so I'm not sure it's worthwhile. (For example, I don't currently prevent you calling 'HAZEL'.)

If this is something you might use, let me know what you want out of it.

Also, tonight I arrived in Charlotte for RubyConf. Say hi if you see me.

Friday, October 26, 2007

Handoff 1.0

Having used it on my current project for the last few months and improved the unit test coverage, I've released version 1.0.0 of the Handoff gem. You can get the gem file at RubyForge or just wait a while and do

sudo gem install handoff -v 1.0.0

My first priority for future development is to improve the output when specifying delegation that hasn't been implemented yet. (Currently you just get one NoMethodError.) After that I may work on a couple of features requested by Ali, but it's a real hassle TDD'ing a library for making test assertions, so the going will likely be slow.

For those of you who don't work with me, Handoff is a tiny little gem providing a fluent interface for asserting on simple delegation. It aims to make asserting the behavior as simple as implementing it (with Forwardable). See the rdocs for examples.

Friday, October 19, 2007

Some Rambling on xUnit Testing Style

On our drive back to Brooklyn from the client site last night, Patrick and I were talking about testing style.

I mentioned that it had bothered me for a long time that the base class we extend in (most) xUnit frameworks is called "TestCase" even though any good test class describes multiple test cases. [To help fend off semantic confusion, here's my idea of test case identity: One test case is a set of inputs and stimuli to the code under test. There may be several things to assert about what goes on during a test case or the state of things after its run, but the inputs and stimuli are constant. When you vary them, you have another test case.]

Since we try to minimize the number of assertions in a given test (ideally keeping it to one, though in contrast to Jay I'm personally fine with that one assertion being complex) we often have multiple test methods that assert on the same test case (by the definition above) and could therefore share setup and teardown code. But since we also need tests to exercise other scenarios, we can't use the framework's setup method to create the scenario unless we're willing to set up all the stuff we need in all our tests, what you might call a "superset fixture," which feels wrong. Incidentally, I think the reason not many people use the term "fixture" for the stuff you set up for your tests is that it's always been a pretty weak concept in practice: either you have a mess of objects that various tests will use in various ways or you're setting up so little that it's barely worth talking about.

It may be practical to set up a superset fixture when doing simple state-based testing, but if you're dealing with a "wide" object, it means a really noisy setup.* When using mocks it's actually impossible to fully set up more fixture than you'll need in every test: a mock set up but not used will fail the tests that don't satisfy its expectations.

So besides having the name 'TestCase' that doesn't seem to make sense, we have these facilities for setting up and tearing down that we don't leverage much.

Then I had what I thought might be an important insight right there in the car. Maybe the class was called a TestCase and had just one setup method because it was originally intended to describe just one case, with each test method just asserting something different about the scenario created in the setup. If so, the setup could even include the stimulus of the code under test, reducing test methods to nothing but assertions. Maybe what's made both the naming and the use of shared fixture-setup seem awkward all this time is that we've tied ourselves to creating one TestCase class per production class, when all along we could have had a TestCase class for each scenario we wanted to test, with most having a very small number of test methods.

Here's a super-simple example of the sort of thing I was imagining, though my imaginings were a lot more abstract.

require 'test/unit'
require 'set'

class Set::EmptyTest < Test::Unit::TestCase
  def setup
    @set =
  end

  def test_size_is_zero
    assert_equal 0, @set.size
  end

  def test_empty
    assert @set.empty?
  end
end

class Set::AdditionTest < Test::Unit::TestCase
  def setup
    @set =
    @set.add 5
  end

  def test_size_is_one
    assert_equal 1, @set.size
  end

  def test_contains_added_item
    assert @set.include?(5)
  end

  def test_not_empty
    assert !@set.empty?
  end
end

class Set::DeletionTest < Test::Unit::TestCase
  def setup
    @set = [:abc, 5]
    @set.delete 5
  end

  def test_size_is_one
    assert_equal 1, @set.size
  end

  def test_no_longer_contains_deleted_item
    assert !@set.include?(5)
  end

  def test_still_contains_other_item
    assert @set.include?(:abc)
  end

  def test_not_empty
    assert !@set.empty?
  end
end

All these little test cases could be a maintenance headache if they each lived in their own file, but it might not be too bad if you gave up the one-class-per-file convention. Although I'm not a good student of history, I knew xUnit frameworks started with one in Smalltalk, and it seemed like this one-TestCase-class-per-scenario approach might have been really convenient in a development environment where all code was organized hierarchically without the bother of source files that might need to be moved, renamed, etc when changing tests. I've only run Squeak long enough to build a trivial Seaside application, so I was speculating, but I could imagine it being pretty handy to organize tests with one package per class under test, then a class per test case, each with a setup, then a test method for each assertion to be verified.

In Ruby we would also do some metaprogramming to reduce the noise and make the test code more intentional. Maybe something like this.

testcase_for 'an empty set' do
  setup { @set = }
  test('size is zero') { assert_equal 0, @set.size }
  test('empty') { assert @set.empty? }
end

testcase_for 'adding an item to a set' do
  setup do
    @set =
    @set.add 5
  end
  test('size is one') { assert_equal 1, @set.size }
  test('contains added item') { assert @set.include?(5) }
  test('not empty') { assert !@set.empty? }
end

You probably noticed this looks a lot like RSpec contexts, which gets at why I was so excited. I wondered if Kent Beck's original intent had been something much closer to BDD, and it had just taken the rest of us a long time to catch up.

So when I got home I went digging around for articles about unit testing style and found surprisingly little. I also looked for anything about the original intent of the framework. (Googling these topics was a little depressing because of all the weak information, plagiarism, and content spam.)

My search stopped when I found the Kent Beck article where he originally presented the unit testing framework pattern we now know so well. (I believe that was first published in The Smalltalk Report in October 1994. Thanks to Farley for digging up that obscure nugget.) I was disappointed to find that the example in that first article conforms pretty much exactly to the classic form of unit test we've all seen before, including a setup method that sets up more fixture than any one test method uses.

In case you find the Smalltalk a little painful to read, here's my translation of Beck's example test case to Ruby. (The examples above, you'll now see, are based on Beck's.)

require 'test/unit'
require 'set'

class SetTest < Test::Unit::TestCase
  def setup
    @empty =
    @full = [:abc, 5]
  end

  def test_add
    @empty.add 5
    assert @empty.include?(5)
  end

  def test_delete
    @full.delete 5
    assert @full.include?(:abc)
    assert !@full.include?(5)
  end

  def test_illegal
    @empty.not_a_set_method  # call reconstructed; the original line was lost
  rescue NoMethodError
    # expected
  end
end

So that was a bit of a letdown. On the other hand, I did finally learn why the base class is called a TestCase when it represents so many different test cases.

As a test writer, you tend not to think of your TestCase subclass as a normal class. All the instantiation and running is in the framework, and none of your code ever interacts with TestCase instances, so their life-cycle (their very existence as normal objects) is usually irrelevant to you as a user of the framework. From the framework's point of view, however, their life-cycle is central.

As you may or may not know, your xUnit runner creates one instance of your TestCase class for each test method, passing to the constructor the name of the test method the new instance will run. Sure, well written setup and teardown methods would allow the runner to use one instance for all the test methods, but that would require the test writer not to accidentally leave state hanging around in the instance. Why put that burden on the framework user when the framework can just as easily start with a completely clean slate every time?
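
That life-cycle is easy to see in a toy runner (a sketch of the general pattern, not any real framework's internals):

```ruby
# Toy xUnit runner: one fresh instance per test method, constructed
# with the name of the method it will run.
class ToyTestCase
  def initialize(test_method_name)
    @test_method_name = test_method_name
  end

  def run
    setup
    send(@test_method_name)
  ensure
    teardown
  end

  def setup; end
  def teardown; end

  def self.run_all
    public_instance_methods.map(&:to_s).grep(/^test_/).map do |name|
      new(name).run  # a fresh instance, a completely clean slate, per method
    end
  end
end

class ArithmeticTest < ToyTestCase
  def setup; @sum = 2 + 3; end
  def test_sum_is_five; @sum == 5; end
end

ArithmeticTest.run_all  # => [true]
```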

So the framework creates one TestCase instance per test method, each of which is a test case of its own. It works in the intuitive sense of "test case" as well as in OO terms. Score one for the forefathers of agile software development!

I'm interested to hear what styles people have used or seen in unit test/spec suites. Have you tried creating multiple TestCases per class to keep individual test case classes more maintainable? Did it work out? What about BDD specs? Do you find having the structure of the test or spec map directly to the purpose of the code (as opposed to having private test helper methods scattered around your source file) advantageous? How far have you gone with keeping test setup all inline in the tests themselves? Let me know.

* Yes, wide objects that are hard to set up for test are a smell that the code under test may be poorly factored, but if there's anything we've learned driving back and forth from Brooklyn to central New Jersey every week, it's that you often have to live with bad smells, though you should always be on the lookout for a route that avoids them without making you late.

Sunday, October 14, 2007

"Sit on this and rotate!"

--Suggestion from a project manager to two relatively senior developers on a team in need of a tech lead [1]

Last week I finished my second eight-week stint as technical lead of a medium-sized team of ThoughtWorkers. Patrick Farley and I alternated in the role over the last seven months or so, switching between development and tech leading at each production release. As we're both approaching one year on the project (and therefore roll-off) I've now handed the reins over to someone new (Paul Gross) who will likely rotate with another team member if the team continues to release on a similar schedule.

Rotating the tech lead position isn't a standard practice on ThoughtWorks projects, but it's worked out really well for us, so I thought I'd document it a bit.


The simplest and most obvious benefit of rotating the role is knowledge sharing. The tech lead will be exposed to aspects of the release process and production deployment (including lots of people and groups in the organization) that the rest of the team shouldn't be concerned with if they're to continue developing new functionality at maximum velocity. While it's important to protect the team from distractions where practical, it's also important not to create a one-person knowledge silo. (What's your truck number?) Rotating the role means that you've likely got more than one person available who's been through the whole cycle, without having had to simultaneously dedicate multiple capable developers to work that's largely about always being available for a context switch to ask or answer the right questions.

Besides specifically spreading knowledge relevant to the current project, rotating the leadership role also creates more well-rounded developers, giving them some of the experience necessary to lead other teams. Whether in the context of a consulting organization or a company employing developers for its own projects, this is hugely valuable for everyone involved. The developers don't just benefit from being in multiple roles: they also get to see how a number of other people deal with the same issues. (Even the devs who don't rotate into the lead role benefit from that.)

Rotation also helps keep the lead's technical tools from getting dull. This goes for general development chops as well as knowledge of how the team's systems work. During my runs I knew a lot about what was coming up in terms of requirements, and I was always there for team tasking sessions when the shape of the implementation was generally hashed out, but once something was being implemented I generally only learned more about things that came up in team-room discussions or where the pair working the story sought my advice. No good developer wants to become an ivory-tower architect, and rotating back to a role where you can focus on making the rubber meet the road is a great way of making sure it doesn't happen.

There's also a therapeutic advantage to rotation. The tech lead, like the project manager but to a lesser extent, bears responsibility for the work of a whole lot of people, and that can be exhausting and frustrating. I was extremely fortunate to have an incredible team behind me, so when it came to their work I felt comfortable trusting them to deliver on reasonable commitments. But our application was in production, with new releases rolling out regularly, and that meant lots of external groups playing a vital role in our team's delivery.

Or not.

It's a sad fact of enterprise software consulting that we're often brought in because an organization's had difficulty delivering reliably. They probably haven't identified all the causes of their difficulties, and many are likely outside the scope of what our team can remedy. The development group and business sponsors may be completely aligned on maximizing delivery value while minimizing process overhead for the project, and we may achieve incredible results in developing working software that meets the business need. But that only gets you so far before you need to coordinate with the release management group, the configuration management group, the database development group, the database infrastructure group, the quality assurance group, the network design group, the network support group, the network monitoring group, etc. Chances are the organization's difficulty in delivering has had a lot to do with the way these organs have worked together (or against each other). Our software has to work its way through the same system to deliver actual value. Certainly we can try to influence these groups to streamline their processes, but if our charter is to develop software, this is a delicate balance. The orchestration of dozens of informal conversations, formal meetings, ticketing system requests, and emails to achieve seemingly simple goals can be hilarious and even fun (when it all goes right) but it's decidedly not technical leadership, and it's easy to burn out.

Rotating the lead role could also pay off if someone doesn't work out. There are a couple ways rotation mitigates the pain and project risk introduced by a bad technical lead. First, the struggling lead has at least one former leader whose experience and advice they can draw on to try to improve as well as to directly supplement their own efforts. Failing that, the worst case is that a former lead is reinstated, which will likely be a far greater comfort to the customer than having someone new brought in. The security of having past leaders available makes this a great way to let someone stretch into a more senior role than they've had before without betting an entire project on their success.


What are the downsides to rotating developers through the role? The only one we've seen is that you have to bring someone new up to speed more frequently than you otherwise would. Though the cost here is not negligible, I think it's justified, particularly in a consulting situation where eventual roll-offs and transitions are inevitable.


As far as how to do the transition, we tried it two ways.

The first time we rotated, Patrick was focusing on our initial release to production, but we also had a bunch of new stories for upcoming releases that needed to be estimated and have some open questions resolved. It was too much for one person, so I focused on the upcoming stories while he dealt with the production roll-out. As support work on Release 1 quieted down, we shifted developers over to work stories for Release 2. The upside of that was there were enough hours in the day for Patrick to get production straightened out without collapsing from exhaustion, while stories for Release 2 got the attention they needed. The most immediate downside was that I hadn't been exposed to much of the release and deployment process, so as we began dealing with production support, I had to lean on him more than I would have liked, sometimes pulling him out of normal development work.

When the next rotation came up, we felt it would be inefficient having us once again simultaneously acting as tech leads for different releases, so we did more of a hard cut-over. This reduced confusion among the client and the team regarding who to talk to about what, but it may not have been practical if we hadn't both already had deployments under our belts. The cut-over felt good, so we've stuck with that since.

Try it?

Here's some background on the scenario where it worked for us. Our team size has varied: during the time we've been rotating we've had between seven and ten developers, two to three business analysts, a dedicated iteration manager only briefly, and a project manager/client principal mostly but not entirely dedicated to our project. Before we started, the client organization already had a large body of software in production and established processes for release and deployment.

Another thing to keep in mind that might be important for any team looking to try this is that Patrick and I had worked together for months before the team down-sizing that led to the rotation. We'd developed a strong mutual respect, each feeling that the other was capable of succeeding in the role. We were also both happy to take a normal development role on a team with the other leading. It's hard for me to see the rotation succeeding without that, and it may be the most difficult element to recreate on another team.

Obviously there are lots of other factors that may have helped this work for us. Your mileage will vary. For what it's worth, both our Client Principal and Patrick have said they want to try something similar at other projects, and I do too, assuming we seem to have the right ingredients.

Have you been in situations where rotating team members through leadership positions might have helped? Have you tried it? Let us know. Thanks.

[1] No, no one really said that.

Sunday, July 15, 2007

Abuse of method_missing?

Am I the only one who thinks the following DSL-ey trickery is an abuse of method_missing?

Here's the creation of some named routes.

ActionController::Routing::Routes.draw do |map|
  map.home '', :controller => 'main', :action => 'start'
  map.user_page 'users/:user', :controller => 'users', :action => 'show'
end

You call arbitrary methods on the map object, and that creates a route whose name is the method you called.

Here's the declaration of an ActiveRecord model's attributes using Hobo's new migration-generating style.

class User < ActiveRecord::Base
  fields do
    name :string, :null => false
    email :string
    about :text, :default => "No description available"
  end
end

I haven't looked into the code, but I assume the block is instance_eval'ed against some object whose method_missing builds up attribute meta-data where the name of the missing method becomes the attribute name.

Introducing new symbols into your system by invoking them as methods on bizarre (sometimes hidden) objects strikes me as a nearly useless twisting of Ruby's flexibility. If you're working with something that's purely DSL-ish, that's one thing, but if we're talking about a tiny little internal DSL embedded in otherwise idiomatic Ruby code, and, needless to say, being edited by Ruby developers, this just seems to introduce confusion.

Of the two uses, I actually prefer Hobo's, because it moves farther away from idiomatic Ruby and towards a DSL than Rails does. Since I can actually see that I'm sending messages to map when creating a named route, it frustrates me that this object exposes its functionality through method_missing, leaving me unable to look up the API reference in the normal way. What is that thing? Does it have any methods that might conflict with my route names? We know about connect. Hopefully that's the only one.[1]

Do you think this sort of use of method_missing is advisable? How have you used and abused it?

[1] Actually, a look at routing.rb shows that ActionController::Routing::Routes.draw yields an ActionController::Routing::RouteSet::Mapper, which also (in the neighborhood of line 1000[2]) defines named_route: not a likely name conflict, but arguably a clearer way to define your routes.

ActionController::Routing::Routes.draw do |map|
  map.named_route :home, '', :controller => 'main', :action => 'start'
  map.named_route :user_page, 'users/:user', :controller => 'users', :action => 'show'
end

[2] Yeah, line one-thousand. The Rails team are trained professionals: please don't try that at home.