Being Agile… Part II: Never stop changing


The point at which I stop reading an article about agile development is when it starts quoting from the Agile Manifesto. No, I do not have any qualms with the manifesto; I think it is an excellent, minimalist document. However, when people start to preach about it, I tune out.

The same is the case when someone brings up a specific set of practices, usually with a cute name, as The Process. What I have learned from my last 7 years is that the only process that stays is changing processes.

This was very true in the beginning. We went back and forth, and back again, to SCRUM as the organizing process, but played with its format and deliverables for quite a long time. We tried weekly iterations as well as iterations longer than a month. We filled our walls with multi-colored Post-it notes, built weird-looking shared Excel sheets, and used formal project management software with custom extensions to track the sprint.

We had been using unit testing to some extent even before our full plunge into the agile pond. But the development and testing phases were mostly separated. After introducing SCRUM and co-located teams, we were not sure how to interact with each other for a while. Lingering mistrust between the programmers and testers was quite palpable.

Most of our automated acceptance tests were UI tests. While these are in some sense the ultimate integration tests, precisely because of this overarching scope they require most of a feature, including a near-final UI, to be ready before they can run. In those early days of UI automation, the tests were extremely sensitive, and just adding one control to a form would break a whole bunch of tests in mysterious ways. We actually had a campaign: “Does it break automation?” (These days we ask instead: how can I come up with a breaking acceptance test, or a directed failure in existing tests, to implement a feature or to fix a bug? More about that later.)

One of those days Michael Feathers came to our workplace and told us to go find fracture points and start clawing from there – I am paraphrasing. We had been looking at a really huge block of rock and wanted to see nice score marks: tap it with a soft mallet and there you get the nicely shaped pieces. He wanted us to look for fissures, cracks. That is what we did. Delphi, the language of our code base, is not very refactoring friendly. After 5 years of yearly releases, the original architecture was starting to turn into a tangled web. (Hmm, so we have a huge rock with a tangled web around it. See, my metaphor is still going strong. How many of you have pulled the dry stems of climbers from rock surfaces? You have to be very careful or they will break. They do have a tendency to go into cracks!) Though we were a bit unconvinced about the feasibility of a test-driven agile development strategy for our code base (we of course wanted to build from scratch!), we looked at our Java brethren with green eyes. They had all the tools: full reflection, managed code… We wanted it!

Once we started to pull at these stems, things started to happen. We tried to follow TDD as much as possible, but at the unit-test level. However, once these frequent changes started to spill over and break automation, things became serious. Mind you, we were also working at breakneck speed toward a new release. While some consideration was given to the additional burden of process adoption, it did not change the deliverables much. So, if the automation was not passing, we could not say a feature was done. If the feature was not done, we did not get to stand in front of everyone and collect an ego boost during the sprint retrospective.

Our early sprint retrospectives started with a science-fair-style demo of our features in a lab. We even got to sell our new ideas. After a few sprints, though, we decided to do the demos in a formal fashion: PowerPoint or an actual demo, one at a time, each with a specified amount of time. Then we decided to do the demos in the team rooms and let the stakeholders walk from room to room. Then we decided to do it as a presentation for everyone; then we went back to team rooms; then we decided to record the demos and post them the previous day…

They all worked.

Coming back to the automation dilemma, it soon became clear that UI-centric acceptance automation was not enough to support the new way of development. It was quite complex and time consuming to write and maintain. The tests also took awfully long to execute, making them unviable as part of continuous integration. If we were to have some confidence in what we were doing – remember, much of the code we wanted to refactor was in units that we seldom touched – we needed acceptance tests that run with every build, or at least once or twice every day. Jealousy is a good motivator. We had been drooling over FitNesse from the moment Uncle Bob showed us what it could do. There was no support for Delphi in Fit at that time, so we ported Fit to Delphi Win32 and started writing some tests. This was around the same time Delphi came out with a .Net version. We had to try it. We managed to compile enough of our code in .Net to cover the core and common business rules. This cross-compilation exercise also created an opportunity to redefine layer boundaries through package restructuring. So we abandoned our Win32 FitNesse plan and started using the .Net version of the code to write FitNesse tests for core functionality. This, along with the Business Objects Framework that was introduced mostly through unit testing, finally started to carve into the block, making the cracks bigger and bigger.
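
To give a flavor of what those FitNesse tests looked like: our fixtures were written against the Delphi/.Net code, and since that API is not shown here, the sketch below uses the canonical Java Fit instead. DiscountFixture and the discount rule it checks are made up for illustration.

    import fit.ColumnFixture;

    // A column fixture: Fit binds the input columns of the wiki table to
    // public fields and the output columns (headers ending in "?") to
    // public methods, coloring each cell green or red as rows execute.
    public class DiscountFixture extends ColumnFixture {
        public double orderTotal; // input column "orderTotal"

        // Output column "discount?": 5% off orders of 1000 or more,
        // a made-up rule standing in for our real business rules.
        public double discount() {
            return orderTotal >= 1000 ? orderTotal * 0.05 : 0.0;
        }
    }

The wiki page then carries the examples as a plain table that testers and analysts can edit without touching code:

    |DiscountFixture|
    |orderTotal|discount?|
    |500|0.0|
    |999|0.0|
    |1000|50.0|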

We had very supportive upper management during this transition stage. But as the release progressed, each sprint they would find that some of the things that were supposed to be done were not done. This naturally brought up the question of the accuracy of our estimates. Even though we were quite aware of the arbitrariness of the numbers we put in the remaining-work column, it had never really sunk in. This gave rise to a series of estimation games and strategies. We had long planning sessions up front. Longer planning sessions at the beginning of the sprint. More accountability for estimates. Complexity, risk, and confidence factors: 0 to 1, 1 to 10, percentages… Attempts at accurate time reporting. And the all-powerful velocity. We must have multiplied and divided every measurable and quantifiable aspect of the development process by every other to come up with a velocity unit. Ideal team, developer days, a 2-hour daily allowance for meetings, fractional contributors.
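
For a taste of that arithmetic, here is one of the many capacity-and-velocity formulas we rotated through, reconstructed as a sketch; the numbers and factors are illustrative, not the ones we finally settled on.

    // Sprint capacity in "ideal developer days", then velocity as story
    // points per ideal day. Every constant here was, at some point,
    // something we argued about.
    public class Velocity {
        public static void main(String[] args) {
            double sprintDays = 10;                        // two-week sprint
            double hoursPerDay = 8;
            double meetingHours = 2;                       // daily meeting allowance
            double[] availability = {1.0, 1.0, 1.0, 0.5};  // one fractional contributor

            // Fraction of each day actually available for sprint work.
            double focusFactor = (hoursPerDay - meetingHours) / hoursPerDay;

            double idealDays = 0;
            for (double a : availability) {
                idealDays += a * sprintDays * focusFactor;
            }

            double pointsCompleted = 21;                   // points accepted this sprint
            System.out.printf("capacity: %.2f ideal days, velocity: %.2f points/day%n",
                    idealDays, pointsCompleted / idealDays);
        }
    }

Swap the focus factor for a confidence percentage, or the points for remaining hours, and you have the next sprint's scheme.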

They all worked…

Even when many of them did not bring forth the results we hoped for. But when they didn't, we got a chance to learn why. Isn't that the spirit of any scientific enquiry!
