Friday, May 13, 2011

Agile Development: A retrospective

Some thoughts I've had.
It helps keep things on track. The short iteration times, documentation, and emphasis on working "demos" at the end of each cycle mean nothing can get too far out of hand.
I wish I had known about evidence based scheduling towards the start. It really shows how to create good time estimates; even without running the simulations, you get useful information. Plus, it captures the whole idea of concrete "what needs to be coded in the next x hours" planning that I think the system my team used missed.
If you're going to introduce agile development, introduce it in multiple classes. A single semester gave me and my team a basic understanding, but a multi-semester learning process would have helped me learn it better and use it much better in this class.

You are a software gardener, not a software engineer

An interesting analogy I came across, with a bit of a rant, found here.
Some of the highlights:
Don't expect to know where every leaf will fall when you plant the seed.
Environment matters, both in where it's being designed and where it's being sold to.
Gardens will grow weeds. They are never "finished". They need to be maintained.
The quality of the gardeners matters a great deal.

I think it's an interesting read. How true it is...well, that's something I'll only be able to tell with experience over the years.

Decorating, and Facades

The decorator design pattern is what we have seen as a wrapper class. Take some class, wrap an instance of it, and add some functionality. It's a form of delegation: the wrapper hands off to the wrapped instance for the old class's functionality.
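A minimal sketch of that wrapper idea in Java; Message and the concrete classes here are made-up names for illustration, not from any particular library.

```java
// The common interface shared by the wrapped class and the wrapper.
interface Message {
    String render();
}

class PlainMessage implements Message {
    public String render() { return "hello"; }
}

// The decorator: holds an instance of the wrapped type, delegates to it
// for the original functionality, and adds its own behavior on top.
class UppercaseMessage implements Message {
    private final Message wrapped;
    UppercaseMessage(Message wrapped) { this.wrapped = wrapped; }
    public String render() { return wrapped.render().toUpperCase(); }
}

class DecoratorDemo {
    public static void main(String[] args) {
        Message m = new UppercaseMessage(new PlainMessage());
        System.out.println(m.render()); // prints HELLO
    }
}
```

Because the decorator implements the same interface, callers can't tell a wrapped message from a plain one, and wrappers can be stacked.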
The facade design pattern is about putting forth a single, simple interface to several subsystems.
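A facade can be sketched in a few lines; the "home theater" subsystems below are hypothetical examples, not a real API.

```java
// Three independent subsystems, each with its own interface.
class Amplifier { String on() { return "amp on"; } }
class Projector { String on() { return "projector on"; } }
class Screen    { String down() { return "screen down"; } }

// The facade: one simple call drives all three subsystems in order,
// so callers never have to know the individual interfaces.
class HomeTheaterFacade {
    private final Amplifier amp = new Amplifier();
    private final Projector projector = new Projector();
    private final Screen screen = new Screen();

    String watchMovie() {
        return screen.down() + ", " + projector.on() + ", " + amp.on();
    }
}
```

The subsystems stay fully usable on their own; the facade just offers the common path as one call.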

Abstract Factories

The Abstract Factory pattern abstracts the creation of related objects (such as subclasses of some abstract class) without having to directly specify their classes. This is done by creating a static method (possibly within its own class) that uses some state to decide which type of object to return. By doing this, and having the calling program influence that state, the decision of which implementation to use can be put off until runtime.
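Here's a sketch of the static-factory variant described above; Button and its implementations are invented names. The "state" is just a string the caller passes in, standing in for a config value or environment check.

```java
// The abstract type callers program against.
interface Button {
    String paint();
}

class WinButton implements Button {
    public String paint() { return "windows-style button"; }
}

class MacButton implements Button {
    public String paint() { return "mac-style button"; }
}

// Static factory method: inspects some state and decides which
// concrete class to hand back, so the choice happens at runtime.
class ButtonFactory {
    static Button create(String platform) {
        return platform.equals("mac") ? new MacButton() : new WinButton();
    }
}
```

Calling code only ever sees the Button interface, so swapping in a new implementation requires no changes at the call sites.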

Using SVN Repositories part 4: Merging and Reintegrating

Merging from Trunk to a Branch: So you've been working on your branch, and they've been working on the trunk. You want to merge their changes into your branch so it doesn't drift too far from the original. This is done with the merge command, used as "svn merge URL", where URL is the URL of the trunk directory. Just like with updates (only more so), you have to resolve conflicts and make sure their changes didn't break yours. Then you just commit, and it commits to your branch directory. SVN will even keep track of what was merged.
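A typical session might look like this (the repository layout is a placeholder; `^/` is SVN shorthand for the repository root):

```shell
# From the root of a working copy of your branch,
# pull in everything new on the trunk:
svn merge ^/calc/trunk

# Resolve any conflicts, run your tests, then commit to the branch:
svn commit -m "Sync my-branch with latest trunk changes"
```
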

Reintegrating your branch.
First, merge any changes from the trunk, test, and commit. Then switch your working copy to the trunk. Reintegrating still uses the merge command, but this time with the branch directory's URL and the --reintegrate flag, which is needed because the branch is a combination of branch-specific updates and merges from the trunk; it tells SVN to just look at the differences between the HEAD of the branch and the trunk. As with merging in the other direction, you'll want to check that nothing broke before committing the result.
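The steps above can be sketched as a session (paths are placeholders):

```shell
# 1. In the branch working copy: sync with trunk one last time
svn merge ^/calc/trunk
svn commit -m "Final sync of my-branch with trunk"

# 2. Point the working copy at the trunk
svn switch ^/calc/trunk

# 3. Reintegrate the branch, verify nothing broke, then commit
svn merge --reintegrate ^/calc/branches/my-branch
svn commit -m "Reintegrate my-branch into trunk"
```
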

That's all I'm going to cover in this mini-series of posts.
A more complete guide can be found here

Using SVN Repositories 3: Branching and Switching

This is a big one, and something I think would have been nice to see in earlier classes.

Why you want them: Simply put, to minimize disruption. Say you want to take a module out and add new features, and doing so would leave the code broken while you worked on it. Instead of creating that disruption, create a branch and work on the feature there.

As it turns out, branches are more or less copies, and in fact they're made by copying the files into another area of the repository (usually done on the server side, where the way the files are represented makes copying much faster). But they're SVN copies. That means they share the revision history of everything before the split.
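Creating a branch is just such a copy, done directly repository-to-repository (the paths are placeholders):

```shell
# A cheap server-side copy: this creates the branch without
# downloading or uploading any file contents.
svn copy ^/calc/trunk ^/calc/branches/my-branch \
    -m "Create branch for the new feature work"
```
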

Quickly switching between branches:
The svn switch command makes a working copy reflect a different branch, effectively running an update at the same time. Using it in the root directory of your working copy makes subsequent updates and commits go to that branch, allowing you to quickly switch between multiple branches. Where it starts to get odd is that you can set different directories to different branches.
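For example (branch names are placeholders):

```shell
# From the root of the working copy: retarget everything at a branch
svn switch ^/calc/branches/my-branch

# ...work, commit (commits now go to the branch)...

# and point the same working copy back at the trunk later
svn switch ^/calc/trunk
```
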

Using SVN Repositories part 2: Revision Keywords

Revision keywords:
These matter more if you're working from the command line, although similar functionality is built into most GUI clients. HEAD is the only one you'll be interested in when sending requests to the repository; it's simply the latest commit. The other three are used when looking at the working copy. BASE is the revision number of the local copy, COMMITTED refers to the last revision number with a change to that file, and PREV is COMMITTED-1, referring to the version just before that change.
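A couple of examples of where the keywords save you from looking up revision numbers (the filename is a placeholder):

```shell
# Show what the last commit that touched this file actually changed:
svn diff -r PREV:COMMITTED foo.c

# Compare your possibly-stale local copy against the latest in the repo:
svn diff -r BASE:HEAD foo.c
```
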

Using SVN Repositories part 1: cleanup and locks

Most of us have, at one point or another, set up an svn repository for a project. Maybe it was a requirement, or was heavily recommended. But we just got by on the most basic stuff available. Commit, update. If we were lucky, we ran into a situation where we had to revert, and on occasion a team might have had to resolve a merge conflict. But what other functionality can SVN offer? And what exactly does that cleanup command do?

Well, to answer the second question, SVN updates to working copies behave similarly to a journaled file system. The client makes a private "to-do" list of the actions it's going to take. Then it makes the updates, locking each part as it works on it. Finally, it releases the locks as it finishes each part and removes it from the to-do list. If something goes wrong, that list is still there, and cleanup just goes through and finishes things, releasing any locks along the way.
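In practice it's a single command:

```shell
# If an update was interrupted and the working copy complains that
# it's locked, run cleanup from the working copy root to replay the
# unfinished to-do list and release the leftover locks:
svn cleanup
```
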

Locks: you can lock SVN files so that others can't edit them while you're doing so, but they're "soft" locks, very easy to break. Treat them more as an additional communication measure.
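Used that way, the lock comment is the real payload (the filename is a placeholder):

```shell
# Take a soft lock, with a comment your teammates will see:
svn lock design.doc -m "Editing the diagrams, should be done by 3pm"

# Release it when you're finished:
svn unlock design.doc

# Anyone else can break the lock if they really need the file:
svn unlock --force design.doc
```
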

What makes a good software engineer

I found an article here describing the qualities of a good software engineer. Several of the points are similar to ones we heard in class. Some of the points that resonated with me:

Have a "right way" of doing things, and don't let it slip. Good-quality, maintainable code is a result of this, and compromising just because you're short on time will likely end badly.

Be willing to suffer a bit trying to figure things out yourself before going for help. I think the class took this a step further with the whole "chain of command" idea, where you rely on yourself, then a little research, then your peers, then whoever's above you.

Never stop learning. This one kind of applies to all walks of life, but especially to a software engineer. There's always a new language, some new paradigm, a design pattern that makes things easier, more maintainable, or more understandable. We live in a fast-changing field, and it pays to keep up and keep ahead.

Share that knowledge. A strong team that you had a hand in building is going to have more value than you alone having that knowledge. Keeping what you know secret within your team just opens the way to over-competitiveness and backstabbing, neither of which helps the team as a whole.

Project Value Analysis

Cool projects are great, but any project in the professional world needs to have some value. This, as we heard during presentations today, is especially true with potential entrepreneurs. I found a couple of suggested points to keep in mind when considering this in an article here

1. Revenue generation from the new application
2. Cost reductions from the new application or upgrade
3. Indirect revenue generation
4. Increase in existing user base due to some new features
5. Increase in market share
6. Revenue increase in companion application

It even explains why it might make sense to consider these when you're not directly involved with the selling side of things. "Taking the business and value aspects into consideration during development enables you to view the project from the client's perspective. This in itself makes taking decisions regarding certain technical issues easier."

Writing good atomic requirements

A good atomic requirement should encompass a few key points.
First, a requirement number. This allows easy tracking
Second, a brief description and what type of requirement it is (functional, aesthetic, etc)
Third, a rationale for implementing it
Fourth, what criteria will be looked at to judge it completed
Fifth, a priority
Sixth, what user story it came from

This allows all the relevant information of a requirement to be read at a glance. Imagine a 3x5 card

Delegation and adaptors

So you have this code you want to reuse. You can extend the class, inheriting not only what you need, but everything else along with it. In Java, this also limits you, since a class can only extend one other class.
An alternative is to make a class that delegates to the reused code.

An adaptor is a class used between legacy code that has one interface and new code that wants another. The adaptor presents the new interface, effectively translating calls and allowing the legacy code to exist within the new design.
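A minimal sketch of an adaptor; the legacy sensor and the temperature interface are invented names for illustration.

```java
// Legacy class with the old interface; imagine it can't be changed.
class LegacyTemperatureSensor {
    double readFahrenheit() { return 212.0; }
}

// The interface the new design wants.
interface TemperatureSource {
    double celsius();
}

// The adaptor presents the new interface and translates calls
// through to the legacy object it wraps.
class SensorAdapter implements TemperatureSource {
    private final LegacyTemperatureSensor legacy;
    SensorAdapter(LegacyTemperatureSensor legacy) { this.legacy = legacy; }
    public double celsius() {
        return (legacy.readFahrenheit() - 32.0) * 5.0 / 9.0;
    }
}
```

New code depends only on TemperatureSource; the legacy class never has to change.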

What do unit tests and Agile Programming have in common?

Unit tests should be fast to run, and run often. Agile methods emphasize fast iterations. In both cases, the idea is that you find mistakes quickly. And you will make mistakes.

Kangaroos with missiles, or Reusing code for fun and profit...but mostly fun.

Ah, the killer kangaroo story. Maybe you've heard of it, maybe you haven't. As the story goes, work was being done by the Australian Defence Science and Technology Organisation's Land Operations/Simulation division on a simulator for helicopter pilot training. It included, among other things, herds of kangaroos, since startled animals could give away a helicopter's position. “Being efficient programmers, they just re-appropriated some code originally used to model infantry detachments' reactions under the same stimuli, changed the mapped icon from a soldier to a kangaroo, and increased the figures' speed of movement.”

“Eager to demonstrate their flying skills for some visiting American pilots, the hotshot Aussies "buzzed" the virtual kangaroos in low flight during a simulation. The kangaroos scattered, as predicted, and the Americans nodded appreciatively . . . and then did a double-take as the kangaroos reappeared from behind a hill and launched a barrage of stinger missiles at the hapless helicopter.”
It makes a good story, and shows quite nicely that even when reusing code you have to be careful. But as it turns out, there's a bit more to it, and it wasn't a mistake so much as a bit of fun: the kangaroos weren't there out of necessity, the amusing bug was discovered rather early, and the “kangaroos” were firing beach balls, the default weapon of the simulation code.

Initial story found somewhere on the net, the rest from the snopes article here

The Null Object Design Pattern

I came across an interesting design pattern recently. Say you have a service that fetches a foo object and calls foo.bar() if some condition is met. Under normal circumstances, you'd have to check whether the foo object is null. What if, instead, you could have an object that represents a null foo and just has no-effect methods? That way, the “null” object could be treated the same way as a normal object. To do this, just write a private inner class that extends foo, have its methods do nothing and return false from condition methods. The outer class keeps a static final instance of this, and any methods that are supposed to fetch a foo return this “null” object if they fail.
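A sketch of that recipe in Java; Foo and its methods are placeholder names for whatever object your service fetches, and fetch() is a stand-in for the real lookup that might fail.

```java
abstract class Foo {
    abstract boolean isReady();
    abstract void bar();

    // One shared, do-nothing instance handed out instead of null.
    static final Foo NULL = new NullFoo();

    // Private inner class: methods have no effect, condition
    // methods return false, so callers never need a null check.
    private static class NullFoo extends Foo {
        boolean isReady() { return false; }
        void bar() { /* no effect */ }
    }

    // Stand-in for a fetch that can fail; real code would look the
    // object up somewhere and return Foo.NULL when it isn't found.
    static Foo fetch(boolean found) {
        return found ? new RealFoo() : NULL;
    }
}

class RealFoo extends Foo {
    boolean isReady() { return true; }
    void bar() { System.out.println("doing real work"); }
}

class NullObjectDemo {
    public static void main(String[] args) {
        Foo foo = Foo.fetch(false);
        if (foo.isReady()) foo.bar();  // safe: no null check anywhere
    }
}
```

The caller's code path is identical whether the fetch succeeded or not, which is the whole point of the pattern.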

I found it here

Evidence Based Scheduling

Developers don't like writing schedules. They're a pain to write and often don't seem realistic. I found a system for writing them called evidence based scheduling. The complete description can be found here but here are the highlights:
1: Break things down into small tasks, no more than 16 hours each. Large timeframes like days and weeks leave what actually needs to be done extremely nebulous. With smaller timeframes you're forced to figure out what actually has to be coded.
2: Track elapsed time. Chart estimated time for tasks against how long they actually took. This gives a nice velocity metric. Keep about six months of this history.
3: When estimating the time it will take to complete future tasks, run a Monte Carlo simulation, assigning random values from the person's velocity history to each task. The estimates and their conversion to calendar dates can be automated, and averaging against the person's velocity history helps give a better estimate.
Interestingly enough, because of the way the estimates are calculated, interruptions to the coding time don't need to stop the “clock” on elapsed time. The proportion of historical velocities that include one of those interruptions is very similar to the chance that those interruptions will happen again.
4: Manage the project actively. You can see how cutting lower-priority features affects ship dates, estimate ship dates for each person, and use these to make changes.
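The Monte Carlo step can be sketched in a few lines. This is a simplified assumption of how it works: velocity is taken as estimated/actual time for a past task, and each simulated run divides every new estimate by a velocity drawn at random from that history. The numbers below are invented.

```java
import java.util.Random;

class ScheduleSimulator {
    // One simulated run: projected total = sum over tasks of
    // (estimate / randomly chosen historical velocity).
    static double simulateOnce(double[] estimates, double[] velocityHistory, Random rng) {
        double total = 0;
        for (double estimate : estimates) {
            double velocity = velocityHistory[rng.nextInt(velocityHistory.length)];
            total += estimate / velocity;
        }
        return total;
    }

    public static void main(String[] args) {
        double[] estimates = {4, 8, 16};          // hours per remaining task
        double[] history = {1.0, 0.8, 0.5, 1.2};  // past estimate/actual ratios
        Random rng = new Random();
        int runs = 10000;
        double sum = 0;
        for (int i = 0; i < runs; i++) sum += simulateOnce(estimates, history, rng);
        System.out.printf("mean projected time: %.1f hours%n", sum / runs);
    }
}
```

Running many simulations gives a distribution of ship dates rather than a single number, which is what makes the estimates honest.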

The method kind of reminds me of some of the artifacts used in SCRUM and other agile programming methods. I'm guessing something like this would be easy to integrate.

The Waterfall Development Method

One of the older development methods out there, the waterfall method specifies a single major iteration, starting with requirements analysis, then going through design, implementation, testing, release, and maintenance. It has a few advantages, such as clear start and end points for each phase, and improved quality because the specifications must be finished up front, helping prevent feature creep. There are, however, several major criticisms of this method. First, the requirements may not all be known at the start; some will only emerge as the project progresses. Also, because there's no return to previous steps, what you have when you finish a phase is what you're stuck with. In some cases, you'll hit the implementation phase and find out that there are roadblocks.

More reading here

UML class diagrams

UML, or Unified Modeling Language, encompasses several kinds of documenting diagrams, including ones representing use cases and sequences of interactions in the code, but one of the most basic, and the one that's often presented first, is the class diagram. A class diagram gives a clear, visual representation of the class hierarchy and the interactions between classes. It helps with planning a system, as well as letting you tell at a glance how things interact.

Test Driven Development

Test driven development is the practice of writing the unit tests before writing the code. This means you have to know the requirements before you start coding, and in enough detail that you can write a set of tests that encompasses the specification. This gives a few interesting benefits. First, once you have the tests written, you have a metric for the completeness of the code: the more tests pass, the closer you are. Second, during refactoring, you have a way to make sure you haven't broken the functionality under test.
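Here's the rhythm in miniature, using plain assertions instead of a test framework like JUnit, and FizzBuzz as a stand-in specification. In test-first order, the FizzBuzzTest class below is written before FizzBuzz.of exists; the implementation is then written to make the checks pass.

```java
// The implementation, written second, to satisfy the tests below.
class FizzBuzz {
    static String of(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return Integer.toString(n);
    }
}

// The tests, written first: they *are* the specification.
class FizzBuzzTest {
    public static void main(String[] args) {
        check(FizzBuzz.of(1).equals("1"));
        check(FizzBuzz.of(3).equals("Fizz"));
        check(FizzBuzz.of(5).equals("Buzz"));
        check(FizzBuzz.of(15).equals("FizzBuzz"));
        System.out.println("all tests pass");
    }
    static void check(boolean ok) {
        if (!ok) throw new AssertionError("test failed");
    }
}
```

Until the tests pass, the feature isn't done; once they do, they double as a regression net for later refactoring.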

Friday, May 6, 2011

How to write unmaintainable code

I found another "what to avoid" article cleverly disguised as "how to make things bad" related to programming. I like articles like this because they're memorable and funny, making it all the more likely for the concepts to stick. Sure, some of them are a bit out there (using a baby names book for variable names, camouflaging commented-out code), but others point to rookie mistakes (obvious commenting, ignoring coding conventions, not validating input), and some are reminders to use language features like assert.

Article here