ConDroyd, my new ALA 2011 schedule app for Android

01 03 2011

ConDroyd is a FREE Anime Los Angeles (ALA) 2011 schedule app that I wrote for Android phones & tablets. It includes the latest published schedule and all the cosplay gatherings posted on cosplay.com. It lets you star your must-attend events and then filter the display to show just those events. You can also enable reminders for your starred events so you don’t miss them.

For more info, please go to http://www.condroyd.com/

If you’re coming to ALA and have an Android, please download my app, give it a try, and let me know what you think.



Every Software Change Has Risk

11 05 2009

Almost invariably at the end of a software project, someone will insist on fixing an annoying but non-critical bug. Perhaps the browser will freeze if you hit refresh 200 times in a row, or maybe button text overlaps when you get more than 20 incoming calls during an outgoing call. The cry will go out, “We should fix this bug! It’s a simple change! It couldn’t possibly break anything!” Sometimes this will come from marketing; other times the engineer for that component will want to make the change.

When you look at the proposed fix, it will appear foolproof. Perhaps a variable didn’t get initialized properly, or a resource had a typo, or someone forgot a break in a switch statement. You may say to yourself, “There’s no way something this simple could possibly break anything, so let’s do it!”
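
To make this concrete, here’s a sketch of the forgotten-break case in Java. Everything in it (the class, the states, the method names) is invented for illustration; the point is how trivially small such a fix looks:

    // A hypothetical "zero-risk" fix: the forgotten break in a switch.
    public class CallBanner {
        enum CallState { RINGING, IDLE }

        void onCallStateChanged(CallState state) {
            switch (state) {
                case RINGING:
                    showIncomingCallBanner();
                    break;  // the one-line fix: without this break,
                            // control falls through into the IDLE case
                            // and immediately clears the banner again
                case IDLE:
                    clearCallBanner();
                    break;
            }
        }

        void showIncomingCallBanner() { /* show the banner */ }
        void clearCallBanner()        { /* hide the banner */ }
    }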

Before you agree to a claimed “zero-risk” change, read this true story from Raymond Chen at Microsoft. It demonstrates how every change has risk, sometimes in unusual and unexpected ways. During the middle of a project, there’s time to find and react to any unexpected side-effects from a change; at the end of a project, the side-effects may be serious enough that dealing with them will delay the software’s release. Do you want to be the one explaining to the CEO that the product can’t ship on time because a “zero-risk” cosmetic fix introduced a serious problem into the planned final build?

So avoid the temptation, and at the end of a project only fix bugs that actually prevent the product from shipping.



Read the Diffs, addendum

02 06 2009

Just as I was about to write a post extolling the virtues of watching your coworkers’ checkins, I saw that Eric Sink beat me to it.

One additional thing that Eric didn’t mention: Read your own diffs. To some people this may sound stupid — why would you look at the diffs after your own checkins? To answer this criticism, I would like a quick show of hands — how many of you have several unrelated changes open in your source tree on a regular basis? OK, how many of you have unintentionally checked in changes because of that?

Looking over your own diffs immediately after checkin is a quick and easy sanity check that lets you catch problems before they affect your fellow engineers or QA/QE:

  • Did you check in extra files?
  • Were there unrelated changes in any of the files that shouldn’t have been checked in yet?
  • Did you miss any files?
  • Did you forget to update a section of code? Looking at diffs makes it easy to spot patterns where, for example, 2 of 3 related functions were modified, reminding you that the 3rd function may also need updating.
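
The mechanics depend on your version control system, but the check itself is usually a single command. For example, with Subversion or Git (1234 is a placeholder revision number):

    svn diff -c 1234    # show exactly what changed in revision 1234
    git show            # show the full diff of your most recent commit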

A minute or two right after each checkin can save you from a mob of your fellow engineers out for blood because you broke the build with a checkin mistake. In my book, that makes it time well spent.



Faster Hardware or Faster Software?

12 25 2008

In his post Hardware is Cheap, Programmers are Expensive, Jeff Atwood asserts that companies should “always try to spend your way out of a performance problem first by throwing faster hardware at it.” While there are certainly cases where this is true, most of the time throwing hardware at a performance problem as the first step is the wrong thing to do.

Though it’s not clear in his post, I’m going to assume he’s talking about server software. When you’re writing software that runs on a client (desktop, laptop, smartphone, or other embedded device), upgrading the hardware is not an option. Trying to sell shrink-wrapped software that will only run on 5% of your target market’s current hardware is generally not a good idea (except perhaps if you’re selling a PC game).

On the server side, Jeff overlooks several key points. Most importantly, a poorly written app often can’t take advantage of faster hardware. A single-threaded, extremely inefficient program will run only marginally faster on a fast multi-core box with 8GB of RAM. An app that uses a very large database with no indexes also may not benefit much from faster hardware. Throwing hardware at a bad architecture will not make it better.

In addition, new hardware imposes costs beyond the initial purchase price. You need to pay sysadmins to set up and maintain the hardware. You need to pay for power and cooling, which is not cheap if you’re at a good datacenter. Because your datacenter and network get more complex as you add servers, management/overhead costs do not increase linearly either: going from one server to ten is usually less than a 10x increase in operational costs, while going from ten servers to a hundred is usually more than a 10x increase. Upgrades are also disruptive, and will impact your users/customers.

If you have a more complex application, it may not always be clear what needs to be upgraded. If your site is running slow, do you upgrade your load balancers, firewalls, webservers, appservers, database servers, or the SAN?

Jeff also overlooks the most important metric when looking at server apps: cost per user. Many dot-coms went under because their software required too much hardware per unit of revenue. In other words, if you need one $2000 server for every 200 users, and an average user generates $0.10 of revenue per month, it will take 100 months (over 8 years!) to pay back the cost of the hardware. (Remember, this is before overhead and other fixed costs.) At that rate, getting bigger doesn’t help; it just makes you lose money faster! If you could support 2000 users on the same server, you would pay back the cost of the hardware in less than a year. Now you would have a shot at getting to profitability.
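
Here’s that payback arithmetic written out as a small Java sketch, using the hypothetical numbers above ($2000 per server, $0.10 of revenue per user per month); the class name is invented:

    // PaybackCalc.java: the payback arithmetic from the paragraph above.
    public class PaybackCalc {
        public static void main(String[] args) {
            double serverCost = 2000.0;    // dollars per server
            double revenuePerUser = 0.10;  // dollars per user per month

            // Inefficient software: 200 users/server. Optimized: 2000.
            for (int usersPerServer : new int[] {200, 2000}) {
                double monthlyRevenue = usersPerServer * revenuePerUser;
                double paybackMonths = serverCost / monthlyRevenue;
                System.out.printf("%4d users/server -> %3.0f months to pay back one server%n",
                        usersPerServer, paybackMonths);
            }
        }
    }

Running it prints a payback period of 100 months at 200 users per server, versus 10 months at 2000 users per server.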

Put another way, Jeff is weighing the cost of a software engineer versus one server, but this is not a fair comparison. Most server-side applications run on many systems, with some form of load balancer to spread out the workload. The alternative to improving the code usually isn’t buying one new server; it’s buying ten, twenty, or fifty new servers (assuming the application will scale to that many servers). If you have 20 servers, a 2x improvement in speed saves at least $40,000 right off the bat (not counting the overhead savings discussed earlier); now the ROI is starting to favor improving the code before buying new hardware. There may also be multiple deployments — one for each customer, or one per department, plus one for QA, one for engineering, etc. — which further increases the number of servers that would need to be upgraded.

So when is it appropriate to buy new hardware as the first step? If your app is running on one or two servers that are a few years old, buying new hardware as the first step makes sense. If your servers have exceeded their useful life (3-5 years is usually what I plan on), replace them. If the workload on your application has increased, it’s easy to justify replacing the hardware. If upgrading your existing hardware (adding RAM or replacing the disks) will improve your app’s performance, this is an easy and relatively inexpensive first step.

Finally, it’s important to keep in mind that code optimization yields diminishing returns. For a small app it may only be worth spending a day or two on optimization; for a large app it may be worth a week or a month. Either way, after a while all the low-hanging fruit has been picked; then you need to compare the cost of upgrading the hardware against the cost of continuing to optimize the software, and make a decision. Replacing hardware can be the right thing to do; usually, though, it’s not the right thing to do first.



A Small Feature

08 27 2008

Software engineering in a medium or large organization, especially for an existing application, is a lot different than programming at a startup or on my own. It’s not inherently worse, but it can be frustrating if you walk in the door expecting to spend most of your time writing code.

Startup

  1. Idea: Wouldn’t it be nice if…
  2. Design: You keep the idea in the back of your brain for a day or two. (You could do it immediately, but in my experience it’s better to wait a bit before implementing a great idea, because frequently you will realize there’s a better way.)
  3. Implementation: Code it up
  4. Test: Make sure your change didn’t break anything

Total time: 4 hours

Medium/Large Organization

  1. Idea: Wouldn’t it be nice if…
  2. Track: File a bug to track the issue
  3. Approval: Get approval to include the feature in the current release
  4. Design Review: Write up the design and implementation of the proposed change
  5. Engineering Signoffs: Discuss change with co-workers and get signoffs as needed
  6. Cross-department Signoffs: If the feature impacts other groups, get their signoff. (If it doesn’t, you may need to get their signoff anyway to confirm they agree it doesn’t affect them.)
  7. Implementation: Code it up
  8. Engineering Test: Make sure your change didn’t break anything
  9. Internal Docs: Update internal documentation, release notes, upgrade procedures, etc
  10. Discuss with QA/QE: So that they can test your change
  11. QA/QE Test: QA/QE should test every new feature
  12. External Docs: Work with doc writer to update customer documentation

Total time: 2-3 days, assuming things go smoothly

It’s important to note that none of the steps required in a Medium/Large Organization are unnecessary or even overly bureaucratic. Any change to the code has the potential to impact a lot of people and groups. For an existing application, no change is better than a bad change, and even a good change isn’t that useful unless everyone else knows how to take advantage of it. Going through a change control process makes it much more likely that all your changes will be good changes.