Thoughts From The Trenches (Giant Brain Dump Incoming!)

A random assortment of thoughts (and rants!) from the trenches…

You Know You’re In Trouble When…

  1. You have to convene three people to figure out how to create an instance of one of the core objects in your framework.  I think this is directly related to having an anemic domain model – it just isn’t obvious which “service” you should be calling to set the properties on the object.  It seems like the whole thing would be easier if you could just call the constructor or a static initializer on the class to get an instance; this is the most basic premise of an object-oriented system (and one that falls by the wayside much too often).  Constructors are the most natural way to create an instance of an object; why not use them?
  2. Your team members are afraid to update their code (in fact, they’ll wait days before updating because it’s always a painful, time-consuming excursion to get your codebase compiling, not to mention getting your environment working afterwards).  This could be a symptom of many different ills.  In this case, the problem is threefold:
    1. The source control system is painful to use. The culprit is Accurev; it is perhaps one of the worst source control systems I’ve ever used (not to mention it’s very obscure and uses opaque terms for common source control actions).  A quick search on Dice yields 6 results for the keyword “Accurev” while “svn or subversion” yields some 786 results.  Of course, the big problem with this is that it takes an extraordinarily long time to ramp up a new addition to the team on the peculiarities of the source control system.  (I still haven’t figured out how to look at changesets, how to run “blame” on a file, or why it’s so slow…)
    2. There are no automated unit tests for the most basic and important of functionality: data access code. The lack of a structured way to unit test your core data access code makes the entire codebase seem… fragile. Changes in code that are not regression tested tend to break things, which tends to ruin productivity.  I can understand not testing code that is dependent on external libraries which are difficult to test (it really requires a lot of thinking and work to do right), but I can’t understand why any team wouldn’t test their core data access code.
    3. There is no software support for tracking breaking changes. What I mean by this is, for example, changes to a database schema or a stored procedure.  The standard way some teams “resolve” this issue is by emailing people when a breaking change is entered.  However, the problem with email is that it’s easy to forget someone and, even if you remember everyone, it’s not easy to backtrack and find all of the different email notices.  For example, if I’m in the process of writing an intense piece of code, I’ll ignore a breaking change and deal with it the next time I update.  But by that time, there could be two or three breaking changes.  It’s difficult to sort these out in email and much easier to sort them out with some pretty basic software support.  On FirstPoint, we used a Trac discussion to track breaking changes.  Developers checking in breaking changes were required to document the steps that the other developers would need to take to ensure that the environment remained stable.
  3. You’re worried about deadlines, but you roll off two people who’ve been working on your project for two years and replace them with one person who’s been working on the project for two months.  Fred Brooks’ The Mythical Man-Month covers this pretty succinctly:

    adding manpower to a late software project makes it later

    The problem is that the new resource cannot possibly have the richness of experience with the existing codebase that is required to be productive right away.  In a system that’s sparsely documented (and by that I mean there is no documentation on the core object model), it means that a new developer has to interrupt the workstream of more seasoned developers to get anything done.  This is probably okay when the going is slow and steady, but in crunch time, this becomes a big productivity issue.  I know I hate being interrupted when I’m in the zone, so I personally hate to interrupt others, but in this scenario, I have no choice since there is no documentation, the codebase is huge, and it’s not at all obvious how to get the data that I need.

  4. When there are multiple ways to set the value of a property on a core object in your model.  What I mean by this is: say I have an object called Document, and there are two or more ways to set the value of VersionId (with each way yielding a different type of value) when you use a data access object to retrieve an instance.  Again, this is a byproduct of an anemic domain model.  Because the rules of how to use the object are external to the object itself, the proper usage of the properties becomes open to interpretation, based on the specific service populating the object.
  5. Your object model is littered with stuff ending in “DAO”, “Util”, “Service”, or “Manager”.  It means that you haven’t really thought about your object model in terms of object interactions and the structural composition.  These are suffixes that I use only when I can’t think of anything better.  More often than not, when I write these classes, they truly are utility classes and are usually static classes.  If this is a big portion of your codebase, you have some serious problems.
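The anemic-model complaints above can be made concrete with a small sketch.  Document and VersionId come from the example in item 4; everything else (the class and method names, the validation rule) is hypothetical and just illustrates the contrast between setter-bag objects populated by external services and a constructor that enforces its own invariants:

```java
// Hypothetical sketch of the anemic style: Document is a bag of setters,
// so every "service" can populate it differently and none of the rules
// live on the object itself.
class AnemicDocument {
    private String versionId;
    public void setVersionId(String v) { versionId = v; } // which service calls this, and with what?
    public String getVersionId() { return versionId; }
}

// The constructor-first style: the one correct way to build an instance
// is obvious, and the invariant travels with the object.
class Document {
    private final String versionId;

    public Document(String versionId) {
        if (versionId == null || versionId.isEmpty()) {
            throw new IllegalArgumentException("versionId is required");
        }
        this.versionId = versionId;
    }

    public String getVersionId() { return versionId; }
}

public class Demo {
    public static void main(String[] args) {
        Document doc = new Document("v42");     // no service lookup, no three-person meeting
        System.out.println(doc.getVersionId()); // prints v42
    }
}
```

With the constructor in charge, there is exactly one way to end up with a VersionId, which is the point of item 4 above.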

You Can Make People Productive If…

I think the role of any senior developer, lead, or principal on a project is not to watch over everyone’s shoulder and make sure that they are writing good code.  I’ve learned pretty early on that this doesn’t work; you can’t control how people write code and if you try to, you’ll just get your panties in a twist all the time, raise your blood pressure to unhealthy levels, and piss off everyone around you.  So then the question is how can you get a group of diverse individuals with a diverse level of experience to write consistently good code?

It’s a hard question and one that I’m still trying to answer.  However, I’ve learned a few lessons from my own experiences in working with people:

  1. Make an effort to educate the team.  This means reading assignments, group discussions, and making learning a basic requirement of the job, not an optional extracurricular activity.  Pick a book of the month and commit to reading a chapter a day.
  2. Have code reviews regularly.  One of the surest ways to help get everyone on the same page is through code reviews.  The key is to keep it focused and not let the process devolve into a back-and-forth debate regarding the little things, but rather focus on the structural elements of the objects and interactions.
  3. The smartest guys on the team work on the most “useless” code.  What I mean by “useless” here is that the code doesn’t yield immediate benefits; in other words, framework code.  Typically, this involves lots of interfaces, abstract classes, and lots of fancy-pants design patterns.  The idea here is to make it easy for the whole team to write structurally sound code, regardless of skill level, by modeling the core interactions between objects and the core structure of the objects.  I think a key problem is that project managers see this as an activity with no payoff early on in the game (the most important time to establish this type of code) when in reality, it usually returns a huge ROI when done with the right amount of forethought and proper effort to refactor when the need arises.
  4. Document things…thoroughly.  One of the easiest ways to mitigate duplication and misuse is to use documentation in the code.  For framework level code, it’s even more important to have solid documentation about the fields, what type of values to expect, how the objects should be used, how instances are created, what special actions need to be performed for cleanup, etc.  Documentation done right can also help improve code consistency if you add examples into your documentation.
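As a sketch of what “documentation done right” might look like for framework-level code, here’s a hypothetical in-memory store (the class, its methods, and the id format are all invented for illustration).  The doc comments cover the points from item 4: what values to expect, how the object should be used, and an inline usage example:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * In-memory document store (hypothetical framework class, for illustration only).
 *
 * <p>Usage:
 * <pre>
 *   DocumentStore store = new DocumentStore();
 *   store.put("doc-42", "hello");
 *   String body = store.get("doc-42");
 * </pre>
 */
public class DocumentStore {
    private final Map<String, String> docs = new HashMap<>();

    /**
     * Stores a document body under the given id, replacing any previous value.
     *
     * @param id   a non-null, non-empty document id (e.g. "doc-42")
     * @param body the document body
     * @throws IllegalArgumentException if id is null or empty
     */
    public void put(String id, String body) {
        if (id == null || id.isEmpty()) {
            throw new IllegalArgumentException("id is required");
        }
        docs.put(id, body);
    }

    /** Returns the body stored under the given id, or null if absent. */
    public String get(String id) {
        return docs.get(id);
    }
}
```

Nothing here is clever; the value is that a new developer can use the class correctly without interrupting anyone.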

Writing good code is productive.  It becomes easier to maintain, easier to bugfix, easier to ramp up new developers, easier for one developer to take over for another, and it means a generally more pleasant and insightful workday, every day.  Which brings us to…

Sound Software Engineering Is Like…

Exercise!  Project managers seem to lose this very basic insight when they make the transition from developer to manager.  Like exercise, it’s always easier to put in the effort to do it regularly and eat a healthy diet than to wait until you’re obese and then start worrying about your health and well-being.  Sure, it feels like hard work, waking up at the crack of dawn and going out into the rain/snow/dark, eating granola and oatmeal, skipping the fries and mayonnaise, but it’s much easier to keep weight off than to lose weight once you’re 200lbs overweight!

Likewise, it’s always going to be easier to refactor daily as necessary and address glaring structural issues as soon as possible than to let them linger and keep stuffing donuts in your face.  It’s like carrying around 200lbs of fat: you lose agility, it becomes difficult to move, everything seems to take more effort – even simple things like climbing the stairs becomes a chore.  The lesson is to trim the fat as soon as possible; don’t let serious structural issues linger — if there’s a better, cleaner, easier way to do something, do it that way.  Every excuse you make to keep fat, ugly code around will only make it heavier and harder to maintain.

How To Reinvent The Wheel…

It seems like a pretty common problem: a lead or architect doesn’t want to use a library because it’s not “mature” enough.  What this means, exactly, still baffles me to this day.  Mature is such an arbitrary measure that it’s hard to figure out when software becomes mature.  What this usually leads to is reinventing the wheel (several times over).

When evaluating third party libraries, I really only have a handful of criteria for deciding whether I want to use one or not:

  1. Is it open source and is the license friendly for commercial usage? I’ll almost always take a less feature-rich, open source library over a more complete licensed library.  The reason is that there’s less lock-in.  I won’t feel like I’ve just wasted $1000 (or whatever) if I encounter a scenario where the library is insufficient or plain doesn’t work.
  2. Does it have sufficient documentation to get the basic scenarios working? This is perhaps the only measure of “maturity” that matters to me.
  3. Does it solve some scenario that would otherwise take the team an inordinate amount of time to implement ourselves? I hate wasting time duplicating work that’s freely available and well documented with a community of users who can help when a problem arises.  And yet, time and time again, there is no end to the resistance against using third party libraries.  Part of it is this very abstract definition of “maturity” (objections by technical people) and part of it is a fundamental misunderstanding and general laziness about different licensing models (the business folks).

That’s it.  I don’t need the Apache Software Foundation to tell me whether log4net is mature or not.  I look at the documentation, I write some test code, I use it and I evaluate it, and I incorporate it once I’m satisfied.
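The “write some test code” step can be as small as a throwaway smoke test.  Since log4net is a .NET library, here’s a hypothetical sketch using java.util.logging as a stand-in for whatever logging library is under evaluation; the idea is just to exercise the basic documented scenarios before committing to the dependency:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Throwaway smoke test: run the basic scenarios from the library's own
// documentation and see whether they behave as advertised.
public class LoggingSmokeTest {
    public static void main(String[] args) {
        Logger log = Logger.getLogger("smoke");

        log.info("basic logging works");               // scenario 1: plain message
        log.log(Level.WARNING, "param {0} works", 42); // scenario 2: parameterized message

        log.setLevel(Level.SEVERE);                    // scenario 3: level filtering
        boolean filtered = !log.isLoggable(Level.FINE);
        System.out.println(filtered ? "filtering ok" : "filtering broken");
    }
}
```

If the basics don’t work as documented, no amount of “maturity” is going to save you; if they do, you’ve just done a better evaluation than any foundation stamp could give you.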

Software Estimation And Baking Cakes…

Fine-grained software estimation is most assuredly the biggest waste of everyone’s time.  Once it comes down to the granularity of man-hours, you know that someone has failed at their job, since there is no way to estimate accurately at that level of granularity.  Once you start having meetings about your fine-grained estimates that pull in all of the developers, then you really know that you’re FOCKED.

If I handed you a box of cake batter and asked how long it would take you to bake the cake, you’d probably take a look at the directions, read the steps, estimate how long it would take you to perform all of the steps, add the baking time, and come up with 50 minutes.  Okay, we start the timer.  You’re off and cracking eggs and cutting open pouches and what not.  But wait, your mother calls and wants to talk about your trip next week.  -5 minutes.  You open the fridge and find that you’re half a stick of butter short so you run to the grocery store.  -30 minutes.  Oh shoot!  You forgot to pre-heat the oven.  -5 minutes.  Finally, you’ve got the batter mixed up and ready to bake.  The directions say to bake for 40 minutes, but you’ve already used up 40 minutes and have only 10 minutes left of your original estimate: now what?

Well, you could turn up the heat, but that’d only serve to singe the outside of the cake while leaving the inside uncooked.  You could just bake it for 10 minutes, but your cake would still be uncooked — but hey, you’d meet your estimate.  More likely than not, you’d just bake the cake for 40 minutes and come in 30 minutes late since late, edible cake is better than burnt or mushy cake.

Software estimation is kinda like that (and look, in the case of baking a cake, all of the directions and exact steps are already well defined and spelled out for you — writing software is rarely so straightforward).  It’s mostly an exercise in futility once it becomes too granular since there are just too many variables to account for.  The answer — if it must be implemented feature complete — is that it’s going to take as long as it’s going to take (and probably longer!).  For most non-trivial tasks, I feel like the only proper level of granularity is weeks.  Don’t get me wrong, I’m not saying that you shouldn’t estimate, but that you should estimate at the right level of granularity and accept that once you’ve reached your estimate and the work isn’t done, your only real choices are to:

  1. Extend the deadline.
  2. Trim the unnecessary features.

So that’s it; feels good after a brain dump!

5 Responses

  1. Happy AccuRevver says:

    As a developer at a company that adopted AccuRev a little over a year ago as a replacement for a failed SVN instance, I can tell you that either you haven’t taken the time to figure out how the tool can help you or your company has not properly implemented it.

    The symptoms and problems you mention are _precisely_ the issues that AccuRev resolves! Afraid to update code? AccuRev handles updates automatically, dramatically reducing the need to merge. Broken code base, won’t compile? Work in isolated streams yet still have the ability to deliver when your code is stable. Emailing when things are wrong? Use the visual nature of the stream browser to see what’s changed, what’s wrong, etc. You can’t figure out how to "blame" or look at Change Packages? Then you haven’t grasped what AccuRev takes only two hours in an online course to teach average folks.

    The terms they use are not obtuse; they have to be different because the methodology is different (and superior) from "common" source control. Again, takes just a few minutes to learn. AccuRev is so far ahead of rudimentary tools like Subversion that there really isn’t any comparison.

    I can appreciate your brain dump here, but you’re way off base with regards to AccuRev.

  2. Chuck says:

    AccuRevver,

    I think that the fact that the number of job listings on Dice that include "AccuRev" as a requirement is in the single digits says a lot about how superior it really is 😉

    Some of the issues that I have with it include:

    * No listing of changesets. This is perhaps one of the most useful features of a source control system.

    * Many common commands (like file rename) require a command line operation.

    * While integration with an issue tracking system is usually a good thing, the issue tracking system itself is so severely neutered, I’m not sure why anyone would want to use it. The text entry box is rather useless, incapable of adding formatted text, code blocks, etc. The commenting system is non-existent, so developers and testers end up modifying the description and adding "[CHUCK 8/20] Did you try this?" followed by "[STEVE 8/21] No, not yet, let me try". This doesn’t make sense. Try Trac and see how much better an issue tracking system can be.

    * It’s slow. Very slow. SVN over HTTPS (across several states) is faster than AccuRev over the local network.

    * There aren’t many tools that integrate with it. Because of the popularity of source control systems like SVN and Mercurial, you’ll find that there are many tools that extend and integrate with them.

    * It’s nearly impossible to track external files because there’s no way to exclude files from the externals. Using TortoiseSVN with SVN, for example, you can set exclude paths for directories like \Debug or \obj or \bin (directories for your binary output). Every time I do a check-in with new files, it’s a challenging 10 minute ordeal to find them in externals and add them to the depot.

    * The lack of expertise in AccuRev leads to severe productivity issues since on any given project, especially if you bring on consultants, only a very small number of people actually know what they are doing and how to handle the "advanced" use cases (AccuRev seems to have a very low bar for what it considers "advanced"). It means that certain resources become bottlenecks for resolving issues with the source control system on a near daily basis.

    * Even when using a separate stream, upstream changes get pushed down automatically. While this is handy, it also means that if you are working on an experimental/refactor branch with another dev separate from the main line of development, you may end up with a lot of conflicts when you update. In SVN and Mercurial, this type of merge is an explicit merge from one branch to another.

    * The UI is overly complex with way too many buttons and icons that make no sense. Who would intuitively think that a green lightning bolt means update?

    * When you do an update or commit, there isn’t a handy listing of what changed that you can use to view files, diff, view the log of the file, etc from a single list.

    * There doesn’t seem to be support for line-by-line, SVN-style "blame" functionality. Having lived with this in SVN, using a source control system that doesn’t support it is very annoying – it becomes difficult to figure out who put in a breaking change. Maybe this functionality exists? But the fact of the matter is that in working with three consultants from my company who have been on this project for two years, none of them can tell me how this is done. No one knows how to figure out who wrote a particular breaking change. To some degree, it almost doesn’t matter if it supports this feature or not. If it’s hard to use or obscure, it might as well not exist.

    These are just some of the complaints off the top of my head. I do agree that AccuRev is probably better for some scenarios and specific roles (like an integration manager), but for developers, it’s a kludgy, heavy, hard to use, poorly designed pain in the behind.

  3. Damon says:

    Hi,

    Open source systems are the norm in the industry, so it isn’t surprising that there are more listings on Dice.

    If you want to see the change sets, just run history on either a file, stream, or depot as you need.

    Rename is available via right-click or the little RN icon in the GUI. What else do you go to the cli for?

    It shouldn’t be slow. Let your admin know about it or contact AccuRev support directly and they will help to troubleshoot.

    What would you like to integrate with?

    You can exclude external files with ACCUREV_IGNORE_ELEMS. Check the docs.

    The whole point of AccuRev is inheritance. But, if you don’t want files from other streams, just set a time basis on your stream and you will block it.

    A "what changed in my update" feature is a good idea. I’ll add it to the backlog.

    If you want "blame," that is the "annotate" command in AccuRev, available from the command line and the GUI right click. In the GUI, it supports a real-time slider which changes the content as you slide the slider.

    You may also want to look into which version you are currently on. We are always working hard to satisfy our users, check out the latest version! You may also want to check out our new web interface.

    If there are particular features you are looking for, a great place to write to make sure we hear you is on our forums (Support & Services/User Forum).

    Cheers!

    Damon

  4. Chuck says:

    Damon,

    While your point is valid, that SVN will have a larger userbase, I think you’ve missed _my_ point: having a small userbase, even if it _were_ a superior product, means that it increases ramp-up time for new resources and creates resource bottlenecks in day-to-day development since few people are adept at using AccuRev and understand the peculiarities of the system.

    The larger userbase that SVN has means that it’s easier to onboard a new resource and that that resource is more likely to be proficient in SVN usage.

    With regards to integration of SVN, we need only look at TortoiseSVN (and likewise, TortoiseHg for Mercurial — both much easier to use and more intuitive than the desktop client for AccuRev) and Trac (for both SVN and Mercurial, with a wide variety of plug-ins to support other source repositories). Other systems that AccuRev may plug into (say, CruiseControl.NET) suffer from the fact that there is less community usage of such integration and thus less documentation when something goes wrong or an advanced scenario arises. Why put your team through that?

    Even more basic than that is usability: AccuRev lacks it. From the ambiguous buttons (all the same size? the most commonly used buttons need to be organized much better than they are now) to the straight out of 2001 look-and-feel as well as the overly simplistic and nearly useless ticket description/editor (come on, how hard can it be to add support for bold, italics, color, and code snippets? how about the ability to add comments outside of the description text box?), I don’t see much to like about AccuRev as a developer (again, I can see it as a much more attractive product from the point of view of an integration manager or the deployment team).

    Shell integration would be a nice addition as well. Why make people jump through the desktop client instead of a much more natural Explorer integrated solution?

  5. Damon says:

    Hi Chuck,

    I just realized that I forgot to thank you for spending the time to provide feedback at all. Thank you.

    Can you elaborate on the button size issue? I’m not following.

    You’ll get a better editor in the near future, but in the web interface first. Check it out, it also has our new graphing and charting features.

    There is a shell integration; you can get it from our downloads page. It is under Other Downloads, AccuBridge, AccuBridge for Windows Explorer.

    We are currently adding tons of new features via the web interface every 2-3 months. Your input will definitely be part of that process.

    By the way, which IDE do you use?

    Cheers!

    Damon