Technical Debt or Evolutionary Design

I’ve had several interesting discussions over the last few weeks about the topic of “Technical Debt”.  Setting aside the negative connotation of the term, it refers to a software design that was not fully implemented to meet a given set of requirements.  Another way to put it is that the developers “cut corners”.

Wikipedia defines Technical Debt as follows:

Technical debt (also known as design debt or code debt) is a neologistic metaphor referring to the eventual consequences of poor or evolving software architecture and software development within a codebase. The debt can be thought of as work that needs to be done before a particular job can be considered complete. As a change is started on a codebase, there is often the need to make other coordinated changes at the same time in other parts of the codebase or documentation. The other required, but uncompleted changes, are considered debt that must be paid at some point in the future.

Source: https://en.wikipedia.org/wiki/Technical_debt

In contrast, the Agile methodologies prescribe that software should be built in an evolutionary fashion.  This suggests that designs are crafted for only what is required at the time the software is written.  

This leads to an interesting crossroads.  In the early phases of building a website, it is acceptable to optimize performance for a few hundred to a few thousand users, because that is the requirement at the time.  The development team does not know the full scope of the traffic the website will eventually receive, and performance can be tuned at a later date.  As visitor traffic patterns change, performance concerns become more noticeable.

How should an agile team address these concerns?  Is it technical debt, or have the requirements for the system changed?

MongoDb scalability?

I’m attempting to use MongoDb as a session manager for ASP.Net, and a few things stand out immediately.  Consider:

  1. ASP.Net session state is not stored or queried with set-based operations.  All queries from the web server are key-based, which makes a NoSQL storage option ideal.
  2. MongoDb keeps its working set in RAM, which makes queries lightning fast.
  3. MongoDb can be sharded, allowing session data to be spread across multiple servers as your application grows.
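The key-based access pattern in the first item is worth dwelling on: every session read and write is a single-document lookup or upsert by session id.  Here is a minimal sketch of that pattern, using an in-memory dictionary as a hypothetical stand-in for the MongoDb sessions collection (the session id and state values are made up for illustration):

```csharp
using System;
using System.Collections.Concurrent;

// In-memory stand-in for the sessions collection, keyed by session id.
// With the real MongoDb driver, these would become single-document
// upserts and finds against a "sessions" collection.
var sessions = new ConcurrentDictionary<string, string>();

// Writing session state: a single-key upsert, no set-based logic.
sessions["abc123"] = "cart=3 items";

// Reading session state: a single-key lookup.
sessions.TryGetValue("abc123", out var state);
Console.WriteLine(state);
```

Because every operation touches exactly one document by key, the store never needs joins or range scans, which is exactly the workload a document store handles well.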

The scalability of MongoDb lies in the 3rd item there – sharding.  When we refer to sharding, we mean splitting the contents of a database, or even an individual table (a collection, in the case of MongoDb), across multiple server instances.

The challenge with MongoDb is that the database service has a single write-lock.  This means that in high-write scenarios there can be contention for writing to the data store.  The Gilt Groupe recommends keeping requests under 50/second:

http://tech.gilt.com/post/32734187989/mongodb-performance-at-gilt

Still others demonstrate performance that indicates we should keep our writes under 25/s:

https://whyjava.wordpress.com/2011/12/08/how-mongodb-different-write-concern-values-affect-performance-on-a-single-node/

In my state server scenario, we need to keep all writes “safe” – that is, they need to be written so that the next read will find the changes.  Using the numbers from the last benchmark, we should be able to see writes to a single MongoDb node in the neighborhood of 5000 per second.

In a very high-throughput application with several hundred thousand users accessing it concurrently, we will not be able to keep up with a single Mongo node.  This is where sharding will help us.  To manage 100,000 hits per second, we should distribute those writes across a 20-node MongoDb deployment.
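The arithmetic behind that node count is straightforward; here is a quick sketch using the figures above (roughly 5,000 safe writes per second per node, and a 100,000 writes-per-second target):

```csharp
using System;

// Figures from the discussion above: ~5,000 safe writes/second per node,
// and a target of 100,000 writes/second across the cluster.
const double safeWritesPerNodePerSecond = 5000;
const double targetWritesPerSecond = 100000;

// Round up: a fractional node still means one more physical shard.
int nodesNeeded = (int)Math.Ceiling(targetWritesPerSecond / safeWritesPerNodePerSecond);
Console.WriteLine(nodesNeeded); // 20
```

Note that this assumes writes distribute evenly across the shards, which in practice depends on choosing a good shard key.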

Is this good performance?  Is this a valid scalability solution?  I look forward to your comments 

Forgotten c-sharp language features: implicit operator

Note:  This post has been added to The Code Project at: http://www.codeproject.com/Tips/452213/Forgotten-Csharp-language-features-implicit-operat

Every now and again, I run into a situation when coding that makes me think: “Isn’t there a native c-sharp feature that does this?”  … and today I hit another one of those situations.

I was writing some code to isolate the conversion of one class to another.  I knew this conversion was going to be done in several locations throughout my codebase, so I wanted to write the conversion once and re-use that function.

My initial impulse was to write a static class and use a .Net 3.5 extension method similar to those we see when using LINQ.  I wrote some code that looked like:

public static class Converters {
    
    public static Receipt AsReceipt(this Order myOrder) {

        return new Receipt {
            // set properties in the Receipt from the Order object
        };

    }
}

I could then call this code as follows:

Receipt thisReceipt = myOrder.AsReceipt();

Clean… simple… easy.

But then it hit me: I can do this automatically with the implicit operator keywords.  Here is the link to the technical article on MSDN describing the feature:

http://bit.ly/implicitCsharp

To summarize that article: this feature allows us to define, with a static method, how to implicitly convert to or from the enclosing user-defined type (a class).  Sweet!  I moved my code from the static “Converters” class back into the Receipt class and it now looks like:

public class Receipt {

    // other properties and methods...

    public static implicit operator Receipt(Order myOrder) {

        return new Receipt {
            // set properties in the Receipt from the Order object
        };
    }
}

and now my code to perform the conversion looks like this:

Receipt thisReceipt = myOrder;

That made my code so much easier to manage, without having to litter static classes with conversion functions or use interfaces throughout my code.  As an additional benefit, my Receipt object is now aware of how to convert itself to and from other types.  I prefer the isolation of this conversion logic, as it keeps me from searching my codebase to determine how best to convert from one custom type to another.

If you would prefer the conversion to be visible at the call site, there is also an explicit operator keyword that you can use in the same fashion.  If you were to mark the conversion function as explicit operator, the usage statement would then look like this:

Receipt thisReceipt = (Receipt)myOrder;
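For reference, the declaration itself differs from the implicit version only in the keyword.  Here is a small self-contained sketch (the Order and Receipt bodies are placeholders for illustration, not the ones from my project):

```csharp
using System;

var myOrder = new Order { Total = 42.50m };

// With explicit operator, the cast is required; assigning without
// the cast is a compile-time error.
Receipt thisReceipt = (Receipt)myOrder;
Console.WriteLine(thisReceipt.Total);

public class Order
{
    public decimal Total { get; set; }
}

public class Receipt
{
    public decimal Total { get; set; }

    // Identical to the implicit version, except for the keyword:
    public static explicit operator Receipt(Order myOrder)
    {
        return new Receipt { Total = myOrder.Total };
    }
}
```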

We still maintain a very simple syntax that is descriptive of the code operation desired.  

Conclusion

These operator keywords are a powerful tool, one that many of us don’t know about or we forget that they are available to us.  Let’s try to make better use of these native language features, as they will help us improve our class design by keeping all of our object’s conversion concerns in one location.

I have a few more of these ‘forgotten features’ that I’ll highlight over the next few weeks.  I hope you check back to catch some of the other language features that I intend to discuss in the future.

Until next time, may all your code compile, and all of your unit tests pass!

Should I submit to CodeMash?

This is a tough one for me…  I am not usually one to shy away from the spotlight, but I saw Jim Holmes’ post on Twitter that CodeMash is now open for speaker submissions, and I got nervous.

CodeMash is a different conference for me.  I have only ever spoken at user group and CodeCamp events… this would be the first “pay to attend” conference I have submitted to.

Additionally, the timing for CodeMash is goofy for me.  My wife just started back to college full-time, and my daughters both attend elementary school.  CodeMash is scheduled for the middle of the first week of January, in Ohio… a good eight-hour drive from home.

I’m nervous about this one… a first for me.  I think I gotta do it… after all, I’ve already taken a shot at Tech Ed 2012.  How am I going to feel when THAT call for submissions goes out?

You may have heard of me before…

Okay okay… I’ve made the rounds on the internet over the past few weeks:

I’ve been discussing all kinds of good things surrounding the Windows 8 release and coding applications with WinJS, particularly using my open source tool QUnitMetro.

For the next few weeks, I am completing a pair of Windows 8 applications and preparing a series of presentations that I will be giving at the following events:

It’s going to be an absolute thrill to be at these three events, where I know I’ll run into some of my favorite voices in the .Net community and get to discuss all of the new bits from Microsoft…

Additionally, I’ve had several inquiries about unit testing strategies…  After my initial ‘test screencast’ a few weeks ago, I’ve started outlining a series of ‘unit testing recipes’ that I’ll be making available.  Depending on interest, we’ll see where these screencasts land  😉

I hope to see you at one of my events in September.  Thanks for reading!