Personal Asp.Net with IISExpress builds

Recently, I ran into an issue where my team wanted to run their local Asp.Net project builds so that each developer's IISExpress instance was visible to the other team members.  This is a non-trivial setup: a simple checkbox in the Asp.Net project properties window activates IISExpress to host the project, but it does not make the site visible outside of the local machine.

To configure IISExpress to answer to other names (besides localhost) the following steps are needed:

From an administrative command prompt issue the following command:  

netsh http add urlacl url=http://mymachinename:50333/ user=everyone

“mymachinename” is the hostname that you want IISExpress to respond to, and 50333 is the port that Visual Studio assigned to my project.  Your port may be different; change this value to match the port that Visual Studio assigned to you.
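If you want to verify or later undo the reservation, the companion netsh commands look like this (run from the same administrative prompt; the hostname and port are the placeholder values from above):

```
:: list the current URL reservations to confirm the entry was added
netsh http show urlacl

:: remove the reservation when you no longer need it
netsh http delete urlacl url=http://mymachinename:50333/
```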

In the applicationhost.config file at %userprofile%\Documents\IISExpress\config\applicationhost.config, find the website entry for the application you are compiling and add a binding like the following:

<binding protocol="http" bindingInformation="*:50333:mymachinename" />
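In context, the new binding sits alongside the default localhost binding inside your site's bindings element. The site name and id below are illustrative placeholders; keep whatever values your applicationhost.config already contains:

```xml
<site name="MyWebProject" id="2">
  <!-- ... existing application/virtualDirectory entries ... -->
  <bindings>
    <!-- default binding created by Visual Studio -->
    <binding protocol="http" bindingInformation="*:50333:localhost" />
    <!-- new binding so IISExpress also answers on the machine name -->
    <binding protocol="http" bindingInformation="*:50333:mymachinename" />
  </bindings>
</site>
```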

Finally, restart IISExpress to force the web server to reload the settings you have just changed.  This should allow the service to answer on the appropriate host name.

However, if you would like Visual Studio to launch and browse to “mymachinename” instead of localhost, you need to change one more setting in your web project’s properties.  In the “Start Action” section, choose ‘Specific Page’ and key in the appropriate full machine name and port combination in the textbox to the right.

This preference entry is saved in the YOURWEBPROJECT.csproj.user file, not in the project file itself. As a result, the .csproj file can still be checked in to your source control without impacting the other members of your team.

Data Access Layer decisions

I am now more than three months into the development of a greenfield application for my employer.  In October, after two weeks of heated discussions and gnashing of teeth (jk), we decided on the Entity Framework as the data access layer for this project. This decision came about for the following reasons:

  • Abstraction of database access, allowing us to focus on the business entities we need to construct
  • Automatic generation of SQL to access the database, significantly reducing the amount of code we need to hand-write
  • Through a simple repository model and partial class structure, we’re empowered to easily abstract the data access and unit test our service layers
  • This is a known tool distributed by Microsoft that any active .Net developer should have some familiarity with

This decision has allowed us to build our application without needing any hand-written SQL stored procedures or functions.  All of our business logic is encapsulated in a service layer that is 98% covered by unit tests.  The assurance provided by this coverage gives our team the confidence to make changes to the data access layer without significantly impacting the rest of the application.

Our application lives in a hosted web application model that is attempting to achieve 99.99% uptime.  To meet these needs, we have redundant web servers and redundant application servers, but only one database.  As a software analyst, I want to protect the database’s processor and memory from unnecessary work.  To that end, I want to move as much business logic as possible away from this single point of failure so that it can focus on what it does best: storing and retrieving data.

When we do need something more sophisticated to access data from the database, we can augment our model with stored procedures that populate and maintain our entities.

Our project team has profiled our Entity Framework code and has not found any significant “n+1” query issues.  I find it very hard to believe that anyone would want us to hand-write additional SQL code that then needs to be maintained, when Entity Framework is already generating and maintaining that code for us automatically.

AJAX performance and Session Management

Asynchronous loading of content is an effective way to make web pages with slow-loading content appear to run faster.  However, beware of abusing this technique, as you may end up making your initial problem worse.

Consider a web page with two div elements that will each be populated by a jQuery ajax load call.  If you simply list these two load commands one after another in a javascript function, the browser will submit both requests back to back, effectively simultaneously.
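A minimal sketch of that setup (the element ids and controller actions here are placeholders, not from a real project):

```javascript
// Both requests are issued back to back; the browser does not wait
// for the first load to finish before sending the second.
$(function () {
    $("#salesChart").load("/Dashboard/SalesChart");
    $("#recentOrders").load("/Dashboard/RecentOrders");
});
```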

If you are like me, and don’t modify your Asp.Net MVC controllers too much from their boilerplate defaults, you are getting burned by this configuration.  By default, the two requests from jQuery will request read-write access to their server-side session. These requests are translated as GetItemExclusive calls to the Session provider, which block other requests for session while they are operating.

The first request will ask for exclusive access to session, succeed, and return promptly.  However, it will block the second request: when the second request is processed and asks for exclusive access, it will be denied because the first request still holds the lock.  In Asp.Net, the attempt to acquire session will block for half a second and then re-attempt to acquire the lock. (MSDN:  http://msdn.microsoft.com/en-us/library/aa479024.aspx)

How should we work around this ‘limitation’?  Use an Asp.Net controller decorated with the SessionState attribute.  When you mark the controller that will generate this content with a ReadOnly or None session state behavior, the requests Asp.Net makes for session will NOT block each other.  The result you should see is near-simultaneous return of your two requests.
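Applied to a hypothetical controller that serves those partial views (the controller and action names are illustrative), the attribute looks like this:

```csharp
using System.Web.Mvc;
using System.Web.SessionState;

// ReadOnly session state lets concurrent requests to this controller
// read session without taking the exclusive lock that serializes them.
[SessionState(SessionStateBehavior.ReadOnly)]
public class DashboardController : Controller
{
    public ActionResult SalesChart()   { return PartialView(); }
    public ActionResult RecentOrders() { return PartialView(); }
}
```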

Windows Phone testing begins

I have been struggling with this decision for months.  Finally today, I’ve taken the plunge, and I have a Windows Phone.  A Verizon HTC Trophy with Mango is now in my possession.

I’m still carrying my iPhone 4S, but the Windows Phone is just so much easier to use for social interactions with my family and friends.  Honestly, though, this is not the real reason I purchased the phone.

I have several applications that I have written that are just about ready to publish in the Windows Marketplace.  However, I want to perform some final tests on a real device before I publish.  Finally, I also have several ideas for an XNA game or two… and would like to build and test those over the next few months.

For Christmas, I acquired and have been reading XNA Game Studio books.  I am really enjoying what I am seeing in XNA, and I will be sharing some of my trials and findings over the next few months as I build and release a game or two.

Finally, look forward to the next steps in the CQRS series tomorrow.  This next phase requires a bit more development and fact checking.  I don’t want to present code and strategies that are not accurate.

ORM vs. CQRS/ES – Part 7 – Intro to CQRS

Before we plunge into the CQRS approach, a brief description of the CQRS architecture is in order.

CQRS stands for Command Query Responsibility Segregation. For the common developer like you or me, this can be simplified to mean a 2-datastore architecture. In this architecture, we construct a read-only datastore and a write-only datastore. Of course these two are not ONLY for read and write; after all, the system needs to populate and maintain these stores. The naming is intended to define what the end-user’s interaction with each datastore is.

Before I go too much further, if you want the complete details from THE authoritative source on the topic, I highly suggest reading two thought leaders:

  • Udi Dahan – Founder of NServiceBus, and all around nice guy
  • Greg Young – World Traveller and Founder of CQRS Info.

By separating the reads from the writes, we enable some significant optimizations in our architecture. The read-only datastore can be configured to mirror the layout and access patterns that end-users prefer when they consume our data. The write-only datastore can be optimized for fast append-only access, with a minimum number of indexes, allowing for very fast writes.

Consider this: when users access your website, they are primarily reading data from your data store. In many environments, users write to the data store far less frequently, and when they do, it is acceptable for that write to take a second or two to complete. Meanwhile, Google now considers page speed in its search ranking algorithms. So, let’s optimize for lightning-fast read access, provide acceptable write access, merge the two, and tell our end-users when the last write occurred so they can make appropriate judgements about how to work with our data.

With this architecture, several choices are afforded to us because we are no longer tied to keeping our read and write data in the same structure. The read-only datastore can even be implemented as file-stores on disk, or we can consider NoSQL offerings like MongoDb from 10gen. On the write-only side, the strategy we will be discussing is the “ES” or EventStore strategy. In particular, I will be walking through the usage of Jonathan Oliver’s EventStore library.

How do we tie these two datastores together? How do we ensure that they are consistent? This is the job of a messaging platform like NServiceBus or MassTransit. The scope of using these tools is beyond this discussion, but may be the target of a future blog post.

In the next post, we will begin constructing a Domain-Driven-Design class architecture to support the write-access to the data store. This will expose the Event objects to be stored. The composition of the read-only store will follow, and we will pull the three parts together in the following post.