Saturday, March 12, 2011

Documenting Software Architectural Requirements

Download Example

Background

Gathering requirements for a software project can be challenging. Maintenance projects for familiar products are usually easier than projects for brand-new products. After searching through many books and articles, and after gaining some practical experience, I have developed a direct and simple method of documenting architectural requirements.

Note: Hands down, the best article I have read on this subject is by Peter Eeles at IBM. His article can be found at http://www.ibm.com/developerworks/rational/library/4706.html#iratings. Ultimately, Eeles' process is a quality-attributes approach to gathering requirements (see some of the links below).

Whether or not you use Eeles' approach to gathering requirements, I believe this documentation method is fairly flexible and natural, especially for those using quality attributes.

Format

You will notice in the downloadable example that each requirement takes on the templated form of:

1. Statement:
     a. Question [optional]:
     b. Answer [optional]:
     c. Quality Attributes:
     d. Architectural Realization:
     e. Goal Metrics [optional]:

Explanation

What exactly is this structure about? Let's go over each line.

Statement: The statement is intended to be just that: a statement. Generally, when discussions occur about what the application should or shouldn't do, various stakeholders make statements. Those statements should be documented; the rest of the format is derived from the statement. Of course, all statements should be reviewed, revised, and approved by all key stakeholders. It is then up to the software architect to make sure that every statement makes sense technically.

Questions & Answers: Although these are optional, they may help facilitate clarification or future discussion. Note that we are not limited to one question and one answer; feel free to add as many as necessary.

Quality Attributes: Which set of quality attributes we use, and why, is beyond the scope of this article (I used Eeles' set in my example download).

Architectural Realization: As you will notice in the example document, architectural realizations are references to architectural design decisions shown later on in the document.

Goal Metrics: Depending on your company's policies regarding development requirements and post-mortem analysis, we may need to document measurable goal metrics to inform further design decisions and, ultimately, to measure the success of the project.
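For illustration, here is what a completed requirement might look like. The statement and details below are invented for this example and do not come from the downloadable document:

1. Statement: The application must be available to customers 24 hours a day, 7 days a week.
     a. Question: Are short maintenance windows acceptable?
     b. Answer: Yes, up to one hour per month, announced in advance.
     c. Quality Attributes: Availability, Reliability
     d. Architectural Realization: See the design decisions for load-balanced web servers and database failover.
     e. Goal Metrics: 99.9% measured uptime per calendar month.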

Justification

Why do I suggest this kind of format for documenting requirements? Hopefully, the straightforward statements will not scare off non-technical stakeholders while still providing a technical bridge from those statements to the developers. Furthermore, when the "Why did we do that?" questions arise, architectural and code decisions can be traced directly back to their respective requirement statements. Finally, the intent is that documenting a formal (and hopefully approved) statement reduces ad hoc scope creep; or, if scope creep does happen, we can at least show in the post-mortem how many changes occurred after initial development began.

Thursday, March 10, 2011

Weakly Typed vs Strongly Typed Objects

What is the difference between weakly typed and strongly typed objects? When should I use weakly typed objects or strongly typed objects? Within the context of the C# language, I hope to give some insight on this subject. I think the best way to understand these contrasting types is to compare a few code examples.
Example 1
string myStringValue = "43.3";  // Weakly Typed
double myDoubleValue = 43.3d;   // Strongly Typed
This first example is fairly simple. What is the difference? Obviously one variable is of type string and the other is of type double. But what makes the string type weak and the double type strong? The double type is more restrictive than the string type. Every possible double value in myDoubleValue can also be stored as its string equivalent in myStringValue, but not every possible value of myStringValue can be stored in myDoubleValue. We can generalize this understanding into the following statements:
  1. Strongly Typed objects are more restrictive than Weakly Typed objects.
  2. All possible values for a Strongly Typed object can be represented in a Weakly Typed object, but not all possible values for the same Weakly Typed object can be represented in the Strongly Typed object (see the snippet below).
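For instance, a small illustrative snippet:

double myDoubleValue = 43.3d;
string myStringValue = myDoubleValue.ToString(); // Every double value has a string representation

string otherStringValue = "nevermind";
double parsed = double.Parse(otherStringValue);  // Throws a FormatException: not every string is a double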
Example 2
// Weakly Typed
Dictionary<string, double> myDictionary = new Dictionary<string, double>();

myDictionary.Add("A", 43.3d);
myDictionary.Add("B", -24.0d);
myDictionary.Add("C", 1000.45d);


// Strongly Typed
MyClass myClass = new MyClass()
{
     A = 43.3d,
     B = -24.0d,
     C = 1000.45d
};
In this example we want to access stored values for A, B, and C. myDictionary is the weakly typed object and myClass is the strongly typed object. We get and set values in myDictionary via a string key; with the MyClass type, we get and set values via class properties. Types like myDictionary are fairly common in the .NET Framework libraries; a few examples include ASP.NET's Session, several ADO.NET objects like DataSet, and the XmlDocument class.
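To make the weak typing of such framework objects concrete, here is a small sketch of my own using the non-generic Hashtable (ASP.NET's Session behaves similarly, storing its values as object):

using System.Collections;

Hashtable table = new Hashtable(); // Weakly typed: keys and values are just object
table["A"] = 43.3d;                // The double is boxed as object
double a = (double)table["A"];     // A cast is required; storing the wrong type fails only at runtime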

When should I use weakly typed objects or strongly typed objects?

In most cases you should avoid weakly typed objects. Why?
Case 1
Bad: Suppose we used the weakly typed object myStringValue from Example 1 in our application. Now, somewhere deep in our code, myStringValue is set to a non-double value such as "nevermind." Why was it set to that? Who knows; strange code has a way of sneaking in. When will we learn that this was an error? Hopefully while testing the running application, but there are no guarantees. You may well discover the error only when a runtime exception is thrown in production.

Good: Instead, we use myDoubleValue, the strongly typed object. Why is this better? If any code sneaks in and tries to set myDoubleValue = "nevermind", we will catch the error at compile time instead of at runtime. Catching errors at compile time is always preferable to catching them at runtime: fewer errors in production means lower costs and fewer management headaches (that could be a lengthy discussion on its own).
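To see the compile-time check concretely:

double myDoubleValue = 43.3d;
myDoubleValue = "nevermind"; // Compile-time error CS0029: cannot implicitly convert type 'string' to 'double'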

Case 2
Bad: What if, in multiple places in our code, we have logic like if (myDictionary["A"] > 0.0d)...? What happens when we no longer use "A"? Again, we may not know about any errors until runtime.

Good: In contrast, if we used the MyClass type and had logic like if (myClass.A > 0.0d)..., then when we remove the property MyClass.A and compile, we will immediately see errors everywhere myClass.A is accessed.

Conclusion

Weakly typed objects do have their place. For instance, within the scope of designing a framework, I believe the designers of the .NET Framework made good decisions in leaving some of the runtime objects weakly typed (that would also be a lengthy discussion). In general, though, you should avoid weakly typed objects when you can; there is just no need to be too rigid about it.

One more thing (a practical example): typically, in my ASP.NET applications, I keep a single session variable, such as Session["MySession"] = new MySession(). This limits my weakly typed Session access to a single point (one spot in my code where I get or set the Session object). I then define my MySession class as a composite structure containing all the other variables and objects pertaining to the user's session. This keeps most of my code strongly typed, with very limited weakly typed access.
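Here is a minimal sketch of that pattern (the MySession properties and the SessionHelper wrapper are illustrative assumptions, not an established API):

using System.Web;

public class MySession
{
    // Composite structure for everything session-related (properties are examples)
    public string UserName { get; set; }
    public int ItemsInCart { get; set; }
}

public static class SessionHelper
{
    private const string SessionKey = "MySession";

    // The single point of weakly typed Session access in the whole application
    public static MySession Current
    {
        get
        {
            MySession session = HttpContext.Current.Session[SessionKey] as MySession;

            if (session == null)
            {
                session = new MySession();
                HttpContext.Current.Session[SessionKey] = session;
            }

            return session;
        }
    }
}

Everywhere else the code stays strongly typed, e.g. SessionHelper.Current.ItemsInCart++.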

LINQ Performance

Download Example

(UPDATE 11/30/2011: I made some clarifications to this article based on some healthy criticism from my friend at http://ox.no/posts/linq-vs-loop-a-performance-test)

LINQ Deferred Execution (a very brief explanation)


Once understood, LINQ offers a quick way to write query-language statements (like SQL) in object-oriented code. However, it is critical to understand LINQ's deferred execution: the query does not run until its results actually need to be evaluated. I highly recommend http://blogs.msdn.com/b/charlie/archive/2007/12/09/deferred-execution.aspx as a good read.
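As a minimal illustration (this snippet is my own and is not part of the downloadable test project):

List<int> numbers = new List<int> { 1, 2, 3 };
IEnumerable<int> query = numbers.Where(n => n > 1); // Nothing has executed yet

numbers.Add(4); // Modify the source after defining the query

foreach (int n in query)  // The query executes here...
    Console.WriteLine(n); // ...and prints 2, 3, 4 because the item added later is included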

Performance Study Explanation

The general approach of this study is to load up some objects with random data, then compare the resulting performance of LINQ iterations versus for loop iterations. I have included one example where iterating with LINQ is a good choice (Performance Test A) and another where it is a bad choice (Performance Test B).

Before showing the test examples, it should be noted that the working sets are preloaded with 10,000,000 dummy objects, as shown in the GenerateList() method below. The entire source code for the tests can be downloaded here.

public static List<DummyModel> GenerateList()
{
    int i;
    List<DummyModel> list = new List<DummyModel>();
    Random rand = new Random(unchecked((int)DateTime.Now.Ticks));

    for (i = 0; i < 10000000; i++)
        list.Add(new DummyModel()
        {
            A = rand.NextDouble()
        });

    return list;
}


Performance Test A (the Good LINQ)

Performance Test A Summary: On my machine, set (1) performed better than set (2) by about 200 milliseconds, and sets (1) and (2) were iterated the same number of times.

[TestMethod]
public void PerformanceTestA()
{
    IEnumerable<DummyModel> queryResults;
    IEnumerable<DummyModel> staticResults;
    List<DummyModel> list = PerformanceUtility.GenerateList();
    DateTime start = DateTime.Now;

    // 1.
    queryResults = list.Where(d => PerformanceUtility.PickDummy(d)); // Execution deferred
    PerformanceUtility.WriteAccessCount("1. "); // Should still be zero

    // 1.a
    foreach (DummyModel model in queryResults) // Query Executed
        model.A.ToString();

    PerformanceUtility.WriteAccessCount("1.a"); // Should be approx 15,000,000

    // 1.b
    Console.WriteLine(string.Format("1.b Execution Time: {0}", DateTime.Now - start));


    //------------------------------------
    Console.WriteLine();
    start = DateTime.Now;

    // 2.
    DummyModel.AccessCount = 0;
    queryResults = list.Where(d => PerformanceUtility.PickDummy(d)); // Execution deferred
    PerformanceUtility.WriteAccessCount("2. "); // Should still be zero

    // 2.a
    staticResults = queryResults.ToArray(); // Query Executed
    PerformanceUtility.WriteAccessCount("2.a"); // Should be 10,000,000

    // 2.b
    foreach (DummyModel model in staticResults) // Query NOT Executed
        model.A.ToString();

    PerformanceUtility.WriteAccessCount("2.b"); // Should be approx 15,000,000

    // 2.c
    Console.WriteLine(string.Format("2.c Execution Time: {0}", DateTime.Now - start));
}


I should note that the DummyModel.A property is accessed approximately 15,000,000 times because the PerformanceUtility.PickDummy() method selects all items with A >= 0.5. The initial filtering pass touches all 10,000,000 items, and since roughly half qualify, the resulting set contains about 5,000,000 items. Any subsequent iteration therefore adds only about 5,000,000 more accesses (10,000,000 + 5,000,000 = 15,000,000).

But wait: why did LINQ execute faster than the for loop if they both did the same number of iterations? To put it bluntly, it appears that one of the perks of using LINQ is some kind of compiler optimization that your average for loop will not have (pending fact check: I would like confirmation from an official source about this assumption; if anyone has information from a reliable source, let me know).

Performance Test B (the Bad LINQ)

Performance Test B Summary: On my machine, set (2) performed better than set (1) by about 900 milliseconds, and set (1) required 20,000,000 more iterations than set (2) for the same functionality.

[TestMethod]
public void PerformanceTestB()
{
    IEnumerable<DummyModel> queryResults;
    IEnumerable<DummyModel> staticResults;
    List<DummyModel> list = PerformanceUtility.GenerateList();
    DateTime start = DateTime.Now;

    // 1.
    queryResults = list.Where(d => PerformanceUtility.PickDummy(d)); // Execution deferred
    PerformanceUtility.WriteAccessCount("1. "); // Should still be zero

    // 1.a
    queryResults.ToArray(); // Query Executed
    PerformanceUtility.WriteAccessCount("1.a"); // Should be 10,000,000

    // 1.b
    foreach (DummyModel model in queryResults) // Query Executed Again
        model.A.ToString();

    PerformanceUtility.WriteAccessCount("1.b"); // Should be approx 25,000,000

    // 1.c
    foreach (DummyModel model in queryResults) // Query Executed Again
        model.A.ToString();

    PerformanceUtility.WriteAccessCount("1.c"); // Should be approx 40,000,000

    // 1.d
    Console.WriteLine(string.Format("1.d Execution Time: {0}", DateTime.Now - start));


    //------------------------------------
    Console.WriteLine();
    start = DateTime.Now;

    // 2.
    DummyModel.AccessCount = 0;
    queryResults = list.Where(d => PerformanceUtility.PickDummy(d)); // Execution deferred
    PerformanceUtility.WriteAccessCount("2. "); // Should still be zero

    // 2.a
    staticResults = queryResults.ToArray(); // Query Executed
    PerformanceUtility.WriteAccessCount("2.a"); // Should be 10,000,000

    // 2.b
    foreach (DummyModel model in staticResults) // Query NOT Executed
        model.A.ToString();

    PerformanceUtility.WriteAccessCount("2.b"); // Should be approx 15,000,000

    // 2.c
    foreach (DummyModel model in staticResults) // Query NOT Executed
        model.A.ToString();

    PerformanceUtility.WriteAccessCount("2.c"); // Should be approx 20,000,000

    // 2.d
    Console.WriteLine(string.Format("2.d Execution Time: {0}", DateTime.Now - start));
}

Whoa! Why did LINQ take so many more iterations than the for loop? This is a direct result of deferred execution. A developer with no understanding of deferred execution (as I once was) could easily make this performance mistake. Look at the comments in the code carefully: every foreach over the deferred query executes the query again.


Conclusion

A basic understanding of deferred execution, plus a few tests of your own, can quickly get you up to speed on optimization. Ultimately, whether to load your LINQ results into a static collection or to allow deferred execution is a case-by-case decision. Have fun and good luck!

Download Example

Saturday, March 5, 2011

MVC vs 3-Tier Pattern

I have had several people ask me what the difference is between the MVC (Model View Controller) and Three-Tier architectural patterns. My intent is to clear up the confusion by comparing the two patterns side by side. I believe at least part of the confusion comes from the fact that both patterns have three distinct layers or nodes in their respective diagrams.

[Diagram: the Three-Tier and MVC patterns side by side]
If you look carefully at each diagram you'll notice the associations (arrow connectors) between the boxes are set up a little differently.

Three-Tier

A 3-tiered system really is made up of layers (think of cake layers). The UI Layer has access to the Business Logic Layer, and the Business Layer has access to the Data Layer. But the UI Layer cannot directly access the Data Layer. In order for the UI Layer to access data, it must go through the Business Logic Layer via some kind of interface. If it helps, you could think of each layer as one big loosely coupled component with strict design rules of access between layers.
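As a rough C# sketch (the class names here are invented for illustration), strict layering looks something like this:

// Data Layer
public class OrderRepository
{
    public decimal GetOrderTotal(int orderId)
    {
        // ... query the database here ...
        return 99.95m;
    }
}

// Business Logic Layer: the only layer allowed to touch the Data Layer
public class OrderService
{
    private readonly OrderRepository _repository = new OrderRepository();

    public decimal GetDiscountedTotal(int orderId)
    {
        // Business rules live here, not in the UI
        return _repository.GetOrderTotal(orderId) * 0.9m;
    }
}

// UI Layer: talks to OrderService, never to OrderRepository directly
public class OrderPage
{
    public void Display()
    {
        decimal total = new OrderService().GetDiscountedTotal(42);
        System.Console.WriteLine(total);
    }
}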

MVC (Model View Controller)

In contrast, the MVC pattern does not form a layered system. The Controller accesses the Model (a runtime data repository) and the View; the View then accesses the Model. How exactly does that work? The Controller is ultimately the logical decision point. What sort of logic? Typically, the Controller will retrieve, build, or modify a Model based on some triggered action. The Controller then decides which View is appropriate via some internal logic, and at that point pushes the Model to the View.
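Using ASP.NET MVC conventions as an example (the Product model and ProductRepository below are hypothetical):

using System.Web.Mvc;

public class ProductController : Controller
{
    private readonly ProductRepository _repository = new ProductRepository(); // hypothetical data source

    public ActionResult Details(int id)
    {
        Product model = _repository.GetById(id); // The Controller retrieves/builds the Model

        if (model == null)
            return View("NotFound");             // The Controller decides which View is appropriate

        return View(model);                      // The Controller pushes the Model to the View
    }
}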

Note: Since I mostly develop with .NET, it is worth mentioning that Microsoft has adopted the MVC pattern for ASP.NET with its own platform (see http://www.asp.net/mvc). You can certainly use the MVC pattern without Microsoft's platform, but why reinvent the wheel? I have been very happy with it so far.

When Do I Choose Which Pattern?

First of all, these two patterns are definitely not mutually exclusive; in fact, in my experience they are quite harmonious. Often I use a multi-tiered architecture, such as a three-tiered architecture, for the overall structure and then use MVC within the UI Layer. Something like the diagram below.