This is part two in a series of articles on .NET architecture. You can start here for the introduction and table of contents. This post will focus on the overarching principles I’ve developed when tackling application architecture. The focus of this post is not on individual technologies (except when used as examples), but on general rules of thumb that I use to help guide me when making architectural decisions.
I consider these to be the axioms of my architectural philosophy. And like axioms in philosophy, if you disagree with any of these foundational principles, you will most likely disagree with a lot of the conclusions I draw based on them. But that’s okay – no matter what principles you hold, someone will always disagree. :-)
Here is a list of the guiding principles. I will go into more detail below.
- YAGNI
- Prefer Microsoft Components
- Avoid Tight Coupling
- Testing as a First-Class Citizen
- No Silver Bullet
- Value in Consistency
- Don’t Forget the Team
- The App Is Long-Lived
- Don’t Paint Yourself Into a Corner
I consider YAGNI to be my most important architectural principle. It stands for You Ain’t Gonna Need It, and it is a well-known acronym in programmer circles. It is closely related to the KISS principle – Keep It Simple, Stupid. Basically, it boils down to this: don’t just start guessing at what you might need for a particular application or module. Only build in what you know you need. Don’t include extra parts “just in case” they come in handy in the future.
Let’s say you are building a view page for a business application, and the page contains a grid of orders. Maybe the grid should have an Excel export feature? Yes, maybe it should. But you should not just start building it. Talk to the intended users of this new page, and ask them if they need an Excel export. Talk to the Project Manager and figure out if there is enough remaining time to build such a feature. But again, don’t just assume you will need it.
This concept shows up at the level of application features, as I just described. But it also appears at the design level (should we use a Strategy pattern here?) and at the architectural level (do we need a separate data access layer?).
YAGNI can be taken too far, and it is sometimes difficult to differentiate between what is definitely needed and what merely might be needed. Take the concept too far and you will end up with architectures, designs, and applications that are too sparse and can’t support future changes or end-user needs. You are never going to get the balance 100% correct, but if you keep YAGNI in mind, you will have quicker deliverables, cleaner designs, and less complex architectures.
The assumption in this series of articles is that you are designing an architecture for a .NET-based application. If you are not, this principle does not apply. But if .NET is at the center of your application architecture, then any technologies from Microsoft that overlap in functionality with technologies from a third party should be given preference.
Let me give an example. Suppose you are looking for a logger. There is the Logging Application Block from Microsoft, as well as many other logging frameworks from third parties, such as log4net. This principle means that all things being equal, choose the Microsoft logger over the other loggers.
There is a very important phrase there: all things being equal. It may be the case that for whatever particular technology you are looking at, the third party tool is superior to the Microsoft product. (In fact, this might be true of logging frameworks – don’t take the above example as an endorsement of Logging App Block over log4net.) In that case, it may be the best choice to go with the third party. But the fact that a .NET architectural component is from Microsoft should be considered a pro in the list of pros and cons in your evaluation of alternatives.
Why should the Microsoft component be given preference? There are a number of reasons:
- Generally, Microsoft components will be upgraded to newer versions of the .NET Framework quicker than their third party counterparts.
- Most Microsoft components play nice together out of the box, whereas it is usually more effort to integrate a third party component with other components in your architecture.
- Microsoft isn’t going anywhere. Third party vendors are almost certainly smaller than Microsoft, and therefore probably have shorter support timeframes and are more likely to go out of business or discontinue the product line. By the same token, open source options could lose the support of the community, or developers might leave the project and the product will stagnate.
- The Microsoft product will be used more. This might not be the case at first if MS is releasing a new component to compete with established alternatives. However, eventually, most new applications will migrate toward the MS-endorsed option, and the third parties will be competing for the remaining, smaller market share. Over time, it is more likely that the MS product will survive and flourish. It will be easier to find developers.
Of course it is not always the case that the Microsoft product is a better option. LINQ to SQL comes to mind. But, in general, it is pretty safe to have your application depend on Microsoft components.
My experience has repeatedly shown me that the most limiting factor in the long-term viability of an application is the amount of coupling between its components. Inevitably, with any serious application that has to live beyond a few months, components need to be upgraded or replaced. If, for example, you have to modify UI screens in order to upgrade your data access component, your application components are too tightly coupled.
There are a lot of concepts related to coupling, such as Separation of Concerns, Inversion of Control, multitier architectures, Strategy pattern, and dozens of others. The fact that there are so many patterns related to achieving loose coupling should tell you two things: that loose coupling is important, and that you don’t get it for free (or else you wouldn’t need all these patterns).
Tight coupling can occur at the component level, when you are deciding the major pieces that will make up your application architecture. But it can also occur at every level underneath: between projects in your solution, between classes in a project, between methods in a class, etc. It is almost impossible to avoid coupling completely, but if you pay attention to it, you can usually manage how bad it gets. Sometimes it’s more of an art than a science.
Tools often steer you in the wrong direction here; the Entity Framework is a good example. If you follow most of the Microsoft demos on EF and Silverlight, you will end up with domain classes that know about your persistence framework, and even a service layer and UI components that know about it as well (see WCF RIA Services’ LinqToEntitiesDomainService). Most of the time, if you are careful, you can take advantage of these tools while avoiding the coupling issues (e.g. EF POCOs). But you should always keep coupling in mind.
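To make the POCO idea concrete, here is a minimal sketch of that decoupling in C#. The Order, IOrderRepository, OrderService, and InMemoryOrderRepository names are purely illustrative, not from EF or any other framework:

```csharp
using System.Collections.Generic;

// A plain domain class (POCO) with no reference to EF or any persistence framework.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// The only thing the rest of the application knows about persistence.
public interface IOrderRepository
{
    Order GetById(int id);
}

// Service and UI layers depend only on the interface, so swapping out the
// persistence framework never touches them.
public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public decimal GetOrderTotal(int id)
    {
        return _repository.GetById(id).Total;
    }
}

// An EF-backed implementation would live alongside this one; an in-memory
// version is enough for demos and automated tests.
public class InMemoryOrderRepository : IOrderRepository
{
    private readonly Dictionary<int, Order> _orders = new Dictionary<int, Order>();

    public void Add(Order order)
    {
        _orders[order.Id] = order;
    }

    public Order GetById(int id)
    {
        return _orders[id];
    }
}
```

Swapping EF for a different persistence framework now means writing a new IOrderRepository implementation; OrderService and everything above it are untouched.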
I considered including this point with the previous principle, as it is closely related. But it is important enough to stand on its own. If you are working on a “serious” application that will have lots of features and live for many years, you must consider testing when creating that application. Of course you (or your equivalent in the testing department) can always run the application and click through the screens. But when you are working on a feature that is fifteen screens deep and requires typing in 20 fields of data in order to test it, you will be begging for automated testing. And it is much harder to make an application testable as an afterthought.
There are a lot of principles surrounding the testing discipline, such as test-driven development (TDD). I’m not specifically endorsing TDD or any other testing methodology. But I am saying that a suite of automated tests should accompany an application. And that as the application evolves, so should its tests. Ideally, the tests should be good enough so that after running them you have enough confidence to release the application to production. This isn’t always possible, but some automated tests are always better than no tests.
When you are making architectural decisions for your application, consider what effect those choices will have on testability. If it is too hard to write automated tests, developers won’t want to do it, or project managers won’t let them. The easier it is to write tests, and the more effective those tests are, the better off the application will be in the long run. And, as a bonus, highly testable usually means loosely coupled, so it’s a win-win.
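As a small illustration of that win-win, here is a sketch (the ITaxCalculator and CheckoutService names are made up). Because the dependency comes in through an interface, an automated test can substitute a trivial fake and never touch a real service or database:

```csharp
// The dependency is expressed as an interface, not a concrete class.
public interface ITaxCalculator
{
    decimal TaxFor(decimal subtotal);
}

public class CheckoutService
{
    private readonly ITaxCalculator _tax;

    public CheckoutService(ITaxCalculator tax)
    {
        _tax = tax;
    }

    public decimal GrandTotal(decimal subtotal)
    {
        return subtotal + _tax.TaxFor(subtotal);
    }
}

// In an automated test, a hand-rolled fake stands in for the real calculator.
public class FlatTenPercentTax : ITaxCalculator
{
    public decimal TaxFor(decimal subtotal)
    {
        return subtotal * 0.10m;
    }
}
```

A test can then assert that GrandTotal(100m) returns 110m in milliseconds, with no external setup.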
This concept comes from Fred Brooks’s essay “No Silver Bullet” (included in later editions of The Mythical Man-Month, which every developer worth his salt should have read), and it is very far-reaching. I’m not going to explain all the nuances here; you should read the book. What I mean by it in this context is that there is no “one correct architecture”. Each application is different; the architectural choices you make for one application are not necessarily the correct ones for the next.
It is easy to get “stuck” in a particular architecture. Everyone has their favorite tools and wants to use them as much as possible. But, as they say, if all you have is a hammer, everything looks like a nail. Make sure when you are deciding upon an architecture that you have considered all of the options. Make sure that you aren’t just making choices because they are easy. And make sure that you keep aware of new options as they appear on the market so your knowledge doesn’t get stale.
Now that I’ve gotten through telling you that every application should have a different architecture, I’m going to tell you the exact opposite (blogging is so much fun!). You do get an advantage from having similar architectures amongst the different applications in your portfolio. There are fewer things to know, fewer upgrades to do, fewer components to integrate, etc. Any time you can reuse a component that is already in your portfolio, you get some long term benefit.
But of course, there is such a thing as too much consistency. There are situations where it is worth the effort of bringing in a new tool, even when it overlaps with another tool in the portfolio. Let’s say that you have a particular persistence framework in your portfolio that is used by half a dozen existing applications. You are about to start work on a new application, and are evaluating whether to use this existing persistence framework or switch to a different one. A bad reason to switch would be because you like the new one more, or you want to learn it so you can put it on your resume. But a good reason to switch is that this tool has features that the old framework doesn’t, which are particularly important to this new application. Or maybe the old framework is no longer supported, and upgrading it is becoming harder and harder.
There are many different legitimate reasons to break consistency. But it is up to you to figure out how to correctly weigh the benefit of consistency against the drawback of possibly jamming a square peg into a round hole.
You cannot make architectural decisions in a vacuum. You have to consider the team that will be building the application when you decide what the architecture for that application should look like. If you have a team full of VB.NET developers, you’d better have a darn good reason for recommending C#. If your team consists of budding programmers hired directly out of college that don’t have a grasp of object oriented programming, you might want to steer away from concepts like Inversion of Control.
This principle is true whether you have an existing team, or whether you need to build a new one for your given project. In your particular region, it might be easier to find Java programmers than .NET programmers, or the general quality of available developers might be particularly high or low. Perhaps the hiring policies of your company impact what people you will be able to get on the project. Regardless of the details, when creating an architecture, you shouldn’t forget about the people that will eventually realize it.
This principle does not mean that you should use only technologies and concepts that your team is already familiar with. There are very good reasons to introduce new tools and ideas to a team. But you should do this intentionally, realizing that there might be a learning curve to overcome at the start of the project, or that the team might not be as efficient as it was on the last project, etc.
Most of the decisions that are made at the architectural level have far-reaching and long-lived impacts on the application. It is very hard to change an application’s architecture once it is established. Most of the time it is too costly a project and never gets done. Also, it is a generally accepted fact in the industry that changes made in the maintenance phase of an application are far more costly than those made in the earlier stages of the SDLC. What this all boils down to is that you’d better get the architecture right, because problems with the architecture can become extremely costly over the life of the application.
Obviously, you have to have some idea of how long the application will live, or you might go overboard. It seems wrong to spend a month of planning and analysis to come up with the perfect architecture for an application that is only going to run for 6 months. Conversely, if an application is expected to live for decades, you have to be extremely careful with the architectural choices that you make. You also have to be very cautious about any “technical debt” that you take on in the application.
In my experience, applications always live longer than expected. And oftentimes an architecture in one application gets duplicated for a similar application down the road. So my recommendation when deciding between a more robust architecture that can survive for the long term, and a quick-and-dirty architecture that can get the job done for today, is to prefer the former. However, this has to be tempered against the needs of the business, the market, and the project. Sometimes quick-and-dirty is the right choice.
As you can see from the points I’ve raised above, there are many different factors to consider when making architectural choices for an application. And since you can’t see the future (I assume – good for you if you can), you might not always make the best choices. This is where my last point comes in: always have a contingency plan.
Now obviously you can’t have a contingency plan for every situation. So you need to decide which risks deserve attention and which don’t, given your particular application. Here are some examples of what I’m talking about:
- What if Microsoft discontinues Silverlight in favor of HTML 5?
- What if our only developer that understands Oracle PL/SQL resigns?
- What if this vendor’s product doesn’t deliver what we think it will?
- What if the web service API doesn’t give us all of the functionality we need?
- What if this open source tool loses favor in the community?
- What if our web application gets far more users than we expect?
Obviously these are just examples, and don’t apply universally. But it gives you some idea of the kinds of things you should be thinking about. Coming up with answers to questions like these might involve some of the other points I’ve raised. For example, maybe the best way to protect yourself from a deficient web service API is to put a facade interface in front of it, so that it is not tightly coupled to the rest of the application.
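That facade idea can be sketched like this (IShippingRates and VendorRateClient are hypothetical names, not a real vendor SDK): the application codes against a small interface it owns, and only one class knows the shape of the vendor’s API.

```csharp
// The interface the rest of the application depends on. Your application
// owns this, not the vendor.
public interface IShippingRates
{
    decimal RateFor(string zipCode, decimal weightKg);
}

// A stand-in for a vendor SDK class, included only to make the sketch compile.
public class VendorRateClient
{
    public decimal GetRate(string zip, double kilos)
    {
        return 5.00m + (decimal)kilos;
    }
}

// The facade: the only place that knows the vendor's API shape. If the
// vendor's service falls short, you replace this one class.
public class VendorShippingRates : IShippingRates
{
    private readonly VendorRateClient _client = new VendorRateClient();

    public decimal RateFor(string zipCode, decimal weightKg)
    {
        return _client.GetRate(zipCode, (double)weightKg);
    }
}
```

If the vendor’s API turns out to be deficient, or you switch vendors entirely, only VendorShippingRates changes; everything else keeps calling IShippingRates.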
There are no hard and fast rules when creating an application architecture. You have to put thought and care into each decision, because it will have far-reaching effects upon the application. Hopefully the points I’ve raised will help guide you in the right direction the next time you hit File > New Project.