For years, whenever I’ve been involved in building a new web-based architecture, I’ve always advocated that the engineers follow a simple guiding principle:
Design as though your web site is but one possible interface to your overall system. This is because eventually most systems will need to be accessed in a variety of different ways:
- Web Browsers
- Console Applications ( ETL data loaders, etc. )
- Fat client apps ( iPhone, Android, etc. )
- 3rd party services ( via an API of sorts )
The pattern of an N-Tier architecture has been a best practice for over a decade. However, that pattern has typically espoused a logical separation while maintaining a single physical deployment. There is an implication that the communication between the tiers takes place in-process. What I’m describing is more of a physical separation of tiers and/or services.
Why would you do this? Well, the primary reason to separate entities like this is to encapsulate them. This insulates them and minimizes the effect that changes in one component have on another. This is true for inter-connected components at most levels: the more two things are decoupled, the more one is resistant to changes in the other. This goes double for software products. This separation and encapsulation also allows you to change things more easily in case you need to pivot your product to match an evolving business model.
“Isn’t this premature optimization?” No.
I’m not advocating building something you don’t need or trying to make things more complex at the start in the hopes that you’ll need it later on. What I am saying is that there are clear patterns and paths that can be taken to ensure that you’re not impeding your product’s ability to change and evolve as the business needs. There is a clear distinction between trying to solve problems before you have them and maintaining a flexible and evolutionary design. Even if you never have to support multiple clients, your overall system will still be more flexible and responsive to change than if it were a more monolithic architecture.
For example, most newer web sites utilize AJAX technologies to make calls asynchronously back to a server, retrieve small data payloads, then process/display the results to the user. This updates the page without requiring a full refresh. Those calls are usually web service based, utilizing a URL as the endpoint. It doesn’t take a large jump to make sure that each call isn’t specific to a particular client. Rather, it should not care who is calling it ( web page, command line app, python script, etc. ), should be as RESTful as possible and as simple as possible. Taken in this light, suddenly your own calls become an API of sorts for others to possibly use. You end up building a web site utilizing your own API calls. Eating your own dog food.
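To sketch the idea, here is a minimal, hypothetical endpoint handler (the function name, data, and status codes are invented for illustration, and the datastore is faked). The key property is that it returns a plain JSON payload and knows nothing about its caller, so the same contract serves a browser’s AJAX call, a command line app, or a third-party script:

```python
import json

# A client-agnostic endpoint handler: it accepts plain parameters and
# returns a JSON body plus a status code. Nothing in it assumes the
# caller is a web page.
def get_user(user_id):
    # A real system would query a datastore; this fakes one.
    users = {1: {"id": 1, "name": "Ada"}, 2: {"id": 2, "name": "Grace"}}
    user = users.get(user_id)
    if user is None:
        return json.dumps({"error": "not found"}), 404
    return json.dumps(user), 200

# Any client consumes the payload the same way, whether it arrived
# via an AJAX request or a command line script:
body, status = get_user(1)
```

Because the handler speaks only JSON over a simple request/response contract, wiring it to a URL in your framework of choice turns it directly into the public API the paragraph above describes.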
Another example comes from the widely popular MVC architecture. It’s the architecture that Ruby on Rails, Django, ASP.NET MVC and a host of other frameworks are based on. The two pieces most closely tied to rendering data from your backend are the views and the controllers. However, if you’re building a web site, tying your controllers to the expectation that the views are HTML based limits the would-be applications for the logic contained in your controllers. It’s better to not make such assumptions and rely on the view to render the data as it sees fit. If you’re using one of the above frameworks, your controllers will be URL based. If that’s the case, then suddenly reusing large parts of your backend becomes easier. For example, swapping out your regular web site for one optimized for mobile devices becomes a job of swapping out a set of views only, not rewriting your backend.
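As a rough sketch of that separation (not tied to any particular framework; the names and data here are invented), the controller below returns plain data and leaves rendering entirely to the view. Swapping the regular site for a mobile-optimized one then means swapping only the view function:

```python
# A controller that returns plain data instead of HTML. It makes no
# assumptions about how (or whether) the result will be rendered.
def article_controller(article_id):
    # A real controller would consult a model/datastore; this fakes it.
    return {"id": article_id, "title": "Decoupled Tiers", "body": "..."}

# Two interchangeable views over the same controller output.
def html_view(data):
    # Full-page rendering for desktop browsers.
    return "<h1>{title}</h1><p>{body}</p>".format(**data)

def mobile_view(data):
    # A stripped-down rendering for mobile clients.
    return "{title}\n{body}".format(**data)

# The same controller call feeds either view.
data = article_controller(7)
desktop_page = html_view(data)
mobile_page = mobile_view(data)
```

The design choice is simply that the controller’s return value is the contract: any number of views (HTML, mobile, JSON) can be layered on top without touching the backend logic.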
One last benefit of basing your product off of an architecture like this is that it makes it easier to scale individual pieces. However, that’s a large topic and will have to be addressed in another post.