Architecture for project rewrite

HcD
User
Posts: 214
Joined: 12-May-2005
# Posted on: 22-Jul-2007 21:19:57   

Hi,

I have to make a proof of concept for converting a project that is going badly into a better one: more performant, faster to develop on, less bug-prone, cleaner and easier to maintain. The subject is a large web-based administrative app, mainly consisting of data entry screens (with complex business rule validation and complex relations between the data; most of the data is also temporal/time-sliced). The data entry screens are mostly entered sequentially (wizard-style) but have to be arbitrarily editable too once the whole sequence is finished.

My proposed architecture will of course use generated LLBLGen code (if I get it to work on that pesky DB2 this week, but I assume I will), choosing the adapter approach (I have mostly worked with SelfServicing/WinForms for the last 2 years, but adapter is the way to go for this kind of project, I guess). I would naturally put the LLBL DAL code in the lowest horizontal layer (DAL) and the generated entity classes in my vertical domain layer, accessible by all other layers. For the GUI layer I'd use the MVP/MVC pattern, i.e. only views in a thin GUI, putting the presenters and view interfaces in my business layer. Business requirements state that the business layer must be accessible from web services "somewhere in the future", but that this isn't the main concern at the moment; on the contrary, we are going with the "we'll do it when we need it" attitude. (And when we need it, I'll probably need to convert some entities to POCOs/DTOs.)
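To make the MVP idea a bit more concrete, here's a rough sketch of what I have in mind (the ICustomerEditView / CustomerEditPresenter names are just made-up examples, nothing generated):

// Business layer: view contract plus presenter. The ASPX page in the GUI layer
// implements ICustomerEditView and forwards its events to the presenter.
public interface ICustomerEditView
{
    string CustomerName { get; set; }
    void ShowValidationError(string message);
}

public class CustomerEditPresenter
{
    private readonly ICustomerEditView _view;

    public CustomerEditPresenter(ICustomerEditView view)
    {
        _view = view;
    }

    // Called by the view when the user clicks Save.
    public void Save()
    {
        if(string.IsNullOrEmpty(_view.CustomerName))
        {
            _view.ShowValidationError("Name is required.");
            return;
        }
        // ... map the view data onto an LLBLGen entity and persist it ...
    }
}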

Now for the business layer, I have some questions. I have examined the HnD source code and noticed the following things:
- Usage of SelfServicing. Isn't adapter better suited for n-tier development?
- The queries are constructed in the business layer. Shouldn't this be in the DAL? In my opinion it couples the business layer very tightly to the chosen ORM technology and doesn't comply very well with the "separation of concerns" best practice, I assume? Wouldn't it be better to define interfaces with the needed methods and then implement those methods in the DAL, so that theoretically one layer could be replaced by another with relative ease (the new layer just has to provide implementations for the interfaces)? But then again, if the vertical layer consists of LLBLGen entities, the whole architecture is already tied to LLBLGen (not a bad thing imo, but it might be used as an argument against my proposal).

Working with interfaces in the business layer and implementations in the DAL should also improve mocking & testability (I'll go for NUnit & Rhino Mocks, I think).
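To illustrate the testability point, a minimal sketch (ICustomerRepository, CustomerService and the fake are hypothetical names I made up; with Rhino Mocks the fake would be a generated mock instead):

using NUnit.Framework;

// Defined in the business layer; implemented in the DAL on top of the adapter.
public interface ICustomerRepository
{
    bool CustomerExists(string customerName);
}

// The business rule under test depends only on the interface.
public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public bool CanRegister(string customerName)
    {
        return !_repository.CustomerExists(customerName);
    }
}

// Hand-rolled fake, to keep the example self-contained.
public class FakeCustomerRepository : ICustomerRepository
{
    public bool ExistsResult;
    public bool CustomerExists(string customerName) { return ExistsResult; }
}

[TestFixture]
public class CustomerServiceTests
{
    [Test]
    public void CanRegisterReturnsFalseWhenCustomerAlreadyExists()
    {
        FakeCustomerRepository fake = new FakeCustomerRepository();
        fake.ExistsResult = true;
        CustomerService service = new CustomerService(fake);
        Assert.IsFalse(service.CanRegister("ACME"));
    }
}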

So my architecture would be:
- GUI: very thin, only the view implementations, HTML markup, CSS, ... Events are handled in the presenters. I'm still undecided how/where I will handle the complex flow logic.
- Business/domain: interfaces for the views, interfaces for the DAL, LLBLGen entities, validation logic, all of the "business".
- DAL: implementations of the interfaces defined in the layer above, plus the LLBLGen-generated adapter classes.
Basically following the practice "take away the GUI & DAL, and you still have an application".

A remark on transactions: in the business logic, validation is really complex and can take quite some time. Also, sometimes when a business object needs to be saved, other dependent objects need to be saved (and validated) too. So using TransactionScope at the start of a Save operation is not a good idea imo, because it would mean opening a transaction and then letting others participate during its lifetime, keeping the transaction open far too long due to the complex, time-consuming validations. So I'm planning to use a unit of work instead, adding the entities once they are validated and then committing the UoW in the originating Save method. This should be much more scalable imo ...
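Roughly what I picture the Save method looking like, assuming the adapter-variant UnitOfWork2 API of v2 (OrderEntity/CustomerEntity and the Validate* helpers are hypothetical names of my own):

using SD.LLBLGen.Pro.ORMSupportClasses;

public void SaveOrder(OrderEntity order, CustomerEntity customer)
{
    // Run the slow, complex validation first, outside any transaction.
    ValidateOrder(order);
    ValidateCustomer(customer);

    // Queue the work only once everything is valid; nothing touches the DB yet.
    UnitOfWork2 uow = new UnitOfWork2();
    uow.AddForSave(customer);
    uow.AddForSave(order);

    // Commit opens the transaction, persists everything and closes it right away,
    // so the transaction only lives for the duration of the actual DB work.
    using(DataAccessAdapter adapter = new DataAccessAdapter())
    {
        uow.Commit(adapter, true); // true = auto-commit the transaction
    }
}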

Lastly, concurrency: since every table in the DB already has a timestamp datetime field, I think I'll go with the method described in http://www.llblgen.com/tinyforum/Messages.aspx?ThreadID=9729&HighLight=1 .
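For reference, the general LLBLGen mechanism for this kind of check is an IConcurrencyPredicateFactory that adds the timestamp comparison to the update's WHERE clause. I haven't verified that this is exactly what the linked thread describes, so treat the following as a hedged sketch using a hypothetical OrderEntity with a LastModified field (OrderFields/OrderFieldIndex would come from the generated code):

using System;
using SD.LLBLGen.Pro.ORMSupportClasses;

// Produces the extra filter used during save/delete so the statement only
// affects the row if the timestamp still matches the value we originally fetched.
[Serializable]
public class TimestampConcurrencyPredicateFactory : IConcurrencyPredicateFactory
{
    public IPredicateExpression CreatePredicate(
        ConcurrencyPredicateType predicateTypeToCreate, object containingEntity)
    {
        OrderEntity order = (OrderEntity)containingEntity;
        IPredicateExpression filter = new PredicateExpression();

        switch(predicateTypeToCreate)
        {
            case ConcurrencyPredicateType.Save:
            case ConcurrencyPredicateType.Delete:
                // Compare against the value read from the DB, not the in-memory one.
                filter.Add(OrderFields.LastModified ==
                    order.Fields[(int)OrderFieldIndex.LastModified].DbValue);
                break;
        }
        return filter;
    }
}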

Does anyone have comments on this approach, or suggestions on how to handle things better/differently?

fpw2377
User
Posts: 35
Joined: 23-Feb-2007
# Posted on: 23-Jul-2007 20:30:38   

Hi HcD,

I think your ideas are all good and will work, but you should also consider the true needs of your application. Be sure not to over-design your project; that could be just as bad as not having a good design to start with. Take some time to look at how your application will be used and what its future requirements might be. For instance, in response to your query question:

The queries are constructed in the business layer. Shouldn't this be in the DAL? In my opinion it couples the business layer very tightly to the chosen ORM technology and doesn't comply very well with the "separation of concerns" best practice, I assume?

Ask yourself: do you really need to make the data layer interchangeable? Do you foresee your application needing to change from LLBL to NHibernate or some other DAL? If your application will be distributed to customers and have many installations, then this might make sense, in case one customer has it as a requirement. But if your application will only be installed in your own environment, which you control, then there is no reason to implement all that code to support a feature you may never actually use.

What I normally do is weigh the risk of problems against the time it would take to implement the code. I work for a consulting firm, and when I am designing a new application for a project I try to make the application fit the needs of the client, not the needs of "best practices". You have to find the balance between the application's requirements and the type of implementation you are going to use. Best practices and patterns are a great thing; just make sure they don't cause you to implement functionality simply because they say to.

Just my opinion,

Thanks,

Frank

Otis
LLBLGen Pro Team
Posts: 39749
Joined: 17-Aug-2003
# Posted on: 24-Jul-2007 15:37:16   

I agree that you shouldn't over-engineer software if there's no real need. Over-engineering doesn't make it better; it only makes it more complex and late.

HcD wrote:

Now for the business layer, I have some questions. I have examined the HnD source code and noticed the following things: usage of SelfServicing. Isn't adapter better suited for n-tier development?

HnD was a SelfServicing project from the beginning, as it was a port of the old LLBLGen code to SelfServicing and then to v2.0 code, where we rewrote a lot and added a lot of new features.

  • The queries are constructed in the business layer. Shouldn't this be in the DAL? In my opinion it couples the business layer very tightly to the chosen ORM technology and doesn't comply very well with the "separation of concerns" best practice, I assume?

No, not at all. Separation of concerns is about what you group into a single class: for example, in an MVC pattern, when you have grouped controller and view into a single class, you should separate them. The BL formulates queries but doesn't execute them; the generated code does. There's no real 'DAL' anymore and there's also no need for one, as the abstraction level is already right; there's no need for another layer of abstraction. Sure, it's tied to the chosen O/R mapper, but that's ALWAYS the case. Imagine a true POCO system with NHibernate. In there you'll do: myOrder.Customer = myCustomer;

Now, after this line, is this true: myCustomer.Orders.Contains(myOrder)? No.

However, some POCO O/R mappers do support that. In other words: you are always tied to the O/R mapper of choice.
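In code, the difference looks like this (plain hand-written POCO classes, purely as illustration of the point being made, not taken from any particular mapper):

using System;
using System.Collections.Generic;

public class Customer
{
    public List<Order> Orders = new List<Order>();
}

public class Order
{
    // Setting this does NOT add the order to Customer.Orders by itself.
    public Customer Customer;
}

public class SyncDemo
{
    public static void Main()
    {
        Order myOrder = new Order();
        Customer myCustomer = new Customer();

        myOrder.Customer = myCustomer;

        // Prints "False": the reverse side was never updated. The mapper (or your
        // own code) has to perform that two-way sync for you.
        Console.WriteLine(myCustomer.Orders.Contains(myOrder));
    }
}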

Wouldn't it be better to define interfaces with the needed methods and then implement those methods in the DAL, so that theoretically one layer could be replaced by another with relative ease (the new layer just has to provide implementations for the interfaces)? But then again, if the vertical layer consists of LLBLGen entities, the whole architecture is already tied to LLBLGen (not a bad thing imo, but it might be used as an argument against my proposal).

What would I gain with all that? Just less time left to write the actual application, and more complex code. The GOAL of the code is to become an executable form of the specifications. That's it. Your work as a software engineer is to reach that goal by writing the code that meets those specifications. Your work is NOT to create a complex system so you can, eventually, swap out every part of it for something else, unless that's a specific part of the specification. And even then one should wonder whether it's valid, whether it justifies the work it requires. Because it would then require EVERY software application to become a plugin system stacked upon plugin systems, etc., so that everything is pluggable and optional, etc. etc.

This is the contradictory part of the story some people try to sell: on the one hand they try to sell you a whole stack of patterns which everyone should apply and which will truly make your system better. On the other hand they try to sell you TDD and agile/XP, which has the motto YAGNI (you ain't gonna need it): build what you need and move on to the next thing you have to implement. These don't match: the patterns in general often make things overly complicated, while the XP style forces you to focus on just what's needed, thus whatever it takes to make it run (exaggerated).

Working with interfaces in the business layer and implementations in the DAL should also improve mocking & testability (I'll go for NUnit & Rhino Mocks, I think).

You'd think? I'm not convinced. One should test whether the code meets the specifications. A truckload of unit tests doesn't prove your code works as expected; it just proves that a bunch of tests succeed. It's still a valuable tool, don't get me wrong, but the tests should be set up with these questions in mind:
- WHAT do I want to prove with my tests?
- DO my tests emit data that proves my code works, so I can measure that?
- IS the code able to perform what's in spec element ABC?

A bunch of asserts doesn't necessarily prove that. Hard data does, and above all, full-scale testing of whole features does. A unit test then definitely doesn't cut it and is pretty much useless in that case.

So my architecture would be:
- GUI: very thin, only the view implementations, HTML markup, CSS, ... Events are handled in the presenters. I'm still undecided how/where I will handle the complex flow logic.
- Business/domain: interfaces for the views, interfaces for the DAL, LLBLGen entities, validation logic, all of the "business".
- DAL: implementations of the interfaces defined in the layer above, plus the LLBLGen-generated adapter classes.
Basically following the practice "take away the GUI & DAL, and you still have an application".

Who cooked up that practice? That person has no clue whatsoever what a 'feature' means as a concept, IMHO. And that's what counts: the feature.

Of course one should build the software in such a way that it's maintainable, but that has nothing to do with using patterns.

A remark on transactions: in the business logic, validation is really complex and can take quite some time. Also, sometimes when a business object needs to be saved, other dependent objects need to be saved (and validated) too. So using TransactionScope at the start of a Save operation is not a good idea imo, because it would mean opening a transaction and then letting others participate during its lifetime, keeping the transaction open far too long due to the complex, time-consuming validations. So I'm planning to use a unit of work instead, adding the entities once they are validated and then committing the UoW in the originating Save method. This should be much more scalable imo ...

That's indeed the scope for the UoW. :)

Frans Bouma | Lead developer LLBLGen Pro