Inheritance behaviour with SaveEntity and UpdateEntitiesDirectly
Joined: 08-Dec-2014
Hi,
Suppose there is a TargetPerEntity inheritance hierarchy with BaseTable (abstract) and DerivedTable, where BaseTable has a BaseProperty column and DerivedTable has a DerivedProperty column.
When I manually delete the corresponding row from DerivedTable and try to fetch the entity by id, I don't get anything back. The inner join finds no match, which makes sense.
Next I assigned new values to both properties, set IsNew to false, set the id and called SaveEntity, which actually returned true. I checked the db and BaseProperty was indeed updated.
I was expecting SaveEntity to fail because the second part of the "entity" is missing, and the transaction to be rolled back.
Then I tested it with UpdateEntitiesDirectly. One difference is that it uses an INNER JOIN for the update operation, but more importantly it seems to check whether the first query affected anything; if not, it doesn't even try running the second update for DerivedTable and the transaction is rolled back.
From that point of view the UpdateEntitiesDirectly behaviour seems logical: if something is wrong with the entity, don't do anything. SaveEntity, however, doesn't seem to care.
Is that by design?
Thank you!
Joined: 08-Dec-2014
Otis wrote:
Yes. SaveEntity assumes all rows which together form the entity are still there and none have been deleted, so it doesn't have to perform that check.
Thank you for your reply!
What about the return value of SaveEntity, when is it false? I tried setting a bogus id for the entity and also tried giving a non-matching predicate, and it seems to throw an ORMConcurrencyException in both cases for the update (.IsNew = false). And how is it different for the insert case? I'm trying to formalize my repositories and need to decide how to handle LLBLGen return values.
Joined: 17-Aug-2003
Findev wrote:
Otis wrote:
Yes. SaveEntity assumes all rows which together form the entity are still there and none have been deleted, so it doesn't have to perform that check.
Thank you for your reply!
What about the return value of SaveEntity, when is it false? I tried setting a bogus id for the entity and also tried giving a non-matching predicate, and it seems to throw an ORMConcurrencyException in both cases for the update (.IsNew = false). And how is it different for the insert case? I'm trying to formalize my repositories and need to decide how to handle LLBLGen return values.
It's false when for some reason nothing was saved: for example, an entity that was to be saved didn't go through because an authorizer made the save fail, or the refetch which was ordered failed.
This might not lead to an exception, but it does lead to a value of 'false' being returned.
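Putting the two failure modes together, a repository save method could distinguish them roughly like this. This is only a sketch against the LLBLGen adapter API; it assumes generated `DataAccessAdapter`/entity classes, and the `SaveResult` enum and method names are illustrative, not part of the framework:

```csharp
using SD.LLBLGen.Pro.ORMSupportClasses;

// Illustrative outcome enum, not an LLBLGen type.
public enum SaveResult { Saved, NothingSaved, ConcurrencyConflict }

public class Repository
{
    public SaveResult Save(IDataAccessAdapter adapter, IEntity2 entity)
    {
        try
        {
            // SaveEntity returns false when nothing was saved, e.g. an
            // authorizer blocked the action or the ordered refetch failed.
            return adapter.SaveEntity(entity)
                ? SaveResult.Saved
                : SaveResult.NothingSaved;
        }
        catch (ORMConcurrencyException)
        {
            // Thrown on an update (IsNew = false) when the row to update
            // wasn't found, e.g. a bogus PK value or a non-matching
            // update-restriction predicate.
            return SaveResult.ConcurrencyConflict;
        }
    }
}
```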
Joined: 08-Dec-2014
Check and catch - understood.
Going back to SaveEntity's behaviour: wouldn't it make sense to check whether each entity-related statement affected a row before running the next statement? For example, in this case the data corruption could be caught earlier.
Another thing I'm tackling at the moment is checking whether I'm updating the entity efficiently. Mainly I've been using UpdateDirectly (+async), passing a PK predicate and then comparing the number of affected rows to a known value. However, I noticed that this known value is not that reliable, because it varies depending on where the updated properties are. By checking the behaviour of UpdateDirectly I realized that I don't really care about the exact number of affected records, because it internally rolls back if any of the updates affected nothing, so I can simplify to just a rowsAffected > 0 check.
By analysing the generated queries for SaveEntity and UpdateDirectly I noticed an inner join difference. So I thought it would probably be better to use the simpler query; however, the simpler query doesn't care about affected records and returns true even if there's data corruption, plus it throws an exception if the predicate doesn't match. I can address the exception part, but I would also like to ensure that I get the data-corruption-related exception as early as possible.
I understand that this kind of corruption is quite rare and that I'm being a bit paranoid, but it just feels like this small additional check could add value.
Thank you!
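The rowsAffected > 0 simplification described above could be sketched as follows. The entity and field names (`EventEntity`, `EventFields`) stand in for generated code appearing later in this thread, and the method itself is hypothetical:

```csharp
using System;
using SD.LLBLGen.Pro.ORMSupportClasses;

public static class EventUpdates
{
    // Direct update by PK predicate; returns whether anything was updated.
    public static bool Publish(IDataAccessAdapter adapter, Guid id)
    {
        // Only the fields set here end up in the generated UPDATE statements.
        var changes = new EventEntity { IsPublished = true };
        var filter = new RelationPredicateBucket(EventFields.Id == id);

        // UpdateEntitiesDirectly rolls back internally when one of the
        // per-table updates in the inheritance chain affects nothing, so
        // the exact count doesn't matter: any positive value means the
        // whole update went through.
        return adapter.UpdateEntitiesDirectly(changes, filter) > 0;
    }
}
```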
Joined: 17-Aug-2003
Findev wrote:
Check and catch - understood.
Going back to SaveEntity's behaviour: wouldn't it make sense to check whether each entity-related statement affected a row before running the next statement? For example, in this case the data corruption could be caught earlier.
You mean, it should perform a select / where predicate on the subtype tables even if the supertype table is the one whose row is affected, to make sure the subtype rows are still there? It currently produces update queries only for the tables affected by the update, with no related-table predicates included, as that's slower. I do understand your point though: it's an integrity check that's not doable in the DB, but is doable in code...
Another thing I'm tackling at the moment is checking whether I'm updating the entity efficiently. Mainly I've been using UpdateDirectly (+async), passing a PK predicate and then comparing the number of affected rows to a known value. However, I noticed that this known value is not that reliable, because it varies depending on where the updated properties are. By checking the behaviour of UpdateDirectly I realized that I don't really care about the exact number of affected records, because it internally rolls back if any of the updates affected nothing, so I can simplify to just a rowsAffected > 0 check.
By analysing the generated queries for SaveEntity and UpdateDirectly I noticed an inner join difference. So I thought it would probably be better to use the simpler query; however, the simpler query doesn't care about affected records and returns true even if there's data corruption, plus it throws an exception if the predicate doesn't match. I can address the exception part, but I would also like to ensure that I get the data-corruption-related exception as early as possible.
I understand that this kind of corruption is quite rare and that I'm being a bit paranoid, but it just feels like this small additional check could add value.
Thank you!
It's a real-life scenario if you have applications deleting rows without the notion of 'inheritance'. The ORM always deletes rows in a transaction and commits only if all rows are deleted properly, so you can't delete a subtype row and leave the supertype row unaffected (the scenario you're talking about, if I understood you correctly). Sharing a supertype row with multiple subtype rows is something we also don't support; it has been requested a number of times, but it's dangerous and can lead to all kinds of scenarios like the one you mentioned.
I'm not sure what your scenario is though: are there applications able to delete rows in your DB, so they can, e.g., delete subtype rows and leave the supertype row as-is?
Joined: 08-Dec-2014
This is what I had in mind (LinqPad):
var productId = Guid.Parse("3B5FD6C7-9804-4389-B5AB-7D9D0966F726");
var qf = new QueryFactory();
var product = new EventEntity();
product.IsNew = false;
product.Id = productId;
product.DateModified = DateTime.UtcNow; // base entity table (abstract) #1
product.IsPublished = false; // product table (abstract) #2
product.IsEventPublic = true; // event table #3
this.AdapterToUse.SaveEntity(product).Dump();
translates into (please ignore the [Type] assignment):
UPDATE [XXX].[dbo].[Entities]
SET [DateModified] = '2017-12-19T15:32:59' /* @p1 */,
[Type] = 2 /* @p2 */
WHERE ([XXX].[dbo].[Entities].[Id] = '3B5FD6C7-9804-4389-B5AB-7D9D0966F726' /* @p3 */)
UPDATE [XXX].[dbo].[Products]
SET [IsPublished] = 0 /* @p4 */,
[Type] = 1 /* @p5 */
WHERE ([XXX].[dbo].[Products].[Id] = '3B5FD6C7-9804-4389-B5AB-7D9D0966F726' /* @p6 */)
UPDATE [XXX].[dbo].[Events]
SET [IsEventPublic] = 1 /* @p7 */
WHERE ([XXX].[dbo].[Events].[Id] = '3B5FD6C7-9804-4389-B5AB-7D9D0966F726' /* @p8 */)
Each statement affects 1 row, so 3 rows are affected in total. SaveEntity returns true.
If I delete the corresponding record directly from the Events table and re-run the snippet, the generated code is the same, but the number of affected rows is 0 for the last statement, so the total is 2, yet SaveEntity still returns true. You mentioned that SaveEntity assumes all records are there and that this is by design. I was proposing that SaveEntity check the total number of affected records: here it knew it needed to run 3 update statements, so the total number of affected records should be at least 3 (actually, given that the PK is used, exactly 3), and then roll back with an exception if expected != actual affected rows.
Now when I add a predicate:
this.AdapterToUse.SaveEntity(product, false, new PredicateExpression(ProductFields.IsPublished == true)).Dump();
Generated SQL:
UPDATE [XXX].[dbo].[Entities]
SET [DateModified] = '2017-12-19T15:45:51' /* @p1 */,
[Type] = 2 /* @p2 */
FROM (([XXX].[dbo].[Products]
INNER JOIN [XXX].[dbo].[Events]
ON [XXX].[dbo].[Products].[Id] = [XXX].[dbo].[Events].[Id])
INNER JOIN [XXX].[dbo].[Entities]
ON [XXX].[dbo].[Entities].[Id] = [XXX].[dbo].[Products].[Id])
WHERE (([XXX].[dbo].[Entities].[Id] = '3B5FD6C7-9804-4389-B5AB-7D9D0966F726' /* @p3 */)
AND (([XXX].[dbo].[Products].[IsPublished] = 1 /* @p4 */)))
I get an ORMConcurrencyException immediately, and that's great! Basically, this was the reason I proposed that SaveEntity throw an exception on update if the expected affected-row count doesn't match the actual value.
Regarding the second part of your question: yeah, we also had that discussion. I do have a stored procedure that converts a User (base type) to a CompanyUser (derived type) and vice versa. It seems to be OK. I understand it can be dangerous; I've tried to be careful.
Thank you!
Joined: 17-Aug-2003
Regarding your stored procedure: don't do that. Copy and delete, never 'upgrade'. If you need that, you shouldn't use inheritance for that entity, but a 'role' entity associated with it, as it's not really inheritance: it's a separate entity associated with the entity. Same as with Customer -> Gold Customer. It's not a subtype of Customer, it's a Customer with a different role. You more or less self-inflicted this complexity upon yourself through a shortcut you shouldn't have taken.
Anyway, what you propose sounds OK, though we have to review whether it is possible. This might sound odd, but the code which controls the save process has no notion of how many queries are run. In fact, the query object it executes contains multiple queries inside it (as it's a 'batch query' in this case), but the save controller doesn't know that. It just checks IF any rows were affected, as the general use case will never run into your situation.
I therefore think it has a low priority and won't make it in the next release, so don't hold your breath
Joined: 08-Dec-2014
Hi,
Unfortunately, a delete is a bit difficult to achieve due to relations with other tables and the current implementation... CompanyUser does have at least one extra field, CompanyId, which is an FK to the Companies table. When upgrading I ask for it, and I also change the discriminator value in the Users table. If I change it as you propose, then it looks more like a TargetPerHierarchy implementation with custom code driving the "types". Probably I could create new "custom" entities in the designer by mapping the corresponding properties for each "type"... Well, I have to think about it.
As for SaveEntity: absolutely, I'm all for making the product better.
Thanks!