Ignore LLBLGen validation errors

sparq
User
Posts: 63
Joined: 14-Mar-2010
# Posted on: 13-Feb-2017 01:49:55   

Hi there,

I'm evaluating LLBLGen Pro 5.1 as a possible upgrade option for a 2.6 project.

I am unable to get past the model validation which complains of a FK column not matching a PK column (the FK is 13 chars in length while the PK is 32 in length).

Surely there is a way to ignore these errors and generate the code but I cannot find such an option.

Please help!

Ben

daelmo avatar
daelmo
Support Team
Posts: 8245
Joined: 28-Nov-2005
# Posted on: 13-Feb-2017 05:46:21   

Hi Ben,

To hide/ignore warnings, check the following documentation section: Ignoring/filtering out warnings.

However, if you have an error, you have to fix it in order to proceed. PK/FK type errors have to be fixed, the same as you would have to do in your database. If there are many of those errors, you could consider constructing a script that does this via the "Search elements" option in the LLBLGen Designer.

David Elizondo | LLBLGen Support Team
sparq
User
Posts: 63
Joined: 14-Mar-2010
# Posted on: 13-Feb-2017 05:49:40   

daelmo wrote:

Hi Ben,

To hide/ignore warnings, check the following documentation section: Ignoring/filtering out warnings.

However, if you have an error, you have to fix it in order to proceed. PK/FK type errors have to be fixed, the same as you would have to do in your database. If there are many of those errors, you could consider constructing a script that does this via the "Search elements" option in the LLBLGen Designer.

But.. it can't be fixed. I don't have control of the DB to change the schema.

Surely you can just ignore these somehow? The relations were working just fine in 2.6 (there's no issue querying the data just because the DB was designed poorly)

Not being able to work with a bad DB (the world is full of them) would be catastrophic.

Ben

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 13-Feb-2017 10:04:59   

There's an error given because of a potential overflow at runtime: a 32-char PK value doesn't fit in a 13-char FK field, so saving a graph with these entities will cause a problem. v2.6 didn't have any validation, so it worked there, but it surprises me that you never had an exception because of this at runtime.

The error can also cause problems with just querying: the prefetch paths use pk/fk hashes, which differ when the lengths are different, and if you have validation added, it could fail because the lengths differ.

I do realize it is a problem for you though. You say you can't change the DB; is this because it's a legacy DB that's not changed anymore but the data has to be read? Or are there ways to change it?

You could try this, but it's a bit of work:
- Create a new catalog/schema with the same tables: _yourdatabase__Fixed
- Alter the tables to make the error go away, i.e. fix the lengths of the FKs
- Load your project into the designer
- In the catalog explorer, right-click the catalog and select 'Rename'. Specify as new name: _yourdatabase__Fixed
- Sync the project with the database; it will now use your changed catalog. This should make things work again.
- Generate code.
- In the app/web.config file, specify a catalog overwrite: from yourdatabase to _yourdatabase__Fixed. (http://www.llblgen.com/Documentation/5.1/LLBLGen%20Pro%20RTF/Using%20the%20generated%20code/gencode_applicationconfiguration.htm#catalog-name-overwriting-sql-server-mysql)

This way you can have your project as it should. At runtime you'll get code which might fail, but at least you can work with the project. I don't know how much the db is changed overall. If it's not changed anymore, this will work just fine. If it IS changed a lot, the fix for the FK value length might be a change they can apply too so it gets fixed properly.
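The catalog overwrite in the last step lives in the application configuration file. As a hedged sketch, assuming the `sqlServerCatalogNameOverwrites` section described in the documentation page linked above (check that page for the exact section name and version-specific details; the catalog names here are the placeholder ones from the steps):

```xml
<configuration>
  <configSections>
    <!-- Section handler for the catalog name overwrite list -->
    <section name="sqlServerCatalogNameOverwrites"
             type="System.Configuration.NameValueSectionHandler" />
  </configSections>
  <sqlServerCatalogNameOverwrites>
    <!-- key = catalog name stored in the mappings,
         value = catalog to target at runtime -->
    <add key="yourdatabase" value="_yourdatabase__Fixed" />
  </sqlServerCatalogNameOverwrites>
</configuration>
```

With this in place the generated code keeps referring to the original catalog name, while all SQL emitted at runtime targets the fixed copy.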

Frans Bouma | Lead developer LLBLGen Pro
sparq
User
Posts: 63
Joined: 14-Mar-2010
# Posted on: 13-Feb-2017 21:48:08   

Otis wrote:

it surprises me you never had an exception because of this at runtime.

Well, the SQL being executed would be something like "WHERE FK_COLUMN = PK_VALUE". If PK_VALUE is longer than the max length of FK_COLUMN then nothing returns. No error. Essentially it's possible for bad data to exist but it's just ignored.

Otis wrote:

You say you can't change the DB, is this because it's a legacy DB that's not changed anymore but the data has to be read? Or are there ways to change it?

Can't be changed - 3rd party product DB. I'm certain this is a fairly common scenario.

Otis wrote:

You could try this, but it's a bit of work: - create a new catalog/schema with the same tables

Understood. A bit of work to solve a problem that needn't exist, though. And if the 3rd-party DB changes after a product upgrade, we don't instantly know whether the schema we're interested in has changed until a runtime failure (previously it would be visible on an LLBLGen refresh and caught at compile time at the latest).

Thanks for the help guys.

Ben

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 14-Feb-2017 15:17:26   

sparq wrote:

Otis wrote:

it surprises me you never had an exception because of this at runtime.

Well, the SQL being executed would be something like "WHERE FK_COLUMN = PK_VALUE". If PK_VALUE is longer than the max length of FK_COLUMN then nothing returns. No error. Essentially it's possible for bad data to exist but it's just ignored.

Saving a graph of entities with that fk/pk setup will give errors though (as the PK value is synced to the FK field, which will then overflow). Prefetch paths also will likely fail (as the PK hashes are used for the FK field filters).

I'm not saying it never works, only that it can fail.

Otis wrote:

You say you can't change the DB, is this because it's a legacy DB that's not changed anymore but the data has to be read? Or are there ways to change it?

Can't be changed - 3rd party product DB. I'm certain this is a fairly common scenario.

It's common indeed, but not that one needs to change the DB because of a bad design. But alas, not something you can change...

Otis wrote:

You could try this, but it's a bit of work: - create a new catalog/schema with the same tables

Understood. A bit of work to solve a problem that needn't exist, though. And if the 3rd-party DB changes after a product upgrade, we don't instantly know whether the schema we're interested in has changed until a runtime failure (previously it would be visible on an LLBLGen refresh and caught at compile time at the latest).

That's indeed a problem. I'm not sure what to do though: making it possible to ignore an error is IMHO not the way to go: the error will cause problems somewhere, and the question then is: why didn't the designer warn me about this...

Frans Bouma | Lead developer LLBLGen Pro
sparq
User
Posts: 63
Joined: 14-Mar-2010
# Posted on: 20-Dec-2017 00:32:21   

Otis wrote:

That's indeed a problem. I'm not sure what to do though: making it possible to ignore an error is IMHO not the way to go: the error will cause problems somewhere, and the question then is: why didn't the designer warn me about this...

Can I appeal to you again to add an option into LLBLGen to turn this error into a warning?

It is not possible to upgrade our project from 2.6 to 5.3 because of this strict new validation.

Error message is:

The mapping of sub-element 'Column' of entity 'Table' for the target database with the driver 'Oracle Driver (ODP.NET)' has the following errors: - MappedTarget: The maximum length '13' of the target field 'SCHEMA.TABLE.COLUMN' is less than the maximum length '32' set for the mapped field 'Column (FK)'

Can we also have the same option for when number types don't match? For example, a number(10) column references a number(28) column, which is perfectly legal in Oracle (a FK can be created), but LLBLGen doesn't allow it. This should also be able to be ignored.

Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 20-Dec-2017 10:30:34   

sparq wrote:

Otis wrote:

That's indeed a problem. I'm not sure what to do though: making it possible to ignore an error is IMHO not the way to go: the error will cause problems somewhere, and the question then is: why didn't the designer warn me about this...

Can I appeal to you again to add an option into LLBLGen to turn this error into a warning?

Errors won't be changed to warnings, as in this case it is a genuine error. Not all is lost, however:

It is not possible to upgrade our project from 2.6 to 5.3 because of this strict new validation.

Error message is:

The mapping of sub-element 'Column' of entity 'Table' for the target database with the driver 'Oracle Driver (ODP.NET)' has the following errors: - MappedTarget: The maximum length '13' of the target field 'SCHEMA.TABLE.COLUMN' is less than the maximum length '32' set for the mapped field 'Column (FK)'

This is a real error and can't be changed. However, there is a way to fix this, I think: a type converter. Create a type converter which converts the 32-length value to a 13-length value (strings, I presume?) and assign it to the field's mapping. This will make the designer happy, as it will assume the type converter takes care of the conversion of the value. What you do in the type converter is up to you: simply pass through the value or do conversions with a substring call.
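As a minimal sketch of the pass-through converter described here, based on the standard System.ComponentModel.TypeConverter base class (the class name is hypothetical; see the LLBLGen SDK documentation for the exact requirements the designer places on a converter before following this):

```csharp
using System;
using System.ComponentModel;
using System.Globalization;

// Hypothetical converter between the 32-length entity field and the
// 13-length database column. It simply passes string values through;
// ConvertTo could instead truncate with a Substring call if desired.
public class PassThroughStringConverter : TypeConverter
{
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
        => sourceType == typeof(string);

    public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
        => destinationType == typeof(string);

    // DB -> entity: pass the value through unchanged
    public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
        => value;

    // entity -> DB: pass through (or truncate, e.g. ((string)value).Substring(0, 13))
    public override object ConvertTo(ITypeDescriptorContext context, CultureInfo culture, object value, Type destinationType)
        => value;
}
```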

Can we also have the same option for when number types don't match? For example, a number(10) column references a number(28) column, which is perfectly legal in Oracle (a FK can be created), but LLBLGen doesn't allow it. This should also be able to be ignored.

'legal' is a bit of a strong word. What happens if you want to insert a row in the FK table with a number(28,0) value for the FK field as the PK value is that big? It likely will convert it to a NUMBER(10,0), with all the misery that comes with it. It's a real error: if you assign a PK entity to an FK entity the PK value is synced with the FK field, but the types won't match so reading the FK field will cause a cast issue at runtime, hence the error.

Again, you can solve this by using a type converter: make the FK field in the entity match the PK field. The mapping of the FK field then doesn't match its target field (Number(22,0) to Number(10,0)), so you use a type converter for that (create one which converts from int64 (number(10,0)) to decimal (number(22,0))). See: https://www.llblgen.com/Documentation/5.3/SDK/gui_implementingtypeconverter.htm
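The numeric case follows the same pattern. A sketch of the int64/decimal converter suggested above (class name hypothetical; consult the linked SDK page for the designer's exact expectations):

```csharp
using System;
using System.ComponentModel;
using System.Globalization;

// Hypothetical converter between the entity field type (decimal, matching
// the PK's NUMBER) and the mapped column's Int64 (NUMBER(10,0)).
public class Int64DecimalConverter : TypeConverter
{
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
        => sourceType == typeof(long);

    public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
        => destinationType == typeof(long);

    // DB (Int64) -> entity (decimal): always safe, every long fits in a decimal
    public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
        => Convert.ToDecimal((long)value);

    // entity (decimal) -> DB (Int64): throws OverflowException when the
    // value exceeds the Int64 range, surfacing the mismatch explicitly
    public override object ConvertTo(ITypeDescriptorContext context, CultureInfo culture, object value, Type destinationType)
        => Convert.ToInt64((decimal)value);
}
```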

Frans Bouma | Lead developer LLBLGen Pro
sparq
User
Posts: 63
Joined: 14-Mar-2010
# Posted on: 20-Dec-2017 22:30:22   

Otis wrote:

'legal' is a bit of a strong word. What happens if you want to insert a row in the FK table with a number(28,0) value for the FK field as the PK value is that big? It likely will convert it to a NUMBER(10,0), with all the misery that comes with it. It's a real error: if you assign a PK entity to an FK entity the PK value is synced with the FK field, but the types won't match so reading the FK field will cause a cast issue at runtime, hence the error.

Well yes, "legal" only in the sense that Oracle lets you do it:

create table tmp_test1 (
    pk varchar2(32) not null,
    constraint pk_tmp_test1 primary key (pk)
);

create table tmp_test2 (
    pk varchar2(32) not null,
    fk varchar2(13) not null,
    constraint pk_tmp_test2 primary key (pk),
    constraint fk_tmp_test2_test1 foreign key (fk) references tmp_test1(pk)
);

insert into tmp_test1 (pk) values ('01234567890123');
insert into tmp_test2 (pk, fk) values ('01234567890123','01234567890123'); -- error!
rollback;

drop table tmp_test2;
drop table tmp_test1;

And if Oracle lets you do it, then at some point somebody did it.

Otis wrote:

Again you can solve this by using a type converter: make the FK field in the entity match the PK field. The mapping of the FK field then doesn't match its target field, (Number(22, 0) to Number(10,0)) so you use a type converter for that (create one which converts from int64 (number(10,0)) to decimal (number(22,0)))

Yes, I have to make a TypeConverter for the Int64/Decimal conflict. But for the String/String conflict, making a TypeConverter seems silly (convert a string to a string?). Plus I still have to change the model... and if I change the model I don't need a TypeConverter. So I just change the model. Which will be refreshed in the future and fixed again.

Can't we play a little bit of "the customer is always right" here and sneak in a simple option?

You already have 3 options (screenshot attached) under DB first to allow bad DB designs (which the world is full of).

Attachments
Filename File size Added on Approval
options.jpg 13,034 20-Dec-2017 22:30.29 Approved
Otis avatar
Otis
LLBLGen Pro Team
Posts: 39588
Joined: 17-Aug-2003
# Posted on: 21-Dec-2017 09:58:12   

sparq wrote:

Otis wrote:

'legal' is a bit of a strong word. What happens if you want to insert a row in the FK table with a number(28,0) value for the FK field as the PK value is that big? It likely will convert it to a NUMBER(10,0), with all the misery that comes with it. It's a real error: if you assign a PK entity to an FK entity the PK value is synced with the FK field, but the types won't match so reading the FK field will cause a cast issue at runtime, hence the error.

Well yes legal only as in Oracle lets you do it;

create table tmp_test1 (
    pk varchar2(32) not null,
    constraint pk_tmp_test1 primary key (pk)
);

create table tmp_test2 (
    pk varchar2(32) not null,
    fk varchar2(13) not null,
    constraint pk_tmp_test2 primary key (pk),
    constraint fk_tmp_test2_test1 foreign key (fk) references tmp_test1(pk)
);

insert into tmp_test1 (pk) values ('01234567890123');
insert into tmp_test2 (pk, fk) values ('01234567890123','01234567890123'); -- error!
rollback;

drop table tmp_test2;
drop table tmp_test1;

And if Oracle lets you do it then at some point somebody did it stuck_out_tongue_winking_eye

Doesn't what you just showed above demonstrate that it's not really a good thing this happens? How would you insert a new pk-fk relationship in that table? By truncating the FK value? Or by using PK values that aren't longer than 13 'by design'? How can that be enforced in our system? I mean: if someone uses our code to insert an entity with a PK value longer than 13, it will then cause an insert error, and they'll come here and ask, rightfully so, why the system allowed them to create a model and say 'everything is OK!' while they can't save an entity into the table.

In our system, if your customer entity has a PK with type NUMBER(28,0) and your order entity has an FK to that customer with type NUMBER(9,0), then when you assign the customer to the order, the PK value is synced with the order's field. Reading that field in memory will then cause a problem, as you are going to read a decimal value through an int property, which will crash. That's why it's an error and will stay an error: you then have code which won't work at runtime, and you'll come here (or your user will, or the person who picks up the maintenance of your system) and rightfully state there's an exception in our code, which is 100% generated so it can't be their fault.
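The in-memory crash described here can be reproduced outside the ORM. A contrived sketch (the boxed value stands in for what field syncing would store; the entity classes themselves are not involved):

```csharp
using System;

class CastDemo
{
    static void Main()
    {
        // A PK value read from a NUMBER(28,0) column arrives as a decimal.
        object syncedValue = 12345678901234567890m;

        // Reading it back through the FK property's expected type fails:
        // a boxed decimal cannot be unboxed as long, so this throws
        // InvalidCastException regardless of the value's magnitude.
        try
        {
            long fk = (long)syncedValue;
            Console.WriteLine(fk);
        }
        catch (InvalidCastException ex)
        {
            Console.WriteLine(ex.GetType().Name); // prints "InvalidCastException"
        }
    }
}
```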

Otis wrote:

Again you can solve this by using a type converter: make the FK field in the entity match the PK field. The mapping of the FK field then doesn't match its target field, (Number(22, 0) to Number(10,0)) so you use a type converter for that (create one which converts from int64 (number(10,0)) to decimal (number(22,0)))

Yes, I have to make a TypeConverter for the Int64/Decimal conflict. But for the String/String conflict, making a TypeConverter seems silly (convert a string to a string?).

It's not silly, as the entity has validation inside it. It will validate whether the value set to a string field is too long and will raise a validation exception. This is to prevent you from persisting a graph of entities to the DB which would then cause a rollback in a transaction, a situation where it's too late to e.g. go back to the user to correct the field which contains a value that's too long.

Plus I still have to change the model.. and if I change the model I don't need a TypeConverter. So I just change the model. Which will be refreshed in the future and fixed again.

No, you have to change the model to make it correct, which is then mapped onto a database which isn't correct; to make that mapping work you have to add some converters. Changing the model doesn't make the type converters go away: they're not for the model, they're for the mapping part. Correcting the model will cause a problem where you have length conflicts in the mappings. So there are two things: fk-pk field type mismatches in the model (which you can correct by updating the model) and mapping mismatches where the field in the entity doesn't match the target field.

Can't we play a little bit of "the customer is always right" here and sneak in a simple option?

You already have 3 options (screenshot attached) under DB first to allow bad DB designs (which the world is full of).

And what 'simple' option would that be? "Silently swallow an exception that occurs at runtime and pretend everything persisted OK"? Sorry, but we can't do that. I understand what you're after and I'd try to do the same thing, but we can't give in at this point, as it will bite you when you aren't looking. There's no 'let's close our eyes, it's going to be OK' kind of fix here, simply because the entity model in-memory behaves differently than the DB.

Frans Bouma | Lead developer LLBLGen Pro