DB2 on AS/400 is not the "real" DB2. It is something awkward that usually behaves like DB2 and sometimes does not behave at all.
We've used DB2 on other data migrations, and we've only had problems with the AS/400 version.
To give you a glimpse, here are some of the problems we're having:
- trying to create a view is impossible, since DB2 returns a generic error saying the command cannot be completed;
- trying to select specific fields from a view is impossible, since DB2 returns an error saying it cannot find those fields.
One could think I'm writing extremely complex queries; well, judge for yourself (real column and table names have been changed).
Running the following query works fine:
select substr(ZIP1, 1, 2) ZIPinit, city,
       (ZIP1 || '-' || ZIP2) as zip_code
from (select ZIP1, ZIP2, city,
             row_number() over (partition by substr(ZIP1, 1, 2), city
                                order by ZIP1, ZIP2, city) as ZIP_ORDER
      from LIBWRK.ZIP_CODES) as ZIPs
but creating a view over it is impossible:
create view LIBWRK.ZIP_APROXIMADO as
select substr(ZIP1, 1, 2) ZIPinit, city,
       (ZIP1 || '-' || ZIP2) as zip_code
from (select ZIP1, ZIP2, city,
             row_number() over (partition by substr(ZIP1, 1, 2), city
                                order by ZIP1, ZIP2, city) as ZIP_ORDER
      from LIBWRK.ZIP_CODES) as ZIPs
where ZIP_ORDER = 1
Also, it is not possible to run every query over a view. The following:
create view LIBWRK.ZIP_CONVERTION as (
  select ZIP.ZIP1 as ZIP1, ZIP.ZIP2 as ZIP2,
         ZIP.cityDescricao as cityDescription,
         ZIP1 || '-' || ZIP2 as ZIP_CODE,
         substr(ZIP1, 1, 2) as ZIPinit,
         case
           when ZIP.ZIP2 = '000' then
             ZIP1 || '-' || (select min(x.ZIP2)
                             from LIBWRK.ZIP_CODES as x
                             where x.ZIP2 <> '000' and x.ZIP1 = ZIP.ZIP1)
           else ZIP1 || '-' || ZIP2
         end as zip_code,
         ZIP.NormalizedCity as NormalizedCity,
         row_number() over (partition by substr(ZIP1, 1, 2), NormalizedCity
                            order by ZIP1, ZIP2, NormalizedCity) as ZIP_ORDER
  from LIBWRK.ZIP_CODES as ZIP);

select * from LIBWRK.ZIP_CONVERTION
works fine, but
select ZIP_CODE from LIBWRK.ZIP_CONVERTION
does not work at all.
But DB2 on AS/400 has a lot more problems. We haven't hit them on this project yet, but we ran into these in previous data migrations:
- queries that "explode": sometimes a query runs fine, sometimes it explodes, and yes, "query exploded" is the exact term DB2 writes in its log;
- problems with summary queries: the optimizer was unable to optimize them, and they took so long that running them became impossible.
The "pragmatic" solution to all these problems has been to create tables populated with the query results. In cases like the summaries, we had to build several summary tables over small subsets of the data and then create a view, if possible, or a table that joined all the pre-calculated data.
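A minimal sketch of what that workaround looks like, assuming a DB2 for i release recent enough to accept CREATE TABLE ... AS ... WITH DATA (on older releases a plain CREATE TABLE followed by INSERT INTO ... SELECT does the same job); the names ZIP_APROX_TABLE, SUMMARY_PART1 and SUMMARY_PART2 are made up for this example:

-- materialize the result of the problematic query into a plain table
-- instead of fighting with the view
create table LIBWRK.ZIP_APROX_TABLE as
  (select substr(ZIP1, 1, 2) ZIPinit, city,
          (ZIP1 || '-' || ZIP2) as zip_code
   from (select ZIP1, ZIP2, city,
                row_number() over (partition by substr(ZIP1, 1, 2), city
                                   order by ZIP1, ZIP2, city) as ZIP_ORDER
         from LIBWRK.ZIP_CODES) as ZIPs
   where ZIP_ORDER = 1)
  with data;

-- for the summaries: pre-calculate each small subset into its own table
-- (SUMMARY_PART1, SUMMARY_PART2, ... are those hypothetical tables),
-- then combine them in a view, or in yet another table when the view fails
create view LIBWRK.ZIP_SUMMARY as
  select * from LIBWRK.SUMMARY_PART1
  union all
  select * from LIBWRK.SUMMARY_PART2;

It is ugly and it wastes space, but the materialized tables behave like ordinary tables, so every query that failed against the views runs against them.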