
I have the following scenario:

Database table, let's call it ZDBX, with primary key fields:

MATNR, LIFNR, ZART

In a report I have an internal table lt_table TYPE TABLE OF ZDBX:

DELETE ADJACENT DUPLICATES FROM lt_table 
   COMPARING matnr lifnr zart fact_code.

DELETE FROM ZDBX.
INSERT ZDBX FROM TABLE lt_table.

The INSERT statement will lead to a short dump because there are rows in the internal table with identical primary keys but different fact_code values.

Now, I know that the obvious solution is to only compare primary keys in the DELETE ADJACENT DUPLICATES statement, but this does not work in my case, because the user wants to decide which fact_code will be deleted.

For now, my solution is to export the internal table (right before the INSERT) into an Excel file, find the duplicates, and ask the user which fact_code they want.

Can I find out (through a system variable or in ST22) at which line the INSERT crashed? (So I don't have to do all the Excel work.)

My ideal solution would be to put the INSERT into a TRY-CATCH block, find the duplicate row, and write a message with the duplicate data into the job log.

Is it possible?

(Also, making the column fact_code part of the primary key is not a solution the user agrees with.)
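
A minimal sketch of the TRY-CATCH idea described above, assuming the array INSERT raises the catchable exception CX_SY_OPEN_SQL_DB (the runtime error behind the dump is typically SAPSQL_ARRAY_INSERT_DUPREC); the variable names are only illustrative:

    TRY.
        INSERT zdbx FROM TABLE lt_table.
      CATCH cx_sy_open_sql_db INTO DATA(lx_db).
        " The short dump is avoided, but the exception does not report
        " which row was the duplicate, so only a generic message can be
        " written to the job log here.
        DATA(lv_text) = lx_db->get_text( ).
        MESSAGE lv_text TYPE 'I'. " goes to the job log in background processing
        ROLLBACK WORK.            " make sure nothing from the failed insert is committed
    ENDTRY.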

  • Is it necessary to attempt the insert before choosing fact codes? If not, you already have a sorted list. Loop at the list and if the key + fact code is identical to the previous line, give a dialog. – Samleijenhorst Feb 19 '20 at 15:25
  • Thank you for the idea, I will implement it. It is the most appropriate for my given scenario. – Ovidiu Pocnet Feb 20 '20 at 09:26
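
A minimal sketch of the approach suggested in the comment above (the names ls_prev and ls_row are only illustrative; the user dialog itself is left as a comment):

    SORT lt_table BY matnr lifnr zart.

    DATA ls_prev TYPE zdbx.
    LOOP AT lt_table INTO DATA(ls_row).
      IF sy-tabix > 1 AND ls_row-matnr = ls_prev-matnr
                      AND ls_row-lifnr = ls_prev-lifnr
                      AND ls_row-zart  = ls_prev-zart.
        " Same primary key as the previous line but a different fact_code:
        " this is where the user can be asked which fact_code to keep.
      ENDIF.
      ls_prev = ls_row.
    ENDLOOP.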

2 Answers


You can use the FOR ALL ENTRIES addition to find the existing items:

  SELECT *
    FROM ZDBX
    INTO TABLE lt_duplicates
    FOR ALL ENTRIES IN lt_table
    WHERE MATNR EQ lt_table-MATNR
      AND LIFNR EQ lt_table-LIFNR
      AND ZART  EQ lt_table-ZART.

I prefer to create an ABAP report for listing and choosing the undesirable records. You can add a checkbox column for marking rows and automatically unmark the other records that share the same primary key.

mkysoft
  • The idea is good, but it does not work in my case because the duplicate data is in the internal table. I do not insert data that is already in the DB table. – Ovidiu Pocnet Feb 20 '20 at 08:44

There's no ready-made way to know which lines are duplicates; you have to code the algorithm from scratch.

Possible solutions:

  1. Worst one: INSERT one line at a time and test SY-SUBRC; it will be 0 if the insertion is successful, and any other value usually indicates a duplicate line. Roll back the insertions if there is any duplicate.

    DATA lt_error TYPE STANDARD TABLE OF zdbx. " collects the rejected lines

    LOOP AT lt_table INTO DATA(ls_table).
      INSERT zdbx FROM ls_table. " one line at a time
      IF sy-subrc <> 0.          " 4 = a row with the same primary key already exists
        APPEND ls_table TO lt_error.
      ENDIF.
    ENDLOOP.
    IF lt_error IS NOT INITIAL.
      ROLLBACK WORK.             " undo the single-line insertions
    ELSE.
      COMMIT WORK.
    ENDIF.
    
  2. Best one:

    • Read the current lines of the database table into an internal table lt_duplicates; you can read only the relevant lines by using mkysoft's proposal (FOR ALL ENTRIES). Be careful: make sure that lt_table is not empty, otherwise all lines of the database table are read.
    • Check which lines of your internal table lt_table exist in lt_duplicates. They are all duplicates.
    • For the lines of lt_table which don't exist in lt_duplicates, you must also make sure that there isn't another line of lt_table with the same primary key. You can do this by first sorting the internal table by the primary key and then reading the following line(s) that have the same primary key fields; all of those lines are duplicates as well. (A sketch of the first two steps follows below.)
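
A short sketch of steps 1 and 2, under the assumption that lt_duplicates has the line type of ZDBX; step 3 is the same adjacent-line comparison shown in the comment under the question:

    DATA lt_duplicates TYPE STANDARD TABLE OF zdbx.

    " Step 1: read only the potentially colliding rows (guard the empty case,
    " because FOR ALL ENTRIES with an empty table would read everything).
    IF lt_table IS NOT INITIAL.
      SELECT *
        FROM zdbx
        INTO TABLE lt_duplicates
        FOR ALL ENTRIES IN lt_table
        WHERE matnr EQ lt_table-matnr
          AND lifnr EQ lt_table-lifnr
          AND zart  EQ lt_table-zart.
    ENDIF.

    " Step 2: every lt_table line whose primary key exists in lt_duplicates
    " is a duplicate of a database row.
    LOOP AT lt_table INTO DATA(ls_line).
      READ TABLE lt_duplicates WITH KEY matnr = ls_line-matnr
                                        lifnr = ls_line-lifnr
                                        zart  = ls_line-zart
                               TRANSPORTING NO FIELDS.
      IF sy-subrc = 0.
        " Key already exists in the database: treat ls_line as a duplicate.
      ENDIF.
    ENDLOOP.
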
Sandra Rossi
  • The second solution does not really work in my case because the duplicate data is in the internal table. (I delete all data from the DB table before inserting from the internal table.) And as I mentioned in the description, I can't delete the duplicate data from the internal table, because the user has to decide which fact_code to delete. In a dream world I would insert 15,000 rows into the DB table, the report would crash or catch an exception, and I would see that row 1200 is a duplicate. – Ovidiu Pocnet Feb 20 '20 at 09:22
  • Sorry, I didn't pay attention to the fact that the database table was emptied before the insertion. So the question is about identifying duplicates in an internal table, and that's already answered [here](https://stackoverflow.com/questions/48810878/finding-duplicates-in-abap-internal-table-via-grouping). I will delete my answer as soon as you confirm this other question/answer is what you're looking for. – Sandra Rossi Feb 20 '20 at 10:06
  • My logic was pretty flawed because I wanted to find the duplicates exactly at the INSERT statement; instead, as @Samleijenhorst pointed out, I can easily check in a loop before the INSERT. Just because I could not delete the duplicates in the DELETE ADJACENT DUPLICATES statement, I thought that the only alternative must be to check in the INSERT statement. The answer I was looking for was initially not the one that you suggested, but the discussion made me realize there is no reason to check directly at the INSERT. So thank you for the suggested answer, it is the one I am now looking for :) – Ovidiu Pocnet Feb 20 '20 at 10:41