In this article, I want to outline some common troubleshooting steps for addressing duplicate-record errors that occur during the pre-processing phase of the Microsoft Dynamics AX 2012 upgrade process. The scenario described here is one where a batch task has failed partway through processing the data. The error message shown on the job in the Infolog would look something like this:
Info Message (10:12:33) Run job inventJournalTransPreUpgradeProcess ordered by Admin in database XXX.
Error Message (10:12:33) Cannot create a record in Shadow_InventJournalTrans_Dimension (Shadow_InventJournalTrans_Dimension).
The record already exists.
Error Message (10:12:33) Script: ReleaseUpdateTransformDB40_Invent.inventJournalTransPreUpgradeProcess(xxx). Last control data: 00333331,1
After seeing that type of error message, the first two action items would usually be the following:
1. Highlight the job in the upgrade cockpit and choose Rerun to make sure it is not simply a case of multiple updates to the same table causing issues.
2. Check to see whether the table is part of a table collection that is shared within a virtual company. In that case, the likely solution is to add a not exists clause to the while loop within the pre-processing script, similar to the one found in the matching delta processing script (see the sketch below). This stops the system from processing the same rows once for every company within the virtual company.
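As a rough sketch of what that guard might look like in the pre-processing loop, using the table names from the example in this article and assuming the shadow table links back to the source row through RefRecId (as discussed in step 10 further below):

InventJournalTrans                  inventJournalTrans;
Shadow_InventJournalTrans_Dimension shadowInventJournalTrans;
;
while select inventJournalTrans
    order by JournalId, LineNum
    notexists join shadowInventJournalTrans
        where shadowInventJournalTrans.RefRecId == inventJournalTrans.RecId
{
    // Build and insert the shadow record exactly as the script already does;
    // the notexists join simply skips rows another company has already written.
}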
If neither of these is the culprit, then some additional research is needed. Here are the steps I went through to troubleshoot the error above:
1. Look at the row counts for the original table and the shadow table. In this case the shadow table was Shadow_InventJournalTrans_Dimension, but based on my object IDs, in SQL it turns out to be SHADOW_INVENTJOURNALTRANS50376.
select COUNT(recid) from SHADOW_INVENTJOURNALTRANS50376
select COUNT(recid) from INVENTJOURNALTRANS
2. The results came back as roughly 5 million rows in the shadow table versus 18 million in the original, so we know the script processed a sizeable portion before hitting the error. Next we need to look at the script itself to understand which record is indicated by "Last control data: 00333331,1".
3. While it is possible to debug this through standard batch debugging methods, it is often easier to copy the contents of the upgrade script into a job so that you can step through it without worrying about batch clients picking up code changes and breakpoints. After copying the code from the method inventJournalTransPreUpgradeProcess of the ReleaseUpdateTransformDB40_Invent class into a new job, you need to change the reference to classidget(this) to classNum(ReleaseUpdateTransformDB40_Invent) for the script to compile (a sketch follows step 4 below).
4. Before running the job, it is very important to make sure you are in the correct company. The batch framework switches companies automatically, but a job runs in the current company context (see the sketch below).
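As a rough sketch combining steps 3 and 4: the job name is arbitrary, the company id 'xxx' is a placeholder, and the copied script body is omitted. Wrapping the pasted code in a changecompany block is one way to guarantee the company context:

static void inventJournalTransPreUpgradeDebugJob(Args _args)
{
    changecompany('xxx')    // placeholder for the company the failed batch task was processing
    {
        // Paste the body of ReleaseUpdateTransformDB40_Invent.inventJournalTransPreUpgradeProcess here,
        // replacing classidget(this) with classNum(ReleaseUpdateTransformDB40_Invent) so the job compiles.
    }
}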
5. Set a breakpoint at the top and start stepping through the script. As you step over the line
isResuming = transactionManager.initialize();
you should see isResuming come back as true, since we are midway through processing the script.
6. Then step over this line to see the last journal ID and line number that were processed:
[lastJournalId, lastLineNum] = transactionManager.getLastIterationState();
7. In my case, the values came back as 00332431 and 3. These represent the last values successfully saved, but notice that they differ from what is reported in the error, which represents the record that failed. This means the process found a duplicate within the block of 5,000 transactions being written, so the other rows within that block were rolled back when the duplicate error was hit.
8. The next step is to verify that the while select statement properly uses the iteration state variables to start in the correct position. For this script the two values are lastJournalId and lastLineNum, and the items to check are the order by and the where clause. The order by is set to those two fields, as it should be, since it needs to match the unique index on the table. Because this is a two-part iteration state, the where clause also has to be formatted with an OR clause to handle the chance that we stopped writing in the middle of a journal (see the sketch below).
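A sketch of the pattern being verified, assuming the buffer and variable names used in this article (the loop body is omitted):

while select inventJournalTrans
    order by JournalId, LineNum
    where   inventJournalTrans.JournalId >  lastJournalId
        || (inventJournalTrans.JournalId == lastJournalId
        &&  inventJournalTrans.LineNum   >  lastLineNum)
{
    // Resume processing here: the OR clause picks up the remaining lines of the
    // last journal first, then every journal after it.
}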
9. In this case, since the failing record is deep within the data set, you can add a conditional breakpoint to the job so that you don't have to watch every iteration of the while loop. To do this, add the following code to the beginning of the while loop:
if (inventJournalTrans.JournalId == '00333331')
{
    breakpoint;
}
10. When we hit the breakpoint, stepping into the routine shows the duplicate error occurring on that first row. So we need to look at all the unique indexes on the Shadow_InventJournalTrans_Dimension table, and it turns out RefRecId is the only unique index. Querying the InventJournalTrans table for that RecId in SQL Server Management Studio confirmed that there were indeed two rows with the same RecId within that company, which was allowed because there is no RecId index on the table, only a JournalId/LineNum unique index (a query sketch follows).
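For reference, a query along these lines in SQL Server Management Studio will surface any such duplicates; the company id 'xxx' is a placeholder:

select RECID, COUNT(*)
from INVENTJOURNALTRANS
where DATAAREAID = 'xxx'
group by RECID
having COUNT(*) > 1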
11. To correct this situation, the developer would need to change the unique index on the Shadow_InventJournalTrans_Dimension table to RefJournalId, RefLineNum and then update all the code references to match.
12. A second option was available in this case, because the InventJournalTrans records in question should have been removed beforehand by running the Inventory journals clean up job found under the inventory Periodic processing.