I recently investigated a reported data import slowness issue in which an import ran for days without finishing, even when executed in parallel batch tasks. The issue can occur with any entity import, especially during a new company go-live or when the target entity starts out empty. The impact gets dramatically worse as more data is imported, so I would like to share my findings and possible solutions with the community.
The SQL statement with the performance hit is emitted by Classes\DmfEntityWriter\processRecords() and follows the pattern below.
FROM [Staging table]
LEFT OUTER JOIN [Entity view]
WHERE ((T1.PARTITION=XXXXXXXXXX) AND (((T1.DEFINITIONGROUP=XXX) AND (T1.EXECUTIONID=XXX)) AND ((T1.RECID>=XXX) AND (T1.RECID<=XXX))))
ORDER BY T1.RECID
The challenging part is that this SQL statement itself executes quickly, so it is easily missed in an AX trace or a SQL Profiler trace. The performance hit is in the cursor fetch, which only surfaces a while after the staging-to-target import starts and keeps growing at an alarming rate if you watch the SQL query statistics DMV. The telltale sign is SQL text recorded as FETCH API_CURSORXXXXXXXX, with high and steadily growing logical reads but a very low execution count. Its aggregate elapsed time is typically the highest, as in the sample below.
SELECT TOP 10
    qs.total_elapsed_time / 1000.0 AS [total_elapsed_time_ms],
    qs.execution_count,
    (qs.total_elapsed_time / qs.execution_count) / 1000.0 AS [avg_elapsed_time_ms],
    qs.max_elapsed_time / 1000.0 AS [max_elapsed_time_ms],
    qs.min_elapsed_time / 1000.0 AS [min_elapsed_time_ms],
    qs.total_logical_reads,
    qs.total_logical_reads / qs.execution_count AS [avg_logical_reads],
    qs.max_logical_reads,
    qs.min_logical_reads,
    st.text AS [sql_text]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;
DmfEntityWriter.processRecords() executes this SQL statement in a select-while loop that reads all related records from the staging table and inserts them into the target entity. Each time the loop iterates, the target table has more records added or updated, so every subsequent cursor fetch has more target data to scan. The total fetch cost therefore grows roughly with the square of the number of records in the RecId range assigned to the particular batch task/thread, multiplied by the number of tables defined in the entity view. As a rough illustration, a thread importing 100,000 records whose fetches scan the already-inserted target rows touches on the order of 100,000 x 100,000 / 2 = 5 billion rows per joined table by the time it finishes.
With no, or virtually no, statistical data in the target table, the SQL optimizer tends to build an execution plan that (1) scans the clustered index and (2) uses nested loop joins. The execution plan I captured always started with a clustered index scan of EcoResProduct by its RecId index, nested-loop-joined all the related tables defined in the EcoResProductEntity view, and finally nested-loop-joined to the EcoResProductStaging table with a RecId seek.
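If you want to confirm that missing or stale statistics are pushing the optimizer toward this plan, a query along these lines can help. This is a minimal sketch; the dbo.ECORESPRODUCT table name is illustrative, so substitute your own target table.

```sql
-- Sketch: inspect statistics freshness on the target table.
-- last_updated and rows are NULL when statistics have never been built,
-- and a high modification_counter indicates they are stale.
SELECT  s.name                AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id = OBJECT_ID(N'dbo.ECORESPRODUCT');
```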
One option to address this is to use set-based processing in the entity. However, not all entities support set-based processing. The second option is to create a SQL plan guide that applies a hash join hint instead. You may want to work with your DBA to identify the exact SQL statement text to attach the plan guide to. I include the plan guide I used for reference; it reduced the process from days to about an hour.
EXEC sp_create_plan_guide
    @name = N'EcoResProductStaging_Import_Hash',
    @stmt = N'<exact parameterized statement text captured from your trace>',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = N'@P1 nvarchar(61),@P2 nvarchar(91),@P3 bigint,@P4 bigint',
    @hints = N'OPTION(HASH JOIN)';
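After creating the plan guide, you can verify that it exists and disable or drop it later, for example once the initial import is done and the target table has realistic statistics. This is a sketch using the documented sys.plan_guides view and sp_control_plan_guide procedure; the plan guide name matches the example above.

```sql
-- Verify the plan guide was created and is enabled.
SELECT name, is_disabled, scope_type_desc, hints
FROM sys.plan_guides
WHERE name = N'EcoResProductStaging_Import_Hash';

-- Disable the plan guide when it is no longer needed.
EXEC sp_control_plan_guide @operation = N'DISABLE',
                           @name = N'EcoResProductStaging_Import_Hash';

-- Or remove it entirely.
-- EXEC sp_control_plan_guide @operation = N'DROP',
--                            @name = N'EcoResProductStaging_Import_Hash';
```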
We're always looking for feedback and would like to hear from you. Please head to the Dynamics 365 Community to start a discussion, ask questions, and tell us what you think!