Written some X++ which seems to have high memory usage when run in IL but runs normally when not in IL? Or just noticing high memory usage in servers running IL code?
Here we’re going to talk about an aspect of garbage collection in XppIL that is important for anyone writing XppIL code to take on board.
Let’s start with an example. If the code below runs in XppIL, how many X++ objects are in memory when we hit the sleep line? (The query Cust contains CustTable and CustTrans.)
for (i = 1; i <= 1000; i++)
{
    q = new Query(queryStr(cust));
    qr = new QueryRun(q);
}
sleep(60000);
Did you think it would be 1 Query and 1 QueryRun? The answer might surprise you – there are:
- 2000 Query
- 1000 QueryRun
- 1000 xArgs
- 1000 CustTable
- 1000 CustTrans
This demonstrates that instances of X++ kernel objects are not disposed until their XppIL owner object is itself disposed. If I ran the code above in an X++ job, it would have 6000 objects in memory just before the end of the job; when the job completes, they are all released. This is the key fact for an X++ developer to take on board: if you write a long-running process that uses a lot of X++ kernel objects, this behaviour can make your code use far more memory than you might expect.
What happens under the hood when we call something like Classes\Query in XppIL?
First it’s important to understand that although we recognise Query as a kernel X++ class, there is no X++ code involved – it’s a native C++ object. The same is true of all kernel X++ classes: the classes associated with the query framework, TreeNode, Connection and so on.
This means that when we call a kernel X++ class from our XppIL code, we transition from managed (.NET) code to native code. The .NET garbage collector has no power over objects on the native side – they are out of its jurisdiction. The transition between these two layers is our interop layer; it is part of our kernel, and it has to make a choice about how it is going to track and release the native objects being used.
To understand how we’re going to track and release these objects we have to think about possible garbage collection approaches.
Basically Peter’s article explains that the old X++ approach was to track objects by incrementing and decrementing references to them as they are instantiated or assigned, and to release each one at the earliest possible moment, when it is no longer needed. This didn’t scale well: the more objects you hold in memory at once, the bigger the overhead of all that reference tracking, because each time you add or remove a reference there’s a large tree of references to walk.
The .NET (and therefore XppIL) approach is different: the garbage collector runs periodically, clearing objects it deems no longer needed. This means you don’t know exactly when your object will leave memory, but it also means the approach scales very well, because the GC performs the same sweep each time it runs, however many references have come and gone in between.
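As a loose analogy – sketched in Python rather than X++, since Python happens to combine both styles (reference counting plus a periodic cyclic collector) – a reference-counted object vanishes the instant its last reference goes, while an object only the collector can find lingers until the next sweep. The Tracked class here is purely illustrative:

```python
import gc

gc.disable()  # make collection timing deterministic for this demo

class Tracked:
    instances = 0
    def __init__(self):
        Tracked.instances += 1
    def __del__(self):
        Tracked.instances -= 1

# Reference counting (the old X++ style): the object is freed the
# instant the last reference to it goes away.
t = Tracked()
del t
print(Tracked.instances)  # 0 - released immediately

# Tracing collection (the .NET style): a reference cycle defeats
# the counting, so the object stays alive until the collector runs.
a = Tracked()
a.self_ref = a  # cycle: the object references itself
del a
print(Tracked.instances)  # 1 - still in memory, awaiting collection
gc.collect()
print(Tracked.instances)  # 0 - swept by the collector
```

The trade-off is exactly the one described above: counting gives prompt release at a per-operation cost, while the periodic sweep defers release but keeps the steady-state cost flat.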
The reason that I referenced Peter’s article on garbage collection, and briefly explained the two approaches that old X++ and XppIL take in this respect, is because it provides the background necessary to appreciate why we take a certain approach with our interop when it comes to native objects used in IL.
What possible options might there be for trying to track the native kernel X++ objects used in XppIL so they can be garbage collected?
I am explaining this here to help others understand why we took this approach with AX2012 – we had to make a technical choice, and the option we chose gives the best performance.
- Don’t track X++ kernel object references at all, and let them be released as soon as they go out of scope on the native side? No – the IL code might use them on the next line of its code, so we can’t do that; we need to make sure they’re not released while the IL code might still need them.
- Track each reference added, incrementing and decrementing counts as the code goes along, so objects can be released as soon as they are free? Sound familiar? It’s the old X++ approach, and it wouldn’t scale to larger numbers of objects – IL would be no faster than old X++, because the tracking would slow it back down to the same speed.
- Track a reference to the native object when it is first referenced, and when the corresponding IL object is cleaned up by the .NET GC, allow the native object to be released. This gives us both properties we want: it’s fast, and we don’t release objects while they are still needed.
We took the third option above for AX2012. This means native objects can stay in scope longer than you might expect compared with old X++. So when writing X++ code that will run in IL, it’s important to be aware of this and to consider the impact on memory when your code runs in IL.
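A minimal sketch of that third option, in Python rather than the real kernel interop – ManagedWrapper and release_native are hypothetical stand-ins, not actual AX types. The managed-side wrapper pins the native object, and the native reference is only dropped when the wrapper itself is collected:

```python
import weakref

released = []

def release_native(handle_id):
    # Stand-in for the interop layer freeing a native C++ kernel object.
    released.append(handle_id)

class ManagedWrapper:
    """Hypothetical managed-side proxy for a native kernel X++ object."""
    def __init__(self, handle_id):
        self.handle_id = handle_id
        # Pin the native object for as long as this wrapper is alive;
        # release it only when the wrapper itself is garbage collected.
        weakref.finalize(self, release_native, handle_id)

w = ManagedWrapper(42)
print(released)  # [] - native object held while the wrapper is reachable
del w            # wrapper collected, so the native reference is dropped
print(released)  # [42] - native object released
```

Note that there is no per-operation bookkeeping while the wrapper is in use – the cost is paid once at creation and once at collection, which is what preserves the speed advantage of IL.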
There are a few tricks to help the X++ developer with this. The list below is non-exhaustive, but it’s a good base to start with:
- Think about keeping the size of your unit of work down. For example, take a process which posts a thousand sales orders: if it posts them all in a single batch task it will use a lot of memory, but if it splits each order into a separate batch task it will use much less, because each task releases its memory as it completes. That approach is also faster, because you can run multiple tasks in parallel.
- Use the .NET memory profiler to profile your IL process. It will show the IL call stacks responsible for outstanding memory allocations, so if you have unexpectedly high memory usage you’ll be able to see where in your XppIL it’s coming from. If the last piece of your XppIL code in the stack instantiated a kernel X++ class and then descended into the interop, you can recognise that this is keeping a native kernel X++ object alive.
- Take care with the use of certain kernel X++ classes in your IL code – for example TreeNode, UserConnection/Connection, Query, QueryRun and so on. You can call finalize() on these in your code when you know they’re no longer needed, to release them.
- If you write a while select and use break; within the braces, the table buffer stays open until the IL code completes. You can call buffer.clear() to release it manually. If the while select completes naturally, with no break;, it releases the buffer on its own.
- As standard the AOS uses the server .NET garbage collector; the .NET framework also has a client garbage collector. You can switch between them by editing the ax32serv.exe.config file, changing gcServer from true to false. The server garbage collector is lazy: server applications are expected to use lots of memory and not give it back, with no need to balance against the other applications running on a machine. The client garbage collector is more aggressive: it tries to give memory back sooner and keep the overall footprint lower. The client GC can be a useful option on machines which don’t have much memory.
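The first tip above – keeping the unit of work small – can be sketched numerically. This Python toy (the counts are illustrative, not measured from AX; post_order stands in for whatever kernel objects a posting allocates) shows how splitting work into tasks that release their objects on completion keeps the peak far below one big task that holds everything until the end:

```python
# Sketch of the unit-of-work tip: peak object count for one big task
# versus many small tasks. object() stands in for a kernel X++ object.
def post_order(order):
    return [object() for _ in range(10)]  # 10 objects per posted order

# One big batch task: every object stays pinned until the task ends.
held = []
for order in range(1000):
    held.extend(post_order(order))
peak_big = len(held)  # all 1000 orders' objects held at once

# One batch task per order: each task's objects are released when it
# completes, so only a single task's worth is ever held at a time.
peak_small = 0
for order in range(1000):
    task_objects = post_order(order)
    peak_small = max(peak_small, len(task_objects))
    # the task ends here; task_objects goes out of scope and is freed

print(peak_big, peak_small)  # 10000 10
```

The total work is identical in both shapes; only the lifetime of the allocations changes, which is exactly the lever the XppIL developer controls.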
We're always looking for feedback and would like to hear from you. Please head to the Dynamics 365 Community to start a discussion, ask questions, and tell us what you think!