Since last October we’ve been able to try the preview of the Microsoft Dynamics 365 for Finance and Operations Database Movement API, which allows us to list and download database backups and start database refreshes using a REST API.
If you want to join the preview, you first need to be part of the MSDyn365FO Insider Program, where you can join the “Dynamics 365 for Finance and Operations Insider Community”. Once invited to the Yammer organization, you can ask to join the “Self-Service Database Movement / DataALM” group, where you’ll get the information to add yourself to the preview and enable it on LCS.
As you might know, MSDyn365FO’s data access layer is pretty different from T-SQL. This means that if you copy a query from AX and paste it into SSMS, it won’t validate in 99% of the cases (the other 1% being a select * from table).
When we run a query in MSDyn365FO we’re using its data access layer, which is later translated into real SQL. We can see the differences using xRecord’s getSQLStatement method, with the generateonly keyword on the query (and forceliterals to show the parameters’ values), to get the SQL statement. For example, if we run the following code:
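The original snippet isn’t shown here; a minimal sketch that would produce a statement like the one below (the select on CustTable for account ‘0001’ is inferred from the resulting SQL) could be:

```xpp
CustTable custTable;

// generateonly builds the SQL statement without executing the query;
// forceliterals replaces the parameter placeholders with their actual values
select generateonly forceliterals custTable
    where custTable.AccountNum == '0001';

info(custTable.getSQLStatement());
```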
We’ll get this SQL statement:
SELECT TOP 1 T1.PAYMTERMID,T1.LINEDISC,T1.TAXWITHHOLDGROUP_TH,T1.PARTYCOUNTRY,T1.ACCOUNTNUM,T1.ACCOUNTSTATEMENT,...
FROM CUSTTABLE T1
WHERE (((PARTITION=5637144576) AND (DATAAREAID=N'usmf')) AND (ACCOUNTNUM=N'0001'))
We can see all the fields are being selected, and the where clause contains the account number we selected (plus DataAreaId and Partition).
When a while select is run on MSDyn365FO, a select SQL statement is executed on SQL Server for each iteration of the loop. The same happens if an update or delete is executed inside the loop. This is known as a record-per-record operation.
Imagine you need to update the note of all the customers in customer group 10. We could do this with a while select, like this:
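A minimal sketch of that while select (the Memo field on CustTable is an assumption, inferred from the SQL statement shown further below):

```xpp
CustTable custTable;

ttsbegin;
// One UPDATE statement is sent to SQL Server per customer in the loop
while select forupdate custTable
    where custTable.CustGroup == '10'
{
    custTable.Memo = 'Special customer'; // field name is an assumption
    custTable.update();
}
ttscommit;
```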
This would make one call to SQL Server for each customer in group 10, one per loop iteration. Or we could use set-based operations:
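A sketch of the equivalent update_recordset (again assuming a Memo field on CustTable):

```xpp
CustTable custTable;

ttsbegin;
// A single UPDATE statement is sent to SQL Server for all matching rows
update_recordset custTable
    setting Memo = 'Special customer'
    where custTable.CustGroup == '10';
ttscommit;
```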
This will execute a single SQL statement on SQL Server that updates all the customers in customer group 10 instead of one query per customer:

UPDATE CUSTTABLE
SET MEMO = 'Special customer'
WHERE (((PARTITION=5637144576) AND (DATAAREAID=N'usmf')) AND (CUSTGROUP=N'10'))
Running these methods instead of while selects should obviously be faster, as only a single SQL query is executed. But…
Why could my set-based operations be running slow?
There are some well-documented scenarios in which set-based operations fall back to record-per-record operations, and for most of them there’s a method to skip the check:

- The update, insert or delete method is overridden on the table: use skipDataMethods to override
- Database log enabled for the table: use skipDatabaseLog to override
- Alerts set up for the table: use skipEvents to override
- ValidTimeStateFieldType property not equal to None on a table: cannot be overridden
In the example, if the update method of CustTable is overridden (which it is), the update_recordset operation will be run like a while select that updates each record.
In the case of update_recordset, this can be solved by calling the skipDataMethods method before running the update:
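A sketch of the call, reusing the update from the earlier example:

```xpp
CustTable custTable;

// Skip the overridden update() method so the operation stays set-based
custTable.skipDataMethods(true);

ttsbegin;
update_recordset custTable
    setting Memo = 'Special customer' // field name is an assumption
    where custTable.CustGroup == '10';
ttscommit;
```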
This will avoid calling the update method (or insert, in the case of insert_recordset), more or less like calling doUpdate in a loop. The rest of the scenarios can be skipped by calling their corresponding method before the operation.
So, for bulk updates I’d always use set-based operations, and enable them on data entities too with the EnableSetBasedSqlOperations property.
And now another but is coming.
Should I always use set-based operations when updating large sets of data?
Recently, a colleague found a little issue when using an AOT query to feed a view with a range dynamically filtered using a SysQueryRangeUtil method.
Recreating the issue
The query is pretty simple, only showing ledger transaction data from the GeneralJournalEntry and GeneralJournalAccountEntry tables. A range on the Ledger field for the current company was added, as you can see in the pic below:
We created a new range method by extending the SysQueryRangeUtil class, using Ledger::current() to filter by the active company.
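The extension isn’t shown here; a hedged sketch (the class and method names are assumptions, and the method would be referenced in the range value as (currentLedger())) could look like this:

```xpp
[ExtensionOf(classStr(SysQueryRangeUtil))]
final class MySysQueryRangeUtil_Extension
{
    // Returns the current company's Ledger RecId as a query range value
    public static str currentLedger()
    {
        return SysQuery::value(Ledger::current());
    }
}
```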
Then we used the query to feed data to the view and added two fields just for testing purposes:
Everything quite straightforward. Let’s check the view in the table browser…
No data! And I can tell there’s data in here:
What’s going on in here? If we use the query in a job (yeah, I know, Runnable Class…) the range is filtering the data as expected.
So… let’s see the view design in SSMS:
Well, it definitely looks like something’s being filtered in here. The range is working! Is it? Sure? Which company does that Ledger table RecId correspond to?
What’s going on?
There’s an easy and clear explanation, but you don’t think of it until you face this specific issue. The view* is a Data Dictionary object: when the project is synchronized, the view is created in SQL Server, and we can see and query it in SSMS. The query*, on the other hand, is an X++ object that only exists within the application. It feeds the view and provides its data back end, but all the X++ added functionality stays in 365, including the SysQueryRangeUtil filters.
The solution is an easy one. Removing the range in the query and adding it in the form data source will do the trick (if this can be considered a trick…).
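A sketch of adding the range in the form data source instead (MyLedgerView is a placeholder name for the view described above; the real data source and field names may differ):

```xpp
// In the init method of the form data source that shows the view
public void init()
{
    super();

    // Apply the current-company filter at runtime, in X++,
    // where Ledger::current() is available
    QueryBuildDataSource qbds = this.queryBuildDataSource();
    qbds.addRange(fieldNum(MyLedgerView, Ledger))
        .value(queryValue(Ledger::current()));
}
```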
(*) Note: the links to the docs point to AX 2012 docs, but they should still be valid.