DevOps ALM automation in Microsoft Dynamics 365 for Finance and Operations

I’ve already written some posts about development Application Lifecycle Management (ALM) for Dynamics 365 for Finance and Operations in the past:

The possibility of doing real CI/CD is one of my favorite MSDyn365FO things, going from “What’s source control?” to “Mandatory source control or die” has been a blessing. I’ll never get tired of saying this.

Plus the post ends with an extra bonus!

More automation!

I’ve already explained in the past how to automate the builds, create the CI builds and create the release pipelines on Azure DevOps. What I want to talk about in this post is adding a little bit more automation.

Builds

In the build definition go to the “Triggers” tab and enable a scheduled build:

This will automatically trigger the build at the time and days you select. In the example image, a new build is launched every weekday at 16:30. But every day? Nope! The “Only schedule builds if the source or pipeline has changed” checkbox below the time selector makes the build trigger only if there has been a change to the codebase, meaning that if no changeset is checked in during that day, no build will be triggered.

Releases

First step done. Now let’s see what we can do with the releases:

The release pipeline in the image above is the one that launches after the build I’ve created in the first step. For this pipeline I’ve added the following:

The continuous deployment trigger has been enabled, meaning that after the build finishes this release will be automatically run. No need to define a schedule but you could also do that.

As you can see, the schedule screen is exactly the same as in the builds, even the changed-pipeline checkbox is there. You can use either of these two approaches, CD or a scheduled release; it’s up to your project’s or team’s needs.

With these two small steps you can have your full CI/CD strategy automated and update a UAT environment each night, so all the changes made during the day are ready for testing, with no human interaction!

But I like to add some human touch to it

If you don’t like not knowing if an environment is being updated… well that’s IMPOSSIBLE because LCS will SPAM you to make sure you know what’s going on. But if you don’t want to be completely replaced by robots you can add approvals to your release flow:

Clicking the lightning + person button on the left of your release stage, you can set the approvers, either a person or a group (which is quite practical), the kind of approval (all approvers or a single one) and the timeout. You will also receive an email with a link to the approval form:

And you can also postpone the deployment! Everything is awesome!

Extra bonus!

A little tip. Imagine you have the following release:

This will update 3 environments, but it will also upload the same Deployable Package to LCS three times. Wouldn’t it be nice to upload it once and have all the deployments use that file? Yes, but we can’t pass the output variable from the upload stage to the other stages 🙁 That’s unfortunately right, but we can do something about it with a little help from our friend PowerShell!

Update a variable in a release

What we need to do is create a variable in the release definition and set its scope to “Release”:

Then, for each stage, we need to enable the “Allow scripts to access the OAuth token” checkbox in the agent job:

I’ll explain later why we’re enabling this. We now only need to update this variable after uploading the DP to LCS. Add an inline PowerShell step after the upload one and do this:
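A minimal PowerShell sketch of that inline step could look like the one below. It follows the approach from Stefan Stranger’s post referenced later and assumes the LCS upload task exposes its output variable as GoldenUpload.FileAssetId and that the release variable is called axzfileid:

    # Sketch: update a release-scoped variable through the Azure DevOps REST API
    $assetId = "$(GoldenUpload.FileAssetId)"   # output variable from the LCS upload step

    # Name of the release variable to update
    $ReleaseVariableName = 'axzfileid'

    # Release Management REST API endpoint for the current release
    $uri = "$($env:SYSTEM_TEAMFOUNDATIONSERVERURI)$($env:SYSTEM_TEAMPROJECTID)/_apis/release/releases/$($env:RELEASE_RELEASEID)?api-version=5.0"
    $headers = @{ Authorization = "Bearer $($env:SYSTEM_ACCESSTOKEN)" }   # needs the OAuth token checkbox

    # Read the release, set the variable's value and write the release back
    $release = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers
    $release.variables.$ReleaseVariableName.value = $assetId
    Invoke-RestMethod -Uri $uri -Method Put -Headers $headers -ContentType 'application/json' -Body ($release | ConvertTo-Json -Depth 100) | Out-Null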

You need to change the following:

  • $assetId = “$(GoldenUpload.FileAssetId)”: change $(GoldenUpload.FileAssetId) to the name of your output variable.
  • $ReleaseVariableName = ‘axzfileid’: change axzfileid to the name of your Release variable.

And you’re done. This script uses Azure DevOps’ REST API to update the variable value with the file id, and we enabled the OAuth token checkbox to allow the usage of this API without having to pass any user credentials. This is not my idea obviously, I’ve done this thanks to this post from Stefan Stranger’s blog.

Now, in the deploy stages you need to retrieve your variable’s value in the following way:
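For example, in the LCS deployment task field where the file asset id goes, you reference the release variable (axzfileid in this example) like this:

    $(axzfileid)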

Don’t forget the ( ) or it won’t work!

And with these small changes you can have a release like this:

With a single DP upload to LCS and multiple deployments using the file uploaded in the first stage. With approvals, and delays, and emails, and everything!

And now the bad news

The bad news is that, right now, we can’t automate the deployments to self-service environments. We can’t do this on production environments either, where the deployment must be done manually.

Invent counting with AI Builder

This past weekend I attended my third 365 Saturday, this time in Barcelona, as a speaker. As you can see from the post title, my session was about creating inventory counting journals using AI with the Power Platform.

The event was great, but my session left me with a bittersweet feeling because I couldn’t show the full app functionality due to stupid technical issues (which were my fault) that I solved in less than two minutes after the session.

Me while fixing the issue AFTER the session

Anyway, thanks to all the people that came to my session and I’m sorry for that. Thanks to the organizers too, as well as the rest of the speakers and the Axazure team.

Counting with AI

So… what was my session about? Nothing original at all. If you’ve seen the 2019 MBAS opening keynote there was a part about a Pepsi distributor that was using AI Builder to scan their store displays and analyze how sales were performing (more or less). My PowerApp uses AI Builder to count objects (you’ll see which objects later) and with that, create an inventory counting journal on Dynamics 365 for Whatever-you-know-the-ERP.

But in the end, my main intention with the session was to show that we can use the whole Power Platform with MSDyn365FO, not only Power BI, and that it can help in our projects. Because in the AX world we’re sometimes like:

I saw this on twitter and I added the logos, but I don’t remember where I stole it from 🙁

AI Builder

AI Builder is a tool for the Power Platform which adds AI functionality to PowerApps and Flow. And it’s really really really simple to set up and use.

Right now AI Builder consists of 4 different models:

  • Prediction: answers binary questions like “Will the customer renew the subscription?” or “Which customer will not pay on time?”.
  • Text classification: data extraction from texts. You get a sentiment % as an answer, 95% Good, 76% Quick, etc.
  • Form processing: data extraction in key-value pairs. Like getting info from an invoice or document (it must always be the same invoice or document).
  • Object detection: detects objects in images. That’s the model I used.

Of these four models only the prediction one is in GA, while the others are in preview. There are also 5 pre-trained models available:

If you want to know more about AI Builder, there’s a hands-on-lab with all the needed resources to create your App using any of the four models.

Also, if you need a PowerApps environment, sign up for a PowerApps Community Plan to get a free environment where you’ll be able to test everything you need in Flow and PowerApps (and the CDS). If you haven’t signed up yet, now is the right time to do it (any time would be right).

AI 101

To explain how this works, I first need to explain some AI and ML basics. But really basic, as basic as possible, so I could explain it in front of an audience. If you want to see this explained better, watch this Channel 9 video about models; it’s where I learnt everything I know.

In classic development, when you solve a problem you take input data, run it through the function you wrote, and get a result as the output. The machine learning equivalent is that you feed a process with input data and known solutions, and you get back a function that will solve the problem related to the solutions you entered. That function is your ML model.

What else do you need to know about models? Basically, that the amount of data you feed the model is directly proportional to the quality of the answers/solutions you’ll get. In AI Builder’s case, the object detection model asks for a minimum of 15 images. With 15 images you get a shitty model: it will detect the object you’re trying to detect, but it will also detect almost anything else as your object, because the sample is too small.

The PatatApp

This is my app’s name, a joke using the Spanish name for Potato (Patata) and App.

Why this name? Well, I’m actually counting potatoes with the app. Why potatoes? I love them, they’re versatile (you can make omelette, fries, vodka, etc.) and because counting pallets is BOOOORING.

What my PowerApp does is detect potatoes in an image. Then I can choose between using an existing journal or creating a new one, then select an item, fill in its inventory dimensions and finally create the line in that journal in AX. I’ve made a short video showing it.

Simple, right? I detect 3 potatoes using AI Builder, then select a legal entity, create a new journal and select an item with its dimensions. Finally the line is created in the journal and it can be seen in MSDyn365FO.

No sorcery or magic at all. (Oh, I hate the “magic” thing when talking about development, or anything else, because it makes it look like it’s been done with no effort. End of my rant.) To create the journal header and the line I’m using two Flows that get the data from the PowerApp and create it in Dynamics 365:

See? No magic, just a flow.

My colleague Hugo de Jesús suggested using the Patch function on the data source, but: 1) the app was finished, and 2) he told me the week before the event. It would probably have worked as well, though.

As you can see it’s a really simple app. I had a first working version in four hours; with AI Builder and the Flows it’s really quick to build.

Shitty model vs. Not-so-shitty model

I want to end with real facts and important data. Remember the minimum number of images AI Builder asks for? It’s 15. This is what happens when your model consists of 20 images of lonely potatoes:

   

If your model sucks you will still detect all the potatoes in an image, but literally everything will be a potato.

I then trained a second version of the model with 40 images of potatoes with people, cats, other vegetables, etc. The result is much better, and it still detects potatoes:

         

I want to thank cazapelusas for drawing all the lovely potatoes and redesigning the PowerApp, you should have seen V1. Please adopt a graphic designer, your life will be prettier.

No potatoes were harmed during the making of this PowerApp.

Setup Entity Store’s export to Azure Data Lake storage

It’s easy to start this post, because many people can ask:

What’s a Data Lake?

Fishing in a Data Lake. By cazapelusas.

A Data Lake is not an Azure product but a term referring to a place where data is stored, regardless of whether it’s structured or unstructured. Its only purpose is storing the data, ready to be consumed by other systems. It’s like a lake that stores the water of its tributaries, but with data instead of water.

In Azure the Data Lake is a Blob storage which holds the data. And this data can come from Microsoft Dynamics 365 for Finance or Supply Chain Management (I’ll go crazy with the name changes of Axapta 7) or from other sources.

Currently, and since PU23, #MSDyn365FO (#MSDyn365F? or #MSDyn365SCM?) officially supports exporting the Entity Store to Azure Data Lake Storage Gen1, but compatibility with Data Lake Storage Gen2 is in the works in a private program, with Data Feeds that will allow us to export entities and tables (YES!) in near real time. If you want to know more, check the Data Management, Data Entities, OData and Integrations Yammer group in the Insider Program (if you still haven’t joined, you should).

Comparison vs. BYOD

The first thing we must notice is the price. Storage is cheaper than a database, even if it’s a single SaaS DB on Azure SQL. For example, a 1GB Blob storage account on Azure costs $21.6/month.

And the simplest Gen 4 with 1 vCore Azure SQL database costs $190.36/month. Almost 10 times more.

And what about performance? This comes from observation rather than a real performance test, but data is transferred really fast. And it’s fast because in a Data Lake the data is sent raw; there’s no data transformation until it’s consumed (ETL for a DB, ELT for a Data Lake), so less time is spent before the data reaches its destination. This doesn’t have a real impact for small sets of data, but it does for large ones.

Setup

The process to export the Entity Store to a Data Lake is pretty simple and it’s well documented (though not up to date) in the docs. I’ll explain it step by step.

Create a storage account on Azure

In the Azure portal, go to Storage accounts (or search for it in the top bar) and add a new one with a setup like the one in the pics below:

Make sure to disable Gen2 storage:

And you can go to review & create. When the account is ready go to Access Keys and copy the connection string:

Azure Key Vault

The next step is creating a Key Vault. For this step you need to select the same region as your Dynamics 365 instance:

When the Key Vault is ready go to the resource and create a new secret. Paste the connection string from the storage account into the value and press create:
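If you prefer to script these two steps, a rough sketch using the Az PowerShell module could look like this (all resource names are made up; adjust them and the region to your needs):

    # Sketch with the Az PowerShell module; resource names are examples
    $rg       = 'rg-entitystore-demo'
    $location = 'westeurope'   # the Key Vault must be in the same region as your Dynamics 365 instance

    New-AzResourceGroup -Name $rg -Location $location

    # Storage account (general purpose v2, hierarchical namespace/Gen2 NOT enabled)
    New-AzStorageAccount -ResourceGroupName $rg -Name 'stentitystoredemo' -Location $location -SkuName Standard_LRS -Kind StorageV2

    # Build the connection string from the account key
    $key  = (Get-AzStorageAccountKey -ResourceGroupName $rg -Name 'stentitystoredemo')[0].Value
    $conn = "DefaultEndpointsProtocol=https;AccountName=stentitystoredemo;AccountKey=$key;EndpointSuffix=core.windows.net"

    # Key Vault and the secret holding the connection string
    New-AzKeyVault -ResourceGroupName $rg -VaultName 'kv-entitystore-demo' -Location $location
    Set-AzKeyVaultSecret -VaultName 'kv-entitystore-demo' -Name 'storage-connection-string' -SecretValue (ConvertTo-SecureString $conn -AsPlainText -Force)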

Create an AAD App Registration

Now we’ll create an AAD App. Give it a name, select the supported account types you need and fill the URL with the base URL of your #MSDyn365FO instance:

Click register and now we must add the Azure Key Vault API to the app as in the image below:

Select the API and add the delegated user_impersonation permission:

Don’t forget to press the button you can see above to grant privileges (must be done by an Azure admin). Now go to secrets and create a new one, give it a name and copy the secret value. When you close the tab you will not be able to recover that secret anymore so copy it and save it somewhere until we need it.

Setup the Key Vault

Go back to the Key Vault we created in the second step and go to Access policies. Add a new one:

You have to select Get and List for Key and Secret permissions:

Now press Select principal and here add the AAD App created in the third step:

Add it and don’t forget to save in the access policies screen!!

Set up MSDyn365F… and O or and SCM or whatever its name is this month

Navigate to System administration -> Setup -> System parameters and go to the Data connections tab. Here there are four fields related to the Key Vault. The Application ID field corresponds to the Application ID of the AAD App (pretty obvious) and the Application Secret is the secret from the AAD App. This part is easy and clear.

The DNS name is the URL of your Key Vault, and the Secret name field is the name of the Key Vault secret where you pasted the storage account connection string.

Once all these fields are complete you can press Test Azure Key Vault and Test Azure Storage and, if you followed all steps correctly, you should see the following messages:

If any of the validations fail, I’d just delete all the resources and start from scratch; it’s probably a secret mismatch.
Now, the two buttons you see next to the setup fields:
  • Enable Data Lake integration: enables the full push of the Entity Store to the storage account you’ve just created, which is the main purpose of this post.
  • Trickle update Data Lake: will make updates after data is changed (Trickle Feed).

Setup Entity Store

Now we just need to go to the Entity Store (under System administration -> Setup -> Entity Store) and enable the refresh of the entities we’d like to hydrate the Data Lake with (I love this; it seems “hydrate” is the correct technical term for feeding a Data Lake):

And done, our data is now being pushed to an Azure Blob:

Each entity is saved in its own folder, and inside each folder there’s another folder for each measure of that entity with a CSV file containing the data.

Now this can be consumed in Power BI with the Blob storage connector, or feed Azure Data Factory, or whatever else you can think of, because that’s the purpose of the Data Lake.

 

Manually deploy Retail packages for Microsoft Dynamics 365 for Finance and Operations

First Microsoft Dynamics 365 for Finance and Operations Retail post! I hope more will come.

As you might know, one of the setbacks of the database refresh from production in LCS is that some data doesn’t get copied. This is a safety feature that prevents, among other things, emails being sent or batches running accidentally after a DB restore.

Remember that it’s a good idea to have a SQL query/script that changes all the endpoints and passwords, enables users, etc., which you can run after a prod DB refresh, just like we did with AX 2009/2012. Just F5 it in SSMS and the environment will be ready to use and to export to your dev boxes.

Another thing that doesn’t get moved after a DB refresh is storage-specific files: ER XLSX files, DocuValue files and the self-service Retail installers.

Retail packages

Retail packages are the executable files used to install MPOS on the… well, on the points of sale (POS). These files are stored in an Azure Blob storage specific to each environment, so after the DB refresh there are no self-service packages in the target environment, because the references point to the production blob:

Microsoft’s official fix for this is applying a binary package that will recreate the EXE files in the storage of the VM where the Deployable Package is run. As you all know this is time-consuming, and while you could run it outside working hours, with the workaround below you can fix it in less than 10 minutes.

The workaround

Ahhh “workaround”… it’s such a beautiful word with so many different meanings… And this workaround has a restriction: it only applies to dev boxes and Tier 2+ regular environments, this can’t be done on self-service environments as we don’t have access to the AOS VM.

What we need to do is log into the AOS VM using RDP and go to the service volume (usually K on dev boxes, G on Tier 2+). There should be a folder called DeployablePackages if you have applied any package; otherwise just go with the official fix. (If the folder doesn’t exist, this can probably also be done using the files from the install drive, but I haven’t tried it.)

Sort the files by date modified (newer first) and inside the first folder you should see another folder called RetailSelfService:

Inside this folder you’ll see 3 more folders: Packages, Scripts and ServiceModel. The Packages folder contains the EXE files and the Scripts folder contains the scripts (obviously, Mr. Obvious). Open it, then open the Upgrade folder and you’ll find a PowerShell script called UpdateRetailSelfService. You need to run this script in PowerShell as an administrator. It will take between 3 and 5 minutes, and when it’s done the packages will be uploaded to the environment’s storage and appear in the Retail parameters form.
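In short, from an elevated PowerShell session it’s something like this (the drive letter and package folder are examples, adjust them to your environment):

    # Run as administrator; <your-latest-package> is the newest folder under DeployablePackages
    cd 'K:\DeployablePackages\<your-latest-package>\RetailSelfService\Scripts\Upgrade'
    .\UpdateRetailSelfService.ps1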

That doesn’t work for me!

There’s a case in which the installers will not be restored: if you have no setup done for Retail. Why? The PowerShell script runs a SQL query that will check for the following:

  • Has Channel data in Channel DB
  • Has any Channel data in AOS
  • Has transaction data in AOS
  • Has transaction data in Channel DB
  • Has Channel DB extensions

If none of the above conditions is met the script will not upload the installers to the blob. But we can do something! Yes, you can go and configure a Channel DB for instance. But what if you don’t feel like doing it?

Remember the UpdateRetailSelfService script I talked about before? Edit it and comment the following lines:

This will make the script skip the check and will deploy the installers.

That’s pretty dirty, right? Yes.

What about self-service environments?

I’m sure this can also be done by modifying a Deployable Package that contains the Retail packages (the one for a monthly version update), leaving only Retail in the DefaultTopologyData.xml file, and even editing the script if needed. But I haven’t tried. Any volunteers?

Parse XML and JSON easily in MSDyn365FO

Some time ago I had to create an interface between MSDyn365FO and a web service that returned data as XML. I decided to use X++’s XML classes (XmlDocument,  XmlNodeList, XmlElement, etc…) to parse the XML and get the data. These classes are terrible. You get the job done but in an ugly way. There’s a better method to quickly parse XML or JSON in MSDyn365FO.

.NET to the rescue

There’s a feature in Visual Studio that will help us with this but it’s not available in Unified Operations projects. Open Visual Studio and create a new .NET project. Now you just need to copy a sample of the XML text you want to parse, go to the Edit menu, Paste Special, Paste XML As Classes:

And we’ll have a data contract with the needed elements to access all the element nodes using classes and dot notation to access data! For example, for this sample XML file we will get the following:

You can create this in a .NET Class library and consume it from Finance and Operations. This is the fastest way to use all the classes and members of the classes. Maybe all this can be implemented as Dynamics 365 FnO classes, but you’d have to create as many classes as different types of nodes exist in the XML. And the original purpose of this was being able to parse an XML file faster. I’d just stick with the .NET library.

All these steps are also valid for a JSON file, copy the sample JSON text, paste special and you’ll get all the classes needed to access the data.

Use it in MSDyn365FO

Once you have your library or you’ve created all the classes in FnO (c’mon don’t do this) add the reference to your project and (following the example above) you just need to do the following:

Declare a variable of the same type as the main node in the XML file, catalog in the example. Then we will create a new XmlSerializer using our type and create a TextReader from the XML as a string. Finally we need to deserialize the XML and assign the result to the catalog and…
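As a rough X++ sketch (catalog is the root class generated by the Paste Special step, so your class and member names will differ):

    internal final class AASCatalogParserSample
    {
        public static void parse(str _xml)
        {
            // 'catalog' is the root class generated by Paste XML As Classes in the .NET library
            catalog                                 parsedCatalog;
            System.Xml.Serialization.XmlSerializer  serializer;
            System.IO.StringReader                  reader;

            // Create the serializer for the root type and a reader over the XML string
            serializer = new System.Xml.Serialization.XmlSerializer(new catalog().GetType());
            reader     = new System.IO.StringReader(_xml);

            // Deserialize and access the data with dot notation
            parsedCatalog = serializer.Deserialize(reader) as catalog;
            info(parsedCatalog.name);   // 'name' is a made-up member, use the ones generated for your XML
        }
    }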

As you can see the data is accessible using dot notation and the classes that were created using the paste special feature.

With the help of tools that aren’t specific to the X++ programming experience we can achieve this, and it’s definitely faster than parsing the XML file using the Xml* classes from Dynamics.

Feature management: create a custom feature

Feature management has been around in Microsoft Dynamics 365 for Finance and Operations for some time now. Before that, features were enabled through flighting, by running a SQL query on dev and UAT boxes (and the DSE team would do it on production).

Now we have a nice workspace showing all the available features and flighting is still around too. The main difference between flighting and features is that flighting is enabled to a selected group of customers, like a preview of a feature.

Each new PU adds new functionality to MSDyn365FO, and in PU30, recently released under the PEAP (Preview Early Access Program), we got new enhancements to Feature management, this time a new property on menu items and menus called “Feature class”:

This is not enabled yet, and if you try adding a class to a menu item you’ll get a warning and no functionality.

If you read the docs you’ll see that creating new features is not enabled yet, but if you search in the metadata for feature classes…

Creating a custom feature

We’ll take the TaxSetupValidationFeature class as an example. This class implements the IFeatureMetadata interface, and all feature classes use a Singleton pattern to get the feature instance! (It’s exciting because it’s the first time I’ve seen it used in MSDyn365FO.)

The methods to be implemented include the feature’s name and description, the model and some setup. Just copy all the methods and the class member and paste them into a class you create.
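A rough skeleton of that copy-paste exercise could look like this (AASMyFeature is a made-up name, and the placeholder comment stands for the members you copy from TaxSetupValidationFeature):

    // Skeleton only: copy the actual IFeatureMetadata members (name, description, model, setup)
    // from TaxSetupValidationFeature and adapt them to your feature
    internal final class AASMyFeature implements IFeatureMetadata
    {
        private static AASMyFeature singletonInstance;

        // ... paste the copied methods here and change the label, summary and module ...

        // The singleton accessor all feature classes expose
        public static AASMyFeature instance()
        {
            if (!singletonInstance)
            {
                singletonInstance = new AASMyFeature();
            }

            return singletonInstance;
        }
    }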

Now build your solution and go to the feature management workspace, click the check for updates button and your feature should appear in the list:

Let’s use the feature (in quite a stupid way). Create an extension of a form and, in its init method, check whether the feature is enabled; if it is, display a message:
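Something like this, assuming the custom feature class from before is called AASMyFeature (CustTable is just the form I used):

    // Extension of the CustTable form: show a message when the custom feature is enabled
    [ExtensionOf(formStr(CustTable))]
    internal final class AASCustTableForm_Extension
    {
        public void init()
        {
            next init();

            // FeatureStateProvider reads the state set in the Feature management workspace
            if (FeatureStateProvider::isFeatureEnabled(AASMyFeature::instance()))
            {
                info("The custom feature is enabled!");
            }
        }
    }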

Before enabling the feature, go to the form to check that no message is being displayed:

No message there, OK.

Go back to feature management and enable your feature.

Go back to the form (CustTable in my example) and…

There’s the message!

Custom features are working as of PU30, at least on dev boxes, and maybe Tier 2+ sandbox environments too. Don’t try it on a production environment until it’s officially released (but that’s not possible as it’s a PEAP release).

This is just a small test of the classes that are available now, we’ll see new features in PU31 when the Feature Class property will work and, as I read on Twitter:

Set up the new Azure DevOps tasks for Packaging and Model Versioning

During this past night (at least it was night for me :P) the new Azure DevOps tasks for packaging and model versioning have been published:

There’s an announcement in the Community blogs too with extended details on setting them up. Let’s see the new tasks and how to set them up.

Update Model Version task

This one is the easiest, just add it to your build definition under the current model versioning task, disable the original one and you’re done. If you have any filters in your current task, like excluding any model, you must add the filter in the Descriptor Search Pattern field using Azure DevOps pattern syntax.

Create Deployable Package task

This task will replace the Generate packages step in the current build definitions. To set it up we just need to make a couple of changes to the default values:

X++ Tools Path

This is your build VM’s physical bin folder; the AosService folder is usually on drive K for cloud-hosted VMs. I guess this will change when we go VM-less for the builds.

Update!: the drive letter can be replaced with $(ServiceDrive), giving a path like $(ServiceDrive)\AOSService\PackagesLocalDirectory\bin.

Location of the X++ binaries to package

The task comes with this field filled in as $(Build.BinariesDirectory), but that didn’t work for our build definitions; maybe the variable isn’t set up in the proj file. After changing it to $(Agent.BuildDirectory)\Bin the package was generated.

Filename and path for the deployable package

The path on the image should be changed to $(Build.ArtifactStagingDirectory)\Packages\AXDeployableRuntime_$(Build.BuildNumber).zip. You can leave it without the Packages folder in the path, but if you do that you will need to change the Path to Publish field in the Publish Artifact: Package step of the definition.

Add Licenses to Deployable Package task

This task will add the license files to an existing Deployable Package. Remember that the path of the deployable package must be the same as the one in the Create Deployable Package task.

And you’re done! One step closer to getting rid of the build VM.

If you need help setting up the release pipeline you can check this post I wrote.

Application Checker: enforcing better coding practices?

Unless you’ve been working for an ISV, odds are that you’ve never cared much about Dynamics Best Practices (BP), or maybe you have. I haven’t worked for an ISV myself, but back when I started working with AX I was handed the development BP document and I’ve tried to follow most of them when writing code.

But BPs could be ignored and not implemented without any issue. This is why Microsoft will publish…

Application Checker

Application Checker is a tool that will change that. It will force some rules that our code will have to meet, otherwise the code won’t compile (and maybe won’t even deploy to the environments).

We got a preview of it during the last MBAS in the session “X++ programming with quality” by Dave Froslie and Peter Villadsen. Unfortunately the session wasn’t recorded.

App checker is built on BaseX, an XML analysis tool, and powers Socratex which Microsoft uses to track code quality. I don’t know if Socratex will be publicly released and I don’t remember if this was clarified during the session.

The set of rules can be found in Application Checker’s GitHub project and it’s still WIP. I think there’s loooots of things to decide before this goes GA, and I’m a bit worried and afraid of some of the rules 😛

Rule types

There are different types of rules; some will become errors and others warnings. For example:

ExtensionsWithoutPrefix.xq: this rule throws an error, preventing your code from compiling. It checks whether an extension class has a name ending in _Extension and an ExtensionOf attribute; if it does, the name must carry a prefix. E.g.: if we extend the class CustPostInvoice, it can’t be named CustPostInvoice_Extension; it needs a prefix, like CustPostInvoiceAAS_Extension.
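For example (AAS is just an illustrative prefix):

    // Passes the rule: the extension class name carries a prefix before _Extension
    [ExtensionOf(classStr(CustPostInvoice))]
    internal final class CustPostInvoiceAAS_Extension
    {
        // extension members
    }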

SelectForUpdateAbsent.xq: this rule will throw a warning. When there’s a forUpdate clause in a select statement and no doUpdate, update, delete, doDelete or write is called later it will let us know.

As of today, there are 21 rules in the GitHub project. You can contribute to the project, and you can also enforce your own rules on your dev boxes without sending them to the project; just add them to the local rules folder. I’d create a rule that makes the space after an if/while/for/switch mandatory and throws an error otherwise, but that’s only a bit of my OCD when writing/reading code.

Try it on your code

We’ve been able to use Application Checker on our development environments since PU26, I think. We just need to install JRE and BaseX on the dev box and select the check when doing a full build.

Some examples

ComplexityIndentationCombined.xq

This query checks the (wait for it…) cyclomatic complexity of methods. I’ll try to explain it… Cyclomatic complexity is a software quality metric: the number of independent paths through the code. Depending on the number of ifs, whiles, switches, etc., the code can reach different outcomes through different paths; that’s what complexity measures.

Taking this as an example, a dumb one but ignore it, just look at the amount of different paths that could happen:

In App checker the error appears when the complexity is over 30. I’ve used Lizard code complexity analyzer to calculate the complexity of the method below and I’m getting a 49.

The rule also checks for the indentation depth, failing if it’s greater than 2. In the end the purpose of both rules is to try to cut up long/large methods, which will also help in enabling more extension points in different places of our logic, like Microsoft did with Data Provider classes for reports.

BalancedTtsStatement.xq

This one gives me mixed feelings. The rule checks that the ttsbegin and the ttscommit of a method are in the same scope. So the following is not possible:
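An illustrative (made-up) example of the kind of code the rule flags:

    // ttsbegin and ttscommit sit in different scopes, which BalancedTtsStatement flags
    public void processLines()
    {
        ttsbegin;

        if (this.validateLine())   // validateLine() is a made-up helper
        {
            // ... update the record ...
            ttscommit;             // commit inside the if block, a different scope than the ttsbegin
        }
        else
        {
            ttsabort;
        }
    }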

Imagine you’ve developed an integration with an external application that writes data to an intermediate table in MSDyn365FO and you process all pending data sequentially. You don’t want to throw an error if something goes wrong because you need the process to continue with the following record, so you ttsabort the wrong line, store the error and continue. If this is not possible… how should we do this? Create a batch that creates a task for each line to process?

Plus, the standard models have plenty of ttscommit inside if statements.

RecursiveMethods.xq

This rule will block the use of recursion in static methods. I don’t get why. Application Checker should be a way to promote better coding practices, not to forbid certain patterns. If somebody gets a recursive method into prod and the exit condition isn’t met… hello, testing?

Some final thoughts

Will this force developers to code better? I don’t think so, but that’s probably not Application checker’s purpose. For centuries humans have found ways to bypass rules, laws and all kinds of restrictions and this won’t be an exception.

Will it help? Hell yes! But the best way to ensure code quality is promoting best practices in your team, through internal training or code reviews. And even then, someone who doesn’t care about clean code will keep on writing terrible code, which might work but won’t be beautiful at all.

Finally, I’m not sure about some of the rules, like avoiding recursion in static methods or the tts one. We’ll just have to wait and see which rules make it to the final release, and how Application Checker ends up being implemented in the MSDyn365FO application lifecycle: whether it blocks (or not) the deployment of code that doesn’t pass all the checks, or whether it’s included in the build process.

Self-service deployments: the future is here

Right now Microsoft Dynamics 365 for Finance and Operations has an old-style monolithic architecture. Even though it’s in Azure’s cloud, what we really have is a single VM (or multiple VMs for Tier 2+ environments) that runs everything: the AOS/IIS, Azure SQL Server, the Batch service, MR, etc. Exactly the same as AX 2009/2012.

This is going to change in the coming months with the self-service deployments. We’ll move from the monolithic architecture to microservices that will run all the needed components with the help of Azure Service Fabric. MSDyn365FO will be on a real SaaS model.

Before starting let me clarify that all these changes will only apply to Microsoft-managed Tier 2+ environments: sandbox and production environments. The build environment (until it’s made obsolete) and the cloud-hosted environments on the customer or partner subscription will still be single VMs.

What’s new?

Faster deployments

When you deploy a new environment it will start deploying without waiting for Microsoft to do it (it’s self-service!). Additionally, thanks to the new microservices architecture, it will be ready to use in under 30 minutes compared to 6-8 hours of regular environments. The first time feels like…

Subscription estimator

We still need to fill out the subscription estimator for licensing purposes and so MS can estimate the size of the production environment. The self-service environments can be scaled more flexibly and quickly.

No RDP access

The access to the VM desktop has been removed because… well, I guess it’s because there’s no VM anymore. All the operations that used to require RDP access can be done from LCS.

No SQL Server access

Yes, no RDP access means no RDP access to the SQL box either. We still have access to the Azure SQL DB, we just need to ask for it from LCS and it’s granted in seconds:

Additionally, you must whitelist your IP (or the one you’ll access SQL from) using the Maintain – Enable access button in LCS to be able to connect to the Azure SQL server. The access to the DB and the firewall rule will be enabled for 8 hours.

As usual, there’s no access to the production DB.

One deployable package to rule them all

If you’ve recently tried to deploy a deployable package (DP) without all the packages the environment has (basically generating the DP for a single model/package from Visual Studio) you must’ve noticed the warning about the difference in the packages from the DP and the environment.

With the self-service deployments you must include all models/packages AND!! ISVs in one single deployable package.

Production updates

First, we can start the deployment to production without the 5-hour notice we currently need to schedule it. We can still schedule the deployment, but we can also start it instantly.

Next, the way the production environment is updated changes a bit from what we’re used to. With the new deployments we update the sandbox environment as we do now, and once it’s done we select a sandbox environment to be promoted to production. This is probably another benefit of the architecture changes.

In the future the deployment downtime will also be reduced to zero for the service updates as long as you’re on the latest update. This won’t be available for custom DPs.

How do I get this?

At the moment this is only available for some new customers. Current customers will be migrated during the coming months; MS will contact them to schedule a maintenance window to apply the changes.

For more information check the session Microsoft Dynamics 365 for Finance and Operations: Strategic Lifecycle Services Investments from last June’s MBAS.

Our experience with it

We got into the private deployment preview program almost a year ago with one of Axazure’s customers. The customer is now live with the self-service environments and everything has been fine so far.

But the beginning was a bit hard. Some of the functionality was still not available at the time, like DB refresh or… package deployment. Yes, we needed to ask MS to deploy our DPs each time. We couldn’t even put the environments in maintenance mode! In the first months of 2019 a lot of functionality was added to LCS, and in June we finally got the production self-service update functionality. The help we’ve received from Microsoft’s product team has been very valuable, and they have unblocked some issues that were holding up the project.

Slow set-based operations?

In Microsoft Dynamics 365 for Finance and Operations we can execute the CRUD operations from code in two different ways, record-per-record or set-based.

Microsoft’s recommendation is to always use set-based operations, if possible, as you can check on the Implementation Best Practices for Dynamics 365: Performance best practices for a successful Dynamics 365 Finance and Operations implementation session from last June’s Business Applications Summit.

Why?

Set-based Vs. Record-per-record

When we run a query in MSDyn365FO we’re using its data access layer, which is later translated into real SQL. We can see the difference using xRecord’s getSQLStatement method with the generateonly hint on the query (and forceliterals to show the parameters’ values) to get the SQL query. For example, if we run the following code:
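A sketch of that kind of check (DE-001 is just an example customer account):

    internal final class AASGenerateOnlySample
    {
        public static void main(Args _args)
        {
            CustTable custTable;

            // generateonly builds the SQL statement without executing it;
            // forceliterals makes the parameter values visible in the statement
            select generateonly forceliterals custTable
                where custTable.AccountNum == 'DE-001';

            info(custTable.getSQLStatement());
        }
    }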

We’ll get this SQL statement:

 

We can see all the fields are being selected, and the where clause contains the account number we selected (plus DataAreaId and Partition).

When a while select runs in MSDyn365FO, the records are processed one at a time, and if an update or delete is executed inside the loop, a SQL statement is sent to SQL Server for each record. This is known as a record-per-record operation.

Imagine you need to update the note of all the customers in customer group 10. We could do this with a while select, like this:
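A sketch of the record-per-record version (CreditMax is used here just as a stand-in for whichever field holds that note in your model):

    CustTable custTable;

    ttsbegin;

    // One UPDATE statement is sent to SQL Server for every customer in the group
    while select forupdate custTable
        where custTable.CustGroup == '10'
    {
        custTable.CreditMax = 1000;   // stand-in for the field you actually need to change
        custTable.update();
    }

    ttscommit;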

This would make as many calls to SQL Server as there are customers in group 10, one for each loop. Or we could use set-based operations:
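And a sketch of the set-based version of the same update (same stand-in field as above):

    CustTable custTable;

    ttsbegin;

    // A single UPDATE statement covers all the customers in group 10
    update_recordset custTable
        setting CreditMax = 1000
        where custTable.CustGroup == '10';

    ttscommit;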

This will execute a single SQL statement on SQL Server that will update all the customers with the customer group 10 instead of a query for each customer:

There are three set-based operations in MSDyn365FO: update_recordset to update records, insert_recordset to create records and delete_from to delete records. Plus we can do massive inserts using RecordSortedList and RecordInsertList.

Running these operations instead of while selects should obviously be faster, as only a single SQL statement is executed. But…

Why could my set-based operations be running slow?

There are some well-documented scenarios in which set-based operations fall back to record-per-record operations, as we can see in the following list:

  • Non-SQL tables: falls back for delete_from, update_recordset, insert_recordset and array inserts. No method to override it.
  • Delete actions: falls back for delete_from only. Override with skipDeleteActions.
  • Database log enabled: falls back for delete_from, update_recordset and insert_recordset. Override with skipDatabaseLog.
  • Overridden method: falls back for delete_from, update_recordset, insert_recordset and array inserts. Override with skipDataMethods.
  • Alerts set up for the table: falls back for delete_from, update_recordset and insert_recordset. Override with skipEvents.
  • ValidTimeStateFieldType property not equal to None on the table: falls back for delete_from, update_recordset, insert_recordset and array inserts. No method to override it.

In the example, if the update method of CustTable is overridden (which it is), the update_recordset operation will run like a while select that updates each record.

In the case of update_recordset this can be solved by calling the skipDataMethods method before running the update:
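For example (again with the stand-in field):

    CustTable custTable;

    // Skip the overridden update() data method so the operation stays set-based
    custTable.skipDataMethods(true);

    ttsbegin;

    update_recordset custTable
        setting CreditMax = 1000
        where custTable.CustGroup == '10';

    ttscommit;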

This avoids calling the update method (or insert in the case of insert_recordset), more or less like calling doUpdate in a loop. The rest of the checks can be skipped with the corresponding method shown in the list above.

So, for bulk updates I’d always use set-based operations and enable this on data entities too with the EnableSetBasedSqlOperations property.

And now another but is coming.

Should I always use set-based operations when updating large sets of data?

Well, it depends on the data you’re working with. There’s a wonderful blog post from Denis Trunin called “Blocking in D365FO (and why you shouldn’t always follow MS recommendations)” that shows a perfect example of where set-based operations would be counterproductive.

As always, developing an ERP is quite sensitive, and similar scenarios can have different solutions. Analyze the requirements and decide which one to use.