After the MBAS on Wednesday I’ve been thinking about this more and more: will Dynamics 365 for Finance and Supply Chain Management’s data be natively hosted in the CDS?
After watching Ryan Jones’ session “What’s new in the Common Data Service”, I ask myself whether that’s really the question, or whether it should be: when will it be natively available in the Common Data Service?
The Common Data Service
The CDS is a platform that allows us to store data that will be used by the business applications. But it’s not only that, take a look at this picture:
We could put MSDyn365FO on top of all that: it supports relational databases, storage, reporting, workflows, security, etc. Of course that wouldn’t be an overnight switch, but maybe something progressive, like what we’ll get with the FnO virtual entities on CDS!
With virtual entities we still won’t have Finance and SCM data on CDS because virtual entities:
Virtual entities enable the integration of data residing in external systems by seamlessly representing that data as entities in Common Data Service, without replication of data and often without custom coding.
“Without replication of data”. When you access a virtual entity in the Common Data Service its state is dynamically retrieved from the external system.
As you can see in the image, all public data entities will be natively available in CDS. This means we can use the Power Platform capabilities for Finance and Operations as quickly and easily as our Customer Engagement colleagues do. At least for the public data entities.
If we need data to be physically in both places we’ll still need to use Dual Write. Remember that Dual Write synchronizes data between Finance and Operations and Customer Engagement/CDS in near real time.
If you want to learn a bit more about Dual Write you can check the “And finally… Dual Write!” session Juan Antonio and I did at the 2019 Dynamics 365 Saturday Madrid. It’s in Spanish and a bit dated (Dual Write now has many more out-of-the-box features), but it gives an idea of what it does and what it’s capable of.
Will this ever happen?
Who knows? I’m just speculating. I’m a developer, but I can’t stop thinking that Microsoft is investing a lot in CDS, and the Finance and Operations apps are the only Dynamics 365 products whose data does not reside in the Common Data Service.
We’re seeing some functionality from FnO being replicated and later extended in the CDS, like Dynamics 365 Human Resources or Dynamics 365 Project Operations. This is creating an issue, because right now you must build an integration between the two applications if you want any kind of data exchange. FnO in the Common Data Service would solve this.
This also creates confusion for customers who think this integration happens out of the box when it doesn’t. The naming of the products suggests that, but it’s not the case.
We have to assume this wouldn’t happen in the next year, or two, or three; this is something for the long term. I don’t know about the CDS apps, but Dynamics 365 for Finance and SCM has a pretty large number of tables, and migrating all of them to the Common Data Service is surely a tremendous amount of work.
And what about the developer tools? Those would have to change for sure too! We’ll see where the product, and we as professionals, are headed, but one thing is certain: we can’t think about Finance and Operations in isolation from the CDS anymore.
Tired of developing in Visual Studio 2015? Do you feel you’ve been left behind, forgotten in the past? Worry no more: you can use Visual Studio 2017/2019 to develop Microsoft Dynamics 365 for Finance & Operations!
What are the advantages?
Absolutely none at all! Visual Studio will still stop responding whatever the version, because it’s the dev tools extension that’s causing the issues.
Of course we get the option to use Live Share, and for screen-sharing sessions that’s way better than Teams. Hey, and we’ll be using the latest VS version!
Select the .NET desktop development option and press Install. When the installation is finished, log in with your account.
The next step is installing the Dynamics developer tools extension for VS. Go to drive K and in the DeployablePackages folder you’ll find some ZIP files that contain the extension in the DevToolsService/Scripts folder:
An alternative is, for example, downloading a Platform Update package, which also contains the dev tools extension, possibly with some updates to it.
Install the extension and the VS2019 option is already there:
Once installed open VS as the admin and…
Don’t panic! The extension was made for VS2015 and using it in a newer version can cause some warnings, but it’s just that: the tools are installed and ready to use:
As I said in the beginning, the dev tools extension is the one causing the unresponsiveness or blocks in VS, and Visual Studio 2019 is letting us know:
But regardless of the warnings, working with Visual Studio 2019 is possible. I’ve been doing so for a week and I still haven’t found a blocking issue that makes me go back to VS2015.
Update: it looks like opening a report design will only display its XML instead of the designer. Thanks to David Murray for warning me about it!
Dev tools preview
In October 2019 the dev tools’ preview version will be published, as we saw at the MBAS in Atlanta. Let’s see what new features it brings us, both in terms of a possible VS version upgrade and performance.
I want to start this second part with a little rant. As I said in the first part, those of us who have been working with AX for several years were used to not using version-control systems. MSDyn365FO has taken us to uncharted territory, so it’s not uncommon for different teams to work in different ways, depending on their experience and what they’ve found along the way. There’s an obvious interest factor here: each team will need to invest some time to discover what works best for them regarding code, branching and methodologies. Many times this will be based on experimentation and trial and error, and with the pace of some projects this turns out badly. And this is where I’ve been missing some guidance from Microsoft (but maybe I just haven’t found it).
Regardless of this rant, the journey and all I’ve learnt have been, and I think will be, pretty fun 😉
The truth is that I’d love a FastTrack session about this and, I think, it doesn’t exist. EDIT: it looks like I definitely overlooked it, and there is a FastTrack session called Developer ALM which talks a bit about all this. Thanks to Dag Calafell (twitter) for pointing this out!
In the first part we learnt that the Main folder is created when deploying the Build VM. The usual approach in an implementation project is that all development is done on that branch until go-live, and just before that a new dev branch is created. The code tree will look like this:
From this moment on, the development VMs need to be mapped to this new development branch. This allows us to keep developing on the Dev branch and decide when the changes are promoted to the Main one.
This branching strategy is really simple and will keep us mostly worry-free. In my previous job we went with a 3-branch strategy, Main, Test and Dev, merging from Dev to Test and from Test to Main. A terrible mistake. Having to maintain 2 sets of changesets is harder, and with version upgrades, dozens of pending changesets waiting to be merged and an ISV partner that sometimes would not help much, everything was kind of funny (“funny”). But I learnt a lot!
Anyway, just some advice: try not to leave changesets waiting to be merged for long. The number of merge conflicts that will appear is directly proportional to the time a changeset has been waiting to be merged.
At this point, I cannot emphasize enough what I mean by normal. As I said, I wrote all of this based on my experience. Obviously, working for an ISV is not the same as working for an implementation partner. An ISV has different needs: it has to maintain different code versions to support all its customers, and it doesn’t need to work in a Dev-Main manner. It could have one (or more) branches for each version. However, since the end of overlayering this is not necessary :). More ideas about this can be found in the article linked at the beginning of this post.
This build definition has all the default steps active. We can disable (or remove) all the steps we’re not going to use. For example, the testing steps can be removed if we have no unit testing. Or the DB sync and report deployment too.
We can also create new build definitions from scratch; however, it’s easier to clone the default one and modify it for other branches or needs.
Since 8.1 all the X++ hotfixes are gone; updates are applied as a deployable package (binaries!). This means the Metadata folder will only contain our custom packages and models, no standard packages anymore. Up until 8.0, having a build definition that compiled and generated a DP only with our models was a good idea. This way we could have a deployable package ready in less time than compiling the standard packages with hotfixes plus ours. Should we need to apply a hotfix, we’d just queue the default build pointing to the Main root; otherwise we’d just generate our own packages. Using this strategy, we reduced the DP generation time from 1h15m to 9m in one of our customers’ projects.
But that was in the past, and all this is outdated information. Right now I hope everybody is as close to 8.1 as possible because One Version is coming in April!
Another useful option is having a build definition that will only compile the code:
It may look a bit useless until you enable the continuous integration option:
Right after every developer’s check-in a build will be queued, and the code compiled. In case there’s a compilation error we’ll be notified about it. Of course, we all build the solutions before checking them in. Right?
We all know that “slow and steady wins the race”, but at some point during a project that’s not possible, and this kind of build definition can help us out, especially when merging conflicting code from a dev branch to Main. It will allow us to be 100% sure, when creating a DP to release to production, that it will work. I can tell you that having to do a release to prod in a hurry and seeing the Main build fail is not nice.
Somebody with far more experience and knowledge than me might think: wait, but this can also be done with…
What we accomplish with a gated check-in is that the build agent launches an automated compilation BEFORE the code is checked in. If it fails, the changeset is not committed until the errors are fixed and the code is checked in again.
This option might seem perfect for the merge check-ins to the Main branch. I’ve found some issues trying to use it, for example:
If multiple merge & check-ins from the same development are done and the first fails but the second doesn’t, you’ll still have pending merges to be done.
Issues with error notifications and pending code on dev VMs.
If many check-ins are made you’ll end up with lots of queued builds (and we only have one available agent per DevOps project).
I’m sure this probably has a solution, but I haven’t found it. And I think the CI option is working perfectly for us to validate code. As I’ve already said, all of this is the product of trial and error; we’ve learnt to use it while working with it.
I guess the biggest conclusion is that with MSDyn365FO we must use DevOps. It’s mandatory, there’s no other option. If there’s anyone out there not doing it, do it. Now. Review how you work, forget how we used to work with AX and don’t look back: technically speaking, MSDyn365FO is a different product.
The truth is that MSDyn365FO has taken developers to a more classic approach to software projects, like .NET or Java. But we’re still special. An ERP project has a lot of peculiarities, and not having to create a product from scratch, having a base that makes us follow a certain path, limits us in some aspects and in the use of certain techniques or methodologies.
I hope these two posts about Azure DevOps can help somebody. And if anyone with more experience or better ideas wants to recommend anything, comments are open!
One of the major changes we got with Dynamics 365 has been the mandatory use of a source control system. In older versions we had MorphX VCS for AX 2009 and the option to use TFS in AX 2009 and AX 2012 (and there’s training available about this on El rincón Dynamics, in Spanish), but it wasn’t mandatory. Actually, in my experience, I think most projects used no source control other than comments in the code.
Azure DevOps in MsDyn365FO
In Microsoft Dynamics 365 for Finance and Operations, Azure DevOps is not just a source control tool but THE tool that will be the One Ring of our projects (and I hope not one to bind us in darkness). From project management to the functional team, everybody can be involved in using Azure DevOps to manage the project and team.
BPM synchronization and task creation, team planning, source control, automated builds and releases are some of the features it offers. All these changes will require some learning by the team, but in the short term they will help the team manage the project better.
As I said, it looks like the technical team is the most affected by the addition of source control to Visual Studio, but it’s also the one that benefits the most…
The first thing we need to do when starting a new implementation project is linking LCS to the DevOps project we’ll be using. Everything is really well documented.
Once done we’ll have to deploy the build server. This is usually done in the dev box on Microsoft’s subscription. When this VM gets deployed the basic source tree will be created in the DevOps project:
With the source tree now available, we can map the development machines and start working. The Main folder you see in the image is a regular folder, but we can convert it into a branch if we need it.
In the image above, you can see that the icon for Main changes when it’s converted into a branch. Branches allow us to perform some actions that aren’t available for folders. Some differences can be seen in the context menu:
For instance, branches can display the hierarchy of all the project branches (in this case it’s only Main and Dev so it’s quite simple :P).
Properties dialogs are different too. The folder one:
And the branch one, where we can see the different relationships between the other branches created from Main:
This might not seem that interesting or useful, but one of the things converting a folder into a branch enables is seeing which branches a changeset has been merged into. We’ll see this in part 2.
I strongly recommend moving the Projects folder out of the Main branch into the root of the project, at the same level as BuildProcessTemplates and Trunk. If you don’t, and you end up working with Main and Dev branches, Visual Studio’s solutions and projects will still be checked in to the Main branch. Moving it will spare you some small heart attacks when you receive the build email with the changeset summary and think something has gone into production 🙂
It is possible that after queuing a new build, the job won’t start. It won’t be possible to cancel it either, and nothing will change after rebooting the build server VM. This is an unusual case, but not an impossible one.
The build server
Even though the build machine looks exactly like a developer box, it really isn’t one. It has Visual Studio installed, the AosService folder with all the standard packages and a SQL Server with an AxDB, just like any other developer machine. But it isn’t one!
We won’t be using any of these features. The “heart” of the build machine is the build agent, an application which Azure DevOps uses to execute the build definition’s tasks from LCS. The link between the DevOps project and LCS is created during the deployment of the build machine.
As a curiosity, it looks like the build server will disappear in the future and all we’ll need will be Azure DevOps. I’ve been looking for the source of this information but I can’t find it; I think Joris de Gruyter commented on it on Twitter.
This was caused by having 2 build agents in our DevOps pool. How did this happen? Well, in our case, during a short time 2 LCS implementation projects, with a build machine each, coexisted pointing to the same DevOps project. Nothing strange, it was 100% justified at that point 🙂
This temporary situation made this happen:
There are two agents at the same time, and the one marked as enabled is offline! This agent is the one created after deploying the build machine on the first LCS project. We had already disabled it, but it got enabled again even though the LCS project had been deleted.
The fix is as easy as logging in with an Azure DevOps account admin (not just a project admin), expanding the agent pool section and deleting the old/offline agent with the X button. Enable the new one and after this the builds will start running again.
Recently, a colleague found a little issue when using an AOT query to feed a view with a range dynamically filtered using a SysQueryRangeUtil method.
Recreating the issue
The query is pretty simple, only showing ledger transaction data from the GeneralJournalEntry and GeneralJournalAccountEntry tables. A range on the Ledger field, filtering by the current company, was added as you can see in the pic below:
We created a new range method by extending the SysQueryRangeUtil class, using Ledger::current() to filter by the active company.
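The exact code isn’t reproduced here, but a minimal sketch of what such an extension could look like is shown below (the class and method names are placeholders, not necessarily the ones we used):

```xpp
// Minimal sketch of a SysQueryRangeUtil extension exposing a custom range method.
// The class and method names are hypothetical.
[ExtensionOf(classStr(SysQueryRangeUtil))]
final class MySysQueryRangeUtil_Extension
{
    // Returns the current company's ledger RecId as a query range value.
    public static str currentLedger()
    {
        return SysQuery::value(Ledger::current());
    }
}
```

In the AOT query, the range value on the Ledger field would then reference the method as (currentLedger()).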
Then we used the query to feed data to the view and added two fields just for testing purposes:
Everything quite straightforward. Let’s check the view in the table browser…
No data! And I can tell there’s data in here:
What’s going on here? If we use the query in a job (yeah, I know, a Runnable Class…) the range filters the data as expected.
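By job I mean something as simple as running the query from a runnable class, along the lines of this sketch (the query name is a placeholder for the one described above):

```xpp
// Quick test: execute the AOT query from a runnable class and print the results.
// MyLedgerTransQuery is a hypothetical name for the query built earlier.
internal final class LedgerTransQueryTest
{
    public static void main(Args _args)
    {
        QueryRun queryRun = new QueryRun(queryStr(MyLedgerTransQuery));

        while (queryRun.next())
        {
            GeneralJournalEntry journalEntry = queryRun.get(tableNum(GeneralJournalEntry));
            info(strFmt("%1 - %2", journalEntry.JournalNumber, journalEntry.AccountingDate));
        }
    }
}
```

Run like this, the query goes through the X++ runtime, which is where the SysQueryRangeUtil method gets evaluated, so the range filters the data correctly.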
So… let’s see the view design in SSMS:
Well, it definitely looks like something is being filtered here. The range is working! Is it? Are we sure? Which company does that Ledger table RecId correspond to?
What’s going on?
There’s an easy and clear explanation, but you don’t think of it until you face this specific issue. The view* is a Data Dictionary object: when the project is synchronized, the view is created in SQL Server. The query*, however, is an X++ object and only exists within the application. The view exists in SQL and we can see and query it in SSMS; the AOT query doesn’t. It feeds the view and provides its data back end, but all the functionality added in X++ stays in 365, including the SysQueryRangeUtil filters.
The solution is an easy one: removing the range from the query and adding it to the form data source will do the trick (if this can even be considered a trick…).
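Assuming the view is displayed on a form, a sketch of that fix could be overriding init() on the form data source and applying the range there (the view, field and data source names below are hypothetical):

```xpp
// Illustrative sketch: this data source class sits inside the form's code.
// MyLedgerTransView and its Ledger field are placeholder names.
[DataSource]
class MyLedgerTransView
{
    public void init()
    {
        super();

        // Add the ledger filter at runtime, so it's evaluated in X++
        // with the current company context instead of relying on whatever
        // fixed value ended up in the SQL view definition.
        QueryBuildRange ledgerRange = this.query()
            .dataSourceTable(tableNum(MyLedgerTransView))
            .addRange(fieldNum(MyLedgerTransView, Ledger));

        ledgerRange.value(SysQuery::value(Ledger::current()));
    }
}
```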
(*) Note: the links to the docs point to AX 2012 docs but should be valid.
Some weeks ago, the release pipeline extension for #MSDyn365FO was published in the Azure DevOps Marketplace, taking us closer to the continuous integration scenario. While we wait for the official documentation we can check the notes on the announcement, and I’ve written a step-by-step guide to set it up on our projects.
To configure the release pipeline, we need:
An LCS project
An AAD app registration
An Azure DevOps project linked to the LCS project
A service account
I recommend that the user be a service account with a non-expiring password and enough privileges on LCS, Azure and Azure DevOps (well, the privileges part is not a recommendation: without the rights this cannot be done). Using a service account is not mandatory, and for testing purposes this can even be done with your own user (if it has enough rights).
AAD app creation
The first step is creating an app registration in Azure Active Directory to upload the generated deployable package to LCS. Head to the Azure portal and once logged in go to Azure Active Directory, then App Registrations, and create a new Native app:
Next go to “Settings” and “Required permissions” to add the Dynamics Lifecycle Services API:
Select the only available permission in step 2 and accept until it appears on the “Required permissions” screen. Finally push the “Grant permissions” button to apply the changes:
This last step is easy to forget, and the package cannot be uploaded to LCS if the permissions aren’t granted. Once done, take note of the Application ID; we’ll use it later.
Create the release pipeline in DevOps
Before setting up anything on Azure DevOps we need to make sure the project we’re going to use is linked to LCS. This can be done in the “Visual Studio Team Services” tab in LCS’ project settings.
After setting it up, we’ll go to Pipelines -> Releases to create the new release. Select “New release pipeline” and choose “Empty job” from the list.
On the artifact box select the build which we will link to this release definition:
Pick the build definition you want to use for the release in “Source”, “Latest” in “Default version” and push “Add”.
The next step is adding a Task with the release pipeline for Dynamics. Go to the Tasks tab and press the plus button. A list of extensions will appear; look for “Dynamics 365 Unified Operations Tools”:
If the extension hasn’t been added previously it can be done in this screen. In order to add it, the user used to create the release must have admin rights on the Azure DevOps account, not only in the project in which we’re creating the pipeline.
When the task is created we need to fill some parameters:
Creating the LCS connection
The first step in the task is setting up the link to LCS using the AAD app we created before. Press New and let’s fill the fields in the following screen:
It’s only necessary to fill in the connection name, the username and password (of the user) and the Application (Client) ID field. Use the App ID we got in the first step for the App ID field. The endpoint fields should be filled in automatically. Finally, press OK and the LCS connection is ready.
In the LCS Project Id field, use the ID from the LCS project URL; for example, in https://lcs.dynamics.com/V2/ProjectOverview/1234567 the project ID is 1234567.
Press the button next to “File to upload” and select the deployable package file generated by the build:
If the build definition hasn’t been modified, the output DP will have a name like AXDeployableRuntime_VERSION_BUILDNUMBER.zip. Change the fixed build number to the DevOps variable $(Build.BuildNumber), as in the image below:
The package name and description in LCS are defined in “LCS Asset Name” and “LCS Asset Description”. For these fields, Azure DevOps’ build and release variables can be used. Use whatever fits your project; for example, a prefix to distinguish between prod and pre-prod packages followed by $(Build.BuildNumber) will upload the DP to LCS with a name like Prod 2019.1.29.1, using the date as the package name.
Save the task and release definition, and let’s test it. In the Releases section select the one we have just created and press the “Create a release” button; in the dialog just press OK. The release will start and, if everything is OK, we’ll see the DP in LCS when it finishes:
The release part can be automated: just press the lightning button on the artifact and enable the trigger:
And that’s all! Now the build and the releases are both configured. Once the deployment package is published the CI scenario will be complete.