Feature management: create a custom feature

Feature management has been around in Microsoft Dynamics 365 for Finance and Operations for some time now. Before that, features were enabled through flighting, by running a SQL query on the dev and UAT boxes (and the DSE team would do it on production).

Now we have a nice workspace showing all the available features, and flighting is still around too. The main difference between flighting and features is that flighting is enabled for a selected group of customers, like a preview of a feature.

Each new PU adds new functionality to MsDyn365FO, and in PU30, released recently under the PEAP (Preview Early Access Program), we got new enhancements to Feature management, this time adding a new property called “Feature class” to Menu items and Menus:

This is not enabled yet, and if you try adding a class to a menu item you’ll get a warning and no functionality.

If you read the docs you’ll see that creating new features is not enabled yet, but if you search in the metadata for feature classes…

Creating a custom feature

We’ll take the TaxSetupValidationFeature class as an example. This class implements the IFeatureMetadata interface, and all feature classes use a Singleton pattern to get the feature instance! (It’s exciting because it’s the first time I’ve seen it used in MSDyn365FO.)

The methods to be implemented include the feature’s name and description, the model and some setup. Just copy all the methods and the class member and paste them into a class you create:

using Composition = System.ComponentModel.Composition;

[Composition.ExportAttribute(identifierstr(Dynamics.AX.Application.IFeatureMetadataV0))]
public final class AASTestFeature implements IFeatureMetadataV0
{
    private static AASTestFeature instance;


    private void new()
    {
    }

    private static void TypeNew()
    {
        instance = new AASTestFeature();
    }

    [Hookable(false)]
    public FeatureModuleV0 module()
    {
        return FeatureModuleV0::AccountsReceivable;
    }

    /// <summary>
    /// Obtains the singleton object instance.
    /// </summary>
    /// <returns>The <c>AASTestFeature</c> instance.</returns>
    [Hookable(false)]
    public static AASTestFeature instance()
    {
        return AASTestFeature::instance;
    }

    [Hookable(false)]
    public LabelId label()
    {
        return literalStr("Enable Test Feature");
    }

    [Hookable(false)]
    public LabelId summary()
    {
        return literalStr("Enables 'Say hello' button");
    }

    [Hookable(false)]
    public WebSiteURL learnMoreUrl()
    {
        return "";
    }

    [Hookable(false)]
    public boolean isEnabledByDefault()
    {
        return false;
    }

    [Hookable(false)]
    public boolean canDisable()
    {
        return true;
    }

    public static boolean isEnable()
    {
        return FeatureStateProviderV0::isFeatureEnabled(AASTestFeature::instance());
    }

}

Now build your solution, go to the Feature management workspace and click the “Check for updates” button; your feature should appear in the list:

Let’s use the feature (in quite a stupid way). Create an extension of a form and, in its init method, check whether the feature is enabled; if it is, display a message:

[ExtensionOf(formStr(CustTable))]
final class CustTableAASFeatureDemo_Extension
{
    void init()
    {
        next init();

        if (FeatureStateProviderV0::isFeatureEnabled(AASTestFeature::instance()))
        {
            info('Hello! The feature is enabled!');
        }
    }

}
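Since the feature class above already exposes a static isEnable() helper that wraps FeatureStateProviderV0, the same check could also be written through it. A minimal sketch of that alternative:

// Alternative check using the helper defined on the feature class
if (AASTestFeature::isEnable())
{
    info('Hello! The feature is enabled!');
}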

Before enabling the feature, go to the form to check that no message is displayed:

No message there, OK.

Go back to feature management and enable your feature.

Go back to the form (CustTable in my example) and…

There’s the message!

Custom features are working as of PU30, at least on dev boxes, and maybe on Tier 2+ sandbox environments too. Don’t try it on a production environment until it’s officially released (not that you could, as it’s a PEAP release).

This is just a small test of the classes that are available now. We’ll see what’s new in PU31, when the Feature class property starts working.

Set up the new Azure DevOps tasks for Packaging and Model Versioning

Last night (at least it was night for me :P) the new Azure DevOps tasks to deploy packages and update model versions were published:

There’s an announcement on the Community blogs too, with extended details on setting them up. Let’s see the new tasks and how to set them up.

Update Model Version task

This one is the easiest: just add it to your build definition below the current model versioning task, disable the original one and you’re done. If you have any filters in your current task, like excluding a model, you must add the filter in the Descriptor Search Pattern field using Azure DevOps pattern syntax.
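For example, assuming the task keeps its default descriptor pattern and that you want to exclude a hypothetical model called MyIsvModel, the field could look something like this (Azure DevOps file matching patterns, one per line, ! excludes):

**/Descriptor/*.xml
!**/MyIsvModel/Descriptor/*.xml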

Create Deployable Package task

This task will replace the “Generate packages” step from the current build definitions. To set it up we just need to make a couple of changes to the default values:

X++ Tools Path

This is your build VM’s physical bin folder; the AosService folder is usually on drive K: on cloud-hosted VMs. I guess this will change when we go VM-less to do the builds.

Update!: the drive letter in the path can be replaced with $(ServiceDrive), resulting in a path like $(ServiceDrive)\AOSService\PackagesLocalDirectory\bin.

Location of the X++ binaries to package

The task comes with this field filled in as $(Build.BinariesDirectory), but this didn’t work for our build definitions; maybe the variable isn’t set up in the proj file. After changing it to $(Agent.BuildDirectory)\Bin the package is generated.

Filename and path for the deployable package

The path in the image should be changed to $(Build.ArtifactStagingDirectory)\Packages\AXDeployableRuntime_$(Build.BuildNumber).zip. You can leave it without the Packages folder in the path, but then you will need to change the “Path to Publish” field in the “Publish Artifact: Package” step of the definition.

Add Licenses to Deployable Package task

This task will add the license files to an existing Deployable Package. Remember that the path of the deployable package must be the same as the one in the Create Deployable Package task.

And you’re done! A step closer to getting rid of the build VM.

If you need help setting up the release pipeline you can check this post I wrote.

Update to Visual Studio 2019 for #MSDyn365FO

Tired of developing in Visual Studio 2015? Do you feel you’ve been left behind and forgotten in the past? Worry no more: you can use Visual Studio 2017/2019 to develop Microsoft Dynamics 365 for Finance & Operations!

What are the advantages?

Absolutely none at all! Visual Studio will still go unresponsive whatever the version, because it’s the dev tools extension that’s causing the issues.

Of course, we get the option to use Live Share, which for screen-sharing sessions is way better than Teams. Hey, and we’ll be using the latest VS version!

Is it difficult?

No, zero mysteries. The first thing we need to do is download Visual Studio 2019 Professional (or Enterprise, but it won’t make much of a difference for D365 development) and install it:

Select the .NET desktop development workload and press Install. When the installation is finished, log in with your account.

The next step is installing the Dynamics developer tools extension for VS. Go to drive K: and in the DeployablePackages folder you’ll find some ZIP files that contain the extension in their DevToolsService/Scripts folder:

An alternative is, for example, downloading a Platform Update package, which also contains the dev tools extension, possibly with some updates to it.

Install the extension and the VS2019 option is already there:

Once installed open VS as the admin and…

And also…

Don’t panic! The extension was made for VS2015 and using it on a newer version can cause some warnings, but that’s all; the tools are installed and ready to use:

As I said in the beginning, the dev tools extension is what causes the unresponsiveness and hangs in VS, and Visual Studio 2019 is letting us know:

But despite the warnings, working with Visual Studio 2019 is possible. I’ve been doing so for a week and I still haven’t found a blocking issue that makes me go back to VS2015.

Dev tools preview

In October 2019 the dev tools’ preview version will be published, as we saw at MBAS in Atlanta. Let’s see what new features it brings, whether a possible VS version upgrade or performance improvements.

Using Azure Application Insights with MSDyn365FO

First of all… DISCLAIMER: think twice before using this on a production environment. Then think again. And if you finally decide to use it, do it in the most cautious and lightweight way.

Why does this deserve a disclaimer? Well, even though the docs state that system performance should not be impacted, I don’t really know its true impact. Plus, it’s an ERP, one where we don’t have access to the production environment (unless you’re on-prem) to verify that there’s no performance degradation. And Microsoft is probably already using it to collect data from the environments to show in LCS, and I don’t know whether it could interfere with that. A lot of I-don’t-knows.

Would I use it on production? YES. It will be really helpful in some cases.

With that said, what am I going to write about that needs a disclaimer? As the title says, about using Azure Application Insights in Microsoft Dynamics 365 for Finance and Operations. This post is just one of the “Have you seen that? Yeah, we should try it!” consequences of Juanan (also on Twitter, follow him!) and me talking. And the “that” this time was this post from Lane Swenka on AX Developer Connection. So nothing original here 🙂

Azure Application Insights

I spy!! Made by Cazapelusas

What’s Application Insights? As the documentation says:

Application Insights is an extensible Application Performance Management (APM) service for web developers on multiple platforms. Use it to monitor your blah web application. It will blah blah detect blaaah anomalies. It blah powerful blahblah tools to bleh blah blih and blah blah blaaaah. It’s blaaaaaaaah.

Mmmm… you better watch this video:

So much misery and sadness in the first 30 seconds…

Monitoring. That’s what it does and what it’s for. “LCS already does that!“ OK, extra monitoring! Everybody loves extra, like on pizzas, unless it’s pineapple, of course.

Getting it to work

The first step will be to create an Application Insights resource on our Azure subscription. Regarding pricing: the first 5GB per month are free and data will be retained for 90 days. More details here.

Then we need the code. I’ll skip the details here because they’re perfectly explained in the link above (this one); just follow the steps. You basically need to create a DLL library that handles the events and sends the data to Application Insights, and use it from MSDyn365FO. In our version we’ve additionally added a trackTrace method to the C# library. Then just add a reference to the DLL in your MSDyn365FO Visual Studio project and it’s ready to use.

What can we measure?

And now the interesting part (I hope). We can measure page views, capture errors (or all infologs), batch executions, field value changes, and anything else we can extend and call our API methods from.

For example, we can extend the FormDataUtil class from the forms engine. This class has several methods that are called from forms on different data source actions, like validating writes, deletes, field validations, etc… And also this:

modifiedField in FormDataUtil

This runs after a form field’s value is modified. We’ll extend it to log which field had its value changed, along with the old and new values. Like this:

Extending modifiedField
I promise I always use labels!
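Something along these lines; this is only a sketch, since both the wrapper class (AASAppInsightsLogger, standing in for the C# DLL from the linked post) and the exact parameters of modifiedField are assumptions here:

[ExtensionOf(classStr(FormDataUtil))]
final class FormDataUtilAASAppInsights_Extension
{
    public void modifiedField(FormDataSource _dataSource, FieldId _fieldId)
    {
        next modifiedField(_dataSource, _fieldId);

        Common record = _dataSource.cursor();

        // AASAppInsightsLogger is a hypothetical wrapper around the C# telemetry DLL;
        // it sends a custom event with the table, the field and the old/new values.
        AASAppInsightsLogger::trackEvent(
            'FieldValueChanged',
            strFmt('%1.%2: %3 -> %4',
                tableId2Name(record.TableId),
                fieldId2Name(record.TableId, _fieldId),
                record.orig().(_fieldId),
                record.(_fieldId)));
    }
}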

And because the Application Insights call also stores the user that triggered the value change, we’ve just got a new database log! Even better, a database log with no performance consequences, because there’s no extra data generated on MSDyn365FO’s side. The only drawback is that it will only be called from forms, but it might be enough to monitor the usage of forms and counter the “no, I haven’t changed any parameter” 🙂

This is what we get on Azure’s Application Insights metrics explorer:

Azure Application Insights Custom Event
What do you mean I changed that?!

Yes you did, Admin! Ooops it’s me…

Custom events

We’re also storing the AOS name and whether the call originated in a batch.

All the metrics from our events are displayed in Azure, and the data can later be shown in Power BI, if you feel like doing it.

With this example you can go on and add calls to the extended objects where you need it. Batches, integrations, critical processes, etc…

Again, please plan what you want to monitor before using this, and test it. Then test it again, especially on SAT environments with Azure SQL databases, which perform a bit differently from the regular SQL Server ones.

And enjoy the data!

Generate number sequence values from REST services and OData

One of the options to integrate MSDyn365FO with external systems is using the data entities with REST services and OData. To use OData the entity must have its IsPublic property set to Yes:

Customers V3 entity

Otherwise, if it’s a standard entity, we’ll need to duplicate it, because it’s not possible to change the property value in an extension.

If we’re integrating with an external system using OData to create new records in the ERP, we can have an issue when the record has a mandatory ID, as we can see in the Customers V3 entity. If we check the Mandatory property of the CustomerAccount field, it’s set to Auto, getting its value from CustTable, where it’s set to Yes.

In this case, if we try to create a customer without an account number, the service fails, as can be seen in the Postman capture below:

Postman fail :(

Crystal clear error, the customer account field cannot be empty.

This doesn’t happen with the Vendors entity. “Hey! But the vendor account is mandatory in the VendTable!” someone may think. Correct, it is, but not in the entity, where it’s been overridden:

Vendors V2

To see how the standard solves this we need to check the entity initValue method:

skipNumberSequenceCheck is one of the data methods of the Common class, a relative of skipDataMethods, skipDataSourceValidateWrite, skipAosValidation, etc… It will always return false unless we tell it otherwise by passing true to it earlier in the code.

The NumberSeqRecordFieldHandler class’s enableNumberSequenceControlForField method initializes the value of the field we pass as a parameter with the next value from the number sequence we select. In this case it fills the vendor account field with the sequence set up in the vendor parameters (obviously).

So, doing the same as the standard does, we’re going to extend the entity and the initValue method:

Code extension of the Customers V3 entity
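A minimal sketch of what that extension might look like, mirroring the Vendors V2 logic (the exact parameters of enableNumberSequenceControlForField may differ slightly in your version):

[ExtensionOf(tableStr(CustCustomerV3Entity))]
final class CustCustomerV3EntityAAS_Extension
{
    public void initValue()
    {
        next initValue();

        // Skip the mandatory check on the account and let the framework
        // fill it from the customer account number sequence (AR parameters)
        this.skipNumberSequenceCheck(true);
        NumberSeqRecordFieldHandler::enableNumberSequenceControlForField(
            this,
            fieldStr(CustCustomerV3Entity, CustomerAccount),
            CustParameters::numRefCustAccount());
    }
}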

Having done this we’ll try again in Postman, this time deleting the CustomerAccount parameter from the body, and…

Customer created through a REST service and OData

Success! We’ve got a new customer! Created from an external system and using the number sequence from Dynamics 365.

This is no mystery, it just mimics what the standard does. As MSDyn365FO developers we must try to do that, always. Always… as long as we can, of course 🙂 Because even though partners always try to stick to the standard as much as possible, we all know that in the end there’ll be some customization (hopefully, we’re developers!).

Dynamics 365 for Finance & Operations and Azure DevOps (part II)

In the first part of this post I wrote about Azure DevOps value and how to set it up in MSDyn365FO.

I want to start this second part with a little rant. As I said in the first part, those of us who have been working with AX for several years were used to not using version-control systems. MSDyn365FO has taken us into uncharted territory, so it is not uncommon for different teams to work in different ways, depending on their experience and what they’ve found along the way. There’s an obvious interest factor here: each team will need to invest some time to discover what works better for them regarding code, branching and methodologies. Many times this will be based on experimentation and trial and error, and with the pace of some projects this turns out badly. And here’s where I’ve been missing some guidance from Microsoft (but maybe I’ve just not found it).

Regardless of this rant, the journey and all I’ve learnt has been, and I think will be, pretty fun 😉

Branching strategies

I want to make it clear in advance that I’m not an expert in code management or Azure DevOps, at all. Everything written here is the product of experience (including bad experiences) from almost 3 years of working with MSDyn365FO. This article on branching strategies in the docs has more information on branching and links to articles from the DevOps team. And there’s even MORE info in the DevOps Rangers’ library of tooling and guidance solutions!

The truth is that I’d love a FastTrack session about this and, I thought, it didn’t exist. EDIT: it looks like I definitely overlooked it; there is a FastTrack session called Developer ALM which talks a bit about all this. Thanks to Dag Calafell (twitter) for pointing it out!

In the first part we learnt that the Main folder is created when deploying the build VM. Usually, in an implementation project, all development is done on that branch until go-live, and just before that a new dev branch is created. The code tree will look like this:

Branches after branching

From this moment on, the development VMs need to be mapped to this new development branch. This allows us to keep developing on the Dev branch and decide when the changes are promoted to Main.

This branching strategy is really simple and will keep us mostly worry-free. In my previous job we went with a 3-branch strategy: Main, Test and Dev, merging from Dev to Test and from Test to Main. A terrible mistake. Having to maintain 2 sets of changesets is harder, and with version upgrades, dozens of pending changesets waiting to be merged and an ISV partner that sometimes would not help much, everything was kind of funny (“funny”). But I learnt a lot!

Anyway, just some advice: try not to leave changesets pending to be merged for long. The amount of merge conflicts that appear is directly proportional to the time a changeset has been waiting to be merged.

At this point, I cannot emphasize enough what I mean by usual. As I said, I wrote all of this based on my experience. Working for an ISV is obviously not the same as working for an implementation partner. An ISV has different needs: it has to maintain different code versions to support all its customers, and it doesn’t need to work in a Dev-Main manner. It could have one (or more) branch for each version. However, since the end of overlayering this is not necessary :). More ideas about this can be found in the article linked at the beginning of this post.

Builds

In the first part and an older post (Unresponsive builds in Azure DevOps) I explained a bit about builds, and we saw the default build definition generated when deploying the build machine:

Steps of the default build definition

This build definition has all the default steps active. We can disable (or remove) all the steps we’re not going to use. For example, the testing steps can be removed if we have no unit testing. Or the DB sync and report deployment too.

We can also create new build definitions from scratch; however, it’s easier to clone the default one and modify it for other branches or needs.

Since 8.1 all the X++ hotfixes are gone; updates are applied as a deployable package (binaries!). This means that the Metadata folder will only contain our custom packages and models, no standard packages anymore. Up until 8.0, having a build definition that compiled and generated a DP with only our models was a good idea. That way we could have a deployable package ready in less time than compiling the standard packages with hotfixes plus ours. Should we need to apply a hotfix, we’d just queue the default build pointing to the Main root; otherwise we’d just generate our packages. Using this strategy, we reduced the DP generation time from 1h15m to 9m in one of our customers’ projects.

But that was in the past, and all this is outdated information. Right now I hope everybody is as close to 8.1 as possible because One Version is coming in April!

Another useful option is having a build definition that will only compile the code:

Continuous build definition

It may look a bit useless until you enable the continuous integration option:

DevOps continuous integration

Right after every developer’s check-in a build will be queued, and the code compiled. In case there’s a compilation error we’ll be notified about it. Of course, we all build the solutions before checking them in. Right?

tysonjaja

And because we all know that slow and steady wins the race, but at some point during a project that’s just not possible, this kind of build definition can help us out. Especially when merging code conflicts from a dev branch into Main. It allows us to be 100% sure, when creating a DP for release to production, that it will build. I can tell you that having to do a release to prod in a hurry and seeing the Main build fail is not nice.

Somebody with far more experience and knowledge than me might think: wait, but this can also be done with…

Gated check-ins

What we accomplish with a gated check-in is that the build agent launches an automated compilation BEFORE committing the check-in. If it fails, the changeset is not committed until the errors are fixed and the code is checked in again.

This option might seem perfect for the merge check-ins to the Main branch. I’ve found some issues trying to use it, for example:

  • If multiple merge & check-ins from the same development are done and the first fails but the second doesn’t, you’ll still have pending merges to be done.
  • Issues with error notifications and pending code on dev VMs.
  • If many check-ins are made you’ll end up with lots of queued builds (and we only have one available agent per DevOps project).

I’m sure this probably has a solution, but I haven’t found it. And the CI option is working perfectly for us to validate code. As I’ve already said, all of this is the product of trial and error; we’ve learnt to use it while working with it.

Conclusions

I guess the biggest conclusion is that with MSDyn365FO we must use DevOps. It’s mandatory; there’s no other option. If there’s anyone out there not doing it, start now. Review how you work, forget how we used to work with AX and don’t look back: technically speaking, MSDyn365FO is a different product.

The truth is that MSDyn365FO has brought developers closer to the classic approach of software projects, like .NET or Java. But we’re still special. An ERP project has a lot of peculiarities, and not having to create a product from scratch, having a base that makes us follow a path, limits us in some aspects, as well as in the use of certain techniques or methodologies.

I hope these two posts about Azure DevOps can help somebody. And if anyone with more experience or better ideas wants to recommend anything, comments are open!

Dynamics 365 for Finance & Operations and Azure DevOps (part I)

One of the major changes that came with Dynamics 365 has been the mandatory use of a source control system. In older versions we had MorphX VCS for AX 2009 and the option to use TFS in AX 2009 and AX 2012 (and there’s training about this on El rincón Dynamics, in Spanish), but it wasn’t mandatory. Actually, in my experience, most projects used no source control other than comments in the code.

The AOT before source control, by cazapelusas.com

Azure DevOps in MsDyn365FO

In Microsoft Dynamics 365 for Finance and Operations, Azure DevOps is not just a source control tool but THE tool, the One Ring of our projects (and I hope not one that binds us in darkness). From project management to the functional team, everybody can use Azure DevOps to manage the project and the team.

BPM synchronization and task creation, team planning, source control, automated builds and releases are some of the tools it offers. All these changes require some learning from the team, but in the short term all of this will help the team manage the project better.

As I said, it looks like the technical team is the most affected by the addition of source control to Visual Studio, but it’s the most benefited too…

First steps

The first thing we need to do when starting a new implementation project is linking LCS to the Azure DevOps project we’ll be using. Everything is really well documented.

Once that’s done we’ll have to deploy the build server. This is usually done on the dev box in Microsoft’s subscription. When this VM is deployed, the basic source tree is created in the DevOps project:

Folders in the Azure DevOps project

(Ignore the CSProjects folder; it’s from the comedy performance at last year’s 365 Saturday with my colleague Juanan.)

With the source tree now available, we can map the development machines and start working. The Main folder you see in the image is a regular folder, but we can convert it into a branch if we need to.

Folder converted into a branch

In the image above, you can see that the icon for Main changes when it’s converted into a branch. Branches allow us to perform some actions that aren’t available for folders. Some differences can be seen in the context menu:

Folder context menu
Branch context menu

For instance, branches can display the hierarchy of all the project branches (in this case it’s only Main and Dev so it’s quite simple :P).

Branch hierarchy

Properties dialogs are different too. The folder one:

And the branch one, where we can see the different relationships between the other branches created from Main:

Branch properties

This might not seem that interesting or useful, but one of the things converting a folder into a branch gives us is seeing where a changeset has been merged. We’ll see this in part 2.

Some advice

I strongly recommend moving the Projects folder out of the Main branch, to the root of the project at the same level as BuildProcessTemplates and Trunk. If you don’t, and you end up working with Main and Dev branches, Visual Studio’s solutions and projects will still be checked in on the Main branch. It will spare you some small heart attacks when you receive the build email with the changeset summary and think something went into production 🙂


 

Unresponsive builds in Azure DevOps

It is possible that after queuing a new build, the job won’t start. It won’t be possible to cancel it either, and nothing changes after rebooting the build server VM. This is an unusual case, but not an impossible one.

The build server

Even though the build machine looks exactly like a developer box, it really isn’t. It has Visual Studio, the AosService folder with all the standard packages and a SQL Server with an AxDB, just like any other developer machine. But it isn’t one!

We won’t be using any of those features. The “heart” of the build machine is the build agent, an application that Azure DevOps uses to execute the build definition’s tasks. The link between the DevOps project and LCS is created during the deployment of the build machine.

As a curiosity, it looks like the build server will disappear in the future and all we’ll need will be Azure DevOps. I’ve been looking for the source of this information but can’t find it; I think Joris de Gruyter mentioned it on Twitter.

Edit: I still haven’t found it but this looks like a hint.

The issue

This was caused by having 2 build agents in our DevOps agent pool. How did this happen? Well, in our case, for a short time 2 LCS implementation projects, each with its own build machine, coexisted pointing to the same DevOps project. Nothing strange, it was 100% justified at that point 🙂

This temporary situation made this happen:

Two build agents for the same project and pool!

There are two agents at the same time, and the one marked as enabled is offline! This agent is the one created after deploying the build machine of the first LCS project. We had already disabled it, but it got enabled again even though the LCS project had been deleted.

The fix is as easy as logging in as an Azure DevOps account admin (not just a project admin), expanding the agent section and deleting the old/offline agent with the X button. Enable the new one, and the builds will start running again.

Deleting the offline agent in Azure DevOps

And that’s all, one less problem!

The mystery of the non-filtering query

There’s no mystery here but a misperception.

Recently, a colleague found a little issue when using an AOT query to feed a view with a range dynamically filtered using a SysQueryRangeUtil method.

Recreating the issue

The query is pretty simple, only showing ledger transaction data from the GeneralJournalEntry and GeneralJournalAccountEntry tables. A range on the Ledger field, filtering by the current company, was added, as you can see in the pic below:

Query in Visual Studio

We created a new range method by extending the SysQueryRangeUtil class, using Ledger::current() to filter by the active company.

Extension in Visual Studio
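A minimal sketch of what that extension might look like (class and method names here are illustrative, and depending on your version additional decoration may be required); the method is then referenced from the query range value as (currentLedger()):

[ExtensionOf(classStr(SysQueryRangeUtil))]
public final class SysQueryRangeUtilAAS_Extension
{
    // Returns the current company's ledger RecId as a query range value
    public static str currentLedger()
    {
        return int642Str(Ledger::current());
    }
}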

Then we used the query to feed data to the view and added two fields just for testing purposes:

View in Visual Studio

Everything quite straightforward. Let’s check the view in the table browser…

Table browser

No data! And I can tell there’s data in here:

Records in SSMS

What’s going on in here? If we use the query in a job (yeah, I know, Runnable Class…) the range is filtering the data as expected.

Psyduck is confused
Me in a tribute to “Psyduck is confused” by cazapelusas.com

So… let’s see the view design in SSMS:

View design in SSMS

Well, it definitely looks like something’s being filtered here. The range is working! Is it? Sure? Which company does that Ledger table RecId correspond to?

DAT record

Qué haces besando a la lisiada!?
Why are you quering the damned DAT? (Sorry this was funnier in Spanish)

What’s going on?

There’s an easy and clear explanation, but one doesn’t think of it until faced with this specific issue. While the view* is a data dictionary object that is created in SQL Server when the project is synchronized, the query* is an X++ object and only exists within the application. The view is created in SQL, and we can see it and query it in SSMS. The AOT query isn’t. It feeds the view and provides a data back end, but all the functionality added in X++ stays in 365, including the SysQueryRangeUtil filters.

The solution is an easy one: removing the range from the query and adding it on the form data source will do the trick (if this can be considered a trick…).
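As a rough sketch of that fix, assuming a hypothetical view called AASLedgerTransView that exposes the Ledger field, the range could be added in the form data source’s init method instead, where the X++ expression is evaluated at runtime:

public void init()
{
    super();

    // AASLedgerTransView and its Ledger field are illustrative names for the view built above.
    // Filter by the current company's ledger on the form side instead of in the AOT query.
    QueryBuildDataSource qbds = this.query().dataSourceTable(tableNum(AASLedgerTransView));
    qbds.addRange(fieldNum(AASLedgerTransView, Ledger)).value(queryValue(Ledger::current()));
}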

(*) Note: the links to the docs point to AX 2012 docs but should be valid.

Setting up Release Pipeline in Azure DevOps for Dynamics 365 for Finance and Operations

Let’s go…

Some weeks ago, the release pipeline extension for #MSDyn365FO was published in the Azure DevOps Marketplace, taking us closer to the continuous integration scenario. While we wait for the official documentation we can check the notes in the announcement, and I’ve written a step-by-step guide to set it up for our projects.

To configure the release pipeline, we need:

  • AAD app registration
  • LCS project
  • An Azure DevOps project linked to the LCS project above
  • A service account

I recommend that the user be a service account with a non-expiring password and enough privileges on LCS, Azure and Azure DevOps (well, this is not a recommendation: without rights this cannot be done). Using a service account is not mandatory, though; it can be done with your own user (if it has enough rights) for testing purposes.

AAD app creation

The first step is creating an app registration on Azure Active Directory that will be used to upload the generated deployable package to LCS. Head to the Azure portal and, once logged in, go to Azure Active Directory, then App registrations, and create a new Native app:

New Azure AD app

Next go to “Settings” and “Required permissions” to add the Dynamics Lifecycle Services API:

LCS permission

Select the only available permission in step 2 and accept until it appears on the “Required permissions” screen. Finally push the “Grant permissions” button to apply the changes:

Grant permission

This last step is easily forgotten, and the package upload to LCS cannot be done if permissions aren’t granted. Once done, take note of the Application ID; we’ll use it later.

Create the release pipeline in DevOps

Before setting up anything on Azure DevOps we need to make sure the project we’re going to use is linked to LCS. This can be done in the “Visual Studio Team Services” tab in LCS’ project settings.

After setting it up, we’ll go to Pipelines -> Releases to create the new release. Select “New release pipeline” and choose “Empty job” from the list.

In the artifact box, select the build we will link to this release definition:

New release

Pick the build definition you want to use for the release in “Source”, “Latest” in “Default version” and push “Add”.

The next step is adding a task with the release pipeline for Dynamics. Go to the Tasks tab and press the plus button. A list of extensions will appear; look for “Dynamics 365 Unified Operations Tools”:

Dynamics 365 Unified Operations Tools

If the extension hasn’t been added previously, it can be done from this screen. In order to add it, the user creating the release must have admin rights on the Azure DevOps account, not only on the project in which we’re creating the pipeline.

When the task is created we need to fill in some parameters:

Release Dynamics Operations

Creating the LCS connection

The first step in the task is setting up the connection to LCS using the AAD app we created before. Press New and fill in the fields in the following screen:

LCS connection in Azure DevOps

It’s only necessary to fill in the connection name, username, password (the service account’s) and Application (Client) ID fields. Use the App ID we got in the first step for the Application ID field. The endpoint fields should be filled in automatically. Finally, press OK and the LCS connection is ready.

In the LCS Project Id field, use the ID from the LCS project URL; for example, in https://lcs.dynamics.com/V2/ProjectOverview/1234567 the project ID is 1234567.

Press the button next to “File to upload” and select the deployable package file generated by the build:

Generated DP

If the build definition hasn’t been modified, the output DP will have a name like AXDeployableRuntime_VERSION_BUILDNUMBER.zip. Replace the fixed build number with the DevOps variable $(Build.BuildNumber), like in the image below:

BuildNumber

The package name and description in LCS are defined in “LCS Asset Name” and “LCS Asset Description”. Azure DevOps build and release variables can be used in these fields. Use whatever fits your project; for example, a prefix to distinguish between prod and pre-prod packages followed by $(Build.BuildNumber) will upload the DP to LCS with a name like Prod 2019.1.29.1, using the date as the DP name.

Save the task and the release definition, and let’s test it. In the Releases section, select the definition we’ve just created and press the “Create a release” button; in the dialog, just press OK. The release will start and, if everything is OK, we’ll see the DP in LCS when it finishes:

LCS Asset Library

The release part can be automated, just press the lightning button on the artifact and enable the trigger:

Release trigger

And that’s all! Now the build and the releases are both configured. Once the deployment package is published the CI scenario will be complete.