Application Checker: enforcing better coding practices?

Unless you’ve been working for an ISV, chances are high that you’ve never cared much about Dynamics Best Practices (BP). Or maybe you have. I haven’t worked for an ISV myself, but back when I started working with AX I was handed the development BP document, and I’ve tried to follow most of them when writing code.

But BPs could be ignored without any consequence. This is why Microsoft will publish…

Application Checker

Application Checker is a tool that will change that. It will enforce a set of rules that our code has to meet, otherwise the code won’t compile (and maybe won’t even deploy to the environments).

We got a preview of it during the last MBAS session “X++ programming with quality” by Dave Froslie and Peter Villadsen. Unfortunately the session wasn’t recorded.

App Checker is built on BaseX, an XML analysis tool, and powers Socratex, which Microsoft uses to track code quality. I don’t know whether Socratex will be publicly released, and I don’t remember if this was clarified during the session.

The set of rules can be found in Application Checker’s GitHub project, and it’s still a WIP. I think there’s loooots of things to decide before this goes GA, and I’m a bit worried about some of the rules 😛

Rule types

There are different types of rules; some will become errors and others warnings. For example:

ExtensionsWithoutPrefix.xq: this rule throws an error, preventing your code from compiling. It checks whether an extension class has a name ending in _Extension and an ExtensionOf attribute; if so, the name must also carry a prefix. E.g.: if we extend the class CustPostInvoice it can’t be named CustPostInvoice_Extension, it needs a prefix, like CustPostInvoiceAAS_Extension.
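
The original screenshot is missing here, so this is just a minimal sketch of what the rule wants (class and prefix names are examples):

    // Would be flagged: CustPostInvoice_Extension (no prefix)
    // Fine: the AAS prefix added to the class name
    [ExtensionOf(classStr(CustPostInvoice))]
    final class CustPostInvoiceAAS_Extension
    {
        // extension code goes here
    }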

SelectForUpdateAbsent.xq: this rule throws a warning. When there’s a forUpdate clause in a select statement and no doUpdate, update, delete, doDelete or write is called later, it will let us know.
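
A sketch of code that would trigger the warning (account number is just an example):

    CustTable custTable;

    ttsbegin;

    // forUpdate is requested, but no update(), delete(), etc. ever follows:
    // SelectForUpdateAbsent warns about this
    select forUpdate custTable
        where custTable.AccountNum == 'US-001';

    ttscommit;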

As of today, there are 21 rules in the GitHub project. You can contribute to the project, and you can also enforce your own rules on your dev boxes without sending them to the project: just add them to the local rules folder. I’d create a rule that makes the space after an if/while/for/switch mandatory and throws an error otherwise, but that’s just a bit of my OCD when writing/reading code.

Try it on your code

We’ve been able to use Application Checker on our development environments since PU26, I think. We just need to install JRE and BaseX on the dev box and select the check when doing a full build.

Some examples

ComplexityIndentationCombined.xq

This query checks the (wait for it…) cyclomatic complexity of methods. I’ll try to explain it… Cyclomatic complexity is a software quality metric: the number of independent paths through the code. Depending on the number of ifs, whiles, switches, etc., the code can reach different outcomes through different paths, and that’s what the complexity measures.

Take this as an example, a dumb one, but just look at the number of different paths the execution could take:
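
The original code screenshot is lost, so here’s a made-up sketch in the same spirit (every if/else branch and loop adds independent paths):

    public str classifyAmounts(int _a, int _b)
    {
        str result;

        if (_a > 0)
        {
            if (_b > 0)
            {
                result = 'both positive';
            }
            else
            {
                result = 'only a positive';
            }
        }
        else if (_b > 0)
        {
            result = 'only b positive';
        }
        else
        {
            result = 'none positive';
        }

        // Each extra loop and condition multiplies the possible paths
        for (int i = 0; i < _a; i++)
        {
            if (i mod 2 == 0)
            {
                result += '.';
            }
        }

        return result;
    }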

In App Checker the error appears when the complexity is over 30. I’ve used the Lizard code complexity analyzer to calculate the complexity of the original method and I’m getting a 49.

The rule also checks the indentation depth, failing if it’s greater than 2. In the end, the purpose of both rules is to push us to break up long methods, which will also help in enabling more extension points in different places of our logic, like Microsoft did with the Data Provider classes for reports.

BalancedTtsStatement.xq

This one gives me mixed feelings. The rule checks that the ttsbegin and the ttscommit of a method are in the same scope, so the following is not possible:
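
The screenshot is gone, but the flagged pattern is a ttsbegin and its ttscommit living in different scopes, roughly like this sketch:

    void processLine(boolean _hasError)
    {
        ttsbegin;

        if (_hasError)
        {
            // The transaction ends in a different scope than where it began:
            // BalancedTtsStatement flags this construct
            ttsabort;
        }
        else
        {
            ttscommit;
        }
    }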

Imagine you’ve developed an integration with an external application that writes data to an intermediate table in MSDyn365FO, and you process all pending data sequentially. You don’t want to throw an error if something goes wrong, because you need the process to continue with the following record, so you ttsabort the failing line, store the error and continue, as in the sketch below. If this is not possible… how should we do this? Create a batch that creates a task for each line to process?
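
Something like this hypothetical processing loop (the table, the status enum and the setError helper are all made up):

    AASStagingTable staging;       // hypothetical intermediate table
    boolean processingFailed;
    str errorMessage;

    while select staging
        where staging.Status == AASStagingStatus::Pending
    {
        ttsbegin;

        // ... process the record; set processingFailed and errorMessage on failure ...

        if (processingFailed)
        {
            // Roll back this record only, store the error and continue with the next one.
            // This is exactly the kind of construct BalancedTtsStatement complains about.
            ttsabort;
            AASStagingTable::setError(staging.RecId, errorMessage);
        }
        else
        {
            ttscommit;
        }
    }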

Plus, the standard models have plenty of ttscommit inside if statements.

RecursiveMethods.xq

This rule will block the use of recursion in static methods. I don’t get why. Application Checker should be a way to promote better coding practices, not to forbid certain patterns. If somebody gets a recursive method into prod and the exit condition is never met… hello, testing?
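
For reference, the kind of method the rule flags, in a trivial sketch:

    class AASMathUtil
    {
        // A classic recursive static method: RecursiveMethods.xq would block this
        public static int factorial(int _n)
        {
            if (_n <= 1)
            {
                return 1;   // exit condition
            }

            return _n * AASMathUtil::factorial(_n - 1);
        }
    }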

Some final thoughts

Will this force developers to code better? I don’t think so, but that’s probably not Application Checker’s purpose. For centuries humans have found ways to bypass rules, laws and all kinds of restrictions, and this won’t be an exception.

Will it help? Hell yes! But the best way to ensure code quality is promoting best practices in your team, through internal training and code reviews. And even then, somebody who doesn’t care about clean code will keep on writing terrible code, which might work but won’t be beautiful at all.

Finally, I’m not sure about some of the rules, like avoiding recursion in static methods or the tts thing. We’ll just have to wait and see which rules make it to the final release, and how Application Checker ends up being implemented in the MSDyn365FO application lifecycle: by blocking (or not) the deployment of code that doesn’t pass all the checks, or by being included in the build process.

Slow set-based operations?

In Microsoft Dynamics 365 for Finance and Operations we can execute the CRUD operations from code in two different ways, record-per-record or set-based.

Microsoft’s recommendation is to always use set-based operations, if possible, as stated in the session “Implementation Best Practices for Dynamics 365: Performance best practices for a successful Dynamics 365 Finance and Operations implementation” from last June’s Business Applications Summit.

Why?

Set-based Vs. Record-per-record

When we run a query in MSDyn365FO we’re using its data access layer, which is later translated into real SQL. We can see the differences using xRecord’s getSQLStatement with generateonly on the query (plus forceliterals to show the parameters’ values) to get the SQL query. For example, if we run the following code:
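
The original snippet is lost, but it was something along these lines (the account number is just an example):

    CustTable custTable;

    // generateonly builds the SQL statement without executing it,
    // forceliterals shows real values instead of placeholders
    select generateonly forceliterals custTable
        where custTable.AccountNum == 'US-001';

    info(custTable.getSQLStatement());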

We’ll get a SQL statement along these lines (abbreviated; the real field list is much longer):
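
    SELECT T1.ACCOUNTNUM, T1.INVOICEACCOUNT, ..., T1.RECID
    FROM CUSTTABLE T1
    WHERE (((T1.PARTITION = 5637144576) AND (T1.DATAAREAID = 'usmf')) AND (T1.ACCOUNTNUM = 'US-001'))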

We can see all the fields are being selected, and the where clause contains the account number we selected (plus DataAreaId and Partition).

When a while select runs in MSDyn365FO, a select SQL statement is executed on SQL Server for each loop iteration. The same happens if an update or delete is executed inside the loop. This is known as a record-per-record operation.

Imagine you need to update the note on all the customers with customer group 10. We could do this with a while select, like this:
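
A sketch (using CreditMax as a stand-in for whatever field needs updating):

    CustTable custTable;

    ttsbegin;

    // One UPDATE round trip to SQL Server per customer
    while select forUpdate custTable
        where custTable.CustGroup == '10'
    {
        custTable.CreditMax = 1000;
        custTable.update();
    }

    ttscommit;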

This would make as many calls to SQL Server as there are customers in group 10, one for each loop. Or we could use set-based operations:
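
The set-based version of the same sketch:

    CustTable custTable;

    ttsbegin;

    // A single UPDATE statement for all customers in group 10
    update_recordset custTable
        setting CreditMax = 1000
        where custTable.CustGroup == '10';

    ttscommit;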

This will execute a single SQL statement on SQL Server that updates all the customers with customer group 10 instead of one query per customer, roughly:
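
    UPDATE CUSTTABLE
    SET CREDITMAX = 1000
    WHERE (((PARTITION = 5637144576) AND (DATAAREAID = 'usmf')) AND (CUSTGROUP = '10'))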

There are three set-based operations in MSDyn365FO: update_recordset to update records, insert_recordset to create records and delete_from to delete records. Plus we can make massive inserts using RecordSortedList and RecordInsertList, as in the sketch below.
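
A quick RecordInsertList sketch:

    CustGroup custGroup;

    // Records are buffered and flushed to SQL Server in as few inserts as possible
    RecordInsertList insertList = new RecordInsertList(tableNum(CustGroup));

    for (int i = 1; i <= 100; i++)
    {
        custGroup.clear();
        custGroup.CustGroup = strFmt('G%1', i);
        custGroup.Name      = strFmt('Group %1', i);
        insertList.add(custGroup);
    }

    insertList.insertDatabase();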

Running these methods instead of while selects should obviously be faster, as only a single SQL query is executed. But…

Why could my set-based operations be running slow?

There are some well-documented scenarios in which set-based operations fall back to record-per-record operations, as we can see in the following table:

Scenario                                       DELETE_FROM   UPDATE_RECORDSET   INSERT_RECORDSET   ARRAY_INSERT   Use … to override
Non-SQL tables                                 Yes           Yes                Yes                Yes            Not applicable
Delete actions                                 Yes           No                 No                 No             skipDeleteActions
Database log enabled                           Yes           Yes                Yes                No             skipDatabaseLog
Overridden method                              Yes           Yes                Yes                Yes            skipDataMethods
Alerts set up for table                        Yes           Yes                Yes                No             skipEvents
ValidTimeStateFieldType property not None      Yes           Yes                Yes                Yes            Not applicable

In the example, if the update method of the CustTable is overridden (which it is), the update_recordset operation will run like a while select that updates each record.

In the case of update_recordset this can be solved by calling the skipDataMethods method before running the update:
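
Again with CreditMax as the stand-in field:

    CustTable custTable;

    ttsbegin;

    // Skip the overridden update() method so the operation stays set-based
    custTable.skipDataMethods(true);

    update_recordset custTable
        setting CreditMax = 1000
        where custTable.CustGroup == '10';

    ttscommit;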

This avoids calling the update method (or insert in the case of insert_recordset), more or less like calling doUpdate in a loop. The rest of the scenarios can be overridden with the corresponding method in the last column.

So, for bulk updates I’d always use set-based operations, and enable them on data entities too with the EnableSetBasedSqlOperations property.

And now another “but” is coming.

Should I always use set-based operations when updating large sets of data?

Well, it depends on the data you’re working with. There’s a wonderful blog post from Denis Trunin called “Blocking in D365FO (and why you shouldn’t always follow MS recommendations)” that shows a perfect example where set-based operations would be counterproductive.

As always, ERP development is quite sensitive, and similar scenarios can call for different solutions. Analyze the requirements and decide which one to use.

Update to Visual Studio 2019 for #MSDyn365FO

Tired of developing in Visual Studio 2015? You feel you’ve been left and forgotten in the past? Worry no more, you can use Visual Studio 2017/2019 to develop Microsoft Dynamics 365 for Finance & Operations!

What are the advantages?

Absolutely none at all! Visual Studio will still go non-responding whatever the version, because it’s the dev tools extension that’s causing the issues.

Of course we get the option to use Live Share, and for screen sharing sessions that’s way better than Teams. Hey, and we’ll be using the latest VS version!

Is it difficult?

No, zero mystery. The first thing we need to do is download Visual Studio 2019 Professional (or Enterprise, but it won’t make much of a difference for D365 development) and install it:

Select the .NET desktop development option and press install. When the installation is finished we log in with our account.

The next step is installing the Dynamics developer tools extension for VS. Go to drive K, and in the DeployablePackages folder you’ll find some ZIP files that contain the extension in their DevToolsService/Scripts folder:

An alternative is, for example, downloading a Platform Update package, which also contains the dev tools extension, possibly with some updates to it.

Install the extension and the VS2019 option is already there:

Once installed open VS as the admin and…

And also…

Don’t panic! The extension was made for VS2015, and using it on a newer version can cause some warnings, but that’s all: the tools are installed and ready to use:

As I said in the beginning, the dev tools extension is the one causing the unresponsiveness or blocks in VS, and Visual Studio 2019 is letting us know:

But regardless of the warnings, working with Visual Studio 2019 is possible. I’ve been doing so for a week, and I still haven’t found a blocking issue that would make me go back to VS2015.

Update: it looks like opening a report design will only display its XML instead of the designer. Thanks to David Murray for warning me about it!

Dev tools preview

In October 2019 the dev tools’ preview version will be published, as we saw at the MBAS in Atlanta. Let’s see which new features it brings us, both in a possible VS version upgrade and in performance.

Consume a SOAP web service in Dynamics 365 for Finance and Operations using ChannelFactory

If you ever need to consume a SOAP web service from Dynamics 365 for Finance and Operations, the first step you should take is asking the people responsible for that web service to create a REST version. If that’s not possible this post is for you.

I’ll use this SOAP web service I found online at http://www.dneonline.com/calculator.asmx for the example. It’s a simple calculator service with four methods to add, subtract, multiply or divide two integers.

Consuming a SOAP service in .NET

Let’s start with the basics. How do we consume a SOAP web service in Visual Studio? Easy peasy. Just add a service reference to your project:

And point it to the web service of your choice:

This will add the reference to the project:

With that done we can create an instance of the web service’s client and call one of its methods:
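
The screenshot is missing; the code was something like this (assuming the reference was added with the AASSOAPCalculatorService namespace):

    // The service reference generates a client class for the CalculatorSoap contract
    var client = new AASSOAPCalculatorService.CalculatorSoapClient();
    int result = client.Add(3, 6);
    Console.WriteLine(result);   // 9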

3 + 6 = 9, it looks like it’s working.

Consuming a SOAP service in Dynamics 365 for Finance and Operations

To consume the web service on FnO create a new project in Visual Studio, right click on References and add the service reference:

Hmmm… nope, it can’t be done, no service reference option.

Consuming a SOAP service in Dynamics 365 for Finance and Operations (I hope…)

The problem is that we cannot add a service reference in Visual Studio on 365 dev boxes.

What do the docs say about this? Well, like in AX2012 we need to create a .NET class library that will consume that web service, then add the reference to our DLL on 365 and call the service methods from a client object. All right!

There it is. A reference to our class library and a runnable class that will do the job:

Let’s run it!

What?

An exception of type ‘System.InvalidOperationException’ occurred in System.ServiceModel.dll but was not handled in user code

Additional information: Could not find default endpoint element that references contract ‘AASSOAPCalculatorService.CalculatorSoap’ in the ServiceModel client configuration section. This might be because no configuration file was found for your application, or because no endpoint element matching this contract could be found in the client element.

Contract? What contract? I know nothing about a contract. Nobody told me about any contract! What does Wikipedia say about SOAP?

Soap is the term for a salt of a fatty acid or for a variety of cleansing and lubricating products produced from such a substance.

Oops wrong soap…

SOAP provides the Messaging Protocol layer of a web services protocol stack for web services. It is an XML-based protocol consisting of three parts:

  • an envelope, which defines the message structure and how to process it

  • a set of encoding rules for expressing instances of application-defined datatypes

  • a convention for representing procedure calls and responses

The envelope is the contract. A data contract is an agreement between a service and a client that abstractly describes the data to be exchanged. That contract.

Consuming a SOAP service in Dynamics 365 for Finance and Operations (I promise this is the good one)

If we check the class library there’s a file called app.config:

In this file we can see the endpoint the DLL is using. This is fixed (hardcoded), and if there were a test endpoint and a production endpoint we’d have to change the address accordingly and keep two different DLLs, one for each endpoint. We can also see the data contract being used by the service, the one called AASSOAPCalculatorService.CalculatorSoap. Because #MSDyn365FO is a web-based ERP, we could solve this by adding the system.serviceModel node to the server’s web.config file, right? (app.config for desktop apps, web.config for web apps.) Yes, but this would be useless: we have no access to the production environment to do this, and it will be impossible on the sandbox Tier-2+ environments once the self-service environments start to roll out.

So, what do we do? Easy: ChannelFactory<T> to the rescue! ChannelFactory<T> allows us to create an instance of the factory for our service contract, and then creates a channel between the client and the service, the client being our class in D365 and the service being the endpoint (obviously).

Then we do the following:
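
The original code screenshot is missing. Since constructing generic types straight from X++ is awkward, one way to sketch it is a small helper on the C# class library side (the helper name is made up; CalculatorSoap is the contract generated by the service reference):

    using System.ServiceModel;

    namespace AASSOAPCalculatorService
    {
        // Hypothetical helper: builds the channel at runtime, so no
        // ServiceModel config section is needed anywhere
        public static class CalculatorChannelHelper
        {
            public static CalculatorSoap CreateChannel(string endpointUrl)
            {
                var binding = new BasicHttpBinding();
                var endpoint = new EndpointAddress(endpointUrl);

                var factory = new ChannelFactory<CalculatorSoap>(binding, endpoint);
                return factory.CreateChannel();
            }
        }
    }

From X++ we’d then call AASSOAPCalculatorService.CalculatorChannelHelper::CreateChannel(endpointUrl).Add(3, 6), with the endpoint URL coming from a parameter instead of being hardcoded.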

The BasicHttpBinding object can be a BasicHttpsBinding if the web service is running on HTTPS. The endpoint is the URL of the web service. Then we instantiate a service contract from our class with the binding and endpoint and create the channel. Now we can call the web service methods and…

It’s working! And it gets even better: if there are different endpoints for the test and production web services, we just have to parametrize them!

But really, don’t use SOAP services, go with the REST.

Do you want to become a better X++ developer?

I’ve been an X++ developer for almost 10 years; that’s 100% of my professional career, excluding internships. During these 10 years I’ve seen the product evolve and, in my opinion, the last three years with #MSDyn365FO have been the most exciting by far, as I’ve said several times.

The move from the notepad-like MorphX to Visual Studio, Azure DevOps and the asset upload and release tasks makes me feel like a real software developer. And this has been only the beginning of the journey: we’re now starting with test automation with RSAT and the ATL. We’ll (hopefully…) finally do testing!

X++ developer

And how can we be better X++ developers?

It takes time

Like learning anything else. You know nothing on day one, you learn things mainly by doing them, and with time you realize the ERP is huge and you know just a small portion of it. Keep learning. Time will pass and you’ll realize you still know just a small portion of Operations.

Love your job

This one might be hard sometimes, but be passionate about what you do. Find a company that helps you grow, and try having fun at work. It will be difficult at times, like during go-lives, but even in those moments there’s time for laughs. With passion the rest is easier.

Functional knowledge

Obviously developers need to know how the processes work from a functional point of view. In case of doubt, ask your functional colleagues; don’t waste time digging through the code trying to understand the functionality. After the functional explanation you’ll see the code more clearly.

I always say that programming in X++ is easy, the difficult part is knowing the business processes.

Learn other languages

Get outside X++. Working (or playing) with a different language can help you lose, or at least soften, the vices you may have picked up with AX.

Developers usually know more than one language, from previous jobs or pet projects. C# is obviously a good choice, because we can use .NET libraries in X++ code or create our own. Learn the syntax (easy), try foreach (I’d love to have it in X++), LINQ, etc.

I also used to think that, at some point in the future, X++ would be completely replaced by .NET/C#, so learning .NET was a good idea. But seeing the latest investments in X++, like SysDa or the ATL, I have my doubts in the mid term. Plus, X++’s data access layer is wonderful.

Explore Azure

Including DevOps. Luckily there’s no option not to use DevOps. But don’t use it just as a source control tool. It’s waaaay more than that.

Explore Azure, it’s huge and the solution to a problem can be there. Azure Functions, Logic Apps, Azure SQL, Service Bus (combined with Business Events, for example). It’s not AX by itself anymore; 365 comes with friends in the cloud.

Power Platform

After the last MBAS it’s crystal clear that Microsoft is investing a lot in the Power Platform. Flow, PowerApps, AI Builder… All these products can be integrated with MSDyn365FO.

A PowerApp can be used instead of a mobile workspace, or Flow can send emails when triggered by a Business Event or a CRUD operation.

Learn something about CRM and CDS, you’ll have to integrate them with FnO at some point, for sure.

Share and teach

For me teaching is reaaaaaaaaaaaaaaally difficult. I’m a terrible teacher; the things in my head are clear, but the link between my head and my mouth is broken, and I find it very hard to turn my thoughts into words. Writing things down helps me put them in order, because I can write and delete, and write again, and again 🙂

Share your knowledge, do internal training with your colleagues, be a speaker. I never thought about that until I started at Axazure, and when I was offered a speaking slot at Dynamics 365 Saturday my first thought was “Me? What can I tell that could be interesting to people?”. In the end you just need to pick a topic you know a bit (or nothing at all) about and expand your knowledge, or have stupid ideas and bring them to life!

These are just some ideas; there are lots of things that can be done to improve, but the most important is patience. Time and patience.

Override the default theme per company (proof of concept)

At this point I’m 99% sure almost all of us have been asked the “can we change the theme color to the one of our company/brand?” question. While this is unfortunately not possible, what we can do is define a different theme for each company.

This is just a proof of concept. I still haven’t managed to successfully change the theme when the DataArea is changed using the company list.

The standard

By default each user sets his desired theme in the user settings:

User info

If you check the SysUserInfo table you’ll find the Theme enum field, of type SysUserInfoTheme. This enum is not extensible, and this is one of the reasons we cannot add new colors (the other being that the class which handles the themes is not accessible).

The customer might ask us to set a fixed, different color/theme per company, to make sure users don’t mistake one company, or even one environment, for another.

Let’s do it

For this example I’ve decided to add an override on the Legal Entities form and set the new theme to be used there.

Add a new SysUserInfoTheme enum field to the CompanyInfo table:

SysUserInfoTheme

Then add the field to the OMLegalEntity form:

OMLegalEntity

We now have a list of the available themes. Let’s add the functionality.

If we do a metadata search for uses of the SysUserInfo.Theme field, we’ll find it’s read by the SysFormUtil class in the GetThemeDensityForCurrentUser method. We’ll extend this method in the following way:
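
Roughly like this (a sketch: I’m assuming the new CompanyInfo field is called Theme, and the exact signature should be taken from the real method):

    [ExtensionOf(classStr(SysFormUtil))]
    final class SysFormUtilAAS_Extension
    {
        public static SysUserInfoTheme GetThemeDensityForCurrentUser()
        {
            // Run the standard chain first, then override with the company value
            next GetThemeDensityForCurrentUser();

            return CompanyInfo::find().Theme;
        }
    }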

By returning our field’s value we make the system select the value from the CompanyInfo table instead of the one defined by the user. For example:

(Screenshots: the USMF, THMF and SAMF companies, each displaying a different theme.)

Different companies, different themes!

Now I only need to find a way to make this work when changing companies. I’ve tried with the lookup form which shows the available companies with no luck. Any ideas?

Using Azure Application Insights with MSDyn365FO

First of all… DISCLAIMER: think twice before using this on a productive environment. Then think again. And if you finally decide to use it, do it in the most cautious and light way.

Why does this deserve a disclaimer? Well, even though the docs state that system performance should not be impacted, I don’t really know its true impact. Plus, it’s an ERP, one where we don’t have access to the production environment (unless you’re on-prem) to confirm there’s no performance degradation. And Microsoft is probably already using it to collect data from the environments to show in LCS, and I don’t know whether this could interfere with that. A lot of I-don’t-knows.

Would I use it on production? YES. It will be really helpful in some cases.

With that said, what am I going to write about that needs a disclaimer? As the title says, about using Azure Application Insights in Microsoft Dynamics 365 for Finance and Operations. This post is just one of the “Have you seen that? Yeah, we should try it!” consequences of Juanan (also on Twitter, follow him!) and me talking. And the “that” this time was this post from Lane Swenka on AX Developer Connection. So nothing original here 🙂

Azure Application Insights

I spy!! Made by Cazapelusas

What’s Application Insights? As the documentation says:

Application Insights is an extensible Application Performance Management (APM) service for web developers on multiple platforms. Use it to monitor your blah web application. It will blah blah detect blaaah anomalies. It blah powerful blahblah tools to bleh blah blih and blah blah blaaaah. It’s blaaaaaaaah.

Mmmm… you better watch this video:

So much misery and sadness in the first 30 seconds…

Monitoring. That’s what it does and what it’s for. “LCS already does that!” OK, extra monitoring! Everybody loves extra, like on pizzas, unless it’s pineapple, of course.

Getting it to work

The first step will be to create an Application Insights resource on our Azure subscription. Regarding pricing: the first 5GB per month are free and data will be retained for 90 days. More details here.

Then we need the code. I’ll skip the details here because they’re perfectly explained in the link above (this one), just follow the steps. You basically need to create a DLL library that handles the events and sends the data to AAI, and use it from MSDyn365FO. In our version we’ve additionally added a trackTrace method to the C# library. Then just add a reference to the DLL in your MSDyn365FO Visual Studio project and it’s ready to use.

What can we measure?

And now the interesting part (I hope): page views, errors (or all infologs), batch executions, field value changes, and anything else you can extend to call our API methods.

For example, we can extend the FormDataUtil class from the forms engine. This class has several methods that are called from forms on different data source actions, like validating writes, deletes, field validations, etc. And also this one:

modifiedField in FormDataUtil

This runs after a form field’s value is modified. We’ll extend it to log which field is having its value changed, plus the old and new values. Like this:
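
A sketch only: the real modifiedField signature should be checked in FormDataUtil, and AASAppInsights::trackTrace stands in for our Application Insights DLL call:

    [ExtensionOf(classStr(FormDataUtil))]
    final class FormDataUtilAAS_Extension
    {
        public void modifiedField(FormDataSource _dataSource, FieldId _fieldId)
        {
            next modifiedField(_dataSource, _fieldId);

            Common record = _dataSource.cursor();

            // Log table, field, old and new values; AAI adds the user and timestamp
            AASAppInsights::trackTrace(strFmt('%1.%2 changed from "%3" to "%4"',
                tableId2Name(record.TableId),
                fieldId2Name(record.TableId, _fieldId),
                record.orig().(_fieldId),
                record.(_fieldId)));
        }
    }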

And because the Application Insights call also stores the user that triggered the value change, we’ve just got a new database log! Even better, a database log with no performance cost, because no extra data is generated on MSDyn365FO’s side. The only drawback is that it will only be called from forms, but it might be enough to monitor the usage of forms and counter the “no, I haven’t changed any parameter” 🙂

This is what we get on Azure’s Application Insights metrics explorer:

Azure Application Insights Custom Event
What do you mean I changed that?!

Yes you did, Admin! Oops, it’s me…

Custom events

We’re storing the AOS name too, and whether the call originated in a batch.

All the metrics from our events are displayed in Azure, and the data can later be shown in Power BI, if you feel like doing it.

With this example you can go on and add calls in the extended objects wherever you need them. Batches, integrations, critical processes, etc…

Again, please plan what you want to monitor before using this, and test it. Then test it again, especially on SAT environments with Azure SQL databases, which perform a bit differently than regular SQL Server ones.

And enjoy the data!

Generate number sequence values from REST services and OData

One of the options to integrate MSDyn365FO with external systems is using the data entities with REST services and OData. To use OData the entity must have its IsPublic property set to Yes:

Customers V3 entity

Otherwise, if it’s a standard entity, we’ll need to duplicate it, because it’s not possible to change the property’s value in an extension.

If we’re building an integration where an external system uses OData to create new records in the ERP, we can run into an issue when the record has a mandatory ID, as in the Customers V3 entity. If we check the Mandatory property of the CustomerAccount field, it’s set to Auto, inheriting its value from the CustTable, where it’s set to Yes.

In this case, if we try to create a customer without an account number, the service will fail, as can be seen in the Postman capture below:

Postman fail :(

Crystal clear error, the customer account field cannot be empty.

This doesn’t happen with the Vendors entity. “Hey! But the vendor account is mandatory in the VendTable!” someone may think. Correct, it is, but not in the entity, where it’s been overridden:

Vendors V2

To see how the standard solves this, we need to check the entity’s initValue method:

skipNumberSequenceCheck is one of the data methods of the Common class, a relative of skipDataMethods, skipDataSourceValidateWrite, skipAosValidation, etc. It always returns false unless we tell it otherwise by passing true to it earlier in the code.

The NumberSeqRecordFieldHandler class’s enableNumberSequenceControlForField method initializes the field we pass in the parameters with the next value from the number sequence we select. In this case it fills the vendor account field with the sequence set up in the vendor parameters (obviously).

So, doing the same as the standard does, we’re going to extend the entity and its initValue method:
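
Something along these lines (the entity’s AOT name and the parameters method are worth double-checking):

    [ExtensionOf(tableStr(CustCustomerV3Entity))]
    final class CustCustomerV3EntityAAS_Extension
    {
        public void initValue()
        {
            next initValue();

            if (!this.CustomerAccount)
            {
                // Skip the mandatory-ID check and fill the account
                // from the customer account number sequence instead
                this.skipNumberSequenceCheck(true);
                NumberSeqRecordFieldHandler::enableNumberSequenceControlForField(
                    this,
                    fieldNum(CustCustomerV3Entity, CustomerAccount),
                    CustParameters::numRefCustAccount());
            }
        }
    }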

Having done this, we’ll try again in Postman, this time deleting the CustomerAccount parameter from the body, and…

Customer created through a REST service and OData

Success! We’ve got a new customer! Created from an external system and using the number sequence from Dynamics 365.

This is no mystery; it just mimics what the standard does. As MSDyn365FO developers we must try to do that, always. Always… as long as we can, of course 🙂 Because even though partners always try to stick to the standard as much as possible, we all know that in the end there’ll be some customization done (hopefully, we’re developers!).

Dynamics 365 for Finance & Operations and Azure DevOps (part II)

You can read my complete guide on Microsoft Dynamics 365 for Finance & Operations and Azure DevOps.

In the first part of this post I wrote about the value of Azure DevOps and how to set it up in MSDyn365FO.

I want to start this second part with a little rant. As I said in the first part, those of us who have been working with AX for several years were used to not using version control. MSDyn365FO has taken us into uncharted territory, so it’s not uncommon for different teams to work in different ways, depending on their experience and what they’ve found along the way. There’s an obvious effort factor here: each team needs to invest some time to discover what works best for them regarding code, branching and methodologies. Many times this is based on experimentation and trial and error, and at the pace of some projects this turns out badly. And here’s where I’ve been missing some guidance from Microsoft (but maybe I’ve just not found it).

Regardless of this rant, the journey and all I’ve learnt has been, and I think will be, pretty fun 😉

Branching strategies

I want to make it clear in advance that I’m not an expert in managing code or Azure DevOps, at all. All I’ve written here is the product of experience (including bad experiences) from almost 3 years working with MSDyn365FO. In this article on branching strategies from the docs there’s more information regarding branching and links to articles from the DevOps team. And there’s even MORE info in the DevOps Rangers’ Library of tooling and guidance solutions!

The truth is that I’d love a FastTrack session about this and, I thought, it didn’t exist. EDIT: it looks like I definitely overlooked it, and there is a FastTrack session called Developer ALM which talks a bit about all this. Thanks to Dag Calafell (twitter) for pointing this out!

In the first part we learnt that the Main folder is created when deploying the build VM. The usual setup is that in an implementation project all development is done on that branch until go-live, and just before that a new dev branch is created. The code tree will look like this:

Branches after branching

From this moment on, the development VMs need to be mapped to this new development branch. This allows us to keep developing on the Dev branch and decide when changes are promoted to Main.

This branching strategy is really simple and will keep us mostly worry-free. In my previous job we went with a 3-branch strategy: Main, Test and Dev, merging from Dev to Test and from Test to Main. A terrible mistake. Having to maintain 2 sets of changesets is harder, and with version upgrades, dozens of pending changesets waiting to be merged and an ISV partner that sometimes would not help much, everything was kind of funny (“funny”). But I learnt a lot!

Anyway, just some advice: try to avoid having changesets pending to be merged for long. The number of merge conflicts that appear is directly proportional to the time a changeset has been waiting to be merged.

At this point, I cannot emphasize enough what I mean by usual. As I say, I wrote all of this based on my experience. Working for an ISV is obviously not the same as working for an implementation partner. An ISV has different needs: it has to maintain different code versions to support all its customers, and it doesn’t need to work in a Dev-Main manner. It could have one (or more) branches for each version. However, since the end of overlayering this isn’t necessary :). More ideas about this can be found in the article linked at the beginning of this post.

Builds

In the first part and an older post (Unresponsive builds in Azure DevOps) I explained a bit about builds, and we saw the default build definition generated when deploying the build machine:

Default build definition steps

This build definition has all the default steps active. We can disable (or remove) the steps we’re not going to use. For example, the testing steps can be removed if we have no unit tests, or the DB sync and report deployment steps too.

We can also create new build definitions from scratch; however, it’s easier to clone the default one and modify it for other branches or needs.

Since 8.1 all the X++ hotfixes are gone; updates are applied as a deployable package (binaries!). This implies that the Metadata folder will only contain our custom packages and models, no standard packages anymore. Up until 8.0, having a build definition that compiled and generated a DP with only our models was a good idea. That way we could have a deployable package ready in less time than compiling standard packages with hotfixes plus ours. Should we need to apply a hotfix, we’d just queue the default build pointing to the Main root; otherwise we’d just generate our packages. Using this strategy, we cut the DP generation time from 1h15m to 9m on one of our customers’ projects.

But that was in the past, and all this is outdated information. Right now I hope everybody is as close to 8.1 as possible because One Version is coming in April!

Another useful option is having a build definition that will only compile the code:

Continuous build definition

It may look a bit useless until you enable the continuous integration option:

DevOps continuous integration

Right after every developer’s check-in a build is queued and the code compiled. If there’s a compilation error, we’ll be notified about it. Of course, we all build our solutions before checking them in. Right?

And because we all know that “slow and steady wins the race”, but at some point during a project that’s just not possible, this kind of build definition can help us out, especially when merging code conflicts from a dev branch to Main. It lets us be 100% sure, when creating a DP for release to production, that it will build. I can tell you that having to do a release to prod in a hurry and seeing the Main build fail is not nice.

Somebody with far more experience and knowledge than me might think: wait, but this can also be done with…

Gated check-ins

What we accomplish with a gated check-in is that the build agent launches an automated compilation BEFORE committing the check-in. If it fails, the changeset is not committed until the errors are fixed and it’s checked in again.

This option might seem perfect for the merge check-ins to the Main branch. I’ve found some issues trying to use it, for example:

  • If multiple merge & check-ins from the same development are done and the first fails but the second doesn’t, you’ll still have pending merges to be done.
  • Issues with error notifications and pending code on dev VMs.
  • If many check-ins are made you’ll end up with lots of queued builds (and we only have one available agent per DevOps project).

I’m sure this probably has a solution, but I haven’t found it. And the CI option is working perfectly for us to validate code. As I’ve already said, all of this is the product of trial and error; we’ve learnt to use it while working with it.

Conclusions

I guess the biggest conclusion is that with MSDyn365FO we must use DevOps. It’s mandatory, there’s no other option. If there’s anyone out there not doing it, do it. Now. Review how you work, and let’s forget and not look back at how we used to work with AX; technically speaking, MSDyn365FO is a different product.

The truth is that MSDyn365FO has brought developers closer to a classic approach to software projects, like .NET or Java. But we’re still special. An ERP project has a lot of peculiarities, and not having to create a product from scratch, having a base that makes us follow a path, limits us in some aspects and in the usage of certain techniques or methodologies.

I hope these two posts about Azure DevOps can help somebody. And if anyone with more experience or better ideas wants to recommend anything, comments are open!

The mystery of the non-filtering query

There’s no mystery here but a misperception.

Recently, a colleague found a little issue when using an AOT query to feed a view that has a range dynamically filtered via a SysQueryRangeUtil method.

Recreating the issue

The query is pretty simple, only showing ledger transaction data from the GeneralJournalEntry and GeneralJournalAccountEntry tables. A range on the Ledger field, filtering on the current company, was added, as you can see in the pic below:

Query in Visual Studio

We created a new range method by extending the SysQueryRangeUtil class, using Ledger::current() to filter on the active company:
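
The extension is tiny; a sketch (the method’s name is then used as the range value in the query, e.g. (currentLedger())):

    [ExtensionOf(classStr(SysQueryRangeUtil))]
    final class SysQueryRangeUtilAAS_Extension
    {
        // Returns the current company's ledger RecId as a range value
        public static str currentLedger()
        {
            return int642str(Ledger::current());
        }
    }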

Then we used the query to feed data to the view, and added two fields just for testing purposes:

View in Visual Studio

Everything quite straightforward. Let’s check the view in the table browser…

Table browser

No data! And I can tell there’s data in here:

Records in SSMS

What’s going on here? If we use the query in a job (yeah, I know, Runnable Class…), the range filters the data as expected.

Psyduck is confused
Me in a tribute to “Psyduck is confused” by cazapelusas.com

So… let’s see the view design in SSMS:

View design in SSMS

Well, it definitely looks like something’s being filtered here. The range is working! Is it? Sure? Which company does that Ledger RecId belong to?

The DAT record

Qué haces besando a la lisiada!?
Why are you querying the damned DAT? (Sorry, this was funnier in Spanish.)

What’s going on?

There’s an easy and clear explanation, but one doesn’t think of it until facing this specific issue. While the view* is a data dictionary object, and when the project is synchronized the view is created in SQL Server, the query* is an X++ object that only exists within the application. The view is created in SQL, and we can see and query it in SSMS. The AOT query isn’t. It feeds the view and provides a data back end, but all the functionality added in X++ stays in 365, including the SysQueryRangeUtil filters.

The solution is an easy one: removing the range from the query and adding it to the form’s data source will do the trick (if this can be considered a trick…).

(*) Note: the links to the docs point to AX 2012 docs but should be valid.