At this point I’m 99% sure almost all of us have been asked the “can we change the theme color to the one of our company/brand?” question. While this is unfortunately not possible, what we can do is define a different theme for each company.
This is just a proof of concept. I still haven’t managed to successfully change the theme when the DataArea is changed using the company list.
By default each user sets his desired theme in the user settings:
If you check the SysUserInfo table you’ll find the Theme field, an enum of type SysUserInfoTheme. This enum is not extensible, which is one of the reasons we cannot add new colors (the other being that the class that handles the themes is not accessible).
The customer might ask us to set a fixed, different color/theme for each company, to make sure users don’t mix up companies or even environments.
Let’s do it
For this example I’ve decided to add an override on the Legal Entities form and set the new theme to be used there.
Add a new SysUserInfoTheme enum field to the CompanyInfo table:
Then add the field to the OMLegalEntity form:
We now have a list of the available themes. Let’s add the functionality.
If we do a metadata search of the SysUserInfo Theme field we’ll find it’s being used by the SysFormUtil class in the GetThemeDensityForCurrentUser method. We’ll extend this method in the following way:
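A minimal sketch of what that extension could look like (here CompanyTheme is the name I gave to the new CompanyInfo field, and the method’s exact signature/return type should be checked against your version before wrapping it):

```xpp
[ExtensionOf(classStr(SysFormUtil))]
final class SysFormUtilCompanyTheme_Extension
{
    // Sketch only: wraps the standard method and returns the theme defined on
    // the current company (CompanyTheme is the field added to CompanyInfo above).
    public static SysUserInfoTheme getThemeDensityForCurrentUser()
    {
        SysUserInfoTheme userTheme   = next getThemeDensityForCurrentUser();
        CompanyInfo      companyInfo = CompanyInfo::find();

        if (companyInfo.CompanyTheme)
        {
            return companyInfo.CompanyTheme;
        }

        return userTheme;
    }
}
```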
By returning our field’s value we make the system select the value from the CompanyInfo table instead of the one defined by the user. For example:
Different companies, different themes!
Now I only need to find a way to make this work when changing companies. I’ve tried with the lookup form which shows the available companies with no luck. Any ideas?
First of all… DISCLAIMER: think twice before using this on a production environment. Then think again. And if you finally decide to use it, do it in the most cautious and lightweight way possible.
Why does this deserve a disclaimer? Well, even though the docs state that system performance should not be impacted, I don’t really know its true impact. Plus it’s on an ERP, one where we don’t have access to the production environment (unless you’re On-Prem) to verify that there’s no performance degradation. And Microsoft is probably already using it to collect data from the environments to show in LCS, and I don’t know if it could interfere with that. A lot of I-don’t-knows.
Would I use it on production? YES. It will be really helpful in some cases.
What’s Application Insights? As the documentation says:
Application Insights is an extensible Application Performance Management (APM) service for web developers on multiple platforms. Use it to monitor your blah web application. It will blah blah detect blaaah anomalies. It blah powerful blahblah tools to bleh blah blih and blah blah blaaaah. It’s blaaaaaaaah.
Mmmm… you better watch this video:
So much misery and sadness in the first 30 seconds…
Monitoring. That’s what it does and what it’s for. “LCS already does that!“. OK, extra monitoring! Everybody loves extra, like on pizzas, unless it’s pineapple, of course.
Getting it to work
The first step will be to create an Application Insights resource on our Azure subscription. Regarding pricing: the first 5GB per month are free and data will be retained for 90 days. More details here.
Then we need the code. I’ll skip the details in this part because it’s perfectly detailed in the link above (this one), just follow the steps. You basically need to create a DLL library to handle the events and send data to Azure Application Insights, and use it from MSDyn365FO. In our version we’ve additionally added the trackTrace method to the C# library. Then just add a reference to the DLL in your MSDyn365FO Visual Studio project and it’s ready to use.
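How the call looks from X++ depends entirely on how you build your C# wrapper, but as a rough sketch (AppInsights.Logger and trackTrace are hypothetical names here, not the ones from the guide), it could be as simple as:

```xpp
// Sketch only: "AppInsights.Logger" stands for whatever class your C# wrapper
// exposes; adapt the namespace, class and method names to your own DLL.
AppInsights.Logger logger = new AppInsights.Logger();

// Send a trace message to Application Insights
logger.trackTrace(strFmt('Process %1 finished for user %2', 'MyProcess', curUserId()));
```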
What can we measure?
And now the interesting part (I hope). Page views, error captures (or all infologs), batch executions, field value changes, and anything else you can extend to call our API methods.
For example, we can extend the FormDataUtil class from the forms engine. This class has several methods that are called from forms in different actions on the data sources, like validating the write, delete, field validations, etc… And also this:
This will run after a form field value is modified. We’ll extend it to log which field has had its value changed, along with the old and new values. Like this:
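As a sketch (the exact FormDataUtil method name and signature may vary between versions, so check the standard code before wrapping it; AppInsights.Logger is the same hypothetical wrapper as above):

```xpp
[ExtensionOf(classStr(FormDataUtil))]
final class FormDataUtilAppInsights_Extension
{
    // Sketch only: assumed signature for the hook the form engine calls after a
    // field on a form data source is modified.
    public static void modifiedField(FormDataSource _dataSource, FieldId _fieldId)
    {
        next modifiedField(_dataSource, _fieldId);

        Common record   = _dataSource.cursor();
        Common original = record.orig();

        // Log table, field, old and new value; the user is stored by Application Insights.
        AppInsights.Logger logger = new AppInsights.Logger();
        logger.trackTrace(strFmt('%1.%2 changed from "%3" to "%4"',
            tableId2Name(record.TableId),
            fieldId2Name(record.TableId, _fieldId),
            original.(_fieldId),
            record.(_fieldId)));
    }
}
```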
And because the Application Insights call will also store the user that triggered the value change, we just got a new database log! Even better, we got a new database log that has no performance consequences because there’s no extra data to be generated on MSDyn365FO’s side. The only drawback in this is that it will only be called from forms, but it might be enough to monitor the usage of forms and counter the “no, I haven’t changed any parameter” 🙂
This is what we get on Azure’s Application Insights metrics explorer:
Yes you did, Admin! Ooops it’s me…
We’re storing the AOS name too, and whether the call originated in a batch.
All the metrics from our events will show up in Azure, and the data can later be displayed in Power BI, if you feel like doing it.
With this example you can go on and add calls to the extended objects where you need it. Batches, integrations, critical processes, etc…
Again, please plan what you want to monitor before using this and test it. Then test it again, especially on SAT environments with Azure SQL databases, which perform a bit differently from the regular SQL Server ones.
One of the options to integrate MSDyn365FO with external systems is using the data entities with REST services and OData. To use OData the entity must have its IsPublic property set to Yes:
Otherwise, if it’s a standard entity, we’ll need to duplicate it because it’s not possible to change the property value in an extension.
If we’re doing an integration with an external system using OData to create new records in the ERP, we can run into an issue when the record has a mandatory ID, as we can see in the Customers V3 entity. If we check the Mandatory property of the CustomerAccount field, it’s set to Auto, taking the value from the CustTable, where it’s set to Yes.
In this case, if we try to create a customer without an account number the service will fail, as can be seen in the Postman capture below:
Crystal clear error, the customer account field cannot be empty.
This doesn’t happen with the Vendors entity. “Hey! But the vendor account is mandatory in the VendTable!” someone may think. Correct, it is, but not in the entity, where it’s been overridden:
To see how the standard solves this we need to check the entity initValue method:
skipNumberSequenceCheck is one of the skip data methods from the Common class, a relative of skipDataMethods, skipDataSourceValidateWrite, skipAosValidation, etc… It will always return false unless we tell it otherwise by passing true earlier in the code.
The NumberSeqRecordFieldHandler class’s enableNumberSequenceControlForField method will initialize the field we pass in the parameters with the next value from the sequence we select. In this case it’s filling the vendor account field with the sequence set in the vendor parameters (obviously).
So, doing the same as the standard does, we’re going to extend the entity and the initValue method:
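A sketch of that extension, mirroring the vendor entity (the number sequence reference and the enableNumberSequenceControlForField arguments shown here are what I’d expect them to be; double-check them against the standard vendor code in your version):

```xpp
[ExtensionOf(tableStr(CustCustomerV3Entity))]
final class CustCustomerV3Entity_Extension
{
    public void initValue()
    {
        next initValue();

        // Skip the number sequence check, as the standard vendor entity does...
        this.skipNumberSequenceCheck(true);

        // ...and let the framework fill CustomerAccount with the next value
        // from the customer account number sequence.
        NumberSeqRecordFieldHandler::enableNumberSequenceControlForField(
            this,
            fieldNum(CustCustomerV3Entity, CustomerAccount),
            CustParameters::numRefCustAccount().NumberSequenceId);
    }
}
```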
Having done this we’ll try again in Postman, this time deleting the CustomerAccount parameter from the body, and…
Success! We’ve got a new customer! Created from an external system and using the number sequence from Dynamics 365.
This is no mystery, it just mimics what the standard does. As MSDyn365FO developers we must try to do that, always. Always… as long as we can, of course 🙂 Because even though partners always try to apply the standard as much as possible, we all know that in the end there’ll be some customization done (hopefully, we’re developers!).
I want to start this second part with a little rant. As I said in the first part, those who have been working with AX for several years were used to not using version-control systems. MSDyn365FO has taken us to uncharted territory, so it’s not uncommon for different teams to work in different ways, depending on their experience and what they’ve found along the way. There’s an obvious interest factor here: each team needs to invest some time to discover what works best for them regarding code, branching and methodologies. Many times this is based on experimentation and trial and error, and with the pace of some projects this can turn out badly. And here’s where I’ve been missing some guidance from Microsoft (but maybe I’ve just not found it).
Regardless of this rant, the journey and all I’ve learnt has been, and I think will be, pretty fun 😉
The truth is that I’d love a FastTrack session about this and, I thought, it didn’t exist. EDIT: it looks like I definitely overlooked it, and there is a FastTrack session called Developer ALM which talks a bit about all this. Thanks to Dag Calafell (twitter) for pointing this out!
In the first part we learnt that the Main folder is created when deploying the Build VM. Usually, in an implementation project, all development is done on that branch until go-live, and just before that a new dev branch is created. The code tree will look like this:
From this moment on, the development VMs need to be mapped to this new development branch. This allows us to keep developing on the Dev branch and decide when the changes are promoted to Main.
This branching strategy is really simple and will keep us mostly worry-free. In my previous job we went with a three-branch strategy, Main, Test and Dev, merging from Dev to Test and from Test to Main. A terrible mistake. Having to maintain two sets of changesets is harder, and with version upgrades, dozens of pending changesets waiting to be merged and an ISV partner that sometimes would not help much, everything was kind of funny (“funny”). But I learnt a lot!
Anyway, just some advice: try not to leave changesets waiting to be merged for long. The amount of merge conflicts that will appear is directly proportional to the time the changeset has been waiting to be merged.
At this point, I cannot emphasize enough what I mean by normal. As I said, I wrote all of this based on my experience. Working for an ISV is obviously not the same as working for an implementation partner. An ISV has different needs: it has to maintain different code versions to support all its customers, and it doesn’t need to work in a Dev-Main manner. It could have one (or more) branches for each version. However, since the end of overlayering this is not necessary :). More ideas about this can be found in the article linked at the beginning of this post.
This build definition has all the default steps active. We can disable (or remove) all the steps we’re not going to use. For example, the testing steps can be removed if we have no unit testing. Or the DB sync and report deployment too.
We can also create new build definitions from scratch; however, it’s easier to clone the default one and adapt it to other branches or needs.
Since 8.1 all the X++ hotfixes are gone; the updates are applied in a deployable package (binaries!). This implies that the Metadata folder will only contain our custom packages and models, no standard packages anymore. Up until 8.0, having a build definition that compiled and generated a DP with only our models was a good idea. This way we could have a deployable package ready in less time than if we had to compile the standard packages with hotfixes plus ours. Should we need to apply a hotfix, we’d just queue the default build pointing to the Main root; otherwise we’d just generate our packages. Using this strategy, we reduced the DP generation time from 1h15m to 9m in one of our customers’ projects.
But that was in the past, and all this is outdated information. Right now I hope everybody is as close to 8.1 as possible because One Version is coming in April!
Another useful option is having a build definition that will only compile the code:
It may look a bit useless until you enable the continuous integration option:
Right after every developer’s check-in a build will be queued, and the code compiled. In case there’s a compilation error we’ll be notified about it. Of course, we all build the solutions before checking them in. Right?
And because we all know that “slow and steady wins the race”, but at some point during a project that’s not possible, this kind of build definition can help us out. Especially when merging code conflicts from a dev branch into Main, this will allow us to be 100% sure, when creating a DP for release to production, that it will work. I can tell you that having to do a release to prod in a hurry and seeing the Main build fail is not nice.
Somebody with far more experience and knowledge than me might think, “Wait, but this can also be done with…”
What we accomplish with a gated check-in is that the build agent launches an automated compilation BEFORE committing the check-in. If it fails, the changeset is not committed until the errors are fixed and the code is checked in again.
This option might seem perfect for the merge check-ins to the Main branch. I’ve found some issues trying to use it, for example:
If multiple merge & check-ins from the same development are done and the first fails but the second doesn’t, you’ll still have pending merges to be done.
Issues with error notifications and pending code on dev VMs.
If many check-ins are made you’ll end up with lots of queued builds (and we only have one available agent per DevOps project).
I’m sure this probably has a solution, but I haven’t found it. And I think the CI option is working perfectly for us to validate code. As I’ve already said, all of this is the product of trial and error; we’ve learnt to use it while working with it.
I guess the biggest conclusion is that with MSDyn365FO we must use DevOps. It’s mandatory, there’s no other option. If there’s anyone out there not doing it, do it. Now. Review how you work, forget how we used to work with AX and don’t look back: technically speaking, MSDyn365FO is a different product.
Truth is that MSDyn365FO has taken developers to a more classic approach to software projects, like .NET or Java. But we’re still special. An ERP project has a lot of peculiarities, and not having to create a product from scratch, having a base that makes us follow a path, limits us in some aspects, including the usage of certain techniques or methodologies.
I hope these two posts about Azure DevOps can help somebody. And if anyone with more experience or better ideas wants to recommend anything, comments are open!
Recently, a colleague found a little issue when using an AOT query to feed a view with a range dynamically filtered using a SysQueryRangeUtil method.
Recreating the issue
The query is pretty simple, only showing ledger transaction data from the GeneralJournalEntry and GeneralJournalAccountEntry tables. A range in the Ledger field from the current company was added as you can see in the pic below:
We created a new range method by extending the SysQueryRangeUtil class, using Ledger::current() to filter by the active company.
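The range method itself is just a static method added through a class extension, something along these lines (currentLedger is a name I’m using for illustration; in the query range it would be referenced as (currentLedger())):

```xpp
[ExtensionOf(classStr(SysQueryRangeUtil))]
final class SysQueryRangeUtilLedger_Extension
{
    // Returns the current company's Ledger RecId as a string so it can be
    // used as a dynamic query range value.
    public static str currentLedger()
    {
        return int642Str(Ledger::current());
    }
}
```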
Then we used the query to feed data to the view and added two fields just for testing purposes:
Everything quite straightforward. Let’s check the view in the table browser…
No data! And I can tell there’s data in here:
What’s going on here? If we use the query in a job (yeah, I know, Runnable Class…), the range filters the data as expected.
So… let’s see the view design in SSMS:
Well, it definitely looks like something’s being filtered here. The range is working! Is it? Sure? Which company does that Ledger table RecId correspond to?
What’s going on?
There’s an easy and clear explanation, but one doesn’t think of it until facing this specific issue. While the view* is a Data Dictionary object, and when the project is synchronized the view is created in SQL Server, the query* is an X++ object and only exists within the application. The view is created in SQL and we can see and query it in SSMS. The AOT query isn’t. It feeds the view and provides a data back end, but all the X++ added functionality stays in 365, including the SysQueryRangeUtil filters.
The solution is an easy one. Removing the range in the query and adding it in the form data source will do the trick (if this can be considered a trick…).
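For example, a minimal sketch of adding the range in the form data source’s init method instead (LedgerTransView and its Ledger field are illustrative names, use your own view and field):

```xpp
// In the init method of the form data source that uses the view
public void init()
{
    super();

    this.queryBuildDataSource()
        .addRange(fieldNum(LedgerTransView, Ledger))
        .value(queryValue(Ledger::current()));
}
```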
(*) Note: the links to the docs point to AX 2012 docs but should be valid.