So three days ago I took a look at the new 10.0.25 features currently in the PEAP program, calmly scrolling through until I saw something about running custom scripts in production that read:
Run custom X++ scripts with zero downtime
You can imagine my face when I read this. At first I was confused, then surprised, and then confused again. Reading the description didn't make things any better:
This feature lets you upload and run deployable packages that contain custom X++ scripts without having to go through Microsoft Dynamics Lifecycle Services (LCS) or suspend your system. Therefore, you can correct minor data inconsistencies without causing any disruptive downtime.
Are we getting a way to run custom code in production without having to deploy it? Yes we are. As many people have said these past two days: “X++ jobs are back!”. And there are a lot of discussions going on about the custom scripts feature.
Custom Scripts feature
If you want to learn how to enable it on a dev VM and use it, you can read Marijan Huljić’s blog post: AX2012 jobs are back in D365 (kind of). And even MFP has made a video.
I’m going to skip that and make some remarks about its implementation and give my opinion.
How does it work?
If you’ve read Marijan’s blog post, you’ve already seen that we upload a Deployable Package to the environment. Several things happen between the upload and when you run the code:
- This DP is saved to the Azure blob storage account of the environment.
- When you test or run it, it’s unzipped to the DIXF temp path set in the DMF parameters.
- All files except the DLL assembly of the package are deleted.
- The process looks for a class with this exact signature:
public static void main(Args _args)
- Then, using .NET reflection, it runs the code contained in the main method. Reflection isn't uncommon in the standard codebase, but it still surprised me here. That said, I don't think there was any other way to do it, so it's logical.
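Putting those steps together, all the feature needs from our side is a class whose main method matches the signature above. A minimal sketch of such a class (the class name and the message are my own invention, not taken from the feature's documentation):

```xpp
/// <summary>
/// Minimal example of a class the Custom scripts feature can execute.
/// The only hard requirement is a method with the exact signature below;
/// the package's DLL is extracted and the method is invoked via reflection.
/// </summary>
internal final class MyCustomScript
{
    public static void main(Args _args)
    {
        // Any X++ code can go here; it runs in the context of the environment.
        info("Custom script executed");
    }
}
```

You would build this into a deployable package as usual and upload that DP through the feature's form instead of deploying it through LCS.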
And some findings:
- There’s no code deployed to the environment.
- You can upload a DP of a model/package that already exists in prod.
- A DP can only contain one class with a main method, which makes the scenario in the previous point harder in practice.
- Before running the code it needs to be approved, tested, and the test approved. Go read Marijan’s blog post if you still haven’t.
- If you upload a DP with two classes with a main method, it will cancel the upload.
- Once you've run a job, it can't be run again. But you can upload the same DP again.
Should we use this feature?
I've got a lot of mixed feelings about the custom scripts feature. Its main appeal is that it lets us fix things pretty quickly, IN PRODUCTION. And for me that's also its main issue.
I get that some companies operate on a 24/7 basis, and unplanned maintenance in production disrupts the normal operation of the business. To me, this is one of the few use cases where the feature would be 100% justified.
So what's my answer to the question that heads this section? Yes, of course, if you don't have any other alternative.
There are several scenarios where we might use this. For example, simple data fixes. But if it's just a fix, why rush? Can't it wait for a planned environment update that includes the fix?
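For illustration, the kind of "simple data fix" I mean might look like the sketch below. The table is standard, but the scenario, field values, and class name are entirely made up:

```xpp
/// <summary>
/// Hypothetical data fix: customers were created with a wrong
/// customer group and need to be moved to the correct one.
/// </summary>
internal final class FixCustomerGroupScript
{
    public static void main(Args _args)
    {
        CustTable custTable;

        ttsbegin;
        // Set-based update: move every customer from the (made-up)
        // wrong group '99' to the intended group '30'.
        update_recordset custTable
            setting CustGroup = '30'
            where custTable.CustGroup == '99';
        ttscommit;

        info("Customer group fix completed.");
    }
}
```

Exactly because a script like this is so easy to write, it's also easy to run it against the wrong data, which is the point of the rest of this section.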
And what if the issue is very urgent because it’s stopping the business? If it’s already stopping the business, then what’s the problem in doing an unexpected release?
We've survived six years without this option, and I promise that after two months working with AX7 I wasn't missing direct DB access or being able to run code in production as we did in older versions. OK, maybe it was three months.
How would I use it?
My first step would be trying not to use it.
But if I had to use it and run code in prod, I would do something first: deploy the code to a sandbox environment and run it there.
Please, please, please, if you ever run custom scripts in prod, do not trust what happens in a dev box. Do not trust how it performs there. Avoid developing, testing on dev, and uploading straight to production. And this applies to any customization.
Also, make use of the approval step: don't log in with the admin user to approve your own script. Get the customer to test it and sign it off in a sandbox environment before the real run. And then get the customer to test it again in prod.
And before doing it, make sure that the issue that you want to solve with this cannot be solved in any other way: DMF, OData, Power Automate or even a nightly update. And be VERY careful.
2 Comments
The only reactions I've personally seen so far to this new feature were excitement and joy, so I'm glad to now know someone who keeps a level head and suggests others be careful and attentive while using this thing. Thank you for that. Also, it makes me just a little bit (tiny) angry that this exact feature is being released instead of something else: I don't know, maybe something more appropriate, less hacky, more stable; something that helps us resist using this feature in the first place. Honestly, it feels a bit like a temporary way to please someone. What I'd love to see instead is near-zero downtime when servicing, more proper servicing, more attention to LBD environment servicing, and so on. Anyway, I know I'm keeping it abstract, and maybe it's my angry side speaking, which is not that rational, so we'll see. Just wanted to let out a bit of frustration :D. Thanks again for the article!
I've seen plenty of skepticism, but yes, a lot of excitement and joy too… Well, that depends on each person's point of view, I guess.
I would've loved to see zero-downtime deployments as an alternative to this; it's something we've been hearing about for the last three years, and the only progress we've seen is on system updates.
And about LBD, I don't think MS will invest in it as much as in the cloud version, since it's being retired at the end of 2027.