melgaard.nu https://melgaard.nu Blog about Dynamics 365, Power Platform, and everything in between Tue, 01 Jul 2025 11:47:43 +0000 en-US

X++ Syntactic Sugar – Public Interfaces and the Dangers of Refactoring Code https://melgaard.nu/xppsugar-public-interfaces/ Mon, 02 Sep 2024 19:25:19 +0000 https://melgaard.nu/?p=335

One of the features included in the latest release of the D365 Admin Toolkit (1.7.0.0 as of writing) was my feature that emits telemetry messages when adding or removing the SysAdmin role for users. Microsoft recently released a new way of creating telemetry messages with X++ that makes it a lot easier to add attributes to the telemetry messages, so I thought I would use it to provide an implementation example. More about the new telemetry interface coming soon…

Johan Persson suddenly began to receive strange errors when adding system administrator roles to users after updating to 10.0.41.

Method not found: 'Dynamics.AX.Application.SysApplicationInsightsEventTelemetry Dynamics.AX.Application.SysApplicationInsightsEventTelemetry.addProperty(Dynamics.AX.Application.SysApplicationInsightsProperty)'.

Looking at the changes, we quickly identified that my change had caused the error.
Link to issue here if you want to tag along.

Okay, we all make mistakes… So, I asked him to turn off the feature. No change!

Deep Dive

Okay, that kind of confused me. So, I spun up a new onebox environment to test it out. As expected, after updating to 10.0.41, it began to throw errors.
Expecting some interface changes or removed methods, I started a compilation of the Admin Toolkit.
No compilation errors!
I added the system administrator role to the same user again, and there was no error! It could find the addProperty method.

That confused me a bit, so I looked at the telemetry code.
The telemetry-related code in the preview version of 10.0.41 had been refactored! SysApplicationInsightsEventTelemetry now extends a new class, SysApplicationInsightsTelemetryContractBase, which contains addProperty. The addProperty method no longer exists in the SysApplicationInsightsEventTelemetry class itself.
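Sketched in simplified form, the shape of the change looks roughly like this (this is not the actual Microsoft source, just my reading of the refactoring):

// Before 10.0.41 (simplified): addProperty is declared on the event class,
// so compiled callers bind to SysApplicationInsightsEventTelemetry.addProperty.
class SysApplicationInsightsEventTelemetry
{
    public SysApplicationInsightsEventTelemetry addProperty(SysApplicationInsightsProperty _property)
    {
        // ...
        return this;
    }
}

// From 10.0.41 (simplified): addProperty moved to a new base class, so newly
// compiled callers bind to SysApplicationInsightsTelemetryContractBase.addProperty.
class SysApplicationInsightsTelemetryContractBase
{
    public SysApplicationInsightsTelemetryContractBase addProperty(SysApplicationInsightsProperty _property)
    {
        // ...
        return this;
    }
}

class SysApplicationInsightsEventTelemetry extends SysApplicationInsightsTelemetryContractBase
{
}

A binary compiled against the old hierarchy still carries a reference to the old method, which no longer exists; hence the “Method not found” error at runtime.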

It’s somewhat problematic that we have to release on 10.0.41, so I developed a quick work-around that further encapsulates the offending code, so it’s at least possible to disable the telemetry feature.
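The work-around builds on how .NET loads code: methods are JIT-compiled on first invocation, so a reference to a missing method only blows up when the method containing that reference is actually entered. A hedged sketch of the shape (the method and flag names are illustrative, not the actual Admin Toolkit code):

// The guard method contains no reference to the refactored telemetry API,
// so it always JIT-compiles cleanly, even on a mismatched platform build.
private void sendTelemetry(SysUserId _userId, str _eventName)
{
    if (!this.isTelemetryFeatureEnabled()) // illustrative feature flag check
    {
        return;
    }

    this.sendTelemetryInternal(_userId, _eventName);
}

// Only this method references the SysApplicationInsights* API; if the binary
// was built against an older platform, only this method fails to load, and
// only when the feature is enabled.
private void sendTelemetryInternal(SysUserId _userId, str _eventName)
{
    // ... telemetry calls live here ...
}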

A Little Bit of Decompilation

The refactoring explained why it broke, but my curiosity was still piqued. So I decided to disassemble the binaries and compare the latest release to a release built on 10.0.41.

I highly recommend ILSpy if you like to snoop around. Let me know if you want to learn how to use it for debugging.

So, looking at the old code, it’s calling addProperty on SysApplicationInsightsEventTelemetry.

SysApplicationInsightsTelemetryLogger.instance().trackEvent(SysApplicationInsightsEventTelemetry.newFromEventIdName(_id, _name).addProperty

Link to original code: 1.7.0.0 Release – sendTelemetryAssignedExpiredRevoked

After compiling it on 10.0.41, the call to addProperty now binds to the type SysApplicationInsightsTelemetryContractBase, via casts around the SysApplicationInsightsEventTelemetry object. No wonder it’s not happy…

SysApplicationInsightsTelemetryLogger.instance().trackEvent((SysApplicationInsightsEventTelemetry)((SysApplicationInsightsTelemetryContractBase)SysApplicationInsightsEventTelemetry.newFromEventIdName(_id, _name)).addProperty

Link to original code: 1.7.0.0 Release built on 10.0.41 – sendTelemetryAssignedExpiredRevoked

Conclusion

So the conclusion is that refactoring code can be dangerous, and no good deed goes unpunished… Even when a refactoring is architecturally sound, remember: if someone can call it, it’s a public interface.

Another conclusion is that it’s not enough to wrap the offending lines of code in a condition. If the call exists anywhere in the method, the method will still fail to load.

ISVs, partners, and customers need to remember that interfaces might change and that binary compatibility is not guaranteed. This applies not only when new versions of F&O are released, but also to your own code and ISV solutions.

]]>
X++ Syntactic Sugar – Method Chaining and Fluent Interfaces https://melgaard.nu/xppsugar-method-chaining/ Wed, 03 Jul 2024 06:30:00 +0000 https://melgaard.nu/?p=205

As you work with D365 F&O and X++, you will likely pick up design patterns and techniques from the standard application. One of the recently added features in D365 F&O is the SysDa framework. When you work with SysDa, you will likely notice how the query blocks are structured in a way that makes them read like natural written language (also known as a fluent interface).

In this post, we will examine method chaining, a part of the design pattern used by the SysDa framework.

What are Method Chaining and Fluent Interfaces?

Method chaining is a pattern in which multiple methods are called on the same object sequentially. Each method returns the object it was called on (this), allowing the developer to chain additional methods directly onto the previous call, which can lead to more readable code.
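A minimal, hypothetical X++ illustration of the pattern (not SysDa code) looks like this:

// Hypothetical builder: every mutator returns 'this', so calls can be chained.
internal final class QueryLineBuilder
{
    private str tableName;
    private str rangeValue;

    public QueryLineBuilder setTable(str _tableName)
    {
        tableName = _tableName;
        return this; // returning the object keeps the chain alive
    }

    public QueryLineBuilder setRange(str _rangeValue)
    {
        rangeValue = _rangeValue;
        return this;
    }
}

With that in place, new QueryLineBuilder().setTable('CustTable').setRange('4000..4999') reads left to right, like a sentence.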


Let’s look at the following piece of code:

internal final class FTBXExchangeRateProviderCBODK implements IExchangeRateProvider
{
    ******

    private void addExchangeRatesStatbank(
        ExchangeRateResponse    _response,
        str                     _apiEndpoint,
        str                     _exchangeRateType,
        FromDate                _fromDate,
        ToDate                  _toDate)
    {    
        var httpRequestMessage  = new HttpRequestMessage(new HttpMethod('post'), _apiEndpoint);

        ******

        var statBankRequest = new FTBXExchangeRateProviderCBODKStatbankRequestContract();
        statbankRequest.parmLang(statBankLanguage);
        statbankRequest.parmTable(statBankTable);
        statbankRequest.parmFormat(statBankFormat);
        statbankRequest.parmTimeOrder(statBankOrder);

        var currencyVariable = new FTBXExchangeRateProviderCBODKStatbankVariableContract();
        
        currencyVariable.parmCode(statBankVariableCurrency);
        currencyVariable.parmValues().addEnd('*');
        statbankRequest.parmVariables().addEnd(currencyVariable);

        var exchangeRateTypeVariable = new FTBXExchangeRateProviderCBODKStatbankVariableContract();
        exchangeRateTypeVariable.parmCode(statBankVariableType);
        exchangeRateTypeVariable.parmValues().addEnd(_exchangeRateType);
        statbankRequest.parmVariables().addEnd(exchangeRateTypeVariable);

        var exchangeRateDatesVariable = new FTBXExchangeRateProviderCBODKStatbankVariableContract();
        exchangeRateDatesVariable.parmCode(statBankVariableTime);

        ******

        httpRequestMessage.Content = new StringContent(
            FormJsonSerializer::serializeClass(statbankRequest),
            System.Text.Encoding::UTF8, 
            'application/json');
            
        *******
    }
}

Link to original code: FTBXExchangeRateProviderCBODK

It’s just an ordinary method. However, one issue with this piece of code is that keeping track of variable names can be a bit of a burden. This is where the Method Chaining pattern can be applied.

If you take the method chaining pattern and apply it to the example code, it will look something like this:

internal final class FTBXExchangeRateProviderCBODK implements IExchangeRateProvider
{
    ******

    private void addExchangeRatesStatbank(
        ExchangeRateResponse    _response,
        str                     _apiEndpoint,
        str                     _exchangeRateType,
        FromDate                _fromDate,
        ToDate                  _toDate)
    {    
        var httpRequestMessage  = new HttpRequestMessage(new HttpMethod('post'), _apiEndpoint);

        httpRequestMessage.Content = new StringContent(
            FormJsonSerializer::serializeClass(
                new FTBXExchangeRateProviderCBODKStatbankRequestContract()
                    .setLang(statBankLanguage)
                    .setTable(statBankTable)
                    .setFormat(statBankFormat)
                    .setTimeOrder(statBankOrder)
                    .addVariable(new FTBXExchangeRateProviderCBODKStatbankVariableContract()
                        .setCode(statBankVariableCurrency)
                        .addValue('*'))
                    .addVariable(new FTBXExchangeRateProviderCBODKStatbankVariableContract()
                        .setCode(statBankVariableType)
                        .addValue(_exchangeRateType))
                    .addVariable(new FTBXExchangeRateProviderCBODKStatbankVariableContract()
                        .setCode(statBankVariableTime)
                        .addDateRange(_fromDate, _toDate))),
            System.Text.Encoding::UTF8, 
            'application/json');
            
        *******
    }
}

Link to original code: FTBXExchangeRateProviderCBODK

Why Use Method Chaining?

As the example shows, there are multiple benefits of using method chaining in your code.

  • Improved readability: By chaining parm methods together, it’s possible to reduce the number of lines of X++ code, making the logic easier to follow
  • Cleaner code: The pattern helps avoid intermediate variables and keeps related operations together
  • Fluent interfaces: It creates a more fluent interface, making your code look more like natural language.

Implementing Method Chaining

To implement method chaining for a data contract, create a “setter” method for each parameter, approximately in this format.

    [DataMemberAttribute('Lang')]
    public str parmLang(str _lang = lang)
    {
        lang = _lang;
        return lang;
    }
    
    public FTBXExchangeRateProviderCBODKStatbankRequestContract setLang(str _lang)
    {
        lang = _lang;
        return this;
    }

Link to original code: FTBXExchangeRateProviderCBODKStatbankRequestContract

In this example, I have created a set method, setLang, in addition to the parm method parmLang.
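The same idea works for list-type members. A hedged sketch of an addVariable method like the one used in the chained example above, assuming the contract keeps its variables in the List returned by parmVariables():

// Add a variable to the contract and return the contract itself,
// so the chain can continue.
public FTBXExchangeRateProviderCBODKStatbankRequestContract addVariable(
    FTBXExchangeRateProviderCBODKStatbankVariableContract _variable)
{
    this.parmVariables().addEnd(_variable);
    return this;
}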

Best Practices

Here are a few best practices that I have gathered while applying this pattern.

  • Keep track of the chain length: Don’t make the chains way too long, since they can make debugging difficult.
  • Clarity: Each method should have a clear and specific purpose. Don’t overcomplicate things beyond measure.
  • Error handling: Make sure to keep error handling in check. While I haven’t seen any issues using this pattern, I can imagine error handling in long chains can cause headaches.

Conclusion

Method chaining is a valuable pattern in your tool belt when you find it difficult to create readable and concise code. Using this pattern will make your code look cleaner and more intuitive; it has for me, at least.

https://en.wikipedia.org/wiki/Method_chaining
https://en.wikipedia.org/wiki/Syntactic_sugar

https://en.wikipedia.org/wiki/Fluent_interface

]]>
Dataverse Custom Telemetry Not Exporting to Application Insights https://melgaard.nu/custom-telemetry-not-exporting/ Sun, 23 Jun 2024 18:12:12 +0000 https://melgaard.nu/?p=178

I am working intensively with Application Insights (I want to see and act on errors before my users do!), and when I write plug-ins, I like to emit custom telemetry and have Dataverse export it to Application Insights. You can read more about it in the Microsoft Learn article.
Also see this excellent blog post

One interesting issue I had with one of my newly created environments was that it refused to export custom telemetry. It exported everything else. I tried creating new exports, pointing at other Application Insights instances; nothing worked!

It turns out that the telemetry key in this environment was not set correctly for some odd reason. So, the solution was to set it manually by patching it on the Dataverse instance’s organization record.
It’s relatively straightforward.

1. Find the instrumentation key for the Application Insights instance you want to export to.

2. Find the GUID of the organization in the Power Platform Admin Center.

3. Update the column “telemeetryinstrumentationkey” on the “organization” record of the Dataverse environment to the instrumentation key of the Application Insights instance. And for good measure, set “orginsightsenabled” to true.

Note that “telemeetryinstrumentationkey” will appear as null if you look up the record.


You can see how to do it easily from the browser here:

You should now see your custom telemetry being exported to Application Insights.

]]>
Patching Dataverse Records Directly from the Browser Without Plugins https://melgaard.nu/patching-browser/ Sun, 23 Jun 2024 17:27:07 +0000 https://melgaard.nu/?p=161

Quite recently, I needed to do some data manipulation in a pinch in one of my heavily restricted Dataverse environments. The environment was only accessible from specific IPs, and the only tool I had available at the time was my browser—no executables allowed!
I could not find an example of how to do this directly from a browser, so it made a good topic for a blog post.
It is possible in the browser! This also means you can reuse the browser’s existing authentication, with no need to re-authenticate.

Method 1: Patching using the JavaScript console

I got this one from an old Microsoft support case.

This approach is universally applicable, regardless of the browser you’re using. It’s particularly useful in scenarios where you don’t have access to the built-in tools, such as the network console in Edge.

Your browser might be different, but here is the procedure in Edge.
Open the developer console by hitting Ctrl + Shift + I (or Settings – More tools – Developer tools).

Click on the Console tab. You might want to clear the console by clicking the clear button to the left of “top”.

Copy this JavaScript (remember to change the values!), paste it into the console, and hit Enter.

var queryPath = "*Your dataverse instance*/api/data/v9.2/*table you want to patch*(*record GUID*)";
var param = {};
param["Column to patch"] = "Column value";

var req = new XMLHttpRequest();

req.open("PATCH", queryPath, true);
req.setRequestHeader("OData-MaxVersion", "4.0");
req.setRequestHeader("OData-Version", "4.0");
req.setRequestHeader("Accept", "application/json");
req.setRequestHeader("Content-Type", "application/json; charset=utf-8");

req.onreadystatechange = function () {
    if (this.readyState === 4) {
        req.onreadystatechange = null;
        if (this.status === 204) {
            // PATCH returns 204 No Content on success
            alert("Success: " + this.status);
        }
        else {
            alert("Error: " + this.status + " " + this.responseText);
        }
    }
};
req.send(JSON.stringify(param));

It should look something like this if all goes well. I have detached the dev console; it makes it a bit easier to work with.

If you get a “204” alert, that means the request went through and you patched the record.

Method 2: Using the network console in Edge

This one is a slight twist on the first one. Instead, it uses the network console in Edge’s developer console to create the patch record.
Start by opening the Network console by clicking the Network console icon.

Click on the “Create Request” button. 
Change the method from “GET” to “PATCH”
Open “Body”, change the type to “Raw text”, and enter the columns and values as JSON.

{
    "column 1":"value",
    "Column 2":0.01
}


It should look something like this, and as in the first example, if it returns 204, it went through.

I hope it makes sense; otherwise, do let me know (-:

]]>
X++ Syntactic Sugar – Custom Types https://melgaard.nu/x-readability-custom-types/ Sun, 13 Aug 2023 22:35:39 +0000 https://melgaard.nu/?p=93

I have been away from X++ for a bit (Doing a bit of technical presales, leading an offshore Dynamics team, and Power Platform work), but now I’m back doing F&O development (-:

Being an end customer taught me much about high-availability systems and the importance of creating maintainable enterprise-scale business applications.

This is a fairly minor thing, but it is a clever way to increase the readability of X++ customizations.

Implementation

Have you ever thought about creating custom types in X++? I’m not talking about data contracts, but actual custom types that can be compared and used in collections such as sets and maps.

Custom types enable you to replace some of the containers in your code (no more remembering the ordering of fields or creating macros), implement abstractions for your types, and add functionality to them.
Let’s say you want to implement a specific way to pack or unpack a type to a custom format. You can do that if you implement your own custom type: simply add the methods directly to the type.
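As a hedged sketch of that idea, using the MyCustomType class shown later in this post, a pack/unpack pair could live directly on the type (the container layout here is purely illustrative):

// Pack the fields into a plain X++ container.
public container pack()
{
    str field1Value = field1; // System.String marshals implicitly to str
    str field2Value = field2;
    int field3Value = field3;

    return [field1Value, field2Value, field3Value];
}

// Rehydrate an instance from a packed container.
public static MyCustomType unpack(container _packed)
{
    str field1Value;
    str field2Value;
    int field3Value;

    [field1Value, field2Value, field3Value] = _packed;

    return new MyCustomType(field1Value, field2Value, field3Value);
}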

Custom types help make code self-documenting. Let’s say you want to work with couches.
A couch can have many parameters: color, size, fabric type, etc.
Now you want to calculate a price for a specific combination and store it in a map. This is how you could solve it:

var couchToCalculate = new Couch();
couchToCalculate.Color = "Green";
couchToCalculate.Leather = NoYes::Yes;
couchToCalculate.Persons = 4;

if (!calculatedCouches.exists(couchToCalculate))
{
    calculatedCouches.insert(couchToCalculate,
        CouchCalculate::calcCouch(couchToCalculate));
}

To get this to work, you need to override getHashCode and Equals from the Object type. These methods ensure that the type can work as a key in a collection. By default, these methods just return the class number and always true, respectively: not good if you want several different keys!

This is an example implementation:

public final class MyCustomType
{
    public System.String    field1;
    public System.String    field2;
    public System.Int32     field3;

    public void new(
        str _field1,
        str _field2,
        int _field3)
    {
        this.field1 = _field1;
        this.field2 = _field2;
        this.field3 = _field3;
    }

    public int getHashCode()
    {
        return field1.GetHashCode() ^ field2.GetHashCode() ^ field3;
    }

    public boolean Equals(System.Object _obj)
    {
        // Guard against being handed an unrelated type.
        if (!(_obj is MyCustomType))
        {
            return false;
        }

        MyCustomType contract = _obj;

        return this.field1 == contract.field1
            && this.field2 == contract.field2
            && this.field3 == contract.field3;
    }
}

In this example, I have made a simple type with two string fields and one integer field.
I’m using .NET types (System.String, System.Int32) because I want to use their hash codes in my getHashCode calculation and XOR them together. That’s not possible with the standard X++ types…

The uniqueness of getHashCode is not 100% required; it only matters for performance. You can think of it as the collection bucketing objects by their hash codes and then calling Equals on the objects within a bucket. See these remarks

I have not created parm() methods; I’m just making the variables public so I can use them as fields.

This is how you can use your custom type with a map, and lookups work exactly the same way.
You can of course also store the type in a variable, just as you would a string…

calculationResults.insert(
    new MyCustomType(
        SomeTable.SomeField,
        SomeTable.AnotherField,
        AThirdTable.AThirdField),
    1000 * collectionIterator);
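Lookups go through the same getHashCode/Equals pair, so any equivalent key finds the entry, not just the original object reference. A small sketch with illustrative values:

// A brand-new instance with the same field values matches the stored key.
MyCustomType key = new MyCustomType('SomeValue', 'AnotherValue', 42);

if (calculationResults.exists(key))
{
    info(strFmt("Calculated result: %1", calculationResults.lookup(key)));
}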

Performance in collections

One of my initial concerns was the performance of my custom type in collections, so I built a simple benchmark. All it does is create four different maps: two with strings as keys (one built using strFmt and one that concatenates strings), one with a container as the key, and lastly one with my custom type.
The benchmark is fairly simple. It adds 1,000,000 key/value pairs with random values to each map and, after that, performs 1,000 lookups, again with random values.
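The shape of the benchmark is roughly this (a simplified sketch of my test job, with deterministic values instead of random ones):

// Time 1,000,000 inserts into a Map keyed by the custom type.
var stopwatch = System.Diagnostics.Stopwatch::StartNew();

Map customTypeMap = new Map(Types::Class, Types::Integer);

for (int i = 1; i <= 1000000; i++)
{
    customTypeMap.insert(new MyCustomType(int2Str(i), int2Str(i + 7), i), i);
}

stopwatch.Stop();
info(strFmt("%1 inserts took %2 ms", customTypeMap.elements(), stopwatch.ElapsedMilliseconds));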

It turns out that using a custom type in this scenario is actually faster. I’m not surprised by the containers; they are notorious for being slow.

Conclusion

Custom types are a great way to improve the readability of your code and make it self-documenting. They are also somewhat faster if you work with an insanely high number of objects in your collections, but I doubt it will matter in real life.

You can download the example and benchmarking job here to play around: MyCustomType.zip

]]>
Missing Excel Templates in Inventory Journals https://melgaard.nu/missing-excel-templates-in-inventory-journals/ Sun, 28 Feb 2021 02:00:48 +0000 http://melgaard.nu/?p=57

I have now solved this at multiple customers, so I think it’s time to write a post about it.

Have you ever wondered why custom Excel templates do not show up on inventory journals?

The reason is most likely that your template uses an entity based on journal lines (InventJournalTrans) and not the journal header (InventJournalTable). The way D365 figures out which templates to show is by looking at the form’s primary data source. In these cases, the primary data source is InventJournalTable.

You could rewrite your template to use an entity that matches the form’s primary data source. But there is an alternative!

Did you know that the “Open in office” menu is 100% configurable? It is! Microsoft has pretty good documentation about how to interface with it: https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/office-integration/customize-open-office-menu

The way I fixed this on the counting form was to extend customizeMenuOptions to include the needed entities.

    public void customizeMenuOptions(OfficeMenuOptions _menuOptions)
    {
        next customizeMenuOptions(_menuOptions);
                
        OfficeMenuDataEntityOptions entityOptions  = OfficeMenuDataEntityOptions::construct(tableStr(InventInventoryCountingJournalLineEntity));
        entityOptions.dataSourceIdInternal(InventJournalTrans_ds.id());
        _menuOptions.dataEntityOptions().addEnd(entityOptions);
    }

The downside to this approach is that the filters D365 sets by default are way too restrictive. In this case, it sets LineNum as a filter.

Since the form InventJournalCount does not implement the OfficeITemplateCustomExporter interface, I had to set the filters using an event handler. Place it where it suits best, following your development best practices (you do follow a best practice, right?)

    [SubscribesTo(classStr(OfficeTemplateExportMenuItem), delegateStr(OfficeTemplateExportMenuItem, getInitialTemplateFilters))]
    public static void OfficeTemplateExportMenuItem_getInitialTemplateFilters(OfficeTemplateExportMenuItem _menuItem, FormRun _formRun, Map _initialFilters)
    {
        if  (_formRun.name() == formStr(InventJournalCount) && _menuItem.dataEntityName() ==  tableStr(InventInventoryCountingJournalLineEntity))
        {
            // Add an initial filter.
            ExportToExcelFilterTreeBuilder bldr = new ExportToExcelFilterTreeBuilder(_menuItem.dataEntityName());
            FormDataSource formDs = _formRun.dataSource(formDataSourceStr(InventJournalCount, InventJournalTable));
            InventJournalTable InventJournalTable = formDs.cursor();
            
            var filter = bldr.and(
                    bldr.areEqual(fieldStr(InventInventoryCountingJournalLineEntity, DataAreaId), curExt()),
                    bldr.areEqual(fieldStr(InventInventoryCountingJournalLineEntity, JournalNumber), InventJournalTable.JournalId));

            _initialFilters.insert(_menuItem.dataEntityName(), filter);
            _menuItem.applyRecordFilter(false);
        }
    }
]]>
Custom Wave Steps in D365 SCM https://melgaard.nu/custom-wave-steps-in-d365/ Sat, 29 Jun 2019 23:17:51 +0000 http://melgaard.nu/?p=36

I recently got asked to create a generic solution to optimize the picking routes generated by D365 FOE. We have a customer that has two categories of items: heavy items and light items.

They want to place the heavy items at the bottom of the pallets (for obvious reasons), and they can manage this in standard using grouping on the work template (create a sort in “Edit query”, and mark the field as “Group by this field” under work header breaks). The issue is that when they are finished picking all the heavy items, the system dictates that they must start at the lowest sorting code again (i.e., they must start picking at the beginning of the aisle again). This is essentially a waste of time, since they could simply pick the light items on the way back to the packing stations.

Instead of trying to customize the way D365 is sorting work (if at all possible), I wanted to create my own routine that optimizes pick work lines after they are created, using a set of configurable queries. My decision settled on a new wave step.

It is simple to create new wave steps. All you have to do is create a new class that extends “WHSCustomWaveStepMethod”, implement all abstract methods, and decorate it with the “WHSWaveTemplateTypeFactoryAttribute” attribute.

This also allows the solution to only affect certain wave templates and can easily be deactivated if the code causes issues.

I’d like to give special thanks to Blue Horseshoe. I read about this on their blog some time ago, but I cannot find the post anymore.

[WHSWaveTemplateTypeFactoryAttribute(WHSWaveTemplateType::Shipping)]
class WHSWavePostOptimizerWaveStepMethodDax extends WHSCustomWaveStepMethod
{
    public boolean process(WhsPostEngine _whsPostEngine)
    {
        WHSWorkTable    whsWorkTable;
        WHSWaveTable    whsWaveTable = _whsPostEngine.parmWaveTable();
        boolean         ret = true;

        while select whsWorkTable
            where whsWorkTable.WaveId == whsWaveTable.WaveId
            &&    whsWorkTable.WorkStatus == WHSWorkStatus::Open
        {
            ret = ret && this.processWork(whsWorkTable);
        }

        return ret;
    }

    private boolean processWork(WHSWorkTable whsWorkTable)
    {
        #OCCRETRYCOUNT
        boolean ret;

        try
        {
            ttsbegin;

            // ...Insert logic here...

            ttscommit;

            ret = true;
        }
        catch (Exception::Deadlock)
        {
            // retry on deadlock
            retry;
        }
        catch (Exception::UpdateConflict)
        {
            // try to resolve update conflict
            if (appl.ttsLevel() == 0)
            {
                if (xSession::currentRetryCount() >= #RetryNum)
                {
                    throw Exception::UpdateConflictNotRecovered;
                }
                else
                {
                    retry;
                }
            }
            else
            {
                throw Exception::UpdateConflict;
            }
        }
        catch(Exception::DuplicateKeyException)
        {
        // retry in case of a duplicate key conflict
            if (appl.ttsLevel() == 0)
            {
                if (xSession::currentRetryCount() >= #RetryNum)
                {
                    throw Exception::DuplicateKeyExceptionNotRecovered;
                }
                else
                {
                    retry;
                }
            }
            else
            {
                throw Exception::DuplicateKeyException;
            }
        }

        return ret;
    }

    public Name displayName()
    {
        return "Wave post optimizer";
    }

}

Under wave processing methods, click the “Regenerate methods” button. D365 should pick up your new method.

You can now add it to your wave template.

And that’s it! Can it get any easier than this?

]]>