Decimal.MinValue costs more than you expect

Recently, during a profiling session, one method caught my eye. In the profiler's decompiled version it looked like this:

public static double dec2f(Decimal value)
{
    if (value == new Decimal(-1, -1, -1, true, (byte) 0))
        return double.MinValue;
    try
    {
        return (double) value;
    }
    catch
    {
        return double.MinValue;
    }
}

This is part of legacy code written years ago, and according to the profiler (which was in sampling mode) too much time was being wasted in this method. My suspicion was that the try-catch block prevents inlining, so I spent some time refreshing my memory on decimal-to-double conversion tricks. After ensuring this conversion can't throw, I removed the try-catch block.
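The "can't throw" part is easy to verify: decimal's entire range (about ±7.9E+28) is vanishingly small compared to double's (about ±1.8E+308), so the conversion can never overflow. A minimal sketch:

using System;

class CastCheck
{
    static void Main()
    {
        // decimal's full range fits comfortably within double,
        // so these casts cannot overflow or throw:
        Console.WriteLine((double)decimal.MaxValue); // ~7.92E+28
        Console.WriteLine((double)decimal.MinValue); // ~-7.92E+28
    }
}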

But when I looked again at the simplified version of the source code, I wondered why the decompiled version showed a strange Decimal constructor when the source contains a simple use of Decimal.MinValue:

public static double dec2f(decimal value)
{
    if (value == Decimal.MinValue)
    {
        return Double.MinValue;
    }

    return (double)value;
}

At first I thought it was a decompiler bug, but fortunately the decompiler also shows the IL version of the method (with the original C# interleaved as comments):

// public static double dec2f(Decimal value)
// {
//     if (value == new Decimal(-1, -1, -1, true, (byte) 0))
.maxstack 6
.locals init (
    [0] float64 V_0
)
IL_0000: ldarg.0      // 'value'
IL_0001: ldc.i4.m1
IL_0002: ldc.i4.m1
IL_0003: ldc.i4.m1
IL_0004: ldc.i4.1
IL_0005: ldc.i4.0
IL_0006: newobj       instance void [mscorlib]System.Decimal::.ctor(int32, int32, int32, bool, unsigned int8)
IL_000b: call         bool [mscorlib]System.Decimal::op_Equality(valuetype [mscorlib]System.Decimal, valuetype [mscorlib]System.Decimal)
IL_0010: brfalse.s    IL_001c
//         return double.MinValue;

Well, it looks like it's true: the Decimal constructor is involved EVERY time you use a 'constant' like Decimal.MinValue! So the next question arises: what is inside this constructor, and is there any difference between using Decimal.MinValue and defining your own static field like this:

static readonly decimal DecimalMinValue = Decimal.MinValue;

Well, the answer is that there is a difference, if every penny counts in your case:

                     Method |        Mean |    StdDev |
--------------------------- |------------ |---------- |
 CompareWithDecimalMinValue | 178.4235 ns | 0.4395 ns |
   CompareWithLocalMinValue |  98.0991 ns | 2.2803 ns |
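
For reference, the numbers above could come from a BenchmarkDotNet harness along these lines; the benchmark names match the table, but the tested values and the loop are my own assumptions:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class DecimalMinValueBenchmarks
{
    private static readonly decimal DecimalMinValue = decimal.MinValue;
    private readonly decimal[] values = { 0m, 1m, -1m, 123.456m, decimal.MinValue };

    [Benchmark]
    public int CompareWithDecimalMinValue()
    {
        int hits = 0;
        for (int i = 0; i < values.Length; i++)
            if (values[i] == decimal.MinValue) // newobj + op_Equality per iteration
                hits++;
        return hits;
    }

    [Benchmark]
    public int CompareWithLocalMinValue()
    {
        int hits = 0;
        for (int i = 0; i < values.Length; i++)
            if (values[i] == DecimalMinValue)  // plain static field load per iteration
                hits++;
        return hits;
    }
}

class Program
{
    static void Main() => BenchmarkRunner.Run<DecimalMinValueBenchmarks>();
}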

And the reason for this behavior is DecimalConstantAttribute, which specifies how to create the decimal value every time you use one of decimal's 'constants'.
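You can observe the attribute with a bit of reflection; a small sketch:

using System;
using System.Reflection;
using System.Runtime.CompilerServices;

class InspectDecimalConstant
{
    // Your own decimal 'const' gets the same treatment: the compiler emits
    // a static field annotated with [DecimalConstant(...)] and replaces
    // every use site with a call to the Decimal constructor.
    const decimal MyConstant = 42.5m;

    static void Main()
    {
        FieldInfo field = typeof(decimal).GetField(nameof(decimal.MinValue));
        var attr = field.GetCustomAttribute<DecimalConstantAttribute>();
        Console.WriteLine(attr.Value); // -79228162514264337593543950335
    }
}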

The question is this: I understand that the cost of calling Decimal's constructor every time you use one of these 'constants' affects almost nobody, but why is it implemented this way anyway? Is it because of the lack of the C++ notion of a copy constructor?

Jon Skeet answered:

If decimal.MinValue were only declared as a static readonly field, you wouldn't be able to use it as a compile-time constant elsewhere - e.g. for things like the default value of optional parameters.

I suppose the BCL team could provide both a constant and a read-only field, but that would confuse many people. If you're in the very rare situation where it makes a difference, introducing your own field looks like an entirely reasonable workaround.

Alternatively, the compiler could decide to just copy the value of the field in cases where it's feasible to do so instead of using the constructor. That would potentially end up accessing memory that wouldn't otherwise be touched - micro-tweaks like this can end up with unintended side-effects. I suspect that keeping the compiler's handling of this simple was deemed more important than trying to guess what's going to be most efficient in every scenario.
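
As an aside (my illustration, not part of the answer above): decimal.MinValue can appear anywhere the language demands a compile-time constant, while a static readonly field would be rejected in both positions below:

class ConstantUsage
{
    // Legal only because decimal.MinValue is a compile-time constant:
    const decimal Sentinel = decimal.MinValue;

    // Default values of optional parameters must also be constants:
    static void Process(decimal threshold = decimal.MinValue)
    {
    }
}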


See more on this question at Stack Overflow