What is the difference between using \u and \x when representing a character literal?

I have seen \u and \x used interchangeably in some places when representing a character literal.

For example, '\u00A9' == '\x00A9' evaluates to true.

Aren't we supposed to use only \u to represent a Unicode character? What is the point of having two ways to represent one?

Jon Skeet

I would strongly recommend only using \u, as it's much less error-prone.

\x consumes between one and four characters, so long as they're hex digits, whereas \u must always be followed by exactly four hex digits. From the C# 5 specification, section 2.4.4.4, this is the grammar for \x:

hexadecimal-escape-sequence:
  \x hex-digit hex-digit_opt hex-digit_opt hex-digit_opt
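
To illustrate the variable length, here's a quick check (the XEscapeDemo scaffolding is mine, purely for illustration):

using System;

class XEscapeDemo
{
    static void Main()
    {
        // \x stops at the first character that isn't a hex digit, so one
        // to four digits all work; each literal here is U+0009 (tab).
        Console.WriteLine('\x9' == '\u0009');    // True
        Console.WriteLine('\x09' == '\u0009');   // True
        Console.WriteLine('\x0009' == '\u0009'); // True
    }
}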

That variable length is exactly where it gets dangerous. For example:

string good = "Tab\x9Good compiler";
string bad =  "Tab\x9Bad compiler";

... look similar but are very different strings, as the latter is effectively "Tab" followed by U+9BAD followed by " compiler".
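
You can check this at runtime. A small sketch (the GoodBadDemo name is mine; the lengths and code point in the comments are what the runtime reports):

using System;

class GoodBadDemo
{
    static void Main()
    {
        string good = "Tab\x9Good compiler";
        string bad =  "Tab\x9Bad compiler";

        // In good, 'G' isn't a hex digit, so \x9 is just U+0009 (tab).
        // In bad, the escape consumes the four hex digits 9, B, a, d,
        // giving the single character U+9BAD.
        Console.WriteLine(good.Length); // 17: "Tab" + tab + "Good compiler"
        Console.WriteLine(bad.Length);  // 13: "Tab" + U+9BAD + " compiler"
        Console.WriteLine((int)bad[3]); // 39853, i.e. 0x9BAD
    }
}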

Personally I wish the C# language had never included \x, but there we go.

Note that there's also \U, which is always followed by 8 hex digits, primarily used for non-BMP characters.
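
For example, U+1D11E (MUSICAL SYMBOL G CLEF) lies outside the BMP, so it needs \U and becomes a surrogate pair in UTF-16. A minimal sketch (class name mine):

using System;

class NonBmpDemo
{
    static void Main()
    {
        // \U takes exactly 8 hex digits; this character can't be written
        // with \u because its code point exceeds U+FFFF.
        string clef = "\U0001D11E";
        Console.WriteLine(clef.Length);                   // 2 (surrogate pair)
        Console.WriteLine(char.IsSurrogatePair(clef, 0)); // True
    }
}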

There's one other big difference between \u and \x: the latter is only used in character and string literals, whereas \u can also be used in identifiers:

string x = "just a normal string";
Console.WriteLine(\u0078); // Still refers to the identifier x
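
Escapes can appear when you declare an identifier, too. A minimal sketch (the identifier and value are arbitrary):

using System;

class IdentifierEscapeDemo
{
    static void Main()
    {
        // \u0061 is 'a', so this declares an identifier named "abc"...
        int \u0061bc = 42;

        // ...which can then be referred to either way.
        Console.WriteLine(abc);      // 42
        Console.WriteLine(\u0061bc); // 42
    }
}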

