This is something I really hate about so-called software engineering, and particularly the gang of idiots who make all sorts of wild a$$ claims about this technique or that methodology and have nothing to back up their claims. If something is ‘obviously’ better, then it should be easy to show a valid metric backing up the claim.
That said, there is a lot that can be suggested, complete with proof that the resulting code is actually better.
If there are any questions about what actually happens for a given piece of code, ILDASM is a great tool for finding out. In the case of short vs. Int16, there is absolutely no difference at all in the IL the compiler generates:
IL_005c: ldc.i4.1 <-- load the literal 1 onto the stack
IL_005d: stloc.s shrt <-- store it into a "short" variable called "shrt"
IL_005f: ldc.i4.1 <-- load the literal 1 onto the stack
IL_0060: stloc.s i16 <-- store it into an "Int16" variable called "i16"
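For reference, here's a minimal sketch of the kind of source that produces that IL (the names shrt and i16 match the listing above; the wrapper class is mine). short is nothing more than the C# keyword alias for System.Int16, so the two declarations compile to identical instructions:

class ShortVsInt16
{
    static void Main()
    {
        short shrt = 1;        // "short" is the C# keyword alias...
        System.Int16 i16 = 1;  // ...for System.Int16; both lines emit ldc.i4.1 / stloc.s

        // Use the locals so they aren't optimized away in Release builds.
        System.Console.WriteLine(shrt + i16);
    }
}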
As for the string.Empty vs. “” debate: while it’s technically true that “” creates a new string instance, it only does so once, because the runtime interns string literals.
That’s what I said: there will be only ONE unique instance, no matter how many times you use “”, because the string is “interned”. Once you get past that 16 bytes (or whatever) of overhead for the one instance, you’re home free.
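If you want to verify the interning yourself, here's a small sketch. ReferenceEquals compares object identity, not contents; literal-to-literal identity is required by the spec, while string.Empty sharing that same instance is simply how every runtime I've checked behaves:

using System;

class InternDemo
{
    static void Main()
    {
        string a = "";
        string b = "";

        // Every "" literal resolves to the same interned instance.
        Console.WriteLine(ReferenceEquals(a, b));            // True

        // On the runtimes I've checked, string.Empty is that same instance.
        Console.WriteLine(ReferenceEquals(a, string.Empty)); // True

        // IsInterned returns the pooled reference (non-null) if the string is interned.
        Console.WriteLine(string.IsInterned(a) != null);     // True
    }
}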
If you disassemble mscorlib, in fact, you’ll see this:
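(The snippet didn't make it into this post; quoting from memory of the shared-source CLI, the definition inside String looks roughly like this, with the exact form varying by framework version:)

// Inside System.String in mscorlib (approximate; varies by version):
public static readonly String Empty = "";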
Because coding conventions are often based on decisions that were correct at one point in time but are no longer applicable. I suspect that at some point an optimization was added to the compiler to substitute String.Empty for “”, and the convention has outlived it.
Unless a coding convention was issued on a stone tablet, you are allowed to question it.
[quote]Strings are immutable: the contents of a string object cannot be changed after the object is created, although the syntax makes it appear as if you can do this. For example, when you write this code, the compiler actually creates a new string object to hold the new sequence of characters, and that new object is assigned to b. The string “h” is then eligible for garbage collection.[/quote]
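(The code sample didn't survive the quoting; from the description, with the new string assigned to b and the original "h" orphaned, it is presumably the doc's usual example:)

string b = "h";
b += "ello";  // allocates a new string "hello" and assigns it to b; "h" is now unreferenced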