Coding usage and misusage: please, let's talk about it

Well, in my initial post here, I opened the door for ALL to provide examples of how things should be done and what the right way to structure and name your code would be.

I wrote explicitly:

PS: I wasn't starting a rant, only a discussion about this subject.

I was just teasing :slight_smile: If you look at my code on codeshare, you'll see some great examples of your convention :slight_smile:

I even force myself to add XML comments to all public members, and I always treat warnings as errors.


OK. I apologize. A discussion from a strong point of view is a better description.

I think we should stop commenting on your intent and discuss the subject.
That would be 1000 times more profitable. :slight_smile:

I have the utmost respect for developers who force themselves to write that damn documentation, and I do the same even for the simplest function.

Absolutely, this was meant to be constructive, so help me out and make it constructive.

Personally I like Wouter's idea of making a series of example posts. I did the same thing last year when I was working on NETMF launch applications.

I am all for examples, and I am very much willing to provide some, but I won't be able to cover everything; that is something I need the community's help to build.

Since this IS for learning purposes, I believe the examples should be practical, not theoretical scenarios.

Perhaps the examples should be put on the wiki.

This is something I really hate about so-called software engineering, and particularly the gang of idiots who make all sorts of wild a$$ claims about this technique or that methodology and have nothing to back up their claims. If something is ‘obviously’ better, then it should be easy to show a valid metric backing up the claim.

That said, there is a lot that can be suggested, complete with proof that it produces better code.

If there are any questions about what actually happens for a given piece of code, ILDASM is a great tool to find out. In the case of short vs Int16, there is absolutely no difference at all in the IL code generated by the compiler:

IL_005c:  ldc.i4.1    <-- load the literal 1 onto the stack
IL_005d:  stloc.s    shrt    <-- store it into a "short" variable called "shrt"
IL_005f:  ldc.i4.1    <-- load the literal 1 onto the stack
IL_0060:  stloc.s    i16    <-- store it into an "Int16" variable called "i16"
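C# source along the following lines would produce IL like that shown above (a sketch; the variable names `shrt` and `i16` are chosen to match the IL listing). Since `short` is simply the C# keyword alias for `System.Int16`, the two declarations are indistinguishable to the compiler:

```csharp
using System;

class ShortVsInt16
{
    static void Main()
    {
        short shrt = 1;  // C# keyword
        Int16 i16 = 1;   // the underlying BCL type; same thing

        // "short" is a compile-time alias for System.Int16,
        // so the two names refer to literally the same type.
        Console.WriteLine(typeof(short) == typeof(Int16)); // True
        Console.WriteLine(shrt == i16);                    // True
    }
}
```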

As for the string.Empty vs “” debate: while it's technically true that “” creates a new string instance, it only does so once, because the CLI interns string literals.

The relevant IL is this:

IL_004a:  ldstr      ""
IL_004f:  stloc.1
IL_0050:  ldstr      ""
IL_0055:  stloc.2
IL_0056:  ldsfld     string [mscorlib]System.String::Empty
IL_005b:  stloc.3

It could be that the “” option is faster on NETMF (someone ought to do a microbenchmark), because it saves you the static field lookup.

It probably is faster to use “” instead of string.Empty, but it will consume more space in memory, since there will be unique instances that contain the same value.

Either you go for speed or you go for low memory usage; you simply can't have both at the same time.

That's what I said: it will be only ONE unique instance, no matter how many times you use “”, because the string is “interned”. Once you get over that 16 bytes (or whatever) of overhead for the one instance, you're home free.
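A quick way to see the interning in action (a sketch; note the second check reflects desktop CLR behavior, where String.Empty is initialized from the interned “” literal — NETMF may behave differently):

```csharp
using System;

class InternDemo
{
    static void Main()
    {
        string a = "";
        string b = "";

        // Every "" literal in the assembly resolves to the same
        // interned instance, so the two are reference-equal.
        Console.WriteLine(ReferenceEquals(a, b));            // True

        // On the desktop CLR, String.Empty is assigned from the
        // interned "" in String's static constructor, so this is
        // True there as well.
        Console.WriteLine(ReferenceEquals(a, string.Empty)); // True
    }
}
```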

If you disassemble mscorlib, in fact, you’ll see this:

.method private hidebysig specialname rtspecialname static void  .cctor() cil managed
{
  // Code size       11 (0xb)
  .maxstack  8
  IL_0000:  ldstr      ""
  IL_0005:  stsfld     string System.String::Empty
  IL_000a:  ret
} // end of method String::.cctor

So, in the end, you’ve already spent the overhead for “”, meaning you save nothing by using String.Empty.


I am not sure why the use of “” is not recommended by almost any post I read about this subject, when it apparently is the same.

Because coding conventions are often based upon decisions that were correct at one point in time but are no longer applicable. I suspect at one point, an optimization was added to the compiler to use String.Empty for “”.

Unless a coding convention was issued on a stone tablet, you are allowed to question them.


Well, that is also my conclusion regarding why using string.Empty is recommended.

Even if there have been optimizations, it still makes more sense to use string.Empty; it simply is more readable.

And yes, coding conventions ARE guidelines, but as with all guidelines, they are based on best practices and understandability.

I’ve always thought (that being the operative word) that String.Empty was better than “” simply because of strings being immutable, thus saving some GC work.

Granted, the savings are likely minimal.

I think you are absolutely right here, since I remember something about strings having to be recreated each time they are altered.

I thought that “” is identical to String.Empty? Therefore, use of one versus the other is purely a style issue with no performance impact.

[quote]Strings are immutable–the contents of a string object cannot be changed after the object is created, although the syntax makes it appear as if you can do this. For example, when you write this code, the compiler actually creates a new string object to hold the new sequence of characters, and that new object is assigned to b. The string “h” is then eligible for garbage collection.[/quote]

Taken from Built-in reference types - C# reference | Microsoft Learn
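The code sample the quote refers to did not survive the copy; as a sketch of the same idea (my own example, not necessarily the exact one from the docs page):

```csharp
using System;

class ImmutableDemo
{
    static void Main()
    {
        string b = "h";
        b += "ello"; // does NOT modify the existing string: the
                     // compiler emits a String.Concat call that
                     // builds a brand-new string object, and b is
                     // re-pointed at it; the old object can then
                     // be collected (unless referenced elsewhere).
        Console.WriteLine(b); // hello
    }
}
```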


True, but this has nothing to do with the use of “” versus String.Empty.


Only due to, as mhectorgato pointed out, garbage collection.