I learned C# in 2000, when the compiler and tools were still in beta (I think. I know I was working with the beta, but maybe the final versions were released unbeknownst to me (hehehe! The Firefox spellchecker does not recognize “unbeknownst”!))
I remember that I liked the language. It was clean and relatively compact, it had a largish and useful library, it was like C++ and Java (benefit? really?), and it had a command-line compiler. My impression at the time was that it was more suitable for small and quick programs, and very suitable for teaching programming. I even suggested it to my teachers at university as a replacement for Pascal, which was then taught as the first language to CE students (this was maybe 4 years before I met Lisp, or better yet, Python.) This was also before the whole .NET fucked-up-ness happened, with all the WinForms and ASP.NET and whatever other shit they are peddling these days. In those days, .NET and C# produced console applications, unless you ventured into the river of diarrhea output that is the Win32 GUI API; but that was pretty much all you had, back then.
Anyways, my shallow and brief delving into the world of .NET was barely deep enough for me to glimpse the inner workings of MSIL, the JIT compiler, and the virtual machine. I learned little about these, I have not kept up with the new developments in .NET, and I have no regrets there. I generally hate GUIs and network technologies that aim to solve all problems on all levels for everybody. They may suit some, and I have absolutely no doubt that many .NET-based applications wouldn’t have been as easy to write with other libraries and runtimes. But I don’t generally like .NET, and this opinion (I suspect) would be very hard to change.
The most obvious reason for this dislike is the one I mentioned above: .NET is the champion of the all-for-all thoughtcrime. More than anything else that I have seen, it tries to do everything for everyone, without giving them an inkling of what the hell is really going on at any level below the most superficial. This may suit some, but it shouldn’t.
I seriously believe that every programmer needs to know what’s going on under the hood. Total abstractions almost never work beyond the most simple and trivial cases. If you don’t know jackshit about your platform and your programs seem to have worked so far, you are just lucky. Let me give you an analogy. If you don’t know anything about how cars work and you drive one, then when your car breaks down in the middle of nowhere and you have no means of communication, you are royally fucked. The fact that this has not happened so far just means that the Random Number Gods have smiled upon you. It may never happen, but it just as well might. That’s the way it is with programming, except that the range of quality among the software and hardware products you use is much wider.
All programming languages abstract the platform in some form and manner. But while some languages hide very little, and hide what they do with much shame and many apologies (Assembly, C, etc.), others do as much as they can to distance you from the hardware. They even boast about this feature!
In short, every good programmer that I know, or know of, knows the whole stack of software and hardware underneath deeply and intimately. In fact, it might even be true that the better they know this mess, the better programmers they are.
Let me conclude now. I am not saying that technologies like .NET and Java and all those “high-level” languages are useless. I’m just saying they make it harder to be conscious of the actual platform and everything else that lies under your code. I’m pretty certain that the best .NET programmers can pretty much reproduce, in their heads, the MSIL and the machine code that their compiler and the JIT generate for any given part of their code. Maybe you should be able to, too.
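To make that concrete, here is a rough sketch of the kind of mapping I mean: a trivial C# method and the MSIL that a C# compiler typically emits for it in a release build. The listing is approximate, from memory of ildasm-style output; details like the .maxstack directive and the extra nops of a debug build are omitted.

```
// C#:
//   static int Add(int a, int b) => a + b;

.method private hidebysig static int32 Add(int32 a, int32 b) cil managed
{
  ldarg.0   // push argument a onto the evaluation stack
  ldarg.1   // push argument b
  add       // pop both, push their sum
  ret       // return the value on top of the stack
}
```

MSIL is a stack machine, so arguments are pushed, operated on, and the result returned from the top of the stack; the JIT then turns this into a handful of machine instructions. If you can do that translation in your head for your own code, you know your platform.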