3

This may be an eminently closeable question, but I'm the type who throws things at the wall to see what sticks. For all the benefits of memory and lifetime management afforded by a garbage-collected runtime, have there been any notable cases of program indeterminacy caused by race conditions between an application and its garbage collector? Has a gestalt of defensive programming against this kind of thing emerged? Surely programmers accustomed to RAII must learn new lessons when working in the presence of a GC.

yacdmnky
  • 487
  • 3
  • 15

4 Answers

7

The problem with garbage collection is that it only manages memory resources. Unfortunately, programmers must manage many, many other resource types:

  • file and socket handles
  • database connections
  • synchronisation objects
  • GUI resources

to name but a few. To manage those successfully, you really need the concepts embodied in the RAII idiom.
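GC'd languages typically reintroduce RAII-style scoping for exactly these non-memory resources. A minimal Java sketch using try-with-resources (the file name is my own example; the point is that `close()` runs at scope exit, not whenever the GC happens to run):

```java
import java.io.IOException;
import java.io.PrintWriter;

public class ScopedResource {
    public static void main(String[] args) throws IOException {
        // try-with-resources: close() runs when the block exits,
        // normally or via an exception. The GC plays no part in
        // releasing the underlying file handle.
        try (PrintWriter out = new PrintWriter("demo.txt")) {
            out.println("hello");
        }   // out.close() has already run here, deterministically
        System.out.println("handle released");
    }
}
```

C# offers the equivalent `using` statement; in both cases the language restores deterministic release for the resource types the collector can't see.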

6

I think you misunderstand how automatic garbage collection works. Race conditions between the application and a correctly implemented garbage collector aren't possible, even in principle. The garbage collector only collects objects that the application can't access.

Since only one of the two can ever "own" a given object, race conditions can't occur.
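The ownership point can be observed directly. A small Java sketch (class name is my own; note `System.gc()` is only a hint, so collection timing is never guaranteed):

```java
import java.lang.ref.WeakReference;

public class ReachabilityDemo {
    public static void main(String[] args) {
        Object owned = new Object();
        // A weak reference lets us observe the object without
        // keeping it alive on our own.
        WeakReference<Object> ref = new WeakReference<>(owned);

        System.gc();
        // While the application holds a strong reference, the
        // collector must not touch the object: the application
        // still owns it.
        System.out.println(ref.get() == owned);  // true

        owned = null;   // hand ownership over to the collector
        System.gc();    // only now may ref.get() become null
        // There is no moment at which both the application and
        // the GC can "see" the object, so no race over it.
    }
}
```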

MarkusQ
  • 21,814
  • 3
  • 56
  • 68
  • They can once you start considering finalization (and even more so once you start doing things like resurrecting objects...) – Greg Beech Mar 14 '09 at 11:50
  • 1
    In a correctly implemented GC system you shouldn't be able to "resurrect" an object that is eligible for garbage collection since you'd have no way to refer to it. Likewise, an object undergoing finalization isn't eligible for collection since you can still refer to it. – MarkusQ Mar 14 '09 at 15:26
  • 1
    In the CLR you can get race conditions around finalization. Read this, which highlights one: http://blogs.msdn.com/cbrumme/archive/2003/04/19/51365.aspx – Greg Beech Mar 16 '09 at 01:41
  • 1
    Also see the section on reachability here: http://blogs.msdn.com/cbrumme/archive/2004/02/20/77460.aspx – Greg Beech Mar 16 '09 at 01:42
  • 1
    @GregBeech I said (and I quote) "In a correctly implemented GC system"; your links (if accurate) just demonstrate that the CLR GC isn't correctly implemented. They didn't have to do it that way, and there are many reasons they shouldn't have; saying "Microsoft did it so it must be right" just won't fly. – MarkusQ Mar 16 '09 at 04:29
  • @MarkusQ: Do you have any links to share that go into more depth on shortcomings in the .NET GC implementation? – yacdmnky Mar 17 '09 at 04:53
  • @yacdmnky All I know is what Greg Beech linked to above. If they are correct (I have no direct knowledge; the claims made in the linked documents could be utter bilge, for all I know), then the CLR GC is very badly implemented, if it collects as "garbage" objects that are still visible to the application. – MarkusQ Mar 17 '09 at 05:41
  • @MarkusQ - Chris Brumme (the author of the articles) was the chief architect of the CLR, so you can take them as authoritative. – Greg Beech Mar 17 '09 at 21:13
  • Also, note that the race conditions arise when dealing with external resources that aren't (and can't be) managed by the GC. When you are dealing only with managed objects then there are none of these problems. – Greg Beech Mar 17 '09 at 21:14
  • @MarkusQ: Finalization is a mechanism via which abandoned objects which may have other entities doing things on their behalf (e.g. reserving an area of memory, file handle, etc.) can notify those other entities to stop doing so. When the GC runs, it effectively divides objects into three categories: live objects, completely dead objects, and abandoned objects that have requested notification of abandonment. While one could, in theory, design a system in which objects request abandonment-notification by registering other unrelated objects (which hold the information necessary for cleanup)... – supercat Feb 03 '12 at 18:20
  • @MarkusQ: ...and such a design could eliminate the need for an "abandoned objects" category by declaring the cleanup objects to simply be "live", this would cause problems if two objects directly or indirectly get registered as being responsible for each others' cleanup. Both objects would always be 'live'--even if the only reference to either was the other's cleanup registration--and neither would ever have its cleanup code run. Under .net GC, both objects would be considered "abandoned", but abandoned objects often have to be able to store references they hold into live objects. – supercat Feb 03 '12 at 18:25
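The design supercat sketches — registering a separate, unrelated object that holds only the cleanup state — is roughly what Java later shipped as `java.lang.ref.Cleaner` (JDK 9+). A sketch of my own to illustrate (the class names are hypothetical):

```java
import java.lang.ref.Cleaner;

public class CleanerDemo {
    private static final Cleaner CLEANER = Cleaner.create();

    // The cleanup state is a *separate* static class that holds no
    // reference back to CleanerDemo, so it cannot keep the owner
    // alive -- avoiding the mutual-registration liveness problem
    // described above.
    static class State implements Runnable {
        public void run() { System.out.println("resource released"); }
    }

    private final Cleaner.Cleanable cleanable;

    CleanerDemo() {
        cleanable = CLEANER.register(this, new State());
    }

    // Deterministic release; the Cleaner also runs State.run() at
    // most once if the owner is instead abandoned to the GC.
    void close() { cleanable.clean(); }

    public static void main(String[] args) {
        CleanerDemo d = new CleanerDemo();
        d.close();
    }
}
```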
5

When I moved to the .NET world six years ago or so, I felt uneasy with the GC. I sort of took for granted that it would be much slower and that I would have to be even more careful with my memory allocations to avoid producing performance hogs.

After six years I can tell you that my perspective has changed totally! I can only recall one time during these years that I've had a memory leak, due to a forgotten .Dispose(). Compare that to C++ where you produce a memory leak each hour of coding... ;-)

I have recently been forced to return to the C++ world, and I'm totally flabbergasted! Did I really use to work with this, and like it? It feels like I'm at least 10 times more productive in C# than in C++. And on top of that: the GC memory allocator is so blazingly fast that I still cannot believe it. Look at this question, where I had to draw the conclusion that in my particular case a .NET version (C# or C++/CLI) executed 10 times as fast as a C++ MFC version: C++ string memory allocation.

I have converted totally - but it took me a long time to fully accept it.
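The "one leak in six years" kind of bug usually isn't a GC failure but a lingering strong reference — a forgotten `Dispose()`/unsubscribe that keeps objects reachable forever. A minimal Java sketch of my own (the `Publisher` class is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// A long-lived event source whose listener list keeps every
// registered listener strongly reachable.
class Publisher {
    private final List<Runnable> listeners = new ArrayList<>();
    void subscribe(Runnable l) { listeners.add(l); }
    void unsubscribe(Runnable l) { listeners.remove(l); }
    int listenerCount() { return listeners.size(); }
}

public class LeakDemo {
    public static void main(String[] args) {
        Publisher pub = new Publisher();
        for (int i = 0; i < 1000; i++) {
            Runnable listener = () -> { /* handle event */ };
            pub.subscribe(listener);
            // Forgetting to unsubscribe: each listener stays
            // reachable through 'pub', so the GC can never
            // reclaim it -- a "leak" in a GC'd language.
        }
        System.out.println(pub.listenerCount());
    }
}
```

The GC is doing its job here; the program genuinely still holds the objects, which is exactly why such leaks are rare but possible.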

Dan Byström
  • 9,067
  • 5
  • 38
  • 68
  • In my case, there really *WAS* a leak! It was around for a long time before I figured out where it originated from. It had to do with a Tab control being populated with UserControls deriving from TabPage. It was very interesting and I have *tried* to isolate it on its own just to demonstrate... – Dan Byström Mar 14 '09 at 10:02
  • "...or when some obscure reference path locks it...". Agreed! :-) And when it comes to WinForms, things really are obscure because it has to live on top of Win32. – Dan Byström Mar 14 '09 at 11:06
0

When I first began programming in C I had to be very methodical with my malloc's and realloc's and I had to free everything I wasn't using. This was an easy task with tiny college assignments such as creating a binary tree. Simple...

Later, when I started developing an application with a GUI written entirely in C, I found myself thinking more and programming less, because I had to watch for possible memory leaks. This was becoming a hassle. I would much rather have half a product than a half-assed product.

I began moving over to Java and C#. I loved that all I had to do was drop my references to an object and the garbage collector would come along and pick it up for me. I also noticed that my programs ran a bit slower using Java's Swing (as expected), but it was manageable.

In my experience, processors are becoming cheaper, memory is becoming cheaper and faster, and GUI programs are consuming more memory than before. A garbage collector really helps you ship a product that works, with minimal memory-leak issues. It's really handy. It can lead to bad coding habits, but those can be remedied.

EDIT:

Also see this; it may help you answer your question. A good read, IMO.

WarmWaffles
  • 531
  • 5
  • 16