So you’ve resolved the application-level bugs in your software and are ready to deploy, whether to your own production environment or to a customer’s. You’ve covered your bases as a developer, and then you run into the dreaded OutOfMemoryError, something your testing may never have exposed. Diagnosing and fixing memory issues requires more than an understanding of your application code. Here are some insights I’ve gathered about memory management in the Java Virtual Machine.
Java memory management
The JVM has three main areas where it stores the data your application creates. The heap holds all objects and their instance variables. The perm gen space holds classes as they are loaded by the class loader (along with their static variables). Finally, the stack holds local variables. Each thread has its own stack, so these variables are private to the thread that owns them and are not accessible from any other thread.
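As a rough illustration, consider where the pieces of this little class end up (the class and field names are invented for the example):

public class Customer {
    private static int customerCount = 0;    // static variable: stored with the class in the perm gen
    private String name;                      // instance variable: lives with each Customer object on the heap

    public Customer(String name) {
        this.name = name;                     // the String object itself is also on the heap
        customerCount++;
    }

    public void greet() {
        String message = "Hello " + name;     // local variable: the reference lives on the current thread's stack
        System.out.println(message);
    }

    public static void main(String[] args) {
        new Customer("Alice").greet();
    }
}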
A 32-bit Java process can address at most 4 GB of virtual memory (in practice often closer to 2–3 GB, depending on the operating system), whereas a 64-bit process running on a 64-bit OS can address far more. This addressable memory must contain the heap, the perm gen and the thread stacks. However, it also serves one more (less discussed) purpose: whatever memory remains after accounting for these three areas is used by the JVM to execute native code, i.e. run the container that’s serving your application.
That said, there are ways to configure how much memory each of these areas may consume, and knowing them can help address several types of OutOfMemoryError.
Configuring memory management in Java
The size of the heap is configured using two JVM parameters, -Xms and -Xmx, which specify the initial and maximum size of the heap. Small and mid-size applications typically do well with 500–750 MB, while those under heavy load could require up to 1 GB of heap or more. Increasing the heap size far beyond what the application needs will gradually start hurting performance, as garbage collection takes longer to sweep through the huge pool of objects. This can have another interesting side effect, which I will discuss in a bit.
The -Xss setting configures how much stack space is available to a single thread. The default stack size depends on the JVM implementation and the operating system, but it is typically in the range of a few hundred kilobytes to about 1 MB.
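For example, a hypothetical launch line that sets a 512 MB initial heap, a 1 GB maximum heap and a 256 KB per-thread stack would look like this (myapp.jar is just a placeholder):

java -Xms512m -Xmx1024m -Xss256k -jar myapp.jar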
Collecting dumps to debug an error
The memory usage in a live JVM can be analyzed by obtaining a heap dump and a thread dump. A heap dump contains information about all the objects present in the heap at that point in time. A thread dump lists all threads and their states (running, blocked, etc.). Note that thread information can also be extracted from a heap dump, since thread data ends up there as well; however, a plain thread dump is far easier to read when you suspect something unusual about the threads in your application and want to pinpoint conditions like deadlocks.
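If you prefer to check for deadlocks from inside the application itself, the standard ThreadMXBean can report them. This is only a minimal sketch, not a substitute for reading a full thread dump:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long[] deadlocked = threads.findDeadlockedThreads();   // returns null when no deadlock is detected
        if (deadlocked == null) {
            System.out.println("No deadlocked threads");
            return;
        }
        for (ThreadInfo info : threads.getThreadInfo(deadlocked)) {
            System.out.println(info.getThreadName() + " is blocked on " + info.getLockName());
        }
    }
}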
One other useful artifact appears when the JVM process crashes entirely: it leaves behind a fatal error log (typically a file named hs_err_pid<pid>.log) and, depending on operating system settings, a core dump describing the state leading up to the crash. Such a crash need not indicate a faulty application; it can point to a deeper problem within the JVM itself.
Profiling tools
Profiling tools can be used to obtain and analyze heap and thread dumps.
For example, a heap dump can be obtained using jmap and a thread dump using jstack (both ship with the JDK).
Sample usage –
jmap -dump:format=b,file=heapdump.hprof [pid]
jmap -heap [pid]
jstack [pid]
The first command writes a binary heap dump to the given file, the second prints a summary of the heap configuration and usage, and the third prints a thread dump.
This may not work with Java processes running as a service on Windows, since the break signal (the equivalent of kill -3 / SIGQUIT, which triggers a thread dump) cannot be delivered programmatically to a service.
For a graphical display and the ability to view all instances of a class, sort objects by memory usage, find GC roots and much more, use a profiler like jvisualvm (shipped with the JDK since 1.6). It provides overall monitoring with a live display of heap usage and the number of threads, lets you monitor individual threads, and can visualize a heap dump taken at a certain point in time. From the heap dump view you can choose a class and, via the right-click context menu, view the instances of that class.
You can then select an instance and view its GC root. Objects that are reachable from GC roots cannot be garbage collected.
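To see why reachability matters, here is a contrived example (the names are invented for illustration). The map is referenced by a static field, which is a GC root, so everything it holds stays live even if nothing else ever uses it again:

import java.util.HashMap;
import java.util.Map;

public class CacheHolder {
    // Reachable from a GC root (the class's static field), so the map
    // and everything it references cannot be garbage collected.
    private static final Map<String, byte[]> CACHE = new HashMap<String, byte[]>();

    public static void main(String[] args) {
        CACHE.put("report", new byte[10 * 1024 * 1024]);   // stays live for the lifetime of the class

        byte[] scratch = new byte[10 * 1024 * 1024];        // referenced only by a local variable
        scratch = null;                                     // now unreachable and eligible for collection
    }
}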
If your Java process is running as a service, jvisualvm will not be able to connect to it directly. You’ll need to establish a JMX connection to retrieve profiling information, but first you must configure the JVM to allow incoming JMX connections by adding these JVM parameters:
-Dcom.sun.management.jmxremote.port=9999
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
Then establish a JMX connection from jvisualvm to localhost:9999.
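Putting it together, a hypothetical service launch line would look something like this (myapp.jar is a placeholder); note that turning off SSL and authentication as shown is only reasonable on a trusted network:

java -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -jar myapp.jar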
If your application crashes before giving you a chance to profile it, you can configure the JVM to create a heap dump whenever it encounters an out of memory error by adding these JVM parameters:
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=\heapdump.hprof
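To watch the dump being produced, you can run a throwaway class like the one below (purely illustrative) with a small heap, e.g. -Xmx64m, plus the two flags above. It keeps every allocation reachable, so the heap eventually fills up and the JVM writes the .hprof file before the error propagates:

import java.util.ArrayList;
import java.util.List;

public class FillTheHeap {
    public static void main(String[] args) {
        List<byte[]> hoard = new ArrayList<byte[]>();
        while (true) {
            hoard.add(new byte[1024 * 1024]);   // hold on to 1 MB chunks so nothing can be collected
        }
    }
}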
Solving certain types of OutOfMemory conditions
Profiling can help identify the reason behind various memory-related problems such as:
OutOfMemoryError: unable to create new native thread –
Get a thread dump to see whether threads are being created and never exiting. This error means the JVM asked the operating system for another native thread and was refused, which usually happens when too many threads are alive or native memory is exhausted. A high stack size setting (-Xss well above the default) makes every thread more expensive; consider reducing it so more threads can be spawned. Since thread stacks live outside the heap, lowering the maximum heap size (-Xmx) can also leave more room for them, which is the opposite of the usual instinct to raise it.
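If you want to see this error in a controlled way, a throwaway class like the one below (illustrative only, best run on a disposable machine) keeps starting threads that never exit until the operating system refuses to create another native thread:

public class ThreadHoard {
    public static void main(String[] args) {
        int count = 0;
        while (true) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(Long.MAX_VALUE);   // park forever so the thread never exits
                    } catch (InterruptedException ignored) {
                        // nothing to do; the demo thread simply ends
                    }
                }
            });
            t.start();
            System.out.println("Started thread " + (++count));
        }
    }
}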
OutOfMemoryError: PermGen space –
This error indicates that the perm gen space is not large enough to hold all the classes the application needs to load.
Try increasing the maximum perm gen setting (-XX:MaxPermSize).
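For example (the value is only a placeholder, not a recommendation):

java -XX:MaxPermSize=256m -jar myapp.jar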
OutOfMemoryError: Out of swap space –
This is a rather interesting error, and one of its causes is that there isn’t enough memory left for the JVM to execute native code. In contrast to the first error above, the fix here could be decreasing the heap size so as to make more space available for native code.
Garbage collection in Java
As described earlier, the heap holds objects and their instance variables, while the perm gen space holds classes and their static variables. For the purpose of garbage collection, the heap is further broken up into generations: a young generation, consisting of the eden space and two survivor spaces, and the tenured (old) generation.
Newly created objects are allocated in the eden space. As objects survive successive garbage collection runs, they are moved from the eden space to one of the survivor spaces and eventually to the tenured generation.
Garbage collectors exploit this partitioning of memory to optimize their collection routines: when a particular generation fills up, garbage collection occurs in that generation.
The default garbage collector kicks off a minor collection to reclaim the young generation. Minor collections are generally fast (hence the name) because most objects in this area die young. Major collections sweep the tenured generation and are considerably longer operations.
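To watch minor and major collections as they happen, GC logging can be switched on with flags like these, which apply to the HotSpot JVMs of the JDK 6/7 era:

java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log -jar myapp.jar

The resulting log distinguishes young generation collections from full collections and records their pause times, which is exactly the information needed for the tuning discussed further below.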
In addition to the default garbage collector, which suits most applications, there are three other garbage collectors. The JVM can be instructed to use any of them by passing in the corresponding JVM parameters (a sample launch line follows the list):
The throughput collector (-XX:+UseParallelGC) –
Uses multiple threads to perform minor collections in parallel, which improves throughput on multi-processor machines.
The concurrent low pause collector (-XX:+UseConcMarkSweepGC) –
Tenured generation collections occur without completely halting the application. Minor collections are done using a parallel collector as with the throughput collector. Use this collector if your application cannot afford long pauses, but can afford to share processor resources with the garbage collector.
The incremental low pause collector (-Xincgc) –
Tenured generation objects are collected in small chunks at a time, interleaved with minor collections, rather than by kicking off independent major collections. Use this collector if your application cannot afford long pauses and can’t afford to share processor resources with the garbage collector either.
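For example, the concurrent low pause collector would be selected like this (myapp.jar is again a placeholder):

java -Xmx1024m -XX:+UseConcMarkSweepGC -jar myapp.jar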
Tuning garbage collection and diagnosing issues
Garbage collection tuning is a two-step process. First, size the various areas of the heap based on the nature of the objects your application is expected to create; various tuning parameters are available to control the size of each partition. If that by itself does not meet your performance objectives, explore one of the specialized garbage collectors described above.
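As a sketch of that first step, here are some of the HotSpot flags that control generation sizes; the values shown are placeholders, and sensible numbers depend entirely on your application’s allocation pattern:

java -Xms1024m -Xmx1024m -XX:NewRatio=3 -XX:SurvivorRatio=8 -XX:MaxPermSize=256m -jar myapp.jar

Here -XX:NewRatio=3 makes the tenured generation three times the size of the young generation, and -XX:SurvivorRatio=8 makes eden eight times the size of a single survivor space.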