Agent heap sizing and performance considerations
Parallel deployments using the same agent
Broadly, and assuming that each deployment performs the same set of tasks on the same set of resources (files, directories, URLs, WebSphere instances, etc.), the heap consumed by multiple parallel deployments will be approximately the maximum heap used for one deployment multiplied by the maximum number of concurrent deployments. You should assume a base requirement of at least 500MB.
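As an illustration only, this estimate can be expressed as a simple calculation. The base and per-deployment figures below are assumptions taken from the recommendations later in this section, not measured values:

```java
public class HeapEstimate {
    // Assumed figures (see the Recommendations section), not agent defaults.
    static final int BASE_HEAP_MB = 512;            // agent baseline
    static final int PER_DEPLOYMENT_HEAP_MB = 512;  // worst-case single deployment

    // Total heap estimate for a given peak number of concurrent deployments.
    static int estimateHeapMb(int maxConcurrentDeployments) {
        return BASE_HEAP_MB + PER_DEPLOYMENT_HEAP_MB * maxConcurrentDeployments;
    }

    public static void main(String[] args) {
        // e.g. 4 concurrent deployments -> 512 + 4 * 512 = 2560 MB
        System.out.println(estimateHeapMb(4) + " MB");
    }
}
```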
Agent heap for a single deployment
The maximum heap consumed by one deployment at any one time depends on the tasks and activities being performed and on the size of the resource base they are performed against. For example, a search/replace might run over 100 files or 100,000 files; the agent code builds a list of all of those file paths, so memory use differs greatly between the two cases. It is therefore not possible to predict with any accuracy the heap a given deployment will need.
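As a rough illustration of why the size of the resource base matters, consider collecting every file path under a deployment root before processing, as a search/replace step typically does. This is a generic sketch, not the agent's actual code; memory for the list alone grows linearly with the number of files, before any file contents are read:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PathListSize {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        // Collect every regular file under the root into a list.
        // At roughly 100 bytes per path, 100,000 files cost on the order
        // of 10 MB for the list alone, plus whatever is read per file.
        try (Stream<Path> walk = Files.walk(root)) {
            List<Path> files = walk.filter(Files::isRegularFile)
                                   .collect(Collectors.toList());
            System.out.println("Files found: " + files.size());
        }
    }
}
```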
Performance considerations
- Files included in the initialisationTask and other tasks that use some form of search/replace: make sure all binary files are excluded from the search, since a binary file of, say, 500MB would need a heap well in excess of 500MB to search it. See the task documentation for how to exclude these files; the principle is illustrated in the sketch after this list.
- The size of the tar/jar/zip archive also affects performance, as the archive is extracted on the target, which takes time and disk space. The search/replace over this expanded archive to substitute dictionary values may also take a long time if there are thousands of configuration files to trawl through, and the size of some of those individual files can drive large heap usage (see above). We advise removing redundant or unused config files from the archive.
- Bear in mind that multiple concurrent search/replace activities are CPU intensive and also spend significant time waiting on I/O, so the host's CPU resources should be sized according to the anticipated peak number of concurrent deployments.
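The binary-file point above can be illustrated with a generic sketch. This is not the agent's implementation or its task configuration (the exclusions are configured on the task itself; see the task documentation); it simply shows why skipping binary files matters: their full contents are never loaded into the heap for a search/replace.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.Set;

public class TextOnlyReplace {
    // Hypothetical exclusion list for illustration only.
    private static final Set<String> BINARY_EXTENSIONS =
            Set.of("jar", "zip", "tar", "gz", "war", "ear", "png", "so", "dll");

    static boolean isExcluded(Path file) {
        String name = file.getFileName().toString().toLowerCase();
        int dot = name.lastIndexOf('.');
        return dot >= 0 && BINARY_EXTENSIONS.contains(name.substring(dot + 1));
    }

    static void replaceInFile(Path file, String token, String value) throws IOException {
        if (isExcluded(file)) {
            return; // never read binary content into the heap
        }
        // Reading the whole file is what drives heap usage: a 500MB file
        // needs well over 500MB of heap to hold and transform as a String.
        String content = Files.readString(file, StandardCharsets.UTF_8);
        Files.writeString(file, content.replace(token, value), StandardCharsets.UTF_8);
    }
}
```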
Recommendations
With the above caveats in mind, we suggest a base heap of 512MB, plus an additional 256MB - 512MB for each deployment running in parallel.
The RapidDeploy agent ships with a max heap of 1024MB, which we have found to be sufficient under normal circumstances.
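If you expect a higher peak of concurrent deployments and raise the agent JVM's max heap (via the standard -Xmx JVM option; where that is set depends on how you start the agent), the value actually in effect can be checked at runtime. A minimal sketch:

```java
public class ShowMaxHeap {
    public static void main(String[] args) {
        // Reports the JVM's max heap (the -Xmx value, or the default if unset).
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxHeapMb + " MB");
    }
}
```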