Best Practice Scenarios

In this section we cover best practices for deploying changes to WebSphere Application Server cells.

Code and Configuration deployments

In this scenario we deploy application code changes (represented by one or more EAR files) together with configuration changes for the WebSphere container (cluster, application server, and resources referenced or used by the application).

To achieve this, the application code and configuration are associated with a single RapidDeploy project.

Typically, one or more EAR files are deployed to a number of cells on a Route-To-Live, for example smoke test, integration test, UAT and live cells.

In this scenario we ensure the EAR file is compliant with the "Build Once, Deploy Anywhere" paradigm. Any environment specific configuration inside the EAR file is parameterised and search/replaced at deployment time by use of the RapidDeploy EARExpanderTask and invocations of the SearchReplaceTask and/or XPathReplaceTask to update the EAR files with environment specific configuration.
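As a sketch of how such a search/replace task might be configured, a task definition in the same style as the examples later in this section is shown below. Note that the class path, resource types and token names here are assumptions for illustration and should be checked against the RapidDeploy task reference rather than copied verbatim:

<!-- Illustrative sketch only: class path, resource types and tokens are assumed -->
<task active="true" name="SearchReplaceTask-UpdateEarProperties" order="3">
<class>com.midvision.rapiddeploy.orchestration.tasks.deployment.SearchReplaceTask</class>
<resource type="failOnError">true</resource>
<!-- Hypothetical: a properties file inside the expanded EAR to update -->
<resource type="filePath">@@EAR_EXPANSION_DIR@@/properties/environment.properties</resource>
<!-- Token placed in the EAR at build time, resolved from the dictionary at deploy time -->
<resource type="search">@@DB_URL@@</resource>
<resource type="replace">jdbc:oracle:thin:@uat-db01:1521/APPUAT</resource>
</task>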

Each target environment is further configured in RapidDeploy for each target cell and cluster on the Route-To-Live. Each logical target, typically a cluster definition, will contain all of the configuration needed to create or update that cluster and all other WebSphere resources it relies on (for example data sources, JMS providers, URL endpoints, virtual hosts etc.).

We create the first environment using a template, which extracts all of the variables (settings that change between environments) to a dictionary file of name/value pairs. We then 'clone' (copy) this environment to further downstream ones, changing the values in the dictionary file (where required) for the new environment.

In this way we use a single file to capture, as name/value pairs, the differences between environments.
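For instance, a dictionary file for a UAT cell might contain entries such as these (the token names and values are illustrative only):

@@CELL_NAME@@=AppCell01
@@CLUSTER_NAME@@=AppCluster01
@@DMGR_HOST@@=uat-dmgr01.example.com
@@DB_URL@@=jdbc:oracle:thin:@uat-db01:1521/APPUAT
@@DB_PASSWORD@@=********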

The EAR files, environment definitions and associated configurations are 'built' into a deployable artifact that does not need to change between environments. This archive file may be built through RapidDeploy or as part of your normal build or CI process.

Generally, we recommend combining the code and configuration deployments for the simple case of deploying a single EAR file (or multiple version-locked EAR files) to clusters where the environment configuration changes only rarely.

WebSphere Cell planning

In order to keep differences between environments to a minimum, we recommend you keep cell, cluster, server and node details as similar as possible. For example, if you keep cell names, cluster names (for this application), application names, DataSource names and so on the same between environments, you will reduce the size of the dictionary file (which would otherwise need to contain these environment-specific differences) and reduce the chance of misconfiguration errors creeping into the deployments. Ideally, the dictionary files should be reduced to only those variables that absolutely must change between environments (such as database URLs and passwords).

Code only and Configuration only deployments

In this scenario we split out the code deployment - EAR files and associated configuration, such as resource mappings and search/replace configuration, from target environment configuration - Cell, cluster, resource configuration etc.

Consider the situation where we deploy three EAR files to the same cluster in cells in a Route-To-Live, but each EAR file is version independent of the other EAR files (may be released separately). In this case we create four RapidDeploy projects, one for each application and one for environment configuration. The application projects may well point to an area in the SCM repository used for application development of this project and be managed by the application development team. The application code projects only hold application specific (search/replace and resource mapping) data as well as the EAR files themselves. The environment configuration project holds all of the configuration for the clusters and resources in the cells on the Route-To-Live and may be stored in an SCM location accessible by the operations or operations and development teams.

We perform releases of the application code through the environments according to each application project's timeline.

Changes to the configuration are deployed through the environments only as and when required. This may or may not coincide with an application deployment.

For each application project, the code may be stored in many branches as part of a parallel development lifecycle. However, since the configuration is kept separate, it does not require branching in this scenario. Nor should it, since at any one time there is only one instance of a specific target environment for this application to be deployed into. Versioning of the environment configuration in the SCM tool and creation of versioned archive files containing this configuration still allows the configuration to be updated, backed out, or set to any historical stored version as desired, via a new deployment through the RapidDeploy console.

Generally, we recommend splitting out the code and configuration deployments, rather than deploying both code and config together, in any scenarios where there are multiple applications being deployed to the same cluster, or where there are many changes to environment configurations between releases.

Environment Configuration promotion

RapidDeploy allows collaboration between development and operations teams. RapidDeploy exposes the environment configurations, stored in an SCM tool, to both developers and operations. You may allow developers, operations staff or both to edit and promote configurations. You can also set up workflows such that developers can manage some of their own environments but "request" changes to more controlled downstream environments. These requests must be approved by operations, but once approved, the changes to the target environment configurations are made in the SCM tool automatically (not in the target environments themselves). Requests may be bundled to include multiple changes in multiple files. Following the next build, the changes will be expressed in the deployment archive and applied as the deployment progresses through the cells on the Route-To-Live.

We recommend that this approach is used for all changes to target environments.

Patching WebSphere Application Server

An IBM WebSphere fix pack or feature pack is applied from RapidDeploy in the same way as the WAS binaries are installed. The same task, IbmInstallationManagerExecuteCommandTask, is used in each case, but with different values supplied to the task resources. The only difference between the tasks is the combination of the repositories and packageId resources: repositories is the location of the binary packages, and packageId is the ID of the package to install.
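The imclCommand resource suggests the task wraps IBM Installation Manager's imcl command-line tool. For reference, an equivalent manual invocation to install the WAS binaries would take this general form (using the same paths and package ID as the dictionary values shown later in this section):

/opt/IBM/InstallationManager/eclipse/tools/imcl install com.ibm.websphere.ND.v80 -repositories /software/binaries/ibm/websphere/v8 -installationDirectory /opt/IBM/WebSphere/AppServer -acceptLicense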

To install WAS binaries, use this task configuration:

<task active="true" name="IbmInstallationManagerExecuteCommandTask-InstallWASBinaries" order="6">

<class>com.midvision.rapiddeploy.orchestration.tasks.binary.installers.IbmInstallationManagerExecuteCommandTask</class>

<resource type="failOnError">true</resource>

<resource type="installationManagerPath">@@INSTALLATION_MANAGER_PATH@@</resource>

<resource type="imclCommand">install</resource>

<resource type="installationDirectory">@@WAS_BASE_PATH@@</resource>

<resource type="repositories">@@WAS_ND_REPOSITORY_PATH@@</resource>

<resource type="packageId">@@WAS_ND_PACKAGE_ID@@</resource>

</task>

To install a WAS fix pack, use this task configuration:

<task active="true" name="IbmInstallationManagerExecuteCommandTask-InstallWASFixPack" order="7">

<class>com.midvision.rapiddeploy.orchestration.tasks.binary.installers.IbmInstallationManagerExecuteCommandTask</class>

<resource type="failOnError">true</resource>

<resource type="installationManagerPath">@@INSTALLATION_MANAGER_PATH@@</resource>

<resource type="imclCommand">install</resource>

<resource type="installationDirectory">@@WAS_BASE_PATH@@</resource>

<resource type="repositories">@@WAS_FIX_PACK_REPOSITORY_PATH@@</resource>

<resource type="packageId">@@WAS_ND_PACKAGE_ID@@</resource>

</task>

To install a WAS feature pack, use this task configuration:

<task active="true" name="IbmInstallationManagerExecuteCommandTask-InstallWASFeaturePack-Web2Mobile" order="8">

<class>com.midvision.rapiddeploy.orchestration.tasks.binary.installers.IbmInstallationManagerExecuteCommandTask</class>

<resource type="failOnError">true</resource>

<resource type="installationManagerPath">@@INSTALLATION_MANAGER_PATH@@</resource>

<resource type="imclCommand">install</resource>

<resource type="installationDirectory">@@WAS_BASE_PATH@@</resource>

<resource type="repositories">@@WAS_FEATURE_PACK_REPOSITORY_PATH@@</resource>

<resource type="packageId">@@WAS_FEATURE_PACKAGE_ID@@</resource>

</task>

Here is an example of the data dictionary values used for the environments:

Path to IBM Installation Manager (must be installed on the instance)

@@INSTALLATION_MANAGER_PATH@@=/opt/IBM/InstallationManager

Path to where WAS will be installed on the instance

@@WAS_BASE_PATH@@=/opt/IBM/WebSphere/AppServer

Path to the downloaded and unzipped WAS binaries on a network share drive available from the instance

@@WAS_ND_REPOSITORY_PATH@@=/software/binaries/ibm/websphere/v8

@@WAS_FIX_PACK_REPOSITORY_PATH@@=/software/binaries/ibm/websphere/v8_FixPacks/8_0_0_8

@@WAS_FEATURE_PACK_REPOSITORY_PATH@@=/software/binaries/ibm/websphere/v8_FeaturePacks/web2mobile

Name of the package to install - taken from the repository xml file (often also known as the offering ID).

@@WAS_ND_PACKAGE_ID@@=com.ibm.websphere.ND.v80

@@WAS_FEATURE_PACKAGE_ID@@=com.ibm.websphere.WEB2MOBILE.v11