Deploying configuration and code to an Integration Node
Overview
This guide explains how, with a configuration representing your integration node target(s), you can systematically and securely deploy configuration and BAR files to an integration node. Some concepts are discussed at a high level to indicate what is possible, but a simple example is used throughout.
Prerequisites
Before you start this guide, it is assumed that you have performed the following tasks:
- Create an "ACE_DEPLOY" project that has at least the "ACE Deployment" task in the orchestration.
- Create a Target configuration that represents the integration node topology you wish to deploy (this will be discussed in more detail below).
- Build BAR files for your integration node that are ready to deploy, and place them in the bar store directory ("DEFAULTBARSTORE" in the example below; again this will be covered below).
- Configure the Server object and the Installation object with details of your target host and integration node.
Structure of an Integration Node configuration
The IBM App Connect Enterprise plugin differs from the IBM MQ plugin in that it is designed to handle not only configuration but also code deployments (i.e. BAR files). The plugin does not currently provide any functionality to create the BAR files themselves; this is usually the responsibility of the development teams, so you must follow your own processes to produce BAR files ready to be deployed to each of your environments. In this example we use our standard pattern of a single integration node template that represents your integration nodes at every environment level in your "route to live" (e.g. DEV to PROD). It is therefore necessary to ensure that your BAR files are sufficiently generic, either by using the overrides supported by this plugin or by some other means.
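As context only, BAR files are commonly built either from the ACE Toolkit or from the command line. Below is a minimal sketch using the mqsipackagebar command that ships with App Connect Enterprise; the application name and paths are hypothetical, and the flags should be checked against your ACE version:
# Illustrative only: package a hypothetical application into a BAR file.
# Assumes an ACE command environment (e.g. mqsiprofile has been run) and that
# "MyApplication" exists in the given workspace -- adapt to your own build process.
mqsipackagebar -a barfile1.bar -w /path/to/ace/workspace -k MyApplication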
The topology of an integration node configuration is represented in source control (or on the local file system) within the project directory structure. A typical structure is shown below, with directories marked (D) and files marked (f):
(D) ACE_DEPLOY
    (f) classpath.xml
    (f) data-dictionary.xml
    (f) log4j.properties
    (f) midvision-deploy-orchestration.xml
    (f) packages-info.xml
    (f) resource.xml
    (D) ace
        (D) config
            (f) config.properties
        (D) topology
            (D) <nodeName1>
                (f) template.xml
                (D) EXECUTIONGROUP1
                    (f) barfile1.bar.placeholder
                    ...
                (D) EXECUTIONGROUP2
                    (f) barfile1.bar.placeholder
                    (f) barfile2.bar.placeholder
                    ...
            (D) DEFAULTBARSTORE
                (f) barfile1.bar
                (f) barfile2.bar
                ...
Project configuration files
The configuration file contains the template configuration properties, while the data dictionary file holds the specific values for each target (Server.Installation.Configuration) you have defined.
Configuration properties file
- This file simply points to the correct template file to use for a set of integration nodes. Typical projects have identical versions of this file pointing to the same template file, which represents the integration node in all landscapes (e.g. DEV, SIT, QA, PROD):
ace.template.file=ace-dg/template.xml
- Find more information about the available properties that can be set in this file here.
Data dictionary file
- This is an XML file that contains all the created targets and the data dictionary values for each of them. A data dictionary can be used to replace parameters in any non-binary file within the deployment package.
- Modifying this file manually is strongly discouraged; use the web console instead.
- Below is an example of a data dictionary XML file:
<?xml version="1.0" encoding="UTF-8"?><dataDictionary> <project>ACE_DEPLOY</project> <target>devServer.ace.DEV_NODE</target> <target>qaServer.ace.QA_NODE</target> <target>prodServer.ace.PROD_NODE</target> ... <entry> <key>@@NODE_NAME@@</key> <value>Default_NODE</value> <helpText/> <type>0</type> <encrypted>false</encrypted> <external>false</external> <resource> <id>devServer.ace.DEV_NODE</id> <value>DEV_NODE</value> <external>false</external> </resource> <resource> <id>qaServer.ace.QA_NODE</id> <value>QA_NODE</value> <external>false</external> </resource> <resource> <id>prodServer.ace.PROD_NODE</id> <value>PROD_NODE</value> <external>false</external> </resource> </entry> <entry> <key>@@NODE_HOST@@</key> <value>localhost</value> <helpText/> <type>0</type> <encrypted>false</encrypted> <external>false</external> <resource> <id>devServer.ace.DEV_NODE</id> <value>development.ace.host</value> <external>false</external> </resource> <resource> <id>qaServer.ace.QA_NODE</id> <value>test.ace.host</value> <external>false</external> </resource> <resource> <id>prodServer.ace.PROD_NODE</id> <value>production.ace.host</value> <external>false</external> </resource> </entry> ... </dataDictionary>
The template.xml file
Every integration node configuration has a template XML file. It can be called anything you wish, but the default is template.xml, and it is referenced by the configuration properties file. By default all integration nodes use the same template.xml file for a particular project/application, but this can easily be overridden by altering the configuration properties file accordingly to use a different template. The reference can also be tokenized and overridden by a data dictionary variable.
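For example (a minimal sketch with a hypothetical token name), the property could itself reference a data dictionary variable so that each target resolves its own template file:
ace.template.file=ace/topology/@@ACE_TEMPLATE_FILE@@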
The template provides a model for your integration node. An example of a template is shown below:
<?xml version="1.0" encoding="UTF-8"?> <!-- Message Broker Template File --> <!-- @@BROKER_NAME@@ = DEV_CORE --> <!-- @@BROKER_QMGR_PORT@@ = PORT --> <!-- @@BROKER_QMGR_NAME@@ = QMGR --> <!-- @@BROKER_QMGR_HOST@@ = SERVER --> <!-- @@BROKER_VERSION@@ = 80 --> <!-- Additional fields for deployments --> <BrokerRepository> <brokers> <binaryInstallLocation>@@BINARY_INSTALL_LOCATION@@</binaryInstallLocation> <!-- <httpListenerPort>@@BROKER_HTTP_LISTENER_PORT@@</httpListenerPort> --> <adminSecurity>active</adminSecurity> <!-- <userLilPath>@@BROKER_USER_LIL_PATH@@</userLilPath> --> <sharedWorkPath>/MQHA/WMB/@@BROKER_NAME@@</sharedWorkPath> <brokerAliasName>@@BROKER_NAME@@</brokerAliasName> <brokerHostName>@@BROKER_QMGR_HOST@@</brokerHostName> <!-- <brokerListener>@@BROKER_NAME@@_LISTENER.TCP</brokerListener> --> <brokerPhysicalName>@@BROKER_NAME@@</brokerPhysicalName> <brokerPort>@@BROKER_QMGR_PORT@@</brokerPort> <brokerQmgrName>@@BROKER_NAME@@</brokerQmgrName> <defaultBarFileStore>WMBBARSTORE</defaultBarFileStore> <deploymentMode>@@DEPLOY_MODE@@</deploymentMode> <executionGroupList>@@EXECUTION_GROUP_LIST_TO_DEPLOY@@</executionGroupList> <sslChannelName>@@BROKER_CHANNEL@@</sslChannelName> <executionGroups> <name>EXECUTIONGROUP1</name> </executionGroups> <executionGroups> <name>EXECUTIONGROUP2</name> </executionGroups> <localOverrideProperties> <barFileName>WS_ReturnTimeStamp.bar</barFileName> <brokerName>@@BROKER_NAME@@</brokerName> <executionGroupName>EXECUTIONGROUP1</executionGroupName> </localOverrideProperties> <retryCount>3</retryCount> <sslCipherSuite/> <sslKeyStore/> <sslKeyStorePass/> <sslPeerName/> <sslTrustStore/> <sslTrustStorePass/> <stopAllFlowsInExecutionGroups>false</stopAllFlowsInExecutionGroups> <timeoutSeconds>180</timeoutSeconds> <version>@@BROKER_VERSION@@</version> </brokers> </BrokerRepository>
Integration Servers
For the IBM App Connect Enterprise plugin, integration servers are represented as directories. The number of integration servers you create is up to you; the integration node template file dictates which integration servers will be deployed to, and it assumes a directory of the same name exists in the same directory as the template. We recommend that you define the superset of integration servers required across your environments. Deployments can then easily be restricted to only those integration servers that apply to a particular environment, using the template and a dictionary variable, as sketched below.
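A minimal sketch of such a restriction, assuming the @@EXECUTION_GROUP_LIST_TO_DEPLOY@@ token from the template above is resolved per target by a data dictionary entry; the list separator and the values shown are illustrative and should be checked against your plugin version:
<entry>
  <key>@@EXECUTION_GROUP_LIST_TO_DEPLOY@@</key>
  <!-- Illustrative values only: DEV deploys to one integration server, PROD to both -->
  <value>EXECUTIONGROUP1;EXECUTIONGROUP2</value>
  <helpText/>
  <type>0</type>
  <encrypted>false</encrypted>
  <external>false</external>
  <resource>
    <id>devServer.ace.DEV_NODE</id>
    <value>EXECUTIONGROUP1</value>
    <external>false</external>
  </resource>
  <resource>
    <id>prodServer.ace.PROD_NODE</id>
    <value>EXECUTIONGROUP1;EXECUTIONGROUP2</value>
    <external>false</external>
  </resource>
</entry>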
You can place BAR files directly into the EXECUTIONGROUP directory prior to packaging the configuration and they will be deployed. Alternatively, you can place the BAR file in a centralised store and use a placeholder file in the execution group directory, as shown below:
Method 1 - Direct barfile deployment
topology
  <nodeName1>
    template.xml
    EXECUTIONGROUP1
      barfile1.bar
    EXECUTIONGROUP2
      barfile2.bar
Method 2 - Using a central bar store
topology
  <nodeName1>
    template.xml
    EXECUTIONGROUP1
      barfile1.bar.placeholder
    EXECUTIONGROUP2
      barfile1.bar.placeholder
      barfile2.bar.placeholder
    ...
  DEFAULTBARSTORE
    barfile1.bar
    barfile2.bar
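A minimal shell sketch of setting up the central store layout above (paths are illustrative, and "nodeName1" stands for your integration node directory):
cd ACE_DEPLOY/ace/topology
# Real BAR files live only in the central store...
cp /path/to/build/output/barfile1.bar DEFAULTBARSTORE/
cp /path/to/build/output/barfile2.bar DEFAULTBARSTORE/
# ...while each execution group directory holds an empty placeholder file
# named after the BAR file that integration server should receive.
touch nodeName1/EXECUTIONGROUP1/barfile1.bar.placeholder
touch nodeName1/EXECUTIONGROUP2/barfile1.bar.placeholder
touch nodeName1/EXECUTIONGROUP2/barfile2.bar.placeholder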
Similarly, you can place BAR override files in the integration server directory and they will be processed. Again, you can use the alternative method of placing the overrides in the centralised bar store and using placeholders in the integration server directory, as shown below:
Method 1 - Direct barfile deployment with overrides
topology
  <nodeName1>
    template.xml
    EXECUTIONGROUP1
      barfile1.bar
      barfile1.ovr
    EXECUTIONGROUP2
      barfile2.bar
      barfile2.ovr
Method 2 - Using a central bar store
topology
  <nodeName1>
    template.xml
    EXECUTIONGROUP1
      barfile1.bar.placeholder
      barfile1.ovr.placeholder
    EXECUTIONGROUP2
      barfile1.bar.placeholder
      barfile2.bar.placeholder
      barfile1.ovr.placeholder
    ...
  DEFAULTBARSTORE
    barfile1.bar
    barfile2.bar
    barfile1.ovr
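A BAR override file is a plain properties file of name=value pairs that is applied to the BAR file at deployment time. The flow, node, and property names below are purely illustrative; note that, because a data dictionary can replace parameters in any non-binary file in the deployment package, the override values themselves can contain dictionary tokens. For example, barfile1.ovr might contain:
# Hypothetical override entries -- flow, node and property names are examples only.
# Each line overrides a configurable property packaged in the BAR file.
SampleFlow#HTTP Input.URLSpecifier=/sample/@@URL_SUFFIX@@
SampleFlow#Insert.dataSource=@@DATABASE_DSN@@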
Deployment
Once you are comfortable with your structure (i.e. topology), deployment is simply a matter of creating a Deployment Package and then deploying it to the target. As a best practice, we advise that your build process clears out the bar store directory ("DEFAULTBARSTORE" in this example) and places the BAR files to be deployed in there, without checking them into source control, prior to packaging; a minimal sketch of such a build step follows the package-creation steps below. As a reminder, to create a Deployment Package perform the following steps:
- Navigate to the Packages tab in the Project configuration panel.
- Click on the "Create Package" button to create an auto incremental version named package.
- Alternatively, use the "Add Package Wizard" button to create a Deployment Package for this project. Note the name must contain the Search String set in the Artifact Repository tab.
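As referenced above, here is a minimal sketch of a build step that refreshes the bar store before packaging; paths and names are hypothetical and should be adapted to your own build process:
# Illustrative build step: stage freshly built BAR files into the bar store
# immediately before the Deployment Package is created.
BARSTORE=ACE_DEPLOY/ace/topology/DEFAULTBARSTORE
rm -f "$BARSTORE"/*.bar                # clear out previously staged BAR files
cp build/output/*.bar "$BARSTORE"/     # copy in the BAR files to be deployed
# The staged BAR files are not checked into source control (see the best
# practice note above).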
To deploy the package:
- Navigate to Jobs > New Job Plan.
- Double click on the "(empty)" blue box.
- Select the Project, the Target and the Deployment Package created previously, and click on "Apply".
- Click on the "Run" button and accept the confirmation dialog.
- Progress can be viewed from the Running Jobs screen; further information can be displayed by clicking on the line entry in the table, and logs can be viewed by selecting the Console tab.