Notes:
AMQSBCG q_name qmgr_name displays the contents of the queue q_name defined in queue manager qmgr_name.
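For example, using the queue and queue manager names from the worksheet later in this chapter (substitute your own names):

    amqsbcg WINNT.LOCALQ WINNT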
Alternatively, you can use the message browser in the WebSphere MQ Explorer.
runmqlsr -t tcp
The listener enables receiver channels to start automatically in response to a start request from an inbound sender channel.
runmqchl -c channel.name
This command runs the named sender channel.
You can create a default configuration by using either the First Steps application or the WebSphere MQ Postcard application to guide you through the process. For more information, see the WebSphere MQ System Administration Guide.
You can create and start a queue manager from the WebSphere MQ Explorer or from the command prompt.
If you choose the command prompt:
crtmqm -u dlqname -q winnt
where:
-q indicates that this is to become the default queue manager
winnt is the name of the queue manager
-u dlqname specifies the name of the dead-letter (undelivered message) queue
This command creates a queue manager and a set of default objects.
strmqm winnt
where winnt is the name given to the queue manager when it was created.
The following sections detail the configuration to be performed on the Windows queue manager to implement the channel described in Figure 32.
In each case the MQSC command is shown. Either start runmqsc from a command prompt and enter each command in turn, or build the commands into a command file.
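If you put the commands in a file, you can run them against the queue manager in a single step by redirecting the file into runmqsc, for example (a sketch; the file names defwin.tst and defwin.out are illustrative):

    runmqsc winnt < defwin.tst > defwin.out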
Examples are given for connecting WebSphere MQ for Windows and MQSeries for OS/2 Warp. If you wish to connect to another WebSphere MQ product, use the appropriate set of values from the table in place of those for OS/2.
Table 18. Configuration worksheet for WebSphere MQ for Windows
| ID | Parameter Name | Reference | Example Used | User Value |
|---|---|---|---|---|
| **Definition for local node** | | | | |
| (A) | Queue Manager Name | | WINNT | |
| (B) | Local queue name | | WINNT.LOCALQ | |
| **Connection to MQSeries for OS/2 Warp** | The values in this section of the table must match those used in Table 16, as indicated. | | | |
| (C) | Remote queue manager name | (A) | OS2 | |
| (D) | Remote queue name | | OS2.REMOTEQ | |
| (E) | Queue name at remote system | (B) | OS2.LOCALQ | |
| (F) | Transmission queue name | | OS2 | |
| (G) | Sender (SNA) channel name | | WINNT.OS2.SNA | |
| (H) | Sender (TCP/IP) channel name | | WINNT.OS2.TCP | |
| (I) | Receiver (SNA) channel name | (G) | OS2.WINNT.SNA | |
| (J) | Receiver (TCP) channel name | (H) | OS2.WINNT.TCP | |
| (K) | Sender (NetBIOS) channel name | | WINNT.OS2.NET | |
| (L) | Sender (SPX) channel name | | WINNT.OS2.SPX | |
| (M) | Receiver (NetBIOS) channel name | (K) | OS2.WINNT.NET | |
| (N) | Receiver (SPX) channel name | (L) | OS2.WINNT.SPX | |
| **Connection to WebSphere MQ for AIX** | The values in this section of the table must match those used in Table 22, as indicated. | | | |
| (C) | Remote queue manager name | (A) | AIX | |
| (D) | Remote queue name | | AIX.REMOTEQ | |
| (E) | Queue name at remote system | (B) | AIX.LOCALQ | |
| (F) | Transmission queue name | | AIX | |
| (G) | Sender (SNA) channel name | | WINNT.AIX.SNA | |
| (H) | Sender (TCP) channel name | | WINNT.AIX.TCP | |
| (I) | Receiver (SNA) channel name | (G) | AIX.WINNT.SNA | |
| (J) | Receiver (TCP) channel name | (H) | AIX.WINNT.TCP | |
| **Connection to MQSeries for Compaq Tru64 UNIX** | The values in this section of the table must match those used in Table 23, as indicated. | | | |
| (C) | Remote queue manager name | (A) | DECUX | |
| (D) | Remote queue name | | DECUX.REMOTEQ | |
| (E) | Queue name at remote system | (B) | DECUX.LOCALQ | |
| (F) | Transmission queue name | | DECUX | |
| (H) | Sender (TCP) channel name | | DECUX.WINNT.TCP | |
| (J) | Receiver (TCP) channel name | (H) | WINNT.DECUX.TCP | |
| **Connection to WebSphere MQ for HP-UX** | The values in this section of the table must match those used in Table 25, as indicated. | | | |
| (C) | Remote queue manager name | (A) | HPUX | |
| (D) | Remote queue name | | HPUX.REMOTEQ | |
| (E) | Queue name at remote system | (B) | HPUX.LOCALQ | |
| (F) | Transmission queue name | | HPUX | |
| (G) | Sender (SNA) channel name | | WINNT.HPUX.SNA | |
| (H) | Sender (TCP) channel name | | WINNT.HPUX.TCP | |
| (I) | Receiver (SNA) channel name | (G) | HPUX.WINNT.SNA | |
| (J) | Receiver (TCP/IP) channel name | (H) | HPUX.WINNT.TCP | |
| **Connection to MQSeries for AT&T GIS UNIX** | The values in this section of the table must match those used in Table 27, as indicated. | | | |
| (C) | Remote queue manager name | (A) | GIS | |
| (D) | Remote queue name | | GIS.REMOTEQ | |
| (E) | Queue name at remote system | (B) | GIS.LOCALQ | |
| (F) | Transmission queue name | | GIS | |
| (G) | Sender (SNA) channel name | | WINNT.GIS.SNA | |
| (H) | Sender (TCP/IP) channel name | | WINNT.GIS.TCP | |
| (I) | Receiver (SNA) channel name | (G) | GIS.WINNT.SNA | |
| (J) | Receiver (TCP/IP) channel name | (H) | GIS.WINNT.TCP | |
| **Connection to WebSphere MQ for Solaris** | The values in this section of the table must match those used in Table 30, as indicated. | | | |
| (C) | Remote queue manager name | | SOLARIS | |
| (D) | Remote queue name | | SOLARIS.REMOTEQ | |
| (E) | Queue name at remote system | (B) | SOLARIS.LOCALQ | |
| (F) | Transmission queue name | | SOLARIS | |
| (G) | Sender (SNA) channel name | | WINNT.SOLARIS.SNA | |
| (H) | Sender (TCP) channel name | | WINNT.SOLARIS.TCP | |
| (I) | Receiver (SNA) channel name | (G) | SOLARIS.WINNT.SNA | |
| (J) | Receiver (TCP) channel name | (H) | SOLARIS.WINNT.TCP | |
| **Connection to WebSphere MQ for iSeries** | The values in this section of the table must match those used in Table 49, as indicated. | | | |
| (C) | Remote queue manager name | | AS400 | |
| (D) | Remote queue name | | AS400.REMOTEQ | |
| (E) | Queue name at remote system | (B) | AS400.LOCALQ | |
| (F) | Transmission queue name | | AS400 | |
| (G) | Sender (SNA) channel name | | WINNT.AS400.SNA | |
| (H) | Sender (TCP) channel name | | WINNT.AS400.TCP | |
| (I) | Receiver (SNA) channel name | (G) | AS400.WINNT.SNA | |
| (J) | Receiver (TCP) channel name | (H) | AS400.WINNT.TCP | |
| **Connection to WebSphere MQ for z/OS without CICS** | The values in this section of the table must match those used in Table 34, as indicated. | | | |
| (C) | Remote queue manager name | | MVS | |
| (D) | Remote queue name | | MVS.REMOTEQ | |
| (E) | Queue name at remote system | (B) | MVS.LOCALQ | |
| (F) | Transmission queue name | | MVS | |
| (G) | Sender (SNA) channel name | | WINNT.MVS.SNA | |
| (H) | Sender (TCP) channel name | | WINNT.MVS.TCP | |
| (I) | Receiver (SNA) channel name | (G) | MVS.WINNT.SNA | |
| (J) | Receiver (TCP/IP) channel name | (H) | MVS.WINNT.TCP | |
| **Connection to WebSphere MQ for z/OS using queue-sharing groups** | The values in this section of the table must match those used in Table 43, as indicated. | | | |
| (C) | Remote queue manager name | | QSG | |
| (D) | Remote queue name | | QSG.REMOTEQ | |
| (E) | Queue name at remote system | (B) | QSG.SHAREDQ | |
| (F) | Transmission queue name | | QSG | |
| (G) | Sender (SNA) channel name | | WINNT.QSG.SNA | |
| (H) | Sender (TCP) channel name | | WINNT.QSG.TCP | |
| (I) | Receiver (SNA) channel name | (G) | QSG.WINNT.SNA | |
| (J) | Receiver (TCP/IP) channel name | (H) | QSG.WINNT.TCP | |
| **Connection to MQSeries for VSE/ESA** | The values in this section of the table must match those used in Table 51, as indicated. | | | |
| (C) | Remote queue manager name | | VSE | |
| (D) | Remote queue name | | VSE.REMOTEQ | |
| (E) | Queue name at remote system | (B) | VSE.LOCALQ | |
| (F) | Transmission queue name | | VSE | |
| (G) | Sender channel name | | WINNT.VSE.SNA | |
| (I) | Receiver channel name | (G) | VSE.WINNT.SNA | |
Sender-channel definitions using SNA:

    def ql (OS2) +                              (F)
        usage(xmitq) +
        replace

    def qr (OS2.REMOTEQ) +                      (D)
        rname(OS2.LOCALQ) +                     (E)
        rqmname(OS2) +                          (C)
        xmitq(OS2) +                            (F)
        replace

    def chl (WINNT.OS2.SNA) chltype(sdr) +      (G)
        trptype(lu62) +
        conname(OS2CPIC) +                      (18)
        xmitq(OS2) +                            (F)
        replace

Receiver-channel definitions using SNA:

    def ql (WINNT.LOCALQ) replace               (B)

    def chl (OS2.WINNT.SNA) chltype(rcvr) +     (I)
        trptype(lu62) +
        replace
Sender-channel definitions using TCP/IP:

    def ql (OS2) +                              (F)
        usage(xmitq) +
        replace

    def qr (OS2.REMOTEQ) +                      (D)
        rname(OS2.LOCALQ) +                     (E)
        rqmname(OS2) +                          (C)
        xmitq(OS2) +                            (F)
        replace

    def chl (WINNT.OS2.TCP) chltype(sdr) +      (H)
        trptype(tcp) +
        conname(remote_tcpip_hostname) +
        xmitq(OS2) +                            (F)
        replace

Receiver-channel definitions using TCP/IP:

    def ql (WINNT.LOCALQ) replace               (B)

    def chl (OS2.WINNT.TCP) chltype(rcvr) +     (J)
        trptype(tcp) +
        replace
Sender-channel definitions using NetBIOS:

    def ql (OS2) +                              (F)
        usage(xmitq) +
        replace

    def qr (OS2.REMOTEQ) +                      (D)
        rname(OS2.LOCALQ) +                     (E)
        rqmname(OS2) +                          (C)
        xmitq(OS2) +                            (F)
        replace

    def chl (WINNT.OS2.NET) chltype(sdr) +      (K)
        trptype(netbios) +
        conname(remote_system_NetBIOS_name) +
        xmitq(OS2) +                            (F)
        replace

Receiver-channel definitions using NetBIOS:

    def ql (WINNT.LOCALQ) replace               (B)

    def chl (OS2.WINNT.NET) chltype(rcvr) +     (M)
        trptype(netbios) +
        replace
Sender-channel definitions using SPX:

    def ql (OS2) +                              (F)
        usage(xmitq) +
        replace

    def qr (OS2.REMOTEQ) +                      (D)
        rname(OS2.LOCALQ) +                     (E)
        rqmname(OS2) +                          (C)
        xmitq(OS2) +                            (F)
        replace

    def chl (WINNT.OS2.SPX) chltype(sdr) +      (L)
        trptype(spx) +
        conname('network.node(socket)') +
        xmitq(OS2) +                            (F)
        replace

Receiver-channel definitions using SPX:

    def ql (WINNT.LOCALQ) replace               (B)

    def chl (OS2.WINNT.SPX) chltype(rcvr) +     (N)
        trptype(spx) +
        replace
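When the sender channel, its transmission queue, and the matching receiver channel on the remote queue manager are all defined, you can test and start the channel from runmqsc, for example (a sketch using the TCP/IP channel name from the worksheet):

    ping chl (WINNT.OS2.TCP)
    start chl (WINNT.OS2.TCP)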
WebSphere MQ for Windows allows you to automate the startup of a queue manager and its channel initiator, channels, listeners, and command servers. Use the IBM WebSphere MQ Services snap-in to define the services for the queue manager. When you have successfully completed testing of your communications setup, set the relevant services to automatic within the snap-in. The supplied WebSphere MQ service can then start them automatically when the system is started.
For more information, see the WebSphere MQ System Administration Guide.
WebSphere MQ for Windows provides the flexibility to run sender channels as Windows NT processes or Windows NT threads. This is specified in the MCATYPE parameter on the sender channel definition. Each installation should select the type appropriate for its application and configuration. Factors affecting this choice are discussed below.
Most installations choose to run their sender channels as threads, because this reduces the virtual and real memory needed to support a large number of concurrent channel connections. When the WebSphere MQ listener process (started with the RUNMQLSR command) exhausts the available private memory, an additional listener process must be started to support more channel connections. When each channel runs as a process, additional processes are started automatically, avoiding the out-of-memory condition.
If all channels are run as threads under one WebSphere MQ listener, a failure of the listener for any reason will cause all channel connections to be temporarily lost. This can be prevented by balancing the threaded channel connections across two or more listener processes, thus enabling other connections to keep running. If each sender channel is run as a separate process, the failure of the listener for that process will affect only that specific channel connection.
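For example, to run a particular sender channel as a separate process rather than as a thread, you can set MCATYPE on the channel definition. This sketch reuses the TCP/IP sender channel from the worksheet; the connection name is a placeholder:

    def chl (WINNT.OS2.TCP) chltype(sdr) +
        trptype(tcp) +
        conname(remote_tcpip_hostname) +
        xmitq(OS2) +
        mcatype(process) +
        replace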
A NetBIOS connection needs a separate process for the Message Channel
Agent. Therefore, before you can issue a START CHANNEL command, you
must start the channel initiator, or you may start a channel using the
RUNMQCHL command.
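For example, a sketch using the default initiation queue SYSTEM.CHANNEL.INITQ and the NetBIOS channel name from the worksheet:

    runmqchi -q SYSTEM.CHANNEL.INITQ -m WINNT

Alternatively, run the channel directly in its own process:

    runmqchl -c WINNT.OS2.NET -m WINNT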
Multiple thread support -- pipelining
You can optionally allow a message channel agent (MCA) to transfer messages using multiple threads. This process, called pipelining, enables the MCA to transfer messages more efficiently, with fewer wait states, which improves channel performance. Each MCA is limited to a maximum of two threads.
You control pipelining with the PipeLineLength parameter in the
qm.ini file. This parameter is added to the CHANNELS
stanza:
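For example, a minimal qm.ini fragment (the value 2 enables pipelining; because each MCA uses at most two threads, larger values give no further benefit):

    CHANNELS:
       PipeLineLength=2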
With WebSphere MQ for Windows, use the WebSphere MQ Services snap-in to set the PipeLineLength parameter in the registry. Refer to the WebSphere MQ System Administration Guide for a complete description of the CHANNELS stanza.
Notes:
1. When you use pipelining, the queue managers at both ends of the channel must be configured to have a PipeLineLength greater than 1.
2. Channel exit considerations: pipelining can cause some exit programs to fail, because exits might not be called serially and might be called alternately from different threads. Check the design of your exit programs before you use pipelining: exits must be reentrant at all stages of their execution, and they must not reuse the same MQI handle when the exit is invoked from different threads.
Consider a message exit that opens a queue and uses its handle for MQPUT calls on all subsequent invocations of the exit. This fails in pipelining mode because the exit is called from different threads. To avoid this failure, keep a queue handle for each thread and check the thread identifier each time the exit is invoked.