Any trigger monitor you use will open the initiation queue with an extended get wait time.
Deciding between triggers and a listener model is an application design decision that should be based on expected message traffic patterns and application considerations.
Will your messages come in as clumps, or as a steady stream of individual messages? Will they arrive in large batches or small ones? Is the spacing between messages likely to be large or small? Is there a high overhead for starting a new process from a trigger? Do you need to maintain persistent state (such as a database connection) between invocations of your program? Does your program make heavy use of system resources that other processes also need?
Triggering adds overhead, since trigger messages are PUT and GET in addition to the application data messages.
For frequent messages I would generally recommend the model with an outstanding GET with WAIT. You might code the WAIT for, say, 60 seconds before checking for any request to close your application down (and don't forget to use FAIL_IF_QUIESCING as well). If the application is not closing down, simply repeat the GET with WAIT.
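A minimal sketch of that loop, simulated with Python's standard `queue` module rather than real MQI calls (the `timeout` stands in for the MQGET WaitInterval, and the shutdown event stands in for the close-down check that FAIL_IF_QUIESCING supports; real MQ code would use MQGMO_WAIT with a 60000 ms WaitInterval):

```python
import queue
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def serve(requests: queue.Queue, shutdown: threading.Event) -> list:
    """Repeat a blocking get with a wait interval, re-checking for a
    close-down request each time the wait expires."""
    handled = []
    while not shutdown.is_set():
        try:
            # Stand-in for MQGET with WAIT; real code would use a
            # 60000 ms WaitInterval (a short timeout keeps the demo fast).
            msg = requests.get(timeout=0.1)
        except queue.Empty:
            continue  # wait expired with no message; loop and wait again
        handled.append(msg)
    return handled

requests = queue.Queue()
shutdown = threading.Event()
for i in range(3):
    requests.put(f"request-{i}")

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(serve, requests, shutdown)
    time.sleep(0.3)    # let the server drain the queue
    shutdown.set()     # simulate the close-down request
    handled = future.result()

print(handled)  # ['request-0', 'request-1', 'request-2']
```

The key shape is that the wait never blocks forever: every expiry gives the application a chance to notice a shutdown request before issuing the next GET.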
I try to design all such server applications so that multiple instances can run and compete for incoming messages from the same queue - this provides a simple way to scale the application, load-balance, fail over, or all three.
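The competing-instances idea can be illustrated with the same local simulation (in real MQ, each instance simply issues MQGET against the same shared queue, and the queue manager delivers each message to exactly one getter):

```python
import queue
import threading
import time

def instance(shared: queue.Queue, stop: threading.Event, out: list) -> None:
    # Each running instance competes for messages on the same queue;
    # whichever instance gets a message processes it, which gives
    # load balancing and failover without extra design work.
    while not stop.is_set():
        try:
            out.append(shared.get(timeout=0.05))
        except queue.Empty:
            continue

shared = queue.Queue()
stop = threading.Event()
results = [[] for _ in range(3)]           # one result list per instance
threads = [threading.Thread(target=instance, args=(shared, stop, out))
           for out in results]
for t in threads:
    t.start()

for i in range(30):
    shared.put(i)

time.sleep(0.3)
stop.set()
for t in threads:
    t.join()

# Every message is handled exactly once, across all instances.
handled = sorted(m for out in results for m in out)
print(handled == list(range(30)))  # True
```

Scaling up is then just starting another instance against the same queue; losing one instance just means the survivors pick up its share of the traffic.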
You might consider why you are using many queues - if they all end up feeding the same application, why not use a single request queue? If you need different reply queues, the name of the reply queue can be passed in the "reply-to-queue" field of the incoming message.
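A sketch of that request/reply pattern, again simulated locally (the `queues` dict and queue names here are hypothetical stand-ins for queue-manager objects, and the `reply_to` key plays the role of the reply-to-queue field in the message descriptor):

```python
import queue

# Hypothetical stand-ins for queues that the queue manager would own.
queues = {
    "REQUEST.Q": queue.Queue(),   # single shared request queue
    "REPLY.A": queue.Queue(),     # each client owns its reply queue
    "REPLY.B": queue.Queue(),
}

def put_request(payload: str, reply_to: str) -> None:
    # The client names its own reply queue inside the message,
    # as with the reply-to-queue field of the message descriptor.
    queues["REQUEST.Q"].put({"payload": payload, "reply_to": reply_to})

def serve_one() -> None:
    msg = queues["REQUEST.Q"].get_nowait()
    # The server routes the reply to whichever queue the message names,
    # so one request queue can serve many different clients.
    queues[msg["reply_to"]].put(msg["payload"].upper())

put_request("hello", "REPLY.A")
put_request("world", "REPLY.B")
serve_one()
serve_one()

reply_a = queues["REPLY.A"].get_nowait()
reply_b = queues["REPLY.B"].get_nowait()
print(reply_a, reply_b)  # HELLO WORLD
```

The server never needs to know its clients in advance; each message carries enough routing information for its own reply.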