MQ Configuration for Large Message transfer

liviur
PostPosted: Thu Dec 15, 2005 9:07 am    Post subject: MQ Configuration for Large Message transfer

Hello,

We're developing software to transfer large files via MQ (5.3) between multiple platforms in our enterprise. Our current design is to split the file contents into multiple chunks, which we put into MQ messages. The environment is Win2K, MQ 5.3 and .NET (C#, using AMQMDNET.DLL).

Everything looks good for files up to 650 MB, after which slowdowns begin.
It doesn't matter how large each individual message is (it's configurable and we've tried from 16K to 10MB); the result is the same: the messages making up the first 650 MB are each written in under 1 second per message. Messages put after the 650 MB point has been reached take increasingly longer to put on the queue (5, 10, 15+ seconds).

Configuration: 62 primary log files, 1 secondary log file, LogFilePages = 16384, LogWriteIntegrity=TripleWrite (default), LogBufferPages = 512.
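
For reference, here are the same log attributes expressed as a qm.ini-style Log stanza (on Windows 5.3 they actually live in the registry rather than qm.ini, and the path shown is only illustrative):

Code:
Log:
   LogPrimaryFiles=62
   LogSecondaryFiles=1
   LogFilePages=16384
   LogType=CIRCULAR
   LogBufferPages=512
   LogWriteIntegrity=TripleWrite
   LogPath=C:\Program Files\IBM\WebSphere MQ\log\QM1\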

Messages are put under Synchpoint and participate in a COM+ transaction.

I have a workaround, but it would require a redesign: use a status queue or an entry in the DB with the file offset, and commit sooner (prior to reaching the 650 MB limit).
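
Roughly what I have in mind, sketched against the MQ syncpoint directly rather than the COM+ transaction we actually use (the queue manager name, queue names, chunk size and commit interval are made up for the example):

Code:
using System;
using System.IO;
using IBM.WMQ;   // amqmdnet.dll

class ChunkedFileSender
{
    // Sketch only: put a large file to a data queue in fixed-size chunks,
    // committing every few chunks and recording the committed offset on a
    // status queue so a restart could resume from the last commit point.
    static void Send(string fileName)
    {
        const int chunkSize    = 5 * 1024 * 1024;  // 5 MB chunks
        const int chunksPerUow = 20;               // keep each UOW around 100 MB

        MQQueueManager qmgr = new MQQueueManager("QM1");
        MQQueue dataQ   = qmgr.AccessQueue("DATA.QUEUE",
                              MQC.MQOO_OUTPUT | MQC.MQOO_FAIL_IF_QUIESCING);
        MQQueue statusQ = qmgr.AccessQueue("STATUS.QUEUE",
                              MQC.MQOO_OUTPUT | MQC.MQOO_FAIL_IF_QUIESCING);

        MQPutMessageOptions pmo = new MQPutMessageOptions();
        pmo.Options = MQC.MQPMO_SYNCPOINT;

        using (FileStream fs = File.OpenRead(fileName))
        {
            byte[] buffer = new byte[chunkSize];
            long offset = 0;
            int chunksInUow = 0;
            int read;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                byte[] payload = new byte[read];
                Array.Copy(buffer, payload, read);

                MQMessage msg = new MQMessage();
                msg.Persistence = MQC.MQPER_PERSISTENT;
                msg.Write(payload);
                dataQ.Put(msg, pmo);

                offset += read;
                chunksInUow++;

                // Commit well before the unit of work gets anywhere near the
                // log capacity, and record how far we got.
                if (chunksInUow == chunksPerUow)
                {
                    MQMessage status = new MQMessage();
                    status.Persistence = MQC.MQPER_PERSISTENT;
                    status.WriteString(fileName + ":" + offset);
                    statusQ.Put(status, pmo);
                    qmgr.Commit();
                    chunksInUow = 0;
                }
            }
            if (chunksInUow > 0)
                qmgr.Commit();
        }

        dataQ.Close();
        statusQ.Close();
        qmgr.Disconnect();
    }
}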

My question: is there a configuration change I could make or have we hit a known limit?

Please help! Thank you.

jefflowrey
PostPosted: Thu Dec 15, 2005 9:12 am    Post subject:

Are you using circular or linear logging?

How big are the chunks you are writing?

You realize that you can't open a transaction larger than your total log file size, right?
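
As a rough back-of-the-envelope with the settings you listed (log pages are 4 KB each), the ceiling works out to something like:

Code:
(62 primary + 1 secondary) log files
  x 16384 pages per file
  x 4 KB per page
  = 63 x 64 MB  (about 4 GB of active log)

and a single unit of work has to stay comfortably below that in practice.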
_________________
I am *not* the model of the modern major general.

Tibor
PostPosted: Thu Dec 15, 2005 9:41 am    Post subject:

Jeff,

I wouldn't blame MQ's transaction log. We have sent bigger files through the queues without any slowdown. I suspect COM+:
Quote:
Messages are put under Synchpoint and participate in a COM+ transaction.

liviur,

Otherwise, I don't understand why you need a transactional file copy... in this case I would prefer non-persistent messages.

Tibor

liviur
PostPosted: Thu Dec 15, 2005 9:59 am    Post subject:

Jeff & Tibor,

Thank you for your prompt replies.

1. We're using CIRCULAR logging but intend to change it to LINEAR in the near future (for all the benefits it brings).

2. The chunk size is configurable; I've tried starting at 16K and increasing up to 10 MB.

Here are the write times for 5 MB chunks and a large file (1 GB):
Chunk1 - 600 ms
Chunk2 - 600 ms
....
Chunk130 - 2 seconds
...
Chunk140 - 5 seconds
...
and so on... (up to 15+ seconds for the final chunks)

3. Yes, I realized that the hard way. I actually had to increase all the default log file settings for the queue manager - it was backing out the transaction.

4. I believe COM+ is not the issue here because it only coordinates the resource managers (MQ and eventually the DB in this case). It doesn't have to persist any data, and it doesn't time out or error out. After writing the last chunk, I can either Commit or Abort the transaction just fine.

5. Transactional file copy is both a business and a design requirement.
a) Non-persistent messages would not be available upon QM restart (if my understanding is correct).
b) We have to update the DB when we're done with the message, all in one transaction.

fjb_saper
PostPosted: Thu Dec 15, 2005 12:22 pm    Post subject:

Just as a question, and not to reinvent the wheel, but have you looked at PM4DATA?

liviur
PostPosted: Thu Dec 15, 2005 12:45 pm    Post subject:

Thank you for the reply. I wasn't aware of this product and will definitely investigate its capabilities and the possibility of using it for our purposes. From what I've read so far it looks pretty good.

Should we not be able to use it, however, I'd still be interested to know what's wrong with the current setup.

Again, thank you for pointing out this alternative to me.

PeterPotkay
PostPosted: Thu Dec 15, 2005 12:46 pm    Post subject:

Do any of the MQPUTs actually fail?

Also, maybe MQ isn't even the problem. Put a timestamp before and after the MQPUT and see how long these are taking. Maybe your app is bogging down for reasons completely unrelated to MQ.
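
Something as simple as this around the put would tell you (C# sketch; "queue", "msg" and "pmo" stand in for whatever your existing objects are called):

Code:
// Time a single put so you can tell whether the slowdown is in the
// MQPUT itself or somewhere else in the app.
DateTime before = DateTime.Now;
queue.Put(msg, pmo);
double ms = (DateTime.Now - before).TotalMilliseconds;
Console.WriteLine("MQPUT took " + ms + " ms");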

Quote:

but intend to change it to LINEAR in the near future (for all the benefits it brings).

A whooole 'nother topic. Exactly which benefit are you looking to utilize? (me and my buddy hopsala are in the minority on this subject) Regardless, it won't help this scenario in any way.
_________________
Peter Potkay
Keep Calm and MQ On

liviur
PostPosted: Fri Dec 16, 2005 6:07 am    Post subject:

Peter, Thank you for your reply.

None of the MQ PUT calls fails. In fact, if I wait long enough, my application finishes and commits all the messages to the queue. Sorry, I wasn't explicit enough: the times listed are based on individual timings of the PUT calls (get timestamp - put - get timestamp - compute & display the difference).

In terms of LL vs. CL: from what I've read, LL allows you to recover using previous logs (if archived) - still a problem, though, if the current log is corrupt. If corruption weren't a factor, I'd go with CL because it requires less management, reuses file space, and is faster. However, as you said, it's "a whooole 'nother topic".
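
One other thing I picked up from my reading: the log type is fixed when the queue manager is created, so going to LINEAR would apparently mean recreating the queue manager, something along these lines (name made up, our current log sizes kept):

Code:
crtmqm -ll -lf 16384 -lp 62 -ls 1 QM1

where -ll selects linear logging (-lc would be circular), -lf is LogFilePages, and -lp / -ls are the primary / secondary log file counts.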