MQSeries.net » WebSphere Message Broker (ACE) Support

File Input node is processing the same file multiple times
lokeshraghuram
PostPosted: Sun Sep 28, 2014 6:32 am    Post subject: File Input node is processing the same file multiple times

Novice

Joined: 10 Dec 2013
Posts: 14

Hello,

I am stuck on a weird issue with the File Input node. I place files in a path that is polled by the FileInput node. Files of normal size are picked up and processed as expected, but when I place a 27 MB file, it is processed by the same flow multiple times.

The issue was identified when the actual output count started exceeding the expected output count.

We tried making an entry in a database each time the flow picks up the file. Several entries are made while the file is in mqsitransitin, which means the same file is being processed multiple times.

I am using version 8. Please suggest a solution.
zpat
PostPosted: Sun Sep 28, 2014 11:06 am

Jedi Council

Joined: 19 May 2001
Posts: 5849
Location: UK

What do you do with the file? How many records are in it?

I would look at transactional rollback as a possibility - what's in the WMB error log?
lokeshraghuram
PostPosted: Sun Sep 28, 2014 11:52 am

Novice

Joined: 10 Dec 2013
Posts: 14

Thank you for the reply!

There are about 150,000 records, and every record is transformed to XML. I commit every message after creating it. Transactional rollback isn't preferred because of the number of messages that are created from one single file.

There are no errors in the WMB log.
zpat
PostPosted: Sun Sep 28, 2014 10:05 pm

Jedi Council

Joined: 19 May 2001
Posts: 5849
Location: UK

I meant transactional rollback as a possible cause of the problem!

How are you processing each record exactly?
vicentius
PostPosted: Sun Sep 28, 2014 11:46 pm

Apprentice

Joined: 01 Mar 2013
Posts: 28

How is the File Input node configured? Do you read the whole file in one go or do you do one record at a time?
In both cases, how do you propagate messages to the output?
lokeshraghuram
PostPosted: Tue Sep 30, 2014 7:01 pm

Novice

Joined: 10 Dec 2013
Posts: 14

Hello vicentius,

I am reading the file as a whole. Reading it in chunks is not possible because the records are not uniform. There are two types of record - one starts with H and the other starts with D - and the pattern looks like this:

H 123 abc xyz
D bbc 123 ffg
D ffg 334 tyu
D wwr iit 676
H 123 rt5 665
D 454 65 24
H 435 545 645
D 453 645 gfd
D gfd 534 63

I read the file as a whole, parse it and convert to MRM. Then I loop over each record and create one XML for every header plus all the detail records that follow it, until the next header record is found. Once the next header is found, I propagate the XML to MQOutput, delete the output structure, and create the next XML.

@zpat: The above description answers your question as well.
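
For illustration, a minimal ESQL sketch of the pattern described above. The element names (Record, Type, Header, Detail, Message) are hypothetical stand-ins for the real MRM model, and the wiring (a Compute terminal named 'out' feeding the MQOutput node) is assumed:

Code:
CREATE COMPUTE MODULE SplitByHeader
	CREATE FUNCTION Main() RETURNS BOOLEAN
	BEGIN
		-- Walk the repeating records in the parsed file
		DECLARE rec REFERENCE TO InputRoot.MRM.Record[1];
		DECLARE haveHeader BOOLEAN FALSE;
		CALL CopyMessageHeaders();
		WHILE LASTMOVE(rec) DO
			IF rec.Type = 'H' THEN
				IF haveHeader THEN
					-- Flush the previous header + details as one message
					PROPAGATE TO TERMINAL 'out' DELETE NONE;
					SET OutputRoot.XMLNSC = NULL; -- keep the headers, drop the payload
				END IF;
				SET OutputRoot.XMLNSC.Message.Header = rec;
				SET haveHeader = TRUE;
			ELSE
				SET OutputRoot.XMLNSC.Message.Detail[CARDINALITY(OutputRoot.XMLNSC.Message.Detail[]) + 1] = rec;
			END IF;
			MOVE rec NEXTSIBLING REPEAT TYPE NAME;
		END WHILE;
		IF haveHeader THEN
			-- Flush the final group
			PROPAGATE TO TERMINAL 'out' DELETE NONE;
		END IF;
		RETURN FALSE; -- everything was already propagated above
	END;

	CREATE PROCEDURE CopyMessageHeaders() BEGIN
		DECLARE I INTEGER 1;
		DECLARE J INTEGER;
		SET J = CARDINALITY(InputRoot.*[]);
		WHILE I < J DO
			SET OutputRoot.*[I] = InputRoot.*[I];
			SET I = I + 1;
		END WHILE;
	END;
END MODULE;

Note that with this approach the whole file is held as one message tree for the duration of the loop, so peak memory can be several times the file size.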
lokeshraghuram
PostPosted: Tue Sep 30, 2014 7:09 pm

Novice

Joined: 10 Dec 2013
Posts: 14

I tried the following things:

-> Removed the code that creates XMLs from the input. I just looped over each record and propagated outputs with just one field in OutputRoot, which should reduce the memory used by the flow. With this we were able to process files up to 50 MB; we haven't tried beyond that.

-> The initial flow tried to put the outputs on a queue on a different queue manager that is part of the cluster. I tried creating a local queue instead, and the 27 MB file that failed before has processed successfully.

I have gone ahead with the second option as it gives me a temporary fix, but I believe it is not a permanent or correct solution. I would like to know why my original flow is processing the file multiple times.
Esa
PostPosted: Tue Sep 30, 2014 11:49 pm

Grand Master

Joined: 22 May 2008
Posts: 1387
Location: Finland

lokeshraghuram wrote:
I would like to know why my original flow is processing the file multiple times


Run a test with your original setup. Write down the PID of the execution group before you start, and after the test check whether the PID is still the same. If the PID has changed, the execution group was restarted in the middle of processing (for example because it ran out of memory), and the FileInput node will have picked the file up again from mqsitransitin - which would explain the duplicates.

You need to take a look at the large message processing sample. That's an example solution where the memory footprint of input message parsing is small and independent of input file size.
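
The sample referred to here is built around a delete-as-you-go loop: take a mutable copy of the parsed data and delete each record from the copy once it has been propagated, so the tree shrinks as the loop advances. A rough sketch of that idea, again with hypothetical element names (Record, Message) and an assumed 'out' terminal:

Code:
CREATE COMPUTE MODULE SplitLargeFile
	CREATE FUNCTION Main() RETURNS BOOLEAN
	BEGIN
		-- InputRoot is read-only, so work on a copy we are allowed to prune
		SET Environment.Variables.Data = InputRoot.MRM;
		DECLARE rec REFERENCE TO Environment.Variables.Data.Record[1];
		DECLARE doomed REFERENCE TO Environment.Variables.Data;
		CALL CopyMessageHeaders();
		WHILE LASTMOVE(rec) DO
			SET OutputRoot.XMLNSC.Message.Record = rec;
			PROPAGATE TO TERMINAL 'out' DELETE NONE;
			SET OutputRoot.XMLNSC = NULL;
			-- Step off the record, then free it so memory stays bounded
			MOVE doomed TO rec;
			MOVE rec NEXTSIBLING REPEAT TYPE NAME;
			DELETE FIELD doomed;
		END WHILE;
		RETURN FALSE;
	END;

	CREATE PROCEDURE CopyMessageHeaders() BEGIN
		DECLARE I INTEGER 1;
		DECLARE J INTEGER;
		SET J = CARDINALITY(InputRoot.*[]);
		WHILE I < J DO
			SET OutputRoot.*[I] = InputRoot.*[I];
			SET I = I + 1;
		END WHILE;
	END;
END MODULE;

The actual product sample has more to it (and the initial copy still parses the whole file once), but the shrinking-tree loop is the part that keeps the memory footprint flat.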
Esa
PostPosted: Wed Oct 01, 2014 3:54 am

Grand Master

Joined: 22 May 2008
Posts: 1387
Location: Finland

lokeshraghuram wrote:
I read the file as a whole, parse it and convert to MRM.


What do you mean by "parse it and convert to MRM"? Doesn't sound right.

Even if the file contains records of different types, the records seem to be terminated with line breaks. If that is the case, you could process the file record by record by setting the File Input node's record detection to Delimited with the default Line End delimiter.
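
With Delimited record detection the flow runs once per line, so the grouping state has to live across invocations. A sketch of one way to do that, assuming a BLOB record domain, Additional instances set to 0 (SHARED variables would otherwise need ATOMIC guards), and the FileInput node's End of Data terminal wired to a second path that flushes the final group; all names are hypothetical:

Code:
-- Grouping state that survives between invocations of the flow
DECLARE grp SHARED ROW;

CREATE COMPUTE MODULE GroupRecords
	CREATE FUNCTION Main() RETURNS BOOLEAN
	BEGIN
		-- One invocation per line of the file
		DECLARE line CHARACTER CAST(InputRoot.BLOB.BLOB AS CHARACTER
				CCSID InputRoot.Properties.CodedCharSetId);
		IF SUBSTRING(line FROM 1 FOR 1) = 'H' AND CARDINALITY(grp.Line[]) > 0 THEN
			-- A new header arrived: emit the group collected so far
			CALL CopyMessageHeaders();
			FOR src AS grp.Line[] DO
				SET OutputRoot.XMLNSC.Message.Line[CARDINALITY(OutputRoot.XMLNSC.Message.Line[]) + 1] = src;
			END FOR;
			PROPAGATE TO TERMINAL 'out' DELETE NONE;
			-- Reset the accumulator
			WHILE CARDINALITY(grp.Line[]) > 0 DO
				DELETE FIELD grp.Line[1];
			END WHILE;
		END IF;
		SET grp.Line[CARDINALITY(grp.Line[]) + 1] = line;
		RETURN FALSE; -- ordinary records produce no output of their own
	END;

	CREATE PROCEDURE CopyMessageHeaders() BEGIN
		DECLARE I INTEGER 1;
		DECLARE J INTEGER;
		SET J = CARDINALITY(InputRoot.*[]);
		WHILE I < J DO
			SET OutputRoot.*[I] = InputRoot.*[I];
			SET I = I + 1;
		END WHILE;
	END;
END MODULE;

Because only one record and the current group are ever in memory, this keeps the footprint small regardless of file size.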
fjb_saper
PostPosted: Wed Oct 01, 2014 9:54 am

Grand High Poobah

Joined: 18 Nov 2003
Posts: 20697
Location: LI,NY

I did not see which version you are using.
The question comes up though: why are you using MRM and not DFDL?