MQSeries.net Forum Index » WebSphere Message Broker (ACE) Support » Read 1000 records from a csv file at a time

Post new topic  Reply to topic Goto page Previous  1, 2
 Read 1000 records from a csv file at a time « View previous topic :: View next topic » 
Author Message
lancelotlinc
Posted: Thu Jan 17, 2013 10:14 am

Jedi Knight

Joined: 22 Mar 2010
Posts: 4941
Location: Bloomington, IL USA

mqjeff wrote:
lancelotlinc wrote:
sumitha.mp wrote:
Reason for reading 1000 records is the computation needs to be done per 1000 records. If I'm going with record by record , I will have to read record by record till 1000 records are read and store each of those in memory. Is there a better way to do this ?


No.


YES.

Alter the message model to include a record structure that contains up to 1000 records.

Tell the FileInput node that *THAT* is a "record", rather than the structure that holds one record.



This is the same thing, expressed in different language. Yours is more precise.
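mqjeff's approach can be sketched roughly as below. This is a minimal, untested fragment, not a drop-in replacement: the batch/record/field names, the two string fields, and the terminator settings are assumptions that would have to match the actual file, and the FileInput node's record detection would need to be set to Parsed Record Sequence so that each batch, not each row, is propagated as one message.

```xml
<!-- Untested sketch: a wrapper element groups up to 1000 rows so the
     FileInput node treats each "batch" as one record.
     Element and field names here are placeholders. -->
<xsd:element ibmSchExtn:docRoot="true" name="model">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="batch" minOccurs="1" maxOccurs="unbounded"
                   dfdl:occursCountKind="implicit">
        <xsd:complexType>
          <xsd:sequence>
            <!-- up to 1000 rows per batch; minOccurs="1" allows the
                 final batch to be shorter than 1000 -->
            <xsd:element name="record" minOccurs="1" maxOccurs="1000"
                         dfdl:occursCountKind="implicit"
                         dfdl:terminator="%CR;%LF; %LF;">
              <xsd:complexType>
                <xsd:sequence dfdl:separator=",">
                  <xsd:element name="field1" type="xsd:string"/>
                  <xsd:element name="field2" type="xsd:string"/>
                </xsd:sequence>
              </xsd:complexType>
            </xsd:element>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
```

Note that because `record` has minOccurs="1", a file that does not contain a multiple of 1000 rows should still parse: the last batch simply holds whatever rows remain.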
_________________
http://leanpub.com/IIB_Tips_and_Tricks
Save $20: Coupon Code: MQSERIES_READER
Vitor
Posted: Thu Jan 17, 2013 10:19 am

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

nathanw wrote:
Out of curiosity, what happens if there are not 1000 records in a file?

Or not a multiple of 1000?



_________________
Honesty is the best policy.
Insanity is the best defence.
sumitha.mp
Posted: Fri Jan 18, 2013 8:21 am

Newbie

Joined: 21 Aug 2012
Posts: 9

Hi,
I created a DFDL schema to read 2 rows of data from a CSV file as a single record, but it does not seem to work.
Here is what I tried: I created the DFDL schema below, but I still see only one row being read at a time.

<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:csv="http://www.ibm.com/dfdl/CommaSeparatedFormat" xmlns:dfdl="http://www.ogf.org/dfdl/dfdl-1.0/" xmlns:ibmDfdlExtn="http://www.ibm.com/dfdl/extensions" xmlns:ibmSchExtn="http://www.ibm.com/schema/extensions" xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <xsd:import namespace="http://www.ibm.com/dfdl/CommaSeparatedFormat" schemaLocation="IBMdefined/CommaSeparatedFormat.xsd"/>
  <xsd:annotation>
    <xsd:appinfo source="http://www.ogf.org/dfdl/">
      <dfdl:format documentFinalTerminatorCanBeMissing="yes" encoding="{$dfdl:encoding}" escapeSchemeRef="csv:CSVEscapeScheme" ref="csv:CommaSeparatedFormat"/>
    </xsd:appinfo>
  </xsd:annotation>

  <xsd:element ibmSchExtn:docRoot="true" name="model">
    <xsd:complexType>
      <xsd:sequence dfdl:separator="">
        <xsd:element dfdl:occursCountKind="implicit" dfdl:terminator="%CR;%LF;%WSP*;" maxOccurs="unbounded" minOccurs="1" name="record">
          <xsd:complexType>
            <xsd:sequence dfdl:separatorPolicy="suppressedAtEndLax">
              <xsd:annotation>
                <xsd:appinfo source="http://www.ogf.org/dfdl/">
                  <dfdl:sequence/>
                </xsd:appinfo>
              </xsd:annotation>
              <xsd:element dfdl:occursCountKind="fixed" dfdl:terminator="" maxOccurs="2" minOccurs="2" name="field1">
                <xsd:complexType>
                  <xsd:sequence dfdl:terminator="%CR;%LF;%WSP*;">
                    <xsd:element name="field1" type="xsd:string"/>
                    <xsd:element name="field2" type="xsd:string"/>
                  </xsd:sequence>
                </xsd:complexType>
              </xsd:element>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>

</xsd:schema>
adubya
Posted: Fri Jan 18, 2013 8:51 am

Partisan

Joined: 25 Aug 2011
Posts: 377
Location: GU12, UK

Why don't you read each record individually and then use a Collector node to batch up 1000 records at a time? (Obviously include a timeout so that the last, possibly incomplete, batch is still released.) Once each batch has been collected, a downstream Compute node can perform the processing you require.
kimbert
Posted: Fri Jan 18, 2013 2:00 pm

Jedi Council

Joined: 29 Jul 2003
Posts: 5542
Location: Southampton

Quote:
I created a DFDL as below . But still I see that only one row is getting read at a time.
This is probably easy to fix.
1. Find out what the DFDL parser is doing, and why.
2. Use that information to adjust your DFDL schema.

The only way to do 1. is to look at the DFDL trace. It is available in the toolkit (in the DFDL Test perspective). You can also get DFDL trace from the deployed message flow by taking a debug-level user trace; that means using mqsichangetrace, mqsireadlog and mqsiformatlog. If you search this forum for 'mqsichangetrace' you will find instructions on how to switch user trace on.
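The user-trace cycle described above can be sketched roughly as the command sequence below. MYBROKER, default and MyFlow are placeholders for your own broker, execution group and flow names, and the exact options can vary by broker version, so check the command reference for your release.

```shell
mqsichangetrace MYBROKER -u -e default -f MyFlow -l debug -r   # switch debug-level user trace on (and reset the log)
# ...send a test file through the flow...
mqsireadlog MYBROKER -u -e default -f -o trace.xml             # dump the user trace log to XML
mqsiformatlog -i trace.xml -o trace.txt                        # format it into readable text
mqsichangetrace MYBROKER -u -e default -f MyFlow -l none       # switch user trace off again
```

The formatted trace.txt shows, parse step by parse step, which DFDL model element each byte of the input was matched against, which is exactly the information needed for step 1.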