MQSeries.net Forum Index » WebSphere Message Broker (ACE) Support » Java static data members leaking on re-deploy?

fvclaus
PostPosted: Tue Mar 11, 2014 3:50 am    Post subject: Java static data members leaking on re-deploy?

Newbie

Joined: 25 Feb 2014
Posts: 5

I am using a RollingFileAppender to log events with log4j in my message broker application. After starting the execution group that executes my application and re-deploying my application (i.e. after making some changes), the roll-over stops working.

A roll-over usually works like this, assuming that all roll-over slots are already occupied (-1.log.archive .... -3.log.archive exist):
1. Move old log files. -1.log.archive becomes -2.log.archive, -2.log.archive becomes -3.log.archive.
2. Copy the current log file. current.log becomes current-1.log.archive.
3. Delete the current log file.
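
The three steps can be sketched in plain Java (the slot count of 3 and the file names are taken from the example above; a rename stands in for the copy-then-delete of steps 2 and 3):

```java
import java.io.File;

// Sketch of the roll-over described above. Paths are illustrative.
public class Rollover {
    static void rollOver(String base, int slots) {
        // 1. Shift the old archives: -2 becomes -3, -1 becomes -2, ...
        new File(base + "-" + slots + ".log.archive").delete(); // oldest slot falls off
        for (int i = slots - 1; i >= 1; i--) {
            new File(base + "-" + i + ".log.archive")
                .renameTo(new File(base + "-" + (i + 1) + ".log.archive"));
        }
        // 2+3. Archive the current file. A rename covers "copy then delete" here;
        // this is the step that fails on Windows when another class loader
        // still holds an open handle on current.log.
        new File(base + ".log").renameTo(new File(base + "-1.log.archive"));
    }

    public static void main(String[] args) throws Exception {
        new File("current.log").createNewFile();
        rollOver("current", 3);
        System.out.println("archived: " + new File("current-1.log.archive").exists());
        System.out.println("current gone: " + !new File("current.log").exists());
    }
}
```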

The roll-over works fine if my execution group is "clean". "Clean" means that the eg is either new, or has been stopped and started, or the broker has been stopped and started. The roll-over stops working when I re-deploy the application into the same eg. To be more specific, the third step of the roll-over fails, because the file handle from the first deployment is still open (or at least I think so).

I have reconstructed the problem with a small application without log4j. It has a class with a static RandomAccessFile member and a JavaCompute node that accesses the file. The JavaCompute node gets triggered by a TimeoutNotification node upon deployment (so you don't have to put a message on a queue to trigger it). It writes a message to the file and exits. After exiting, the file handle is still live and referenced by the static field StaticFileHandle.file. After re-deploying the application, the JavaCompute node again writes to the file and then tries to delete it. The deletion fails, because the file handle from the first deployment is still open.
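
A minimal, self-contained sketch of that reconstruction, outside the broker (the class and field names follow the StaticFileHandle.file example; the broker-specific nodes are omitted):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Minimal reconstruction: the static field keeps the OS file handle alive
// for as long as the class (and its class loader) lives.
public class StaticFileHandle {
    // Opened once, never closed -- survives individual invocations.
    static RandomAccessFile file;

    static void write(String path, String text) throws IOException {
        if (file == null) {
            file = new RandomAccessFile(path, "rw");
        }
        file.seek(file.length());
        file.writeBytes(text + System.lineSeparator());
    }

    public static void main(String[] args) throws IOException {
        String path = "deleteMe.txt";
        write(path, "Hello World");
        // On Windows the open handle makes this delete fail; on POSIX
        // systems the directory entry goes away but the handle stays valid.
        boolean success = new File(path).delete();
        System.out.println("delete succeeded: " + success);
        System.out.println("handle still open: " + file.getChannel().isOpen());
    }
}
```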

Here is a link to the project: http://www.filedropper.com/staticdatamembers . The project consists of an application and a Java project.

Steps to reconstruct the problem:
0. Make sure that the file C:\deleteMe.txt does not exist.
1. Start the toolkit & import the project.
2. Add a breakpoint in line 29 of AttemptToDeleteFile_JavaCompute.
3. Create a new execution group and start the debugger.
4. Deploy the application.
5. The JavaCompute node should be triggered. It should write "Hello World" to the file C:\deleteMe.txt. The file gets automatically created.
6. Press the continue button in the debugger.
7. Redeploy the application into the same eg.
8. Repeat 5. In addition, the code will attempt to delete the file, and the attempt will fail. Check the boolean variable success in the debugger.

During my research, I found this thread: http://www.mqseries.net/phpBB2/viewtopic.php?t=62600&postdays=0&postorder=asc&start=10, where others describe the same problem. In the discussion, it is suggested to use a database or IAM3 instead of a flat file. The company I work for has a developer guide that specifies flat files as the reference solution for logging, so I cannot use a database or another approach.

My questions are: What happens to existing static variables after a bar file gets deployed? If the behaviour is intended, how can I go about solving my problem? I also looked into the AdministeredObjectListener interface that is described here: http://publib.boulder.ibm.com/infocenter/wmbhelp/v7r0m0/index.jsp?topic=%2Fcom.ibm.etools.mft.cmp.doc%2Fcom%2Fibm%2Fbroker%2Fconfig%2Fproxy%2FAdvancedAdministeredObjectListener.html, but I was not able to listen to a "before-deploy" event to close the file handles.

Thanks for reading up to this point and thanks for the feedback.
stoney
PostPosted: Tue Mar 11, 2014 4:33 am

Centurion

Joined: 03 Apr 2013
Posts: 140

When you deploy Java to an execution group, the class loader that contains your deployed classes is closed and then recreated. The static variables in your deployed classes are replaced by new copies in the new class loader.

You should add an onDelete() method to your Java Compute node(s). This method will be called when the node is deleted during a redeployment. In your onDelete() implementation, close any open file handles you may have.
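
A sketch of that cleanup pattern, with the broker-specific parts reduced to comments (in a real JavaCompute node the close would happen in an onDelete() override; here main() stands in for the node lifecycle):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Keep the static handle behind a class that knows how to close it, and call
// close() from the node's onDelete(). In a real node you would override
// com.ibm.broker.javacompute.MbJavaComputeNode.onDelete(); the broker calls
// it when the node is deleted during a redeploy.
public class StaticFileHandle {
    private static RandomAccessFile file;

    static synchronized RandomAccessFile get(String path) throws IOException {
        if (file == null) {
            file = new RandomAccessFile(path, "rw");
        }
        return file;
    }

    // Call this from onDelete() so the old class loader releases the handle.
    static synchronized void close() throws IOException {
        if (file != null) {
            file.close();
            file = null;
        }
    }

    public static void main(String[] args) throws IOException {
        RandomAccessFile f = get("deleteMe.txt");
        f.writeBytes("Hello World");
        close();                        // what onDelete() would do
        System.out.println("closed: " + !f.getChannel().isOpen());
    }
}
```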
fvclaus
PostPosted: Tue Mar 11, 2014 7:49 am

Newbie

Joined: 25 Feb 2014
Posts: 5

Thanks, stoney. That works perfectly.
mqjeff
PostPosted: Tue Mar 11, 2014 10:02 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

It only works perfectly for being a terrible idea.

Don't use log4j.

Use java.util.logging.

Don't use java.util.logging in Broker. Use all of the other features of the product that are built in, which do not require you to write your own code or significantly complicate your infrastructure and deployment practices.

But, you know, congratulations on successfully implementing a really poor idea.
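
For reference, a minimal sketch of the built-in alternative: java.util.logging ships with every JVM, and its FileHandler already does size-based rotation. File names and limits here are illustrative:

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

// Size-based rotating file logging with no extra jars.
public class JulExample {
    public static void main(String[] args) throws IOException {
        Logger log = Logger.getLogger("my.flow");
        // Rotate at ~1 MB, keep 3 files: current0.log .. current2.log.
        FileHandler handler = new FileHandler("current%g.log", 1_000_000, 3, true);
        handler.setFormatter(new SimpleFormatter());
        log.addHandler(handler);
        log.info("Hello World");
        handler.close();   // release the file handle -- the point of this thread
        System.out.println("logged");
    }
}
```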
kimbert
PostPosted: Tue Mar 11, 2014 2:05 pm

Jedi Council

Joined: 29 Jul 2003
Posts: 5542
Location: Southampton

Quote:
Don't use java.util.logging in Broker. Use all of the other features of the product that are built in, which do not require you to write your own code or significantly complicate your infrastructure and deployment practices.
That begs the questions
- which features?
- what complications?

Not saying that you're wrong - but the advice is more likely to be heeded when it is properly understood.
_________________
Before you criticize someone, walk a mile in their shoes. That way you're a mile away, and you have their shoes too.
mqjeff
PostPosted: Tue Mar 11, 2014 2:18 pm

Grand Master

Joined: 25 Jun 2008
Posts: 17447

kimbert wrote:
Quote:
Don't use java.util.logging in Broker. Use all of the other features of the product that are built in, which do not require you to write your own code or significantly complicate your infrastructure and deployment practices.
That begs the questions
- which features?
- what complications?

Not saying that you're wrong - but the advice is more likely to be heeded when it is properly understood.


I agree that I'm not being helpful.

The fact that fvclaus's first implementation left objects leaking in memory should have been a big red flag that the entire approach was a poor idea to begin with.

The fact that log4j requires additional jar files, and java.util.logging is BUILT INTO EVERY JVM, should be a big red flag that log4j is a more complicated solution.
smdavies99
PostPosted: Tue Mar 11, 2014 10:53 pm

Jedi Council

Joined: 10 Feb 2003
Posts: 6076
Location: Somewhere over the Rainbow this side of Never-never land.

mqjeff wrote:


The fact that log4j requires additional jar files, and java.util.logging is BUILT INTO EVERY JVM, should be a big red flag that log4j is a more complicated solution.


It is a sad fact that there are several generations of Java devs who only know Log4j. I've seen them start a new project, and the first thing they do is set up Log4j despite clear guidelines stating that it should not be used.

I wish Log4j would disappear up its own backside. Over the years it has given me nowt but trouble. The last thing Log4j advocates think about is what happens when they fill up a file system with GB of logs that no one will ever read 'just in case'.
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995

Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.
fvclaus
PostPosted: Wed Mar 12, 2014 1:13 am

Newbie

Joined: 25 Feb 2014
Posts: 5

mqjeff wrote:

Don't use java.util.logging in Broker. Use all of the other features of the product that are built in, which do not require you to write your own code or significantly complicate your infrastructure and deployment practices.


For our logging system we have the following requirements:
- Configure my log file destination on a cluster (bunch of applications) / application / execution group or message flow basis without duplicating the log file path. Nice to have: configure the path at run-time in the Explorer. Currently we are using a configurable service for that.
- Add a customized prefix to every logging statement. That customized prefix is usually the name of the message flow and the JavaCompute node that created the logging event.
- Add the message id (or any other unique identifier that is quite often calculated in code) to every log statement (log4j MDC). I know that there are monitoring events, but it is a pain to implement them and too much work to make sense of them during development. For me, it is much easier to take a look at a log file; most of the time you will know immediately what is going on. Not so with the monitoring events. The operations team prefers the log files over the monitoring events too.
- Most importantly: make logging easy. I want to extend some MbJavaCompute class and have access to a logger without configuring anything. I don't want to litter my message flow with tracing and/or logging nodes, because this is tedious and not as precise as a single log statement.
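
As an illustration of the prefix and message-id requirements above, here is a sketch using plain java.util.logging, with a ThreadLocal standing in for log4j's MDC (all names here are illustrative, not part of any broker or log4j API):

```java
import java.util.logging.*;

// A custom Formatter prepends the logger name (flow + node) and a per-thread
// message id to every statement.
public class PrefixFormatter extends Formatter {
    static final ThreadLocal<String> MSG_ID = new ThreadLocal<>();

    @Override
    public String format(LogRecord r) {
        return String.format("[%s] [msgId=%s] %s%n",
                r.getLoggerName(), MSG_ID.get(), r.getMessage());
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("MyFlow.MyJavaCompute");
        log.setUseParentHandlers(false);
        // Would be a FileHandler in a real flow; stdout keeps the sketch simple.
        StreamHandler h = new StreamHandler(System.out, new PrefixFormatter());
        log.addHandler(h);

        MSG_ID.set("abc123");            // e.g. set from the message at flow entry
        log.info("processing started");
        h.flush();
    }
}
```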

We must have a logging system with a high degree of flexibility, because we are using a Nagios log file adapter to create tickets with varying severity based on certain patterns that appear in the log file. The Nagios adapter reads the log file from the last position every n minutes. One requirement, for example, was that a roll-over is only allowed if there was no error in the last n minutes. Otherwise the log adapter might miss errors: nagios-check...error...error...roll-over...nagios-check (says ok - but it just missed two errors). That was very easy to implement with our custom solution.
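
The "roll over only when there was no recent error" rule could be sketched like this (class and method names are hypothetical, not a real log4j API):

```java
import java.time.Duration;
import java.time.Instant;

// Remember the last error time and veto the roll-over while the quiet
// window has not yet passed, so the log adapter cannot miss an error.
public class QuietWindowRolloverGuard {
    private final Duration quietWindow;
    private volatile Instant lastError = Instant.MIN;

    public QuietWindowRolloverGuard(Duration quietWindow) {
        this.quietWindow = quietWindow;
    }

    public void onErrorLogged() {
        lastError = Instant.now();
    }

    // The appender would ask this before archiving current.log.
    public boolean mayRollOver(Instant now) {
        return Duration.between(lastError, now).compareTo(quietWindow) >= 0;
    }

    public static void main(String[] args) {
        QuietWindowRolloverGuard guard = new QuietWindowRolloverGuard(Duration.ofMinutes(5));
        System.out.println("before any error: " + guard.mayRollOver(Instant.now()));
        guard.onErrorLogged();
        System.out.println("right after an error: " + guard.mayRollOver(Instant.now()));
    }
}
```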

We considered:
- Monitoring events: There is no way to create different tickets, because the severity of the event is not clear. The monitoring events are put in a database and are checked every n minutes. From there it is only possible to check whether an error happened in the last few minutes. One problem is that per n-minute interval it is only possible to create one ticket. What if two or more errors happened during that time? Another problem is that monitoring events are global to the broker and put in the same DB. How do you create tickets for separate application groups then?
- IAM3: Afaik this does not allow configuring the log path in a very flexible way. Also, this is a log4j solution; I don't see how it is better than my custom solution.

mqjeff wrote:
The fact that log4j requires additional jar files, and java.util.logging is BUILT INTO EVERY JVM, should be a big red flag that log4j is a more complicated solution.

I didn't know about java.util.logging and I will look into it now.


smdavies99 wrote:
It is a sad fact that there are several generations of Java devs who only know Log4j. I've seen them start a new project and the first thing they do is setup Log4j despite clear guidelines stating that it should not be used.

I have not done anything for years, because I just graduated a few months ago. Additionally, I have never heard of such a guideline. You make it sound like it is something as evil as, let's say, using public data members. Where are these guidelines? A quick Google search for "do not use log4j" or "log4j vs java logging" gave me a rather balanced discussion of the matter.
smdavies99
PostPosted: Wed Mar 12, 2014 1:49 am

Jedi Council

Joined: 10 Feb 2003
Posts: 6076
Location: Somewhere over the Rainbow this side of Never-never land.

fvclaus wrote:

I have not done anything for years, because I just graduated a few months ago. Additionally, I have never heard of such a guideline. You make it sound like it is something as evil as, let's say, using public data members. Where are these guidelines? A quick Google search for "do not use log4j" or "log4j vs java logging" gave me a rather balanced discussion of the matter.


Well done for graduating. It was a very long time ago for me (1975). The Java devs I was referring to knew nothing but Log4j. They'd been taught to use it, and that was it.
Much like a lot of Oracle sys admins on Unix/Linux seem to name their DB mount points /u/... simply because that was how it was done in the training environment.

IMHO, logging stuff to files is very old school. It is something I'd have done myself years ago; you can't easily do searches on sequential files. Nowadays, I log to DB tables. Simple SQL queries save me a lot of time finding problems. There are very many differing views on this topic, as I am sure you will find out if you stay around long enough.
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995

Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.
kimbert
PostPosted: Wed Mar 12, 2014 1:57 am

Jedi Council

Joined: 29 Jul 2003
Posts: 5542
Location: Southampton

One of the requirements was:
Quote:
Configure my log file destination on a cluster (bunch of applications) / application / execution group or message flow basis without duplicating the log file path.
That could be achieved by subscribing to monitoring events and using a single MQ client application (or message flow) that reads the messages and logs them to disk.
Quote:
Add a customized prefix to every logging statement. That customized prefix is usually the name of the message flow and the JavaCompute node that created the logging event.
Every monitoring event contains details of the broker/execution group/application/flow/terminal that emitted it. The logging app could easily extract the prefixes from that info.
Quote:
Most importantly: Make logging easy.
I think this fits the bill. What you log is configured via the monitoring profile, so can be adjusted without redeploying the applications/flows. Individual event sources can be enabled/disabled if necessary. Conditional logging can be achieved using the XPath predicate in the event definition.

I guess my point is that it doesn't have to be either monitoring events or file-based logging. You can get the best of both worlds.
_________________
Before you criticize someone, walk a mile in their shoes. That way you're a mile away, and you have their shoes too.