Monday, October 27, 2008

Sr Solutions Architect Position from RBC

Requisition Number: 97888

Position Title: Sr Solutions Architect

Position Type: Full-Time

Position Category: Information Technology

Relocation: No

Job Description:

The Solution Architecture group of RBC's Enterprise Architecture Services is responsible for end-to-end solution architecture of all key initiatives across RBC's Global Technology Operations. We currently have several exciting opportunities for Senior Solution Architects to lead development of the end-to-end technical solution architecture for high-complexity, high-risk IT initiatives (i.e. $10 million - $25 million) that meet sponsor/stakeholder needs.

Requirements:

Our minimum requirements to ensure your success in this role are:
- Experienced with all aspects of architecture including application, data, security and infrastructure
- At least 15 years of total IT experience, including 5 years developing solution architectures for large projects. Proven experience leading the architecture of a large program from initiation to implementation
- Strong leadership skills (a leader, not a follower), comfort managing large cross-functional technical teams, and the ability to define, influence and impact the technical direction
- Ability to quickly evaluate options, make decisions and execute within an intense high-tech environment
- Solid experience with a variety of technologies/approaches including J2EE, .NET, mainframe platforms, packaged applications, multi-platform applications, human workflow, EAI, SOA
- Experienced with program/projects involving complex integration of disparate types of technologies/platforms
- Solid understanding of RUP, UML and UML tools
- Knowledge of Enterprise Architecture frameworks: TOGAF, Zachman etc
- Experience with different architecture/design techniques (e.g. OO, Top-down, structured analysis, component-based design) and tools (e.g. RSM, RSA)

Additionally, and in anticipation of a high volume of applicants, consideration will be given to candidates who also possess the following:
- Project management experience
- Working knowledge of financial applications as well as host based systems

If you are a confident leader with outstanding conflict-resolution and negotiation skills, then we would be very interested in speaking with you. If you are looking for a highly rewarding career in the Solution Architecture group of RBC's Enterprise Architecture Services, please go directly to our careers site to apply.


Key Accountabilities:

Reporting to the Director of Solution Architecture, the Senior Solution Architect will be primarily responsible to:
- Define end to end technical solutions that take into account the enterprise architecture strategies, current state environment and constraints; analyze the viability of the solution to meet program timeline, budget and quality
- Develop and present solution alternatives along with recommendations to executives and senior management; present the program solution architecture at the Architecture Review Board
- Work with Data Architecture, Security Architecture and Infrastructure SME's to ensure that all aspects of the solution architecture are defined and elaborated
- Develop the architecture artifacts (Solution Architecture Document, Architecture Decisions) as defined by the PMF/SDLC and review with Enterprise Architecture, Lead Architects, Business Architects and the key program stakeholders

This unique career opportunity further allows you to utilize your full complement of skills and experience by playing the role of the lead technical authority on the program and being responsible for planning all activities leading to the development of the overall solution architecture. You will provide critical thinking and technical leadership to the program right from the idea creation stage.

Overall, your main accountabilities will be broken down as follows:
- End to end solutions architecture (60%)
- Technical leadership and mentoring (20%)
- Training and professional development (10%)

Friday, July 18, 2008

Cannot open the web.xml in Deployment Descriptor editor

Problem(Abstract)
When using IBM® Rational® Application Developer v7 to develop web projects, sometimes the Web Deployment Descriptor file (that is, the web.xml) cannot be opened in the Deployment Descriptor editor.

Symptom
The following Redirecting Editor error occurs when you try to open the web.xml file by double-clicking on it:
IWAE0028E The selected input is not valid for this type of editor. Redirecting to the XML editor.


This issue will also cause your web application to be unable to deploy onto the integrated WebSphere® Application Server.

Cause
The deploy path for the project in question is not set up to point to the default WebContent folder.

Resolving the problem
To fix this, switch to the Resource perspective, open the web project's .settings/org.eclipse.wst.common.component file, and change the wb-resource deploy path so that it points back to the default WebContent folder. The file should then match what the project settings file looks like by default (that is, when creating a new Dynamic Web Project).
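For reference, a freshly generated .settings/org.eclipse.wst.common.component file looks roughly like the following (the project name is a placeholder and minor attribute details may vary between WTP versions); the important part is the wb-resource line mapping deploy-path="/" to source-path="/WebContent":

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project-modules id="moduleCoreId" project-version="1.5.0">
    <wb-module deploy-name="MyWebProject">
        <!-- This mapping is what the Deployment Descriptor editor expects -->
        <wb-resource deploy-path="/" source-path="/WebContent"/>
        <property name="context-root" value="MyWebProject"/>
        <property name="java-output-path"/>
    </wb-module>
</project-modules>
```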

http://www-1.ibm.com/support/docview.wss?rs=2042&context=SSRTLW&context=SSJM4G&context=SSSTY3&context=SSCGQ7C&q1=IWAE0028E&uid=swg21272575&loc=en_US&cs=utf-8&lang=en

Tuesday, June 24, 2008

Understanding Strong & Weak References and Caches

http://weblogs.java.net/blog/enicholas/archive/2006/05/understanding_w.html

Some time ago I was interviewing candidates for a Senior Java Engineer position. Among the many questions I asked was "What can you tell me about weak references?" I wasn't expecting a detailed technical treatise on the subject. I would probably have been satisfied with "Umm... don't they have something to do with garbage collection?" I was instead surprised to find that out of twenty-odd engineers, all of whom had at least five years of Java experience and good qualifications, only two of them even knew that weak references existed, and only one of those two had actual useful knowledge about them. I even explained a bit about them, to see if I got an "Oh yeah" from anybody -- nope. I'm not sure why this knowledge is (evidently) uncommon, as weak references are a massively useful feature which has been around since Java 1.2 was released, over seven years ago.

Now, I'm not suggesting you need to be a weak reference expert to qualify as a decent Java engineer. But I humbly submit that you should at least know what they are -- otherwise how will you know when you should be using them? Since they seem to be a little-known feature, here is a brief overview of what weak references are, how to use them, and when to use them.

Strong references

First I need to start with a refresher on strong references. A strong reference is an ordinary Java reference, the kind you use every day. For example, the code:

StringBuffer buffer = new StringBuffer();

creates a new StringBuffer() and stores a strong reference to it in the variable buffer. Yes, yes, this is kiddie stuff, but bear with me. The important part about strong references -- the part that makes them "strong" -- is how they interact with the garbage collector. Specifically, if an object is reachable via a chain of strong references (strongly reachable), it is not eligible for garbage collection. As you don't want the garbage collector destroying objects you're working on, this is normally exactly what you want.

When strong references are too strong

It's not uncommon for an application to use classes that it can't reasonably extend. The class might simply be marked final, or it could be something more complicated, such as an interface returned by a factory method backed by an unknown (and possibly even unknowable) number of concrete implementations. Suppose you have to use a class Widget and, for whatever reason, it isn't possible or practical to extend Widget to add new functionality.

What happens when you need to keep track of extra information about the object? In this case, suppose we find ourselves needing to keep track of each Widget's serial number, but the Widget class doesn't actually have a serial number property -- and because Widget isn't extensible, we can't add one. No problem at all, that's what HashMaps are for:

serialNumberMap.put(widget, widgetSerialNumber);

This might look okay on the surface, but the strong reference to widget will almost certainly cause problems. We have to know (with 100% certainty) when a particular Widget's serial number is no longer needed, so we can remove its entry from the map. Otherwise we're going to have a memory leak (if we don't remove Widgets when we should) or we're going to inexplicably find ourselves missing serial numbers (if we remove Widgets that we're still using). If these problems sound familiar, they should: they are exactly the problems that users of non-garbage-collected languages face when trying to manage memory, and we're not supposed to have to worry about this in a more civilized language like Java.

Another common problem with strong references is caching, particularly with very large structures like images. Suppose you have an application which has to work with user-supplied images, like the web site design tool I work on. Naturally you want to cache these images, because loading them from disk is very expensive and you want to avoid the possibility of having two copies of the (potentially gigantic) image in memory at once.

Because an image cache is supposed to prevent us from reloading images when we don't absolutely need to, you will quickly realize that the cache should always contain a reference to any image which is already in memory. With ordinary strong references, though, that reference itself will force the image to remain in memory, which requires you (just as above) to somehow determine when the image is no longer needed in memory and remove it from the cache, so that it becomes eligible for garbage collection. Once again you are forced to duplicate the behavior of the garbage collector and manually determine whether or not an object should be in memory.

Weak references

A weak reference, simply put, is a reference that isn't strong enough to force an object to remain in memory. Weak references allow you to leverage the garbage collector's ability to determine reachability for you, so you don't have to do it yourself. You create a weak reference like this:

WeakReference weakWidget = new WeakReference(widget);

and then elsewhere in the code you can use weakWidget.get() to get the actual Widget object. Of course the weak reference isn't strong enough to prevent garbage collection, so you may find (if there are no strong references to the widget) that weakWidget.get() suddenly starts returning null.

To solve the "widget serial number" problem above, the easiest thing to do is use the built-in WeakHashMap class. WeakHashMap works exactly like HashMap, except that the keys (not the values!) are referred to using weak references. If a WeakHashMap key becomes garbage, its entry is removed automatically. This avoids the pitfalls I described and requires no changes other than the switch from HashMap to a WeakHashMap. If you're following the standard convention of referring to your maps via the Map interface, no other code needs to even be aware of the change.
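As a minimal sketch of the serial-number scenario above (Widget here is a stand-in class, since the article's original isn't shown):

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakHashMapDemo {
    // Stand-in for the non-extensible Widget class from the text.
    static class Widget {}

    public static void main(String[] args) {
        Map<Widget, Integer> serialNumberMap = new WeakHashMap<Widget, Integer>();

        Widget widget = new Widget();
        serialNumberMap.put(widget, 42);

        // While 'widget' is strongly reachable, the entry stays in the map.
        System.out.println(serialNumberMap.get(widget)); // prints 42

        widget = null; // drop the only strong reference to the key
        System.gc();   // a hint only; collection timing is not guaranteed

        // Once the key has been collected, its entry silently disappears,
        // so the map will eventually report size 0 without any manual cleanup.
    }
}
```

The only change relative to the leaky version is the constructor call; callers who see the `Map` interface are unaffected.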

Reference queues

Once a WeakReference starts returning null, the object it pointed to has become garbage and the WeakReference object is pretty much useless. This generally means that some sort of cleanup is required; WeakHashMap, for example, has to remove such defunct entries to avoid holding onto an ever-increasing number of dead WeakReferences.

The ReferenceQueue class makes it easy to keep track of dead references. If you pass a ReferenceQueue into a weak reference's constructor, the reference object will be automatically inserted into the reference queue when the object to which it pointed becomes garbage. You can then, at some regular interval, process the ReferenceQueue and perform whatever cleanup is needed for dead references.
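A sketch of that cleanup pattern (the widget object is again just a placeholder):

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class ReferenceQueueDemo {
    public static void main(String[] args) {
        ReferenceQueue<Object> queue = new ReferenceQueue<Object>();

        Object widget = new Object();
        // Pass the queue to the constructor: when 'widget' becomes garbage,
        // this WeakReference will be enqueued automatically.
        WeakReference<Object> ref = new WeakReference<Object>(widget, queue);

        widget = null; // the object is now only weakly reachable
        System.gc();   // a hint; enqueueing may not happen immediately

        // At some regular interval, drain the queue and clean up.
        Reference<?> dead;
        while ((dead = queue.poll()) != null) {
            System.out.println("cleaning up after a dead reference: " + dead);
        }
    }
}
```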

Different degrees of weakness

Up to this point I've just been referring to "weak references", but there are actually four different degrees of reference strength: strong, soft, weak, and phantom, in order from strongest to weakest. We've already discussed strong and weak references, so let's take a look at the other two.

Soft references

A soft reference is exactly like a weak reference, except that it is less eager to throw away the object to which it refers. An object which is only weakly reachable (the strongest references to it are WeakReferences) will be discarded at the next garbage collection cycle, but an object which is softly reachable will generally stick around for a while.

SoftReferences aren't required to behave any differently than WeakReferences, but in practice softly reachable objects are generally retained as long as memory is in plentiful supply. This makes them an excellent foundation for a cache, such as the image cache described above, since you can let the garbage collector worry about both how reachable the objects are (a strongly reachable object will never be removed from the cache) and how badly it needs the memory they are consuming.
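A bare-bones version of such a cache might look like this (a real implementation would also purge dead entries, e.g. via a ReferenceQueue; this sketch only shows the SoftReference indirection):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Memory-sensitive cache: values are held through SoftReferences, so the
// garbage collector may reclaim them when memory runs low.
public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<K, SoftReference<V>>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<V>(value));
    }

    // Returns the cached value, or null if it was never cached
    // or has since been reclaimed by the garbage collector.
    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        return (ref == null) ? null : ref.get();
    }
}
```

A caller simply treats a null return as a cache miss and reloads the image from disk.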

Phantom references

A phantom reference is quite different than either SoftReference or WeakReference. Its grip on its object is so tenuous that you can't even retrieve the object -- its get() method always returns null. The only use for such a reference is keeping track of when it gets enqueued into a ReferenceQueue, as at that point you know the object to which it pointed is dead. How is that different from WeakReference, though?

The difference is in exactly when the enqueuing happens. WeakReferences are enqueued as soon as the object to which they point becomes weakly reachable. This is before finalization or garbage collection has actually happened; in theory the object could even be "resurrected" by an unorthodox finalize() method, but the WeakReference would remain dead. PhantomReferences are enqueued only when the object is physically removed from memory, and the get() method always returns null specifically to prevent you from being able to "resurrect" an almost-dead object.

What good are PhantomReferences? I'm only aware of two serious cases for them: first, they allow you to determine exactly when an object was removed from memory. They are in fact the only way to determine that. This isn't generally that useful, but might come in handy in certain very specific circumstances like manipulating large images: if you know for sure that an image should be garbage collected, you can wait until it actually is before attempting to load the next image, and therefore make the dreaded OutOfMemoryError less likely.

Second, PhantomReferences avoid a fundamental problem with finalization: finalize() methods can "resurrect" objects by creating new strong references to them. So what, you say? Well, the problem is that an object which overrides finalize() must now be determined to be garbage in at least two separate garbage collection cycles in order to be collected. When the first cycle determines that it is garbage, it becomes eligible for finalization. Because of the (slim, but unfortunately real) possibility that the object was "resurrected" during finalization, the garbage collector has to run again before the object can actually be removed. And because finalization might not have happened in a timely fashion, an arbitrary number of garbage collection cycles might have happened while the object was waiting for finalization. This can mean serious delays in actually cleaning up garbage objects, and is why you can get OutOfMemoryErrors even when most of the heap is garbage.

With PhantomReference, this situation is impossible -- when a PhantomReference is enqueued, there is absolutely no way to get a pointer to the now-dead object (which is good, because it isn't in memory any longer). Because PhantomReference cannot be used to resurrect an object, the object can be instantly cleaned up during the first garbage collection cycle in which it is found to be phantomly reachable. You can then dispose whatever resources you need to at your convenience.
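A small sketch of that enqueue-only behavior (the "image" here is just a placeholder object):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

public class PhantomDemo {
    public static void main(String[] args) {
        ReferenceQueue<Object> queue = new ReferenceQueue<Object>();
        Object bigImage = new Object(); // stands in for a huge image buffer
        PhantomReference<Object> phantom =
                new PhantomReference<Object>(bigImage, queue);

        // get() on a phantom reference always returns null, by design.
        System.out.println(phantom.get()); // prints null

        bigImage = null; // eligible for collection
        System.gc();

        // Only once the object has actually been reclaimed does 'phantom'
        // show up on the queue -- the signal that it is safe to proceed.
        if (queue.poll() == phantom) {
            System.out.println("image fully reclaimed; safe to load the next one");
        }
    }
}
```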

Arguably, the finalize() method should never have been provided in the first place. PhantomReferences are definitely safer and more efficient to use, and eliminating finalize() would have made parts of the VM considerably simpler. But, they're also more work to implement, so I confess to still using finalize() most of the time. The good news is that at least you have a choice.

Conclusion

I'm sure some of you are grumbling by now, as I'm talking about an API which is nearly a decade old and haven't said anything which hasn't been said before. While that's certainly true, in my experience many Java programmers really don't know very much (if anything) about weak references, and I felt that a refresher course was needed. Hopefully you at least learned a little something from this review.


Comments
Comments are listed in date descending order (newest first)

*

In reply to the implicit question from richunger: The Sun JRE does treat SoftReferences differently from WeakReferences. We attempt to hold on to objects referenced by a SoftReference if there isn't pressure on the available memory. One detail: the policies for the "-client" and "-server" JREs are different: the -client JRE tries to keep your footprint small by preferring to clear SoftReferences rather than expand the heap, whereas the -server JRE tries to keep your performance high by preferring to expand the heap (if possible) rather than clear SoftReferences. One size does not fit all.

A nit: since JDK-1.5.0, the java.lang.ref.Reference class has been generified. So, to create one you use

WeakReference<Widget> weakWidget = new WeakReference<Widget>(widget);

and weakWidget.get() returns a Widget, just as you'd expect.

There are some details about what the remove() method on a ReferenceQueue returns, but I'll save that for my next job interview.

Posted by: peterkessler on May 06, 2006 at 03:04 PM

*
I never really learned the difference between weak & soft until this guy ran into it: http://weblog.ikvm.net/PermaLink.aspx?guid=ec45dec2-ec22-4079-9b78-d06e15ddabe7 Thanks for bringing up Phantom References, I don't think I'd ever heard of them.

Posted by: ronaldyang on May 05, 2006 at 01:56 PM

*
Very nice writeup indeed. I must admit I was unaware of phantom references. I heard somewhere that sun's jre does indeed treat soft references as weak references. Don't know if that's still (or ever really was) the case, but I remember reading it somewhere.

One other common use of weak references is the WeakListener, which prevents an object from hanging around simply because another object is listening on it.

Posted by: richunger on May 05, 2006 at 01:03 PM

*
A very important use of PhantomReferences is in DGC (Distributed GC like in RMI). You most certainly do not want to perform remote notification in the GC thread.

Posted by: ianschneider on May 05, 2006 at 08:56 AM

*
Good blog. I have also found LinkedHashMap very useful for caching.

Posted by: abhijit_jadeja on May 05, 2006 at 06:46 AM

*
Nice entry about a not too well known feature of Java that comes in quite handy.

I fortunately discovered them long ago thanks to an article at java.com (when it was still called like that) and I've since used Soft References in a few occasions, like creating smart caches of pre-compiled objects (XSLT sheets in my case) that are able to be garbage collected if a sudden peak in memory usage occurs.

You simply re-create them the next time you need them and if the memory usage has gone down, you go back to normal.

I actually wrote an article about using Soft References for such purpose, but it's in Spanish ;).

Implementación de Caches Inteligentes Mediante "Soft References"

Thanks again for spreading the knowledge!
D.

Posted by: greeneyed on May 05, 2006 at 04:57 AM

*
Good article. I must confess to being hazy as to the difference between Soft, Weak and Phantom - the API docs aren't terribly descriptive. It's also good to finally get some justification for why on earth anyone would want to use a PhantomReference.

Posted by: skaffman on May 05, 2006 at 12:04 AM

*
Very nice blog entry. I was talking with the development manager of a large all-Java shop one time and I asked him about his need for profiling tools to track down memory leaks. He replied, "We seem to not have a problem with memory leaks since we started using weak references."

Posted by: gsporar on May 04, 2006 at 06:57 PM

*
weak references are essential; a lot of associative memory leaks can be removed just by using those babies. I also use them in simple test cases to show that leaks have been removed.

leouser

Posted by: leouser on May 04, 2006 at 05:24 PM

OpenSource Website Composing Tools

Drupal
http://drupal.org

WordPress
http://wordpress.org/

Monday, June 23, 2008

Visitor Pattern Vs. Double Dispatch

http://java.sys-con.com/read/140105.htm

Deriving the Visitor Pattern: A Review and Discussion

Like most other self-respecting developers I had also read the GoF book, including the section on the visitor pattern. However, when a colleague came over to me with a question, I could not initially justify the complexity of the example code I saw in the book. What follows is a discussion of why the visitor pattern is the way it is.

Brief Review of the Pattern
The definitive description of the pattern is in the GoF book Design Patterns, Chapter 5 (pp 331-344)(see References section). Wikipedia has a good, concise description, which formed the basis for my brief review here. The visitor pattern is classified as a Behavioral pattern, so the thing to notice is the way in which the classes and objects interact and distribute responsibility. A typical application of this pattern occurs in the following scenario: we have a number of elements in an object structure (common structures include trees & lists) and we want to perform a bunch of disparate operations (e.g. printing or cloning each element) on the elements of the structure.

The visitor pattern is a way of separating the operation from the object structure and a way of collecting together the different implementations of an operation for different kinds of elements in the object structure. A Visitor class is created which knows how to perform a particular operation on the different kinds of elements in the object structure. Each type of element in the structure defines an accept() method that can accept any kind of Visitor. The visitor is passed to each element in the structure in turn, by calling its accept() method and the Visitor then performs the operation on the visited element. One important consequence of this separation of object structure and operation is that we can later add a new operation (a new kind of Visitor) without having to modify the element classes of the object structure.

Each type of Visitor defines several visit() methods, one for each kind of element. The basic insight is that the precise set of instructions to execute (i.e. the method or function to call) depends on the run-time types of both the Visitor & the visited element. Java only lets us call different methods based on the run-time type of one object (via virtual functions), so the pattern advocates a clever solution: The second dependency on the type of element visited is first resolved by polymorphically calling the accept() method of the visited element. accept() then resolves the first dependency by turning around and polymorphically calling the visit() method for its class.

An Example
Before this description gets too confusing, let us study the pattern in the context of a concrete problem: Let us say we need to traverse a list collecting node-specific information. The list has two kinds of nodes, say, Red and Black, which need to be processed differently. It seems like an ideal application for the visitor pattern. Listing 1 shows the code. (All code samples in this article use a J2SE 5.0 compatible compiler.)
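Listing 1 is not reproduced here, but a minimal sketch consistent with the article's description (the names NodeVisitor, RedNode, BlackNode, accept(), visitRed() and visitBlack() come from the text; everything else is assumed) might look like:

```java
interface Node {
    void accept(NodeVisitor visitor);
}

class RedNode implements Node {
    public void accept(NodeVisitor visitor) { visitor.visitRed(this); }
}

class BlackNode implements Node {
    public void accept(NodeVisitor visitor) { visitor.visitBlack(this); }
}

class NodeVisitor {
    private int reds, blacks; // node-specific information being accumulated

    void visitRed(RedNode node)     { reds++; }
    void visitBlack(BlackNode node) { blacks++; }

    // doVisit drives the traversal: each node's accept() turns around and
    // calls back into the visit method for its own concrete type.
    void doVisit(java.util.List<Node> nodes) {
        for (Node n : nodes) {
            n.accept(this);
        }
    }

    String summary() { return reds + " red, " + blacks + " black"; }
}
```

Running one NodeVisitor over a list of two red nodes and one black node via doVisit() yields the summary "2 red, 1 black" -- the accumulated node-specific information the problem statement asks for.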

To me and my colleague, this initially seemed like an overly complex solution for a simple problem. NodeVisitor.doVisit() calls into each Node's accept() method, which simply delegates back into NodeVisitor. Furthermore, the accept() methods of RedNode and BlackNode are almost identical. Finally, notice that if we now add a GreenNode class, we need to add a new visitGreen() method to the NodeVisitor class and re-compile it (not to speak of the almost redundant implementation of accept() in the GreenNode class). Ugh! This does not seem kosher by any OO standard.

The Need for the accept() Methods
Novice armchair Java developers might ask why we can't do something simpler, like Listing 2, without touching the Node interface or the classes RedNode and BlackNode that implement it.

Listing 2 has two significant differences from the previous. First, there is no redundant method (namely accept()) for each node type to implement. Second, we use function name overloading for the visit() implementations, thus enabling the "clever" foreach loop, which iterates over each node and calls the appropriate overloaded version of visit() depending on the type of the current element. With this, we hope to contain all the visiting logic within NodeVisitor.

Alas, real developers have a more difficult job than arm-chair developers! If you are using a language like Java or C++, an overloaded function name like visit() has to get resolved at compile time. Thus line 6.iii will not compile because none of the visit() methods provided in NodeVisitor know how to accept a generic "Node" as argument.

For line 6.iii to work the way we want it to, the decision on what operation needs to be performed has to be delayed until we can determine at runtime the type of the node n being examined in the current iteration of the for-each loop.

Traditional OO languages (Java, C++ etc) provide us with one standard tool for delaying function resolution until run-time: virtual functions. Thus, in Listing 1, 6.iii is modified to a virtual function call n.accept(nv). So the actual function that gets called is decided at run-time. The version called then delegates work by invoking the right version of NodeVisitor.visit().

So Why Not Just Use Plain Vanilla Inheritance?
The explanation I just gave is good, but not good enough. I can almost hear you ask: why doesn't accept() do the work itself? Why does it have to delegate back to NodeVisitor? There are three reasons:

1. Accumulating state: If you read the problem I presented closely, you will notice that I specified a need to collect node-specific information. Since the doVisit passes the same NodeVisitor instance to each accept(), the visitor can be used to accumulate state across the different Node objects. For example, say you have an Employee HR application where the Red nodes represent employees, the Black nodes represent managers, visitRed() calculates the pay raises for programmers, and visitBlack the pay raises for managers. The NodeVisitor nv could print a report of the total increase in salary expense at the end of the for loop.

2. Supporting more than one visitor (the need for double dispatch): Say the next version of your Employee HR application needs to add a new HRPolicyVisitor that checks for compliance with some HR policy and the implementation is different for managers and programmers.

To accommodate both the types of Visitors, we introduce an additional layer of indirection - an abstract EmployeeNodeVisitor interface with virtual visitXXX() functions for each type of element to visit, namely visitProgrammer() & visitManager(). The old PayRaiseVisitor and the new HRPolicyVisitor both implement EmployeeNodeVisitor. The decision on which version of visit() gets called now gets determined by a two-step process. The first step is as before. The node type of the visited element n in the foreach loop determines which version of the virtual function accept() gets called. In the second step, the type of the EmployeeNodeVisitor passed in to accept() determines the (virtual function) version of visitXXX() called. The source files that come with this article show the skeleton of this implementation. Figure 1 illustrates the sequence of calls from both doPayHike(), which uses a PayRaiseVisitor to raise the pay of each employee, and doEnforcePolicy(), which uses an HRPolicyVisitor to check HR policy compliance.

This technique, where the types of two objects are used to select the operation invoked, is known as double dispatch. By contrast, single dispatch uses the type of one object to select the operation invoked; virtual functions are the best-known implementation of single dispatch. Since Java and C++ support only this form of single dispatch, the pattern simulates double dispatch by using single dispatch twice!

3. Separation of concerns: A concern is any focus of interest in a program. A classic tenet of good software design is that the different concerns of a program must be broken down into separate modules that have little or no overlap. In the Employee HR program, visitProgrammer and visitManager of a particular visitor have more commonality than the two visitProgrammers of the different visitors or the two visitManagers of the different visitors. In fact, the methods in a given visitor may even share state information as described in 1 above. This makes the Visitor pattern a good way to organize code by separation of concerns.

Notice also that as a consequence of this way of organizing code, it is extremely easy to add a new visitor operation, but adding a new kind of node requires adding a new visitXXX method to all the Visitors.

If none of the above three reasons apply, you would be better off not delegating the work of accept() back to a separate visitXXX() method; that is, plain vanilla inheritance would be more appropriate than an application of the Visitor pattern. On the other hand, if any of the above reasons apply, the Visitor pattern would be a good solution for you.

But This Still Does Not Preclude Overloading the visit() Methods...
You might still have one lingering question about Listing 1: Why can't we use function name overloading instead of the different visitXXX() methods (as in Listing 3)?

The short answer is that nothing prevents you from doing this; Listing 3 is just as correct as Listing 1. For the last word, however, I will have to defer to the GoF, who write the following in a footnote:

We could use function overloading to give these operations the same simple name, like Visit, since the operations are already differentiated by the parameter they're passed. There are pros and cons to such overloading. On the one hand, it reinforces the fact that each operation involves the same analysis, albeit on a different argument. On the other hand, that might make what's going on at the call site less obvious to someone reading the code. It really boils down to whether you believe function overloading is good or not [in this situation].

Conclusion
In this article we reviewed the Visitor pattern and "derived" it from an armchair sketch of the functionality we wanted: the ability to accumulate state over elements of an object structure, the separation of the operations from the object structure, and the ability to add new operations without recompiling the element types. These requirements called for a "double dispatch"; i.e. the precise method to call for "visiting" each element in the structure depended on two runtime types: the type of Visitor and the type of the visited element. The Visitor pattern was shown to be a way to simulate double dispatch using virtual functions, a form of single dispatch.

References

* Gamma, et al. Design Patterns: Elements of Reusable Object-Oriented Software, 1995, Addison-Wesley, Reading, MA.
* Wikipedia contributors, "Visitor pattern," Wikipedia: The Free Encyclopedia, http://en.wikipedia.org/wiki/Visitor_pattern (accessed Aug 19, 2005)

© 2008 SYS-CON Media Inc.

Sunday, May 11, 2008

Spring: OpenSessionInViewInterceptor vs. OpenSessionInViewFilter

(url: http://mikenereson.blogspot.com/2007/02/spring-opensessioninviewinterceptor-vs.html)

Problem: Assuming you're using an ORM such as Hibernate, rendering the view with business objects that have many-to-one or one-to-one relationships may touch a detached object with a lazy property and throw a LazyInitializationException. For this reason, we're given the Open Session In View pattern.

Solution: The Open Session In View pattern binds a Hibernate session to the thread for the life of the request and closes the session when the response is sent. This is what allows your view to access model objects with lazily loaded properties after a transaction has been closed. This is nothing new, and it's not what this post is about.

To set up Open Session In View in your Spring application, you have two options: the OpenSessionInViewInterceptor and the OpenSessionInViewFilter. Either works, because the two classes serve the very same function. Well, if that's the case, why do they both exist? Why is there a filter option and an interceptor option? That's what I set out to find.

I've been searching Google all night. I've read through forums, I've read docs, and I've read blogs. Why? Because I want to use the better of the two options in my application. I want to ensure I'm getting the best performance and the most reliability for my clients that I can. Anyway, the only thing I could dig up was that they are equal in almost every way. The only consideration you really need to make is which servlet spec you are using. If your servlet container supports version 2.3 or later, you can use either one. If it's 2.2 or earlier, you'll need to go with the OpenSessionInViewInterceptor.

Surely there are other considerations. Maybe you want to keep your filters down to a bare minimum, or you think interceptors are confusing or messy. Or perhaps you like to keep your configuration files as small as possible. If you use annotations, you might want to go with the OpenSessionInViewInterceptor.

I hope I answered any questions you had about why there were two classes for implementing this pattern.


My tiny little blog here has been getting tons of hits because of my post titled OpenSessionInViewInterceptor vs. OpenSessionInViewFilter, so I guess people are interested in these classes. I don't know why you all are coming here, but I'd hate for you to come looking for an example and then have to go elsewhere for it, so here is an example of each.

Interceptor Configuration (action-servlet.xml)

<bean id="urlMapping"
      class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
    <property name="interceptors">
        <list>
            <ref bean="openSessionInViewInterceptor"/>
        </list>
    </property>
    <property name="mappings">
        ...
    </property>
</bean>

...

<bean id="openSessionInViewInterceptor"
      class="org.springframework.orm.hibernate3.support.OpenSessionInViewInterceptor">
    <property name="sessionFactory">
        <ref bean="sessionFactory"/>
    </property>
</bean>

Filter Configuration (web.xml)

...

<filter>
    <filter-name>openSessionInViewFilter</filter-name>
    <filter-class>
        org.springframework.orm.hibernate3.support.OpenSessionInViewFilter
    </filter-class>
</filter>

...

<filter-mapping>
    <filter-name>openSessionInViewFilter</filter-name>
    <url-pattern>*.do</url-pattern>
</filter-mapping>

...

I hope this is what you were looking for.

Lazy Initialization and the DAO pattern with Hibernate and Spring
(from: http://www.blogjava.net/vince/archive/2006/11/27/83850.html)
Hibernate and Lazy Initialization

Hibernate object-relational mapping offers both lazy and non-lazy modes of object initialization. Non-lazy initialization retrieves an object and all of its related objects at load time. This can result in hundreds if not thousands of select statements when retrieving one entity. The problem is compounded when bi-directional relationships are used, often causing entire databases to be loaded during the initial request. Of course, one could tediously examine each object relationship and manually remove the most costly ones, but in the end we may lose the ease-of-use benefit we sought in the ORM tool.

The obvious solution is to employ the lazy loading mechanism provided by Hibernate. This initialization strategy only loads an object's one-to-many and many-to-many relationships when those fields are accessed. The scenario is practically transparent to the developer, and a minimal number of database requests are made, resulting in major performance gains. One drawback to this technique is that lazy loading requires the Hibernate session to remain open while the data object is in use. This causes a major problem when trying to abstract the persistence layer via the Data Access Object pattern. To fully abstract the persistence mechanism, all database logic, including opening and closing sessions, must not be performed in the application layer. Most often, this logic is concealed behind the DAO implementation classes, which implement interface stubs. The quick and dirty solution is to forget the DAO pattern and include database connection logic in the application layer. This works for small applications, but in large systems it can prove to be a major design flaw, hindering application extensibility.

Being Lazy in the Web Layer

Fortunately for us, the Spring Framework provides an out-of-the-box web solution for using the DAO pattern in combination with Hibernate lazy loading. For anyone not familiar with using the Spring Framework together with Hibernate, I will not go into the details here, but I encourage you to read Hibernate Data Access with the Spring Framework. For a web application, Spring ships with both the OpenSessionInViewFilter and the OpenSessionInViewInterceptor. One can use either interchangeably, as both serve the same function. The only difference is that the interceptor runs within the Spring container and is configured in the web application context, while the filter runs in front of Spring and is configured in web.xml. Regardless of which one is used, both open a Hibernate session during the request and bind that session to the current thread. Once bound to the thread, the open Hibernate session can be used transparently within the DAO implementation classes. The session remains open for the view, allowing lazy access to the database value objects. Once the view logic is complete, the Hibernate session is closed, either in the filter's doFilter method or in the interceptor's afterCompletion method. Below is an example of the configuration of each component:

Interceptor Configuration

<bean id="urlMapping"
      class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
    <property name="interceptors">
        <list>
            <ref bean="openSessionInViewInterceptor"/>
        </list>
    </property>
    <property name="mappings">
        ...
    </property>
</bean>

...

<bean id="openSessionInViewInterceptor"
      class="org.springframework.orm.hibernate.support.OpenSessionInViewInterceptor">
    <property name="sessionFactory">
        <ref bean="sessionFactory"/>
    </property>
</bean>

Filter Configuration

...

<filter>
    <filter-name>hibernateFilter</filter-name>
    <filter-class>
        org.springframework.orm.hibernate.support.OpenSessionInViewFilter
    </filter-class>
</filter>

...

<filter-mapping>
    <filter-name>hibernateFilter</filter-name>
    <url-pattern>*.spring</url-pattern>
</filter-mapping>

...

Implementing the Hibernate DAOs to use the open session is simple. In fact, if you are already using the Spring Framework to implement your Hibernate DAOs, most likely you will not have to change a thing. The DAOs must access Hibernate through the convenient HibernateTemplate utility, which makes database access a piece of cake. Below is an example DAO.

Example DAO

public class HibernateProductDAO extends HibernateDaoSupport implements ProductDAO {

    public Product getProduct(Integer productId) {
        return (Product) getHibernateTemplate().load(Product.class, productId);
    }

    public Integer saveProduct(Product product) {
        return (Integer) getHibernateTemplate().save(product);
    }

    public void updateProduct(Product product) {
        getHibernateTemplate().update(product);
    }
}
Being Lazy in the Business Layer

Even outside the view, the Spring Framework makes it easy to use lazy initialization, through the AOP interceptor HibernateInterceptor. The HibernateInterceptor transparently intercepts calls to any business object configured in the Spring application context, opening a Hibernate session before the call and closing the session afterward. Let's run through a quick example. Suppose we have an interface BusinessObject:

public interface BusinessObject {
    public void doSomethingThatInvolvesDaos();
}
The class BusinessObjectImpl implements BusinessObject:


public class BusinessObjectImpl implements BusinessObject {
    public void doSomethingThatInvolvesDaos() {
        // lots of logic that calls
        // DAO classes which access
        // data objects lazily
    }
}
Through some configuration in the Spring application context, we can instruct the HibernateInterceptor to intercept calls to the BusinessObjectImpl, allowing its methods to lazily access data objects. Take a look at the fragment below:

<bean id="hibernateInterceptor"
      class="org.springframework.orm.hibernate.HibernateInterceptor">
    <property name="sessionFactory">
        <ref bean="sessionFactory"/>
    </property>
</bean>

<bean id="businessObjectTarget" class="com.acompany.BusinessObjectImpl"/>

<bean id="businessObject"
      class="org.springframework.aop.framework.ProxyFactoryBean">
    <property name="proxyInterfaces">
        <value>com.acompany.BusinessObject</value>
    </property>
    <property name="target">
        <ref bean="businessObjectTarget"/>
    </property>
    <property name="interceptorNames">
        <list>
            <value>hibernateInterceptor</value>
        </list>
    </property>
</bean>
When the businessObject bean is referenced, the HibernateInterceptor opens a Hibernate session and passes the call on to the BusinessObjectImpl. When the BusinessObjectImpl has finished executing, the HibernateInterceptor transparently closes the session. The application code has no knowledge of any persistence logic, yet it is still able to lazily access data objects.

Being Lazy in your Unit Tests

Last but not least, we'll need the ability to test our lazy application from JUnit. This is easily done by overriding the setUp and tearDown methods of the TestCase class. I prefer to keep this code in a convenient abstract TestCase subclass for all of my tests to extend.

public abstract class MyLazyTestCase extends TestCase {

    private SessionFactory sessionFactory;
    private Session session;

    public void setUp() throws Exception {
        super.setUp();
        // Assign to the field (not a shadowing local) so tearDown() can see it.
        sessionFactory = (SessionFactory) getBean("sessionFactory");
        session = SessionFactoryUtils.getSession(sessionFactory, true);
        // Bind the session to the current thread so the DAOs find it,
        // just as the Open Session In View components do for a web request.
        TransactionSynchronizationManager.bindResource(sessionFactory, new SessionHolder(session));
    }

    // Look up objects in the Spring application context
    protected abstract Object getBean(String beanName);

    public void tearDown() throws Exception {
        super.tearDown();
        SessionHolder holder = (SessionHolder) TransactionSynchronizationManager.getResource(sessionFactory);
        Session s = holder.getSession();
        s.flush();
        TransactionSynchronizationManager.unbindResource(sessionFactory);
        SessionFactoryUtils.closeSessionIfNecessary(s, sessionFactory);
    }
}

Saturday, March 01, 2008

A Casual Chat About the Stock Market

By Lonehand (龙汉)

Most of us who came to America to study, though we "come from all corners of the country," share "a common revolutionary goal" and walk essentially the same "revolutionary road": degree, job, green card, house, stocks... with a little romance (and the occasional scandal) mixed in along the way. The order may vary, but it's much the same story for everyone, nine times out of ten. A few years back, riding the east wind of the economic boom and the surging dot-com bubble, men and women, young and old, all charged into the market; those without money borrowed to get in. The whole scene was "the motherland's excellent situation is excellent indeed."

Then the stars shifted, the "stocks" followed the "market," the rainy season came no more, and yesterday's flower faded overnight. Investors big and small were knocked flat, losing anywhere from a few hundred dollars to hundreds of thousands. What losses! After the pain, one can't help asking oneself: what was I throwing my fortune away for? Does the market really hold some fatal beauty?

Dear readers, the analysis above is merely this humble author's narrow view, free to be panned or praised. Now, down to business.


First, the New York Stock Exchange (NYSE; hereafter "the NYSE"): when people say "Wall Street," this is usually the place they mean. It runs on an auction system: the floor is divided into many Posts, each staffed by a Specialist who deals exclusively in a handful of stocks. Everyone, whether retail investors like you and me or big banks, mutual funds, and hedge funds, must go through the specialist to trade NYSE-listed stocks, with no exceptions. (There is a newer system called NX that can fill orders of up to a thousand shares directly by computer; in practice it works rather differently, as specialists frequently switch NX off.) When an order reaches the specialist, he first calls it out: "three thousand AOL at 15.30"; once one of the floor traders nods, the specialist says "sold" and moves on to the next round. Hence the name "auction system." You may ask: why have specialists at all? Wouldn't it be better to run everything by computer, like NASDAQ (see below)? No. Companies listed on the NYSE are mostly large firms of decent reputation, and preserving the liquidity and stability (less volatility) of their stocks requires a specialist. An example: when bad news hits and a stock plunges with hardly a buyer in sight, NYSE rules oblige the specialist to buy, at a price of his own choosing, to steady the share price and let the revolutionary masses get out. Now you'll ask: doesn't the specialist lose his shirt? Not at all; let me explain. Suppose hedge-fund manager Zhang San wants to buy 100,000 shares of General Electric (GE). When his order lands at the specialist's post, NYSE rules give the specialist two minutes to decide. In that window he quickly buys GE himself, then turns around and sells it to Zhang San at a slightly higher price. One trip in, one trip out, pure silver, and risk-free. This is called front-running the customer, and it happens on the NYSE every day. You say: if the specialist's business is that good, can I have a go? No again: specialist seats are mostly hereditary family businesses, passed from father to son, and outsiders can't get a foot in. Based on my two hours' experience smoking and chatting with them outside the exchange, most specialists are of Italian or Jewish descent.

Next, NASDAQ. Ask a Chinese stock-trading friend what NASDAQ means and few can tell you; allow me to do a bit of popular science here and set the record straight. NASDAQ is short for National Association of Securities Dealers Automated Quotations. In plain language: the national securities dealers association's automated quote board. Unlike the NYSE, it runs on a dealer system: there is no trading floor, and all trades are executed by computer. The main servers sit in Connecticut, the backups in Maryland; the one in Times Square (the one always on TV) is just for show. NASDAQ is made up of many Market Makers and ECNs (Electronic Communication Networks). The market makers are mostly top traders at the big banks, e.g. Merrill Lynch, Goldman Sachs, Lehman Brothers, working a terminal called Level 3 (see below); they are ferocious, and will eat you bones and all. As for ECNs: a few years ago there were nine; after several mergers only four or five remain. The best known are Instinet, Island, and Archipelago (formed by the merger of Redi and Archipelago; note that Archipelago has since become an exchange). Others include Attain (ATTN, All-Tech Direct), Bloomberg Tradebook (BTRD), Brass Utility (BRUT), NexTrade (NTRD), and MarketXT (MKXT). In this writer's view the ECN is the retail trader's true friend, since there is no human intervention: everything runs automatically by computer. Ranked by execution speed, Island and Instinet are best, then Archipelago; and because Instinet serves mostly large players, it rarely gives partial fills. For example, if you buy a thousand shares, Island, fast as it is, may fill only 937; for the remaining 63 shares you must enter another order and pay another commission. Now you ask: how do I know which stocks trade on the NYSE and which on NASDAQ? Easy. NYSE stocks carry symbols of one, two, or three letters (A, GE, AOL, and so on), while NASDAQ stocks carry symbols of four or five letters (INTC, CMCSA, and so on; QQQ doesn't count).
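The symbol-length rule of thumb above is mechanical enough to code up. A toy sketch (the method name and cutoffs are mine, and, as the QQQ caveat shows, real symbologies have exceptions):

```java
public class SymbolVenue {
    // Rule of thumb from the text: 1-3 letter tickers are NYSE-style,
    // 4-5 letter tickers are NASDAQ-style. Real listings have exceptions.
    static String guessVenue(String symbol) {
        int n = symbol.length();
        if (n >= 1 && n <= 3) return "NYSE";
        if (n == 4 || n == 5) return "NASDAQ";
        return "UNKNOWN";
    }

    public static void main(String[] args) {
        System.out.println("GE   -> " + guessVenue("GE"));
        System.out.println("AOL  -> " + guessVenue("AOL"));
        System.out.println("INTC -> " + guessVenue("INTC"));
    }
}
```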

The two markets above are the most influential. Others include the American Stock Exchange, the Pacific Exchange, and a few take-it-or-leave-it venues in Philadelphia and Boston. One clarification: the Chicago Board of Trade (CBOT) and the Chicago Mercantile Exchange (CME) are heavyweight markets too, but since CBOT deals mainly in bonds and CME mainly in options and futures, I won't dwell on them here.

Having covered the markets, now the players. Besides the big names everyone knows, the banks, insurance companies, mutual funds, and hedge funds, and the "revolutionary masses" like you and me, let me dwell on the little-known "third echelon": the proprietary trading firm. Such firms are open to the working masses: as long as your IQ beats Forrest Gump's and you pass the Series 7 and Series 55 exams, you can put up a little money (or none at all), join the firm, and trade the firm's capital under its strict supervision. A trader who completes this apprenticeship ranks, after the big banks' top traders, among the true masters of the market's jianghu. (The success rate is only around ten percent.)

All of the above is just background, laying a foundation for what follows. If you've read this far without snoring, you're teachable. Now for the good part: how to actually trade.

Mention trading, and I'd wager nine out of ten comrades picture it like this: scrape together ten thousand or so, open an online account, watch some TV, flip through the papers, pick up a few tips, run a web search, buy three or five hundred shares, and then sit by the stump waiting for the rabbit. If your luck is good and the stock rises, you pocket a small profit and do it all again. If your luck is bad and it falls, you clench your teeth and hold, vowing to defend to the death: JDSU will surely climb back to 54! Ladies and gentlemen, young and old: trade like that and you might as well attack Iraq with a kitchen knife. A few years ago, when the Iraqi people were still brawling with screwdrivers, you'd have gotten rich for sure; in today's market you haven't a one-percent hope... Let's look at this in detail from several angles.


In terms of time horizon, trading divides into long term, medium term, short term, and day trading. Long term generally means several months to several years; medium term, several weeks to several months; short term, a few days to a few weeks; day trading, a few seconds to a few hours. Settling your horizon is vitally important: be clear about it before you enter, or else, like the suitor who courts a girl and wakes up a husband, you'll day-trade your way into becoming a long-term shareholder... with consequences too dreadful to contemplate.

In terms of strategy, there is going long, going short, and market neutral. Long, you understand: buy low, sell high. Short, you've heard of: borrow shares and sell them, then buy them back to cover after the price falls. But what on earth is this "market neutral"? Let me take my time explaining:

Market neutral, as a strategy, has many variations; here I'll illustrate with its most famous form, pair trading. Royal Dutch (RD) and Shell (SC) are nominally two companies but in fact spring from one source. In other words, when RD rises, SC rises; when RD falls, SC falls; the two move in step, so the spread between the two stocks tends toward a constant (RD - SC = N, where N is a long-run average). Bull market or bear, N should hold steady. But thanks to moment-to-moment fluctuations the two sometimes diverge, with RD - SC above N or below it. The play is this: if RD - SC is greater than N, simultaneously go long SC (the cheap leg) and short RD (the rich leg); conversely, if RD - SC is less than N, go long RD and short SC, because the trader knows that sooner or later RD - SC will come back to N... Pair trading applies beyond stocks to other markets: a few years ago Long Term Capital Management, led by John Meriwether, applied pair trading to bonds. Be warned that pair trading demands a great deal of capital and is not well suited to the revolutionary masses, least of all comrades who have only just joined the revolution.
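The RD/SC rule above reduces to a simple signal on the spread. A toy sketch (the value of N, the tolerance band, and all prices are made-up illustrations, not a trading system):

```java
public class PairTradeSignal {
    // Hypothetical long-run average spread N and a tolerance band
    // around it; both values are invented for illustration.
    static final double N = 2.0;
    static final double BAND = 0.25;

    // If the spread is rich, short the rich leg (RD) and buy the cheap one (SC);
    // if it is thin, do the opposite; otherwise stand aside and await reversion.
    static String signal(double rd, double sc) {
        double spread = rd - sc;
        if (spread > N + BAND) return "SHORT RD / LONG SC";
        if (spread < N - BAND) return "LONG RD / SHORT SC";
        return "NO TRADE";
    }

    public static void main(String[] args) {
        System.out.println(signal(60.00, 57.00)); // spread 3.00: rich
        System.out.println(signal(58.00, 57.00)); // spread 1.00: thin
        System.out.println(signal(59.10, 57.00)); // spread 2.10: inside the band
    }
}
```

The band keeps the rule from trading on every tiny wobble around N; a real implementation would also estimate N from data rather than fix it.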



Tactically, trading divides into fundamental research, technical analysis, and tape reading.

Fundamental research means deep, meticulous investigation of a company's operations, finances, personnel moves, product development, and other fundamentals. Its standard-bearers include the red-hot Buffett and the big mutual funds. This style suits long-term investing, and the great majority of investors play this way. A few words of digression: every working reader knows the 401(k), and most 401(k) money in fact sits in mutual funds. Mutual funds have these characteristics:

1. They can only go long, never short;

2. The fund manager draws a salary: profit or loss, the manager gets paid all the same;

3. They may not put more than 5% of assets into any single instrument (stock, bond, etc.), for the sake of diversification.

With the broad market sliding year after year, that is why some 90% of 401(k)s have lost money over the past two years. In my humble view the market will move up this year, so I urge everyone to stick with the 401(k) and not lose faith.

Technical analysis, a tactic utterly different from fundamental research, concentrates on a stock's past price action to predict its short-term future course. It suits medium- and short-term trading. Its best-known representatives are the Moving Average, Moving Average Convergence Divergence (MACD), the Relative Strength Indicator, Bollinger Bands, Elliott Wave, and many other theories I won't enumerate here; interested readers can pick up one of those all-in-one handbooks and study on their own.

Tape reading is a little-known, high-difficulty technique: by rapidly analyzing a stock's current volume, price, and participants (the specialist and the market makers), the trader predicts the price's instantaneous moves and times his entries and exits. It makes heavy demands on tooling: Level 2 software and a broadband connection are required equipment, and it suits day trading best. I once studied it closely and, with an American trading friend, ran real-time demonstrations in a Yahoo chat room for a good half year, until a conflict of interest forced us to stop. That American was just twenty at the time, from a wealthy family; word is he has since made several million of his own and opened a hedge fund. But that's another story.


Last, a few words on tools. In today's age of ubiquitous networking, most investors trade online. Online tools divide into three classes by execution speed, price, and the amount of information provided:

First, web based. These are mostly the websites of brokerage houses, with E*Trade and Datek/Ameritrade as representatives. Their strengths are a relatively simple, understandable interface and relatively low prices, and they offer Level 2 as well. Their weaknesses are slow execution, frequent breakdowns, and customer service that can't keep up: call them, and nobody answers for ages. Still, they suit medium- and long-term investing.

Second, so-called direct access, built on a server/client architecture: the trader must install dedicated software to operate. This class is fast and information-rich, Level 2 comes standard, and it is required equipment for the genuine trader; a bit more expensive, but absolutely worth the money. A few examples: RealTick III, CyberTrader, eSignal, DTN IQ, TradeStation... One deserving special mention is IB (Interactive Brokers): its prices are astonishingly low, its Java interface is quick too (use the hotkeys), and it can be paired with eSignal and the like. The fly in the ointment is dreadful service, which can be a real headache at times.

Third, the tools of the full-time professional, which add real-time news on top of everything in the second class. I have only seen these, never had the chance to use them, so I'm in no position to pass judgment. Representatives of this class: AT Financial, Watcher, and others.


And there it is: I've rambled and typed for days. I meant this as a bit of fun, but before I knew it I had boarded the pirate ship and couldn't get off, and ended up with "granny's foot-binding cloth: long and smelly." I don't even know whether it will lead the young astray... Next time I promise to write something lighter. If, before you step into the market, this piece can serve as a street lamp and light three to five meters of your road, then the ten-thousand-odd bytes I've typed were not in vain.


Postscript:

This is an old piece of mine, written purely from memory without consulting any references, so mistakes are unavoidable; still less is it a call for everyone to jump into the market and gamble. I had hoped to trade notes with fellow travelers, but judging from the reactions it has once again become Lonehand's "solo" opinion (are there really more people who like to brawl than people with real kung fu?). So let me state it plainly here: the great market holds a thousand risks. Win or lose, it all rests on your own cultivation, and Lonehand accepts no responsibility whatsoever.

Saturday, February 23, 2008

2008 Study Plan:

Business: CSC

Technical: Java
+ Multi-Thread
+ MQ & JMS
+ SOA & Web services & Modern Design Patterns
+ Sybase & other DB, data model, design, query optimization.
+ Trading platforms such as credit derivatives.
+ Real-time, high volume, distributed application.
+ TIBCO


Management: PMP



==========================================================
Java Developer - Fixed Income

Location: Downtown Toronto

Salary: Commensurate with experience

Requirements:

5+ years of experience developing and delivering high-performance server-side applications in Java. Experience with Servlet/JSP, Spring, AJAX, JavaScript, HTML/DHTML, CSS, XML, and Tomcat.

Excellent communication skills, verbal and written, and ability to work in a fast-paced environment as a constructive team member.

Experience with real time, high volume, distributed applications.

Demonstrated usage of threading and asynchronous processing.

Expertise with RDBMS a plus, including database schema design, writing stored procedures, performance optimization, preferably using Oracle.

A general understanding of trade settlement in a financial environment.

Knowledge of Foreign Exchange products a major plus.

Experience using a continuous integration environment with tools such as CruiseControl, Ant and JUnit a plus.
==============================================================
Senior Developer (J2EE) in the RBC Fixed Income Product Technology group to develop, implement and support a first-class risk monitoring application.

-10 years experience in system design and development
-Extensive understanding of latest J2EE, XML and Messaging technologies
-Extensive knowledge of Sybase (Sybase IQ preferred), Transact-SQL and stored procedures
-Ability to formulate and implement efficient algorithms for fast-time systems
-Solid understanding of multi-threaded application development
-Broad knowledge of Fixed Income Credit terminology and products, and Market Risk reporting
-Previous leadership or mentoring experience
-Excellent communication skills
==============================================================
A senior developer in Money Markets Sales and Trading Systems team, which supports the Fixed Income Business of RBC Capital Markets.

• 7+ years of experience in software development
• Extensive knowledge of core Java technologies (both server side and client side) and development tools
• Extensive knowledge of relational databases (that includes ability to write high-performing SQL queries, optimize performance of existing queries and ability to design database models)
• Extensive knowledge of messaging technologies (TibRV, MQ, JMS)

Business Knowledge
• Solid understanding of Money Markets business and general understanding of Fixed Income related businesses (bonds, repos, fixed income derivatives)
• Completion of Canadian Securities Course
==============================================================

Job Title Team Lead

Location Toronto, ON, CA

Organization Name Global Futures Technology

Department Description



The Futures Front Office technology group is part of Global Futures Technology within the Merrill Lynch Global Markets Technology organization. The group supports Futures front-to-middle office trading and post-trade allocation for Merrill Lynch globally, with users and technology support located in the US, Canada, Europe and Asia.

Brief Description

The successful candidate will have a proven record of enterprise application development and delivery, preferably working in the financial services industry. The candidate will need to have excellent team management skills.

Job Responsibilities

Development of components to integrate with key Global projects. Candidate will have a proven track record in management, architecture and delivery of large scale trading front/back end/OMS applications. The candidate will focus on a number of key Global Futures initiatives for external clients, FIX trading and middle office functionality.

Qualifications

+Excellent team management skills
+Strong Socket programming
+Strong Multi-threading
+Trading system experience
+Experience in OMS design and coding
+Experience in Financial trading system
+Experience with Java client-server programming
+Experience in designing message-hub type of application (i.e. OMS, Router)
+Experience with Messaging (i.e. JMS, TIBCO)
+Oracle (SQL/JDBC)
+FIX protocol
+Good design pattern knowledge
========================================================

Job Title Sr. Java Developer

Organization Name Credit Trading Risk Technology Group

Department Description

The Credit Trading Risk Group was created in 2007 to serve all risk application development and maintenance support across all Credit trading business activity in all regions (AMRS, EMEA and PAC RIM).

This team has a global presence in AMRS, EMEA and PACRIM and serves the Credit business (cash and derivatives, including flow and structured) by delivering risk metrics on overnight batch or intraday calculation platforms, all built in-house at ML (including some third-party vendors for data caching and grid computing).

Brief Description

The Credit Trading Risk Technology group is looking to hire a Senior Java Developer to take the lead for the development and maintenance support of some specific Credit Risk applications

Job Responsibilities

+ Strong ability to analyze functional needs that drive the analysis and technical design of quality technical solutions.

+ Strong ability to work collaboratively with IT staff on development, troubleshooting and other technical efforts.

+ Responsible for design and development phases (inc technical testing phase and documentation)

+ Strong ability to face off business users for requirements gathering

Qualifications

+ 5-6 years experience.
+ Bachelors Degree or equivalent work experience in Computer Science, Information Systems or other related field.
+ JDK 1.4.2 and 1.5.
+ Weblogic 8.X
+ Spring Framework
+ Web Services
+ Design Patterns
+ TDBMS SQL (Oracle/Sybase)
+ RDBMS (Oracle or Sybase)
+ Strong analytical ability and problem solving skills.
+ Strong communication skills

Preferred Skills :

+ Knowledge of Service Oriented Architecture
+ Knowledge of any Data cache system
+ Credit business knowledge (or any Fixed income knowledge)
=================================================================
Job Title Java Developer

Department Description

Liquidity & Risk Technology (LRT) provides programs, applications and platforms that help report, monitor, manage, and measure risk. LRT is comprised of five major technology groups: Credit Risk, Market Risk, Regulatory, Finance, and Treasury.

Brief Description

The successful candidate will have a proven record of enterprise application development and delivery, preferably working in the financial services industry.

Job Responsibilities

The Java Developer’s role is to design, execute, and deliver components and/or solutions while adhering to strict project deadlines. In addition to technical and analytical skills, the developer must demonstrate strong communication skills and a “team player” attitude.

Qualifications

Core Competencies

* 5+ years experience with Java
* Demonstrated experience using Web Services
* Experience applying modern design patterns in building SOA components
* Demonstrated experience using JDBC and Oracle required (9i and 10g)
* Experience using one or more app servers like Weblogic, Websphere
* Experience in designing and implementing robust, highly scalable, n-tier transaction processing applications using J2EE

Good To Have

* Experience using OR tools such as Hibernate
* Experience with trading platforms, especially Credit Derivatives
=====================================================================