Category Archives: Programming
All things related to programming. This includes Java and SQL programming for different vendors.
Java programming breaks down into server-side and client-side.
SQL programming covers the Oracle, SQL Server and Sybase engines.
RMDS RFA OMM Madness
So recently I was engaged by a client to lifecycle some complex event processor applications, migrating them from the old and creaky object-based TibMsg interface to the new and shiny OMM interface, in order to listen to Reuters prices over the Robust Foundation API (RFA) version 7.6.
So the implementation was done, and it required a complete change in thinking: a sharp departure from the warm and familiar world of over-the-wire object transmission to the far less reassuring domain of asynchronous messages.
The TibMsg interface requires actors (consumers and producers) to interact with RFA using preconstructed objects. Login and subscription/publication objects are constructed by the actor's application code, and listeners are then implemented to receive responses from RFA in the form of TibMsg objects. Listeners only need to be implemented for market price data and system events. Nice and easy.
The OMM interface uses the same login/subscription/publication pattern, but requires the caller to "message" RFA instead. That means that the login step is independent of the subscription/publication step and so on. Added to the mix is the fact that each message will be responded to asynchronously, using a listener running on a different thread which need only exist until the desired response is received, at which point the thread must exit – with the notable exception of the market price listeners. The upshot is that the caller is forced to write a bunch of Futures (i.e. threads that return a value later on) to monitor response messages. It got quite messy to orchestrate.
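To give a flavour of the orchestration, here is a minimal sketch of the pattern – hypothetical types, not the real RFA API: the listener completes a future when the awaited response arrives, and the calling thread blocks on it with a timeout.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for RFA's callback interface; the real OMM types differ.
interface ResponseListener { void onResponse(String msg); }

class LoginStep {
    private final CompletableFuture<String> loginResult = new CompletableFuture<>();

    // RFA would invoke this listener on its own dispatch thread.
    ResponseListener listener() {
        return msg -> {
            if (msg.contains("LOGIN_OK")) {
                loginResult.complete(msg); // one-shot: this listener's job is done
            } else {
                loginResult.completeExceptionally(new IllegalStateException(msg));
            }
        };
    }

    // The caller blocks (with a timeout) until the asynchronous response lands.
    String awaitLogin() throws Exception {
        return loginResult.get(30, TimeUnit.SECONDS);
    }
}

Multiply that by every login, subscription and publication step, and you can see how the thread bookkeeping piles up.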
Anyway, the OMM interface was successfully implemented and deployed to a development Linux server for two components. One component was (accidentally) configured to log each RFA call at the finest trace level, producing very large and useless log files for, get this, both components!
Yep, configure OMM for one component, and the other "automagically" takes on that unrelated component's OMM config too. Bah!
After much head scratching and quite a bit of blue air, it became apparent that this automagic was in fact due to a hidden configuration directory which had been snuck into the root of the app user's home directory, i.e. ~/.java/.userPrefs/com/reuters/rfa/RDMS. (That location is the default on-disk store for Java's java.util.prefs Preferences API on Linux, which is presumably what RFA uses under the hood.) Under it were two directories, Sessions/ and Connections/, and the directory names under those subdirectories were mad: "/::|&%%' and so on.
So whenever one of the components was (re)started, RFA would copy the config shipped with the application to this hidden location and then start. It was not clear to me whether the copying action was exclusively additive, but if a config property, such as logging, was removed from the component config, the unwanted property was not removed from the hidden location.
Anyhow, this explained why, when only one component was started, it took on the (bad) configuration of the other application!
The cure was simple: fix the component config by removing the unwanted logging properties, shut down all relevant components, delete the Sessions/ and Connections/ directories in the hidden preferences directory, set the hidden preferences directory to read-only AND then start a component. Piece of cake.
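If you would rather poke at the hidden store programmatically than on the filesystem, Java's standard Preferences API will do it. A small sketch – the node path is assumed from the directory layout above, so treat it as illustrative:

import java.util.prefs.BackingStoreException;
import java.util.prefs.Preferences;

public class RfaPrefsInspector {
    public static void main(String[] args) throws BackingStoreException {
        // Node path assumed from ~/.java/.userPrefs/com/reuters/rfa seen above.
        Preferences rfa = Preferences.userRoot().node("com/reuters/rfa");
        for (String child : rfa.childrenNames()) {
            System.out.println("RFA prefs node: " + child);
        }
        // rfa.removeNode(); // nukes the hidden config (stop the components first!)
    }
}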
Suffice it to say that this spooky action at a distance was not well documented … Thanks RFA!
Testing my patience or “Why do we make this so hard for ourselves?”
Anyone following my intermittent rants on rubbish code would surely have got the sense that I don’t tend to suffer fools too gladly. It is also no secret that I don’t believe in commoditising the act of writing code. The results are always poor and always expensive.
Here, I fear, is yet another example of why I feel this way.
Recently I was wondering aloud with an offshore colleague how it could be that a simple bug which we had budgeted half a day to fix was taking three days and almost as many attempts to rectify. Paraphrasing for you:
Me: Show me your unit tests
Them: Um.
Me: You DO have unit tests, right?
Them: Um. Well. Yes but they have been disabled.
Me: … Really? Please tell me why.
Them: Not sure.
Me: Let me see if I have got this right. You have attempted to fix a simple bug, it's taken you two failed attempts over three days, and you STILL don't have unit tests?
Them: … [silence] … [a sort of gargling sound, which I put down to a noise representing an uncomfortable mental state] … Um.
Me: [swearing internally] OK. Show me your code.
Therein ensued a 45-minute discussion, which I shall summarise for you, the dear and busy reader:
- Colleague did not know why the existing unit tests were disabled
- Colleague discovered why. They didn’t work. Better out of sight and out of mind, eh?
- I (strongly) suggested that the key methods be broken up into smaller, testable methods. This was done
- New unit tests were written
- Unit tests were then rewritten, as esteemed Colleague had not understood the concept of determinism, i.e. you put fixed values into the test with the expectation that they will produce a predictable result (see the sketch after this list)
- Unit tests now passed
- Third bugfix attempt was deployed to test
- Third bugfix attempt was passed by the tester
- For completeness, the colleague was then advised to write unit integration tests. This was initially deemed "impossible" due to a dependency on external services, to which I issued a sharp "rubbish!" in reply
- Mocking of the services and creation of the unit integration test then went ahead smoothly, once I pointed out that it is indeed possible to have more than one constructor per class (sic) – again, see the sketch below
- Unit integration tests well under way
- *sigh*
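For the curious, here is roughly where we ended up, sketched with hypothetical names rather than the actual code. A constructor that accepts the dependency lets the test hand in a stub, and the test itself is deterministic: fixed input, fixed stubbed rate, one predictable answer.

import java.math.BigDecimal;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical external dependency that made testing "impossible".
interface RateService {
    BigDecimal currentRate(String ccy);
}

class PriceConverter {
    private final RateService rates;

    // A constructor that accepts the dependency means a test can inject a stub.
    PriceConverter(RateService rates) { this.rates = rates; }

    BigDecimal convert(BigDecimal amount, String ccy) {
        return amount.multiply(rates.currentRate(ccy));
    }
}

public class PriceConverterTest {
    @Test
    public void convertsUsingStubbedRate() {
        // Deterministic: fixed input, fixed stubbed rate, one predictable answer.
        PriceConverter converter = new PriceConverter(ccy -> new BigDecimal("1.25"));
        BigDecimal result = converter.convert(new BigDecimal("100.00"), "EUR");
        // compareTo, not equals: BigDecimal.equals is scale-sensitive (see the BigDecimal post below).
        assertEquals(0, result.compareTo(new BigDecimal("125")));
    }
}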
I am going to take up drinking hard stuff on a regular basis to thin my blood if I see more of this sort of behaviour.
My heart can’t take it.
Sanity Maintenance
So often these days, inside large corporations, the IT landscape for a particular business domain is chock-full of home-grown legacy applications. In many cases the reason for the existence of these applications has been lost in the mists of time, and yet the organisations that spawned these beasties still waste good money and good people's time on keeping the old jalopies running. Teams that work in this world of pain no longer focus on the improvement of an application; instead, their blood, sweat and tears are spent on a product that should have died years ago.
What a waste.
It comes as no surprise that the good and great developer minds that invented these systems are also long gone, presumably following the quest for the Holy Grail of software development: the Green Field Project. Back at the ranch, however, the powerful people who commissioned these software gems in the first place are getting tetchy at the perceived stagnation in functional development coupled with an ever-increasing cost of ownership. So they start throwing their toys in exasperation at the IT department.
By this stage the major stakeholders in any given software system have had so many sparring matches that they are no longer able to communicate effectively.
Two distinct and diametrically opposed architectural approaches bring about this malaise. Some of the larger legacy applications that I have worked on appear to reach a "critical level of complication". This is the point where the code has become so complex that even the most seasoned senior developer blanches at the very thought of simple enhancements, as there may be many unwelcome side effects. Put simply, some systems are allowed to grow too monolithically complex to be usefully maintainable.
Other projects have gone the other way, embracing Service-Oriented Architecture (SOA) principles too tightly. The result is an application landscape that resembles a teenager's face: pimpled with lots of little applications that do things no one understands, all inextricably linked to one another through a spaghetti of messaging flows. This is what I call "Implicated Dependency".
To me these two cases seem like two ends of a spectrum. I have often wondered why projects tend toward one architectural extreme or another and not toward a moderate middle ground. Both end games seem, well … unnatural.
While we are all striving for software utopia, expecting our digital solution to be somehow cleaner than our real-world problem, the cost is pragmatism. There is a distinct lack of moderation in the techniques used to write software systems. Many developers insist on using only certain techniques at the expense of others. These are the software evangelists.
Software evangelists tout a narrow band of technologies in the belief that use of this collection is the veritable silver bullet to any conceivable software engineering problem. Evangelism simultaneously halts the software’s architectural trajectory at a point in time, effectively carbon-dating the code. Under hard, critical analysis, these touted techniques may not in fact be as good a fit to the problem as expected.
A narrow band of technologies may limit application authoring to developers with a narrow set of skills, skills which can become rare and therefore harder to supplement as time goes on. Ironically, many open-source engineering approaches such as Methodologies, Frameworks and Patterns abound for the sole purpose of software development simplification. Yet software evangelism, even of these approaches, may contradict the simplification ideal by inadvertently creating a state of unmaintainable complexity.
As a result of influential evangelism, unwittingly or otherwise, the current developer(s) on your project are probably not applying maintenance-friendly development techniques. What do I mean? Here are some examples of development unfriendliness:
- not enough care is taken over making the code as simple as it possibly can be. All too often the adage "there is no problem that cannot be solved by another layer of indirection" makes an execution pathway too convoluted with respect to its purpose.
- the code is not written to allow ease of maintenance. Let us not forget that maintenance occurs both during future development cycles and within the current runtime context.
- what internal configuration mechanisms exist are simply not flexible enough. They are often the last item considered in a development cycle and, as such, are implemented in a hurry.
- so often business configuration is not exposed to the user but instead left to the IT department to take care of, dramatically increasing cost, unnecessarily lengthening time to market and increasing the risk of failure.
- documentation (both technical and user) is almost always deficient. Not enough care is taken to articulate the system's core behaviour in a clear and effective manner.
All of these anti-techniques lead to the embarrassing situation where developers and users alike have no choice but to research the systems they inherit, in order to understand the software's behaviour before any changes can be made to the code. This undesirable effect is independent of context: it occurs in both monolithic and swarming-SOA component landscapes.
So beware your software evangelists.
OK, you say. Big deal. So we like to use what we know. What's so wrong with that? We know what we are doing and how to do it. True, but short-sighted. Most projects with a development budget over 500,000 USD have a production life span of 5-15 years. Within that lifetime, at least two significant frameworks can be expected to appear that are directly relevant to the application. Why prevent the project from reaping the benefits of these new methods?
The solution? There is no silver bullet. There is however a very successful human strategy we can employ here: common sense. Sit in your chair for a few moments and run through some simple “What If” scenarios. Imagine you are the poor sod who has to come along in 12 months and perform some major surgery on the code you are writing today. Can you make implementation choices now that make your future developer’s life easier? Imagine for another moment you are the poor soul who has to make some future static data change. What can you do to make that person’s world less pain-wracked: a simple GUI perhaps? How easy is it to change the logging level in your software? How easy is it to reconfigure resources? How easy is it to automate the testing of this thing? This is a useful exercise but be careful not to turn the few moments anticipated for deliberation into the next few days.
And the rest of the time you spend pondering should be spent on a question that no developer likes to consider: how easy is it to decommission the application? What nasty little dependencies have you inadvertently coded into your solution? Are your interfaces clear? Have you pared down your code to its absolute minimum? How is it presented? How's your documentation?
A few common sense/natural decisions made when the software is being born will dramatically reduce the chance that your code will become hated in years to come. Common sense will reduce the production lifetime dollar numbers. Common sense will help save the relationship to the sponsor. Fewer people will be needed. Enhancements will be made to keep the application current and not to back fill original user requirements. Time to market will be shorter. Testing will be simpler. Above all, it will be a pleasure to work with such a system.
"KISS": yes, you know what that means.
“Moderation in all things, especially lines of code”
“Work smart, not hard”
“Indirection is powerful. Use it wisely and sparingly.”
Avoid unnecessary bouncing in Spring
Spring is great. Those guys really know their Java onions. The only problem with Spring is that, despite a load of really good API documentation, real-world application of the framework is only gleaned through blood, sweat and fear.
Some basic tips then to avoid some messy quagmires in the code:
- don’t go mad with chained config files. You will kick yourself when it comes to debugging during testing. No more than five should be sufficient
- in your test branch (assuming you are following a Maven-like code structure), link in configuration files from the production branch THAT DO NOT NEED TO BE CHANGED FOR TESTING. This also saves you a sanatorium's worth of palpitations. Sure, create config files that do give you specific testing conditions; there should not be many, if any, of these.
- make your Spring Managed Beans (SMBs) stateless where possible. That means NOT declaring their scope as prototype. Leave the scope blank.
- inject as many dependencies as possible into your bean during the bean's definition. Avoid injecting at runtime, as this just adds to the pain.
- create bean parent definitions in your config files so that you don’t end up repeating the same definitions over and over and over and over …
- avoid gathering beans on the fly from the BeanFactory. Your blood pressure will drop dramatically as this reduces runtime issues and handily cleans up the code nicely.
- if you must gain a reference to a managed bean in your code at runtime, make sure that you use the BeanFactory that Spring magics up: implement the BeanFactoryAware interface in your class and add the method public void setBeanFactory(BeanFactory beanFactory) to set a local BeanFactory member (see the sketch after this list). Don't be clever and try to invent your own BeanFactory implementation. It will blow up in your face.
- don't attempt to mess with the inner workings of Spring's DI/IoC machinery. It's complicated and will only give you a nosebleed.
- autowire if you must, but be warned that you can come really quite unstuck at runtime, because you then have even less of an idea what Spring is doing than under the declarative bean-wiring approach.
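Here is a minimal sketch of that BeanFactoryAware approach. The bean and service names are made up, but BeanFactoryAware and setBeanFactory() are the real Spring API:

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;

// Hypothetical service interface looked up at runtime.
interface QuoteService { void refresh(); }

class QuoteRefresher implements BeanFactoryAware {
    private BeanFactory beanFactory;

    // Spring calls this once while wiring the bean, handing us its own factory.
    public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
        this.beanFactory = beanFactory;
    }

    public void doWork() {
        // Last resort only: prefer injecting quoteService in the bean definition.
        QuoteService service = (QuoteService) beanFactory.getBean("quoteService");
        service.refresh();
    }
}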
And campers, remember this: the boys who have spent years building Spring into what it is today have spent far more time agonising over and testing their work than you can ever hope to in your app. If you have a problem with a Spring implementation, more often than not it will be down to something you have done with it (or to it) rather than a bug in the framework itself. Google is your friend.
Someone quite obscure once said: “There isn’t something you are trying to do in Spring that someone hasn’t already done. The trick is to hunt down how THEY solved the problem.”
Remember, it's all objective
No getting past it: Java is an OO language. Yet so many cases exist where a procedural programmer has grabbed Java by the scruff of the neck and forced it to be unnaturally flat-file-like by making classes and methods static.
This simply will not do. You are robbed of the benefits of inheritance and delegation. It makes it nigh on impossible to refactor your code, forces you to write more of it (introducing the cut-and-paste antipattern), kisses goodbye any chance you might have to employ standard OO patterns and, worst of all, you can get HORRIBLE threading issues: variable overwriting, deadlocks, starvation – the whole nine yards.
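The threading point deserves a concrete illustration. A minimal, hypothetical sketch: two threads calling this static method can interleave and silently overwrite each other's updates.

import java.math.BigDecimal;

class StaticTotaliser {
    // Shared, mutable, static state: every caller on every thread sees this one field.
    private static BigDecimal total = BigDecimal.ZERO;

    // Read-modify-write with no synchronisation: concurrent calls can lose updates.
    static void add(BigDecimal amount) {
        total = total.add(amount);
    }

    static BigDecimal total() { return total; }
}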
A great deal of our effort as programmers is spent trying to achieve the widest functional coverage with the minimum amount of code. There are screeds of books written on the benefits of this, so let's not bother here with why it is a good thing.
All that is being pointed out to you, Dear Reader, is that creating sensible object and interface hierarchies and a liberal dose of polymorphism, overloading and overriding saves you a load of finger work at the cost of a few more moments of brainpower. Not only that but it makes your code more PRESENTABLE to the casual observer. Something that becomes essential when you are working on a project where there is little or no technical documentation. It also shows that you UNDERSTAND what you are doing, at least most of the time.
It's easier to test, too.
“Work smart, not hard”, “Don’t PlanDooooooo, PlaaaaaaaaanDo. Better for your health”
How do I evaluate a boolean? Omigod.
This is no joke. A bloke on our team came up to me today and asked about a simple change to some code that was evaluating the BigDecimal.compareTo() operation (see my earlier blog on why you should do this). The current code was written defensively to avoid NPEs:
if (blee != null) {
    if (blah != null && blee.compareTo(blah) != 0) {
        return true;
    } else {
        return false;
    }
}
Turns out that the blah != null term was causing an issue: the line would always return false if blah was null, instead of the correct answer, which is true.
So matey wants to avoid the blah != null test failure. You and I would simply break up the if statement into two if statements, comme ça:
…
if (blah == null) return true; // assuming that blee above was not null
if (blee.compareTo(blah) != 0) …
Much to my dismay, this colleague wanted NOT to do this, but instead to add an additional line setting blah to some random BigDecimal value, because "I will choose a number that will never be used".
…
if (blah == null) blah = new BigDecimal(-7878); // WTF?!!?!
if (blah != null && blee.compareTo(blah) != 0){ return true; } else { return false;}
What? Eh? Can he really guarantee that claim? I see our future, and it involves many hours digging out this hard-of-thinking code.
Don’t worry, I straightened him out.
Work smart, not hard. Ten tips when dealing with MuppetCode
Banging your head all day against a codebase that has been badly polluted by incompetent developers can really take its toll. So here's a list of things you can do to relieve the stress. In each case, note who the culprit is so you can "name and shame" later:
- count the number of times the plonkers have put double negatives in the code. You know, things like:
if (!(something != somethingElse)) { …
- hunt around for (a) nested loops and (b) nested loops that don't sensibly exit when they should. You get extra brownie points if you can spot one in a critical execution path. For example:
for (Idiot iDot : collectionOne) {
    …
    for (Stoopid duh : collectionTwo) {
        …
    }
}
// tada! look mum, I just wasted a whole load of time for no reason at all
return somethingThatCouldHaveBeenFoundAgesAgo;
- Count the number of times you see the same 5 lines of code repeated in the same class. Extra points here for spotting this across sibling classes. Even more points if you spot a case where an implementation in the base class already exists, so the subclass is overriding the base class implementation with exactly the same code. Gah!
- Count the number of times a method begins with the same two or more words. Like createSomethingUsingA() and createSomethingUsingAWellAlmostAllOfA(). Accessors don’t count.
- Look for classes that implement THE SAME METHODS as a generalised utility class but make no reference at all to the utility class or to the clever clogs who thought to put his code there.
- Count the number of //TODO comments scattered throughout the code. If you are really bored, use your source control system to work out how old each comment is. Make a note of its age.
- See how many recursive loops you can find in a class. Play a game with yourself and predict how many recursions it will take before the VM explodes with a StackOverflowError. Make a bet with a colleague and wait for the production defect ticket to roll in.
- Count the number of static classes and static methods in instantiated classes. Order a small trophy with the words “luddite of the century” engraved on the side, include the statics count and ceremoniously give it to the responsible colleague. If he/she is no longer there, have a posthumous ceremony in the Pub instead to drink away the pain they have caused you.
- See how many methods modify the content of an argument without making any comment or effort of any kind to indicate to the sucker following along behind that the method is going to mysteriously change the data without warning. Even better, see if you can find a method that returns a COMPLETELY DIFFERENT OBJECT TYPE while also modifying your argument on the sly (a hypothetical example follows this list).
- And to my favourite: count the number of offences above committed by each member of your team. There will certainly be one, if not two, who are consistently rubbish. Go get them a large trophy with "Consistently Rubbish" on the side. Adorn it with their full rap sheet and, for extra zing, add the number of hours you have spent looking at and unravelling their cock-ups. Present this trophy to the "winner" in front of his/her line manager.
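And for tip 9, a hypothetical example of the offence – a method that quietly guts its argument and hands back an unrelated type:

import java.util.List;

class Surprise {
    // Mutates the caller's list on the sly AND returns a different type entirely.
    static String summarise(List<String> lines) {
        int originalSize = lines.size();
        lines.clear();                           // surprise! the caller's list is now empty
        lines.add("summary");                    // ...and repopulated behind their back
        return "saw " + originalSize + " lines"; // a String, not the List you passed in
    }
}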
OK, so you may be without a job after tip 10, but you will have had a certain satisfaction in knowing that you have exposed a serious weakness in the team and made a public display of your own saintly suffering.
The BigDecimal rolls into town …
Like most programmers, I am probably the laziest person I know. That means that when I can be bothered to do something, I want to take the minimum amount of time over it, naturally with the best possible results. This thought occurred to me the other day when working on a collection of Domain Objects that had, you guessed it, similar but slightly different implementations of the same method. Although that is bad (see my post in "Pet Hates" about this particular problem), the main issue here was how a misunderstanding of the internal workings of BigDecimal can really screw things up for you.
The issue in this case was a utility method that attempted to determine whether any data in the current Domain Object differed from that seen in a another Domain Object. Simple stuff. Not so when you consider that the DO in question used a number of closely typed fields, including the infamous BigDecimal type.
The problem arose when the values contained in DO #1's BigDecimal fields were compared with those in DO #2's. The BigDecimal.equals() method does a sterling job of comparing the two values right down to the scale, i.e. the number of decimal places. So 2.00 is NOT equal to 2.0, even though it clearly is to us humans.
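The difference is easy to demonstrate:

import java.math.BigDecimal;

public class ScaleDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("2.0");    // scale 1
        BigDecimal b = new BigDecimal("2.00");   // scale 2

        System.out.println(a.equals(b));         // false: value AND scale must match
        System.out.println(a.compareTo(b) == 0); // true: numerically equal
    }
}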
Every now and then, depending on how the BigD was populated, equals() would return false when to the naked eye the data should have been equal. The "every now and then" part was worked out to depend on whether the source DO was obtained directly from the DB via Hibernate or hauled out of Hibernate's second-level cache: it turns out the scale differs when reading from the DB as opposed to pulling an existing object out of the cache. Although this issue was limited to our code, you can see how easily a problem like this can creep in. Scary stuff.
So the scale variation depended on whether the object had been cached by Hibernate, which in turn depended on whether the VM had been restarted recently. The net effect was that if the DO existed before the VM restart, all was well with the equals() evaluation. If it was created after the VM restart, the equals() call ALWAYS failed, despite the numbers in question appearing to be equal.
To the rescue came the BigDecimal.compareTo() method, which effectively normalises the scale without enforcing any potentially damaging data truncation. Once implemented, our system became a lot more predictable.
Getting a bloodied nose from Cut and Paste
One of the more annoying aspects of working with a predecessor's codebase is seeing cut-and-pasted code all over concrete implementations that could so easily be refactored into the base class implementation. Many books tell us that refactoring upwards is a good thing for a number of obvious reasons: code reuse, behavioural predictability and simpler maintenance are the most often quoted.
On a recent project I have seen no fewer than 12 places IN THE SAME CLASS where the SAME CODE has been cut and pasted. This suggests that the author of these unnecessary method implementations did not have a firm grasp of what the original code was doing. Rather than make what would have been a fairly lightweight modification to the existing implementation, the author chickened out and simply duplicated the code in another method (or ten). Oh, and unit tests were nonexistent, proving that the author did not attempt to understand the existing code, properly check the existing behaviour, or modify what was there with care. This deficiency makes matters much, much worse.
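The fix is usually mundane. A hypothetical sketch of refactoring upwards: hoist the duplicated logic into the base class once, and let each subclass supply only the part that actually differs.

// Before: every subclass repeated the same null-check-and-format logic.
// After: the base class owns it once; subclasses override only the variation.
abstract class ReportLine {
    // Shared implementation, written exactly once.
    String render(String value) {
        return value == null ? "" : prefix() + value.trim();
    }

    // The only thing that actually differed between the cut-and-paste copies.
    abstract String prefix();
}

class CsvReportLine extends ReportLine {
    @Override String prefix() { return ""; }
}

class LabelledReportLine extends ReportLine {
    @Override String prefix() { return "value="; }
}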
When code like this is released into production, all hell breaks loose. As the code did not have any unit testing at all, proof of the implementation was left to integration testing, which is notoriously unreliable because it is so time-consuming. The end result, as proven by a recent release of exactly this kind of code, was that it needed 9, yes that's right, NINE, patches in the first month after release. The main issue was that the defects were caused by the subtly different implementations of each cut-and-pasted method.
The message here? DON'T DO THIS UNLESS YOU WANT TO LOSE YOUR JOB.