Moving to VI

04 February 2015

Tags: vim

A while back I tried out the Atom editor, and while it isn't a bad editor per se, there were two things I just didn't like about it.
The first was that the startup time was way too slow. The second was that the fuzzy file search kind of sucked. I almost always had to type in the entire file name to get what I wanted. It always seemed to find everything but what I really needed.

Well, while watching some videos about Ember, I noticed many presenters were actually using vim. This kind of surprised me, but I thought I'd take a closer look. Now, I'm no stranger to vim, having used it for years, but in a very limited capacity: basically just to edit config files and such on servers, never as a daily editor.

Going back in time to my Zappos days, I remember when I first got there almost everyone used vi. The site was built in Perl at the time and each dev had their own dev server, so they basically spent their day SSH'd into their dev box coding. Only we few Java folks had an IDE (most of the front-end devs used TextMate). I remember watching a few of them and damn were they fast. I always remembered that: how quickly they moved around and how efficient they were.

So, fast forward to 2015, and I decided to take a closer look. Naturally I hit up YouTube and watched some videos, and I was surprised at the number of plugins available, and that they were the kinds of plugins I would want to use. So I took the leap, cold turkey, and started using it at work a week or so ago. I forced myself to use only vim and nothing else. I installed a few plugins like CtrlP, Vundle, airline and a couple of others. I also installed iTerm2, and the combination has been quite eye opening to say the least.

I still have a hard time once in a while with managing buffers, and some navigation (like forcing myself to not use the arrow keys), but overall the experience has been pretty positive. So much so that one of my coworkers is also making the switch :)

My daily flow now is to have several split panes open instead of my previous use of tabs. While other editors can do split panes, I never really got into the groove with them, but it feels quite natural in vim. I do have a few annoyances, but overall I think I'm going to stay with vim.


Static File Cache With Grails

22 January 2015

Tags: grails

It is a fairly common practice to front a Grails app with a static web server such as nginx for serving static files. That is all good and fine; however, for several years on some projects I have used a technique of generating static HTML files within Grails as a 'cache'. This technique is especially useful if you have an app running on a resource-limited server, like many VPS solutions where RAM is limited, and the content rarely changes.

Generating the page

The first step of course is to generate the content, store it in a variable and then save that content as a file in a directory that the web server has access to.

This is easily done in a controller and using a GSP taglib. Here is an example:

<g:applyLayout name="main">
.... content here ....
</g:applyLayout>

This file needs to be a GSP template. With that in place, we render it in a controller:

def content = g.render( template:'templateName', model:[whatever_model_you_need] )

Caching the content

Now that we have the generated content, we can cache that content into a static file. I use a service to do this:

import org.codehaus.groovy.grails.commons.ConfigurationHolder as CH
import grails.util.GrailsUtil

class CacheService {

  /**
   * @param key - the filename to store
   * @param content - the generated html
   */
  def add( key, content ) {
    switch( GrailsUtil.environment ) {
      case "development" :
        // no caching in dev mode
        break
      case "production" :
        try {
          if( "/" == key || key == "/store/" )
            key = key + "index.html"

          if( !key.startsWith("/") )
            key = "/" + key

          def f = new File( CH.config.cachedir.toString() + key.substring( 0, key.lastIndexOf('/') ) )
          if( f.exists() || f.mkdirs() ) {
            new File( CH.config.cachedir.toString() + key ).write( content.toString() )
          } else {
            log.warn "COULD NOT MAKE DIRECTORY ${f.absolutePath}"
          }
        } catch( Exception e ) {
          log.error e.message, e
        }
        break
    }
  }
}

Should be easy to follow, but I’ll walk through it.

The first thing it does is skip caching entirely in dev mode, though of course you can choose otherwise. The first part of the production block checks whether the 'key' is the index page; otherwise it uses the key as the name of the file.

You may notice the code doesn’t tack on the .html extension. That is because the key already contains it. We get it from the controller, so the full example looks like this:

def key = request.forwardURI

// do some logic here..
def content = g.render( template:'templateName', model:[whatever_model_you_need] )

cacheService.add( key, content )

The forwardURI is the important part. Grails allows URLs ending with .html (or other extensions), and the forwardURI contains that extension, so it can be used directly as the key.

I have a corresponding 'remove' method on the cache service so that if the content changes we can remove the static file and it will get regenerated on the next request.
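A minimal version of that remove method might look something like this. This is a sketch mirroring add()'s key handling and the same cachedir config, not the actual code:

```groovy
/**
 * Hypothetical counterpart to add(): deletes the cached file so the
 * next request regenerates it. Mirrors the key handling in add().
 */
def remove( key ) {
  if( "/" == key || key == "/store/" )
    key = key + "index.html"
  if( !key.startsWith("/") )
    key = "/" + key

  def f = new File( CH.config.cachedir.toString() + key )
  if( f.exists() && !f.delete() )
    log.warn "COULD NOT DELETE ${f.absolutePath}"
}
```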

Using this method, nginx serves the HTML pages with great speed and almost no CPU usage, allowing the Grails app to sit idle and only perform actions that require processing. The code above is part of an ecommerce app, so the Grails app only gets called for things like checkout, adding an item to a wishlist, logging in, that kind of thing. It allows us to deploy the app on a small VPS without resource issues, thus saving money.
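The nginx side of this is a simple try-the-cache-first setup. A hypothetical sketch (the root path and proxy port are illustrative, not from the actual config):

```nginx
server {
  listen 80;
  # the same directory the grails cachedir config points at
  root /var/www/cache;

  location / {
    # serve the cached static file if it exists, otherwise hit grails
    try_files $uri $uri/index.html @grails;
  }

  location @grails {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
  }
}
```

With this in place, a cache hit never touches the Grails app at all; only misses (and the dynamic actions mentioned above) reach it.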

Looking forward

I haven’t used this technique in a while, as I haven’t written an app that needed it. However, I recently started messing with EmberJS and it occurred to me that a single page app could use this technique for generating static JSON files that are requested by the client, in particular for things like lookup data or other data that doesn’t change very often.

I plan on exploring this in the very near future.


Billion Record Mongo Update

21 January 2015

Tags: grails mongo

A quick update to my previous post on our Grails app and testing with Mongo. While our query times were excellent, it seems iterating over those result sets did not fare nearly as well. At the moment I am not sure if iterating over the result set is the issue, or serializing the data into JSON, or a combination of both. I plan to do a test dropping down to the mongo driver itself, bypassing GORM, and see what that does.


Grails With Billion Record Mongo Collection

05 January 2015

Tags: grails mongo

The app that I am working on at work is a Grails app that deals with a variety of data and started out with a traditional Oracle database. One of the primary features of this application is being able to see trending data. This data gets loaded via a JMS queue and comes in batches of 15,000 to 60,000 records every 1 to 1.5 minutes. Roughly a million records an hour, give or take.

The requirements for the application were to maintain 90 days worth of data; the remaining data gets rolled into a data warehouse. So far, nothing out of the ordinary. The loading of the data took a little work to make performant, but all went well.


With about 60 days worth of data, we started seeing some slowness in queries against the data set. We usually query this data in small chunks of 7 days. At first we started looking at caching solutions. However, doing some quick math, we realized that a cache big enough to hold even 7 days worth of data was way out of our reach. It would require buying dedicated hardware with large amounts of memory, or lots of VMs with medium amounts of memory. Going dedicated would have been painful, as this company likes to run everything on small VMs when they can. We could get lots of VMs, but it seemed like a future maintenance nightmare.

Yes, we had proper indexing in place, but it was still just too slow.

Solution One

Our first solution was to have a short term, on demand cache. The UI is built in ExtJS. So what we did was send calls to the server to prefetch the data as the user was selecting it. The idea was that once the user was done selecting the data items they wanted to chart, and actually brought up the chart, we would have gotten it loaded and cached for use.

Solution Two

That kind of worked, but there was the off chance that something wasn’t ready to be seen by the time the user was ready. So we took a slight detour and went to a push model. We were already doing push for other features, so it wasn’t terribly hard to implement. As before, we fire off an async request to get the data, but instead of caching it, the server pushes the data to the client once it has been loaded. The client UI gets notified that the data is ready. All was right with the world.

Solution Three

Well, almost… The queries would still take a couple seconds each. Even though they were done asynchronously, the query times were still a bit too long to hide from the user. On a whim, I had the idea to use MongoDB as a cache. Instead of a full-on in-memory cache like Redis (which we use for other stuff), perhaps mongo could serve as a short term, small persistent cache.

So, I set about making that happen. Converting the domain from Oracle to MongoDB was no big deal, given the mongo GORM plugin. I then created a Groovy script to load the mongo cache with some existing data from Oracle. All went well.

We then had the bright idea to see at what point does mongo become a bottleneck like oracle. So, we pointed our loaders to mongo full time and let it build up data.

At 500 million records, we were seeing query times of 15-50 milliseconds, give or take. Remember, we were also doing good-sized insertions every 1-1.5 minutes.

A few days later we hit 750 million. Query times remained the same. We were quite optimistic. I should also note that the UI takes longer in most cases to parse the resulting json data than it does to query it from mongo :)

The Billion Mark

Then we hit a billion records over one weekend. I quickly pulled the app up, loaded the trending screen and let it fire off some queries. Watching the logs, I was seeing query times of… 20-100ms, with most in the 20-50ms range. Hot damn, we were on to something.

We quickly decided to not only drop Oracle for the trending data, but to also have the UI do synchronous calls and eliminate the push process. The queries were just so fast we didn’t need the extra code to make the push happen. We are also going to drop Oracle completely and use mongo for all our database needs. It will help with a few other features that currently require some stored procs and materialized views.

Then this morning, while fixing some other issue, I checked the stats on the collection and it was at 1.45 billion records. I had forgotten to set up the script to trim the collection at 90 days; I had done it for dev, but not QA yet. Well, I had to see what the performance was like, so I loaded up the app in QA and checked. Yep, still 20-50ms. Some were as fast as 10 or 15ms.

Granted, a billion records in some domains is not uncommon. However, we are running on a Linux VM with 32gb of RAM, and the application itself is on the same VM. Our data set on disk is 19gb and one of the indexes is 9gb. It should be noted that we are using the TokuMX version of mongo, which compresses the indexes and data sets. I do not have numbers for a pure mongo setup. The compression was the important factor in my choosing TokuMX, given our limited RAM.

Going To Production

Our application is not in production yet, but I have set up the production servers for mongo use. We currently have 4 servers: 3 in a replica set, with the 4th as an arbiter. Our security service is using those mongo instances currently, and this application will use them as well when it is ready for prod in the coming month or so.

A side benefit is the replica set itself. The disaster recovery processes here are… interesting, to say the least. However, with 3 replicas of our data on 3 different physical machines, we have essentially real-time backups of our data, unlike the rest of the apps here that rely on nightly backups.


EmberJS First Thoughts

29 December 2014

Tags: ember

I’ve been looking at javascript frameworks for single page apps again. I used Backbone with Marionette on a smallish project about a year ago and had some success with it, but felt it was time to see what else was out there.

After doing lots of Googling for ABC vs XYZ, reading lots of blogs, watching videos and all that, I decided to give Ember a shot. I can’t say for certain that it was one particular feature over another that led me down this path, but I think it had a lot to do with Ember not being too bulky (ExtJS) yet being more than a bare bones framework (Backbone).

Once I made my decision, I spent lots of time reading the guide, looking at tutorials and watching videos. I must say the docs are very nice and cover most aspects that a new user would need, so that’s a definite plus. Last night I finally took the plunge, downloaded the starter zip and saw what I could come up with.

No CLI for me

I made the conscious choice not to go the route of using ember-cli. I had a feeling I’d get caught up in nodejs and build hell, and that would make me quit in a hurry. I just wanted something simple to work from, so the basic zip file starter setup was just fine.

I also decided not to fixate on a back end at this time. So initially it’s just a couple of simple js files, using some fixtures for temp data. Later on I’ll work on integrating a Grails back end operating as a JSON based web service.

My sample domain

The app I chose to work with is a potential contracted application. Yeah I know it is bad form to learn something new while building for someone else, but I’m not yet under contract for it, so I’m learning on my own dime ;)

The first scenario was going to have 3 select boxes for narrowing down a list of vehicles: the first having the make, the second the model and the third the year. I have yet to actually make the select boxes do anything, but what I did do was get some familiarity with the ember data package.

Ember Data

I have to admit, coming from the Backbone/Marionette world where defining a real domain model was a royal pain, ember data is very nice so far. My model consists of four domain objects, a single route and a single view so far. All I did was render the object model into unordered, nested lists to see how it all works. Other than a few issues with how the fixture data was being set up, it works pretty well.

Here are the current models:

App.VehicleMake = DS.Model.extend({
  name: DS.attr('string'),
  models: DS.hasMany('vehicleModel', {async:true})
});

App.VehicleModel = DS.Model.extend({
  name: DS.attr('string'),
  parent: DS.belongsTo('vehicleMake'),
  vehicles: DS.hasMany('vehicle', {async:true})
});

App.Vehicle = DS.Model.extend({
  vehicleModel: DS.belongsTo('vehicleModel', {async:true}),
  year: DS.attr()
});

App.Product = DS.Model.extend({
  name: DS.attr('string'),
  msrp: DS.attr('float'),
  sku: DS.attr('string'),
  weight: DS.attr('string'),
  description: DS.attr('string'),
  image: DS.attr('string'),
  vehicles: DS.hasMany('vehicle', {async:true})
});

I found that if I didn’t have async:true on the relationships, they didn’t get 'loaded' from the fixtures. You can also tell that ember data is a rapidly moving target, as many of the examples you see online via blogs or Stack Exchange show fixture setups and model relationships defined in different ways. That led to some minor confusion, but thankfully the ember site itself was a decent enough resource that when in doubt I referred to it.

Multiple model route

Unless I missed something, the docs weren’t that clear on what to do with a route that needs to return multiple models. A bit of searching led me to an answer, via the RSVP.hash function.

App.IndexRoute = Ember.Route.extend({
  model: function() {
    return Ember.RSVP.hash({
      // one entry per model the template needs, e.g.:
      makes: this.store.find('vehicleMake'),
      products: this.store.find('product')
    });
  }
});

That worked just fine.

Next steps

So that was all done in a couple hours last night. Doesn’t look like much, but hey I’m just starting :)

Next step for me will be to render actual select boxes instead of nested lists. I have seen the select component on the ember site so I may try that, but we’ll see. Then I’ll add in more routes to work with picking a value from one select to trigger populating the next one, etc.


How I Build This Blog

29 December 2014

Tags: jbake

Inspired by a comment on my New Blog (again) post, I decided to share how I build my JBake blog. The commenter referenced this blog post, where the author used Gradle to build and publish his blog.

Well, mine isn’t nearly that elaborate; all I did was write a four line shell script.

cd output
scp -r *
cd ..

Nothing fancy there. I suppose I could have used Gradle, but it wasn’t necessary for me. I do use GitHub to store my source files, however. A few others do as well, as seen on the jbake community page. That was quite helpful for me, as the docs don’t cover some things and their source files helped fill in the blanks.


Just found Atom

23 December 2014

Tags: atom

While watching a few videos on Ember JS, I noticed a lot of Mac folks were using Sublime Text. I’ve been a TextMate user for many years now, but I thought it would be interesting to see what Sublime had to offer.

Well, along the way, while doing some Google searching on Git integration, I came across Atom, which was released by GitHub. Surely GitHub has to have excellent Git integration, right? After downloading Atom, finding a theme I liked and playing with some settings, I opened up one of my projects. I found that, unless I am missing something, Atom doesn’t have git commands built in. Really? That seemed odd. So I downloaded a few packages to try out.

The first was atomatigit. This seemed overly complicated to use. What I want is to be able to commit whatever file has focus in the editor. Is that so hard?

So I uninstalled that, and found git-plus. This is much closer to what I wanted. Sadly the author doesn’t maintain it anymore, but it seems to work just fine for now. With that settled, I’m off and running.

I was never what one might call a keyboard master with TextMate. The biggest feature I used was CMD-T to look up a file quickly, and thankfully Atom supports the same concept.

The split pane feature would be nice as well. I don’t need it often, but occasionally it would be handy, so I’m going to mess with that.

I can see myself going through several themes as well. I always have a hard time finding one that I really like.

The best part about Atom is, unlike TextMate or Sublime, it is free :)


New Blog (again)

19 December 2014

Tags: blog

I've been wanting to start fresh with a blog for some time but just haven't gotten around to it. I've looked at various platform options on and off, but I didn't want to use a hosted solution like Blogger, really didn't want to use Wordpress yet again, and while I could have built one using Grails (and I thought about it often), the time involved didn't appeal to me as I had other, better things to do.

So, I came across Octopress and the GitHub publishing of blogs and thought that was pretty intriguing, but again, GitHub as the host didn't appeal to me. Eventually I came across JBake and it seemed to fit what I was looking for. That was about an hour ago :)

I looked at some of the other blogs and sites mentioned on the JBake site and it seemed pretty nice: simple text file writing, which I like, with the ability to use Groovy Server Pages (which I'm familiar with) along with AsciiDoc pages (which I want to learn). I also liked that it is completely statically generated, so I don't have to run a database or an application. Just generate the content, copy it to a server and let the web server handle it all. Saves resources and is super fast. All in all, I figured why not.

I installed it via GVM, initialized my first directory structure, generated the example pages, did some digging around and here I am, writing this first post. I will probably migrate a few of my favorite posts from my old blog over eventually, and maybe add in some non-blog stuff later on, like various non-coding projects I do.


The joke that is dependency management

06 August 2011

Tags: java grails

reprint from old blog

For the second time in this job I’ve had to deal with Maven. So far I’ve avoided it like the plague, but now I have to work on some old code and the company “standard” is to use Maven. So, I downloaded the new version, set up my path and attempted to do a simple build. I added one dependency, c3p0, and Maven in all its glory couldn’t find it. Why on God’s green earth does anyone submit themselves to this kind of bullshit? Not to mention the 440+ pom files Maven decided it needed to download. WTF?

The first time I had to deal with it was with the stupid Eclipse plugin, where Maven would keep putting in jars I didn’t need or ask for and wouldn’t supply the ones I did ask for. I gave up, wrote an Ant script, and everything worked just like I wanted it to. Seriously, all the dependency management zealots need to see a shrink. By the time you set up the stupid pom file (or ivy file if you like), I could have downloaded the jar, dropped it into my lib directory and presto, it works. Worst case, you have two lib directories: one for build-only jars and another for build-and-deploy.

Dependency management and tools like Maven just underscore the java community’s desire to over-build, over-architect and generally make things far more difficult than they need to be. In 12 years of doing java I’ve never once thought that putting jars into a directory was a difficult thing.

“What about different versions of jars?” Are you kidding me? I upgrade needed jars much like I upgrade to a new JVM or a new Tomcat: evaluate and, if needed, replace the jar with the new version. Oh boy, that was so hard I should use a tool to make it harder! Adding a new jar (excuse me, dependency) is just as painful: I write four lines of XML to describe the dependency, spend several minutes figuring out what repo has the damn thing, and if needed write some more XML to point to that repo. Then pray it works. Compare that to downloading the jar, dropping it into my existing lib directory and not having to configure anything. Yeah, those 6 lines of XML that I wrote at the beginning of my Ant setup telling it where my classpath is still work. Imagine that.
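For reference, the kind of Ant classpath setup being described really is just a few lines (the directory and id names here are illustrative, not from any actual build file):

```xml
<!-- point the classpath at a lib directory; new jars just get dropped in -->
<path id="compile.classpath">
  <fileset dir="lib">
    <include name="*.jar"/>
  </fileset>
</path>
```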


On Git and SVN

06 July 2011

Tags: git grails

reprint from old blog

Naturally if you search for pros and cons of Git vs. SVN you will find all sorts of compelling arguments on both sides. The vast majority are technical such as distributed vs. central and, IMO, completely miss a big point.

As a bit of background, like most non-Microsoft developers I used CVS and SVN for the better part of my 12 years in development. Then a few years ago, when I was working for Zappos and we moved from Perl to Java, we eventually moved from SVN to Git. At the time I was a bit reluctant, but it was more about user friendliness and such. After using Git there, and personally via GitHub, I’ve come to the conclusion that most developers are missing a huge opportunity in source control.

See, I’ve come to realize that SVN is a decent versioning system but a horrible source control system. The typical cycle in an SVN environment is to use the head branch as your working tree and branch/tag when you have a release. Anyone who has done development long enough knows the pitfalls of such a setup. To add to that, commits are far fewer and at far longer intervals than when properly using Git. The reason is generally quite simple: branching is rather painful in SVN and a total breeze in Git. It seems that in every SVN environment I’ve worked in, SVN was just a glorified backup. A versioned backup, if you will. A place where you pushed your code and prayed it didn’t break anything. And heaven forbid you have to work on multiple things at once and then go back and fix something.

The way we had it set up at Zappos, and to this day I still think it is the best setup I’ve ever used at a company, was that we had a release manager who, among other duties, maintained the “central” git repo. At first glance this isn’t terribly different from using SVN; however, no developer could commit to the master branch. The master branch represented what was currently in production, and as such it was always the baseline. We used Jira for our ticketing/issue system, and everything we did was a ticket. Nothing too out of the ordinary there.

However, what we did was branch our local git repo for each ticket that we worked on. This is very important: EACH ticket had its own branch. Some people would faint at such a process, but with Git’s seamless branching it was a joy. It also meant that we could work on many tickets at once without affecting the others, or more importantly, without breaking anyone else’s. So, when we were complete with a ticket/branch, we pushed it up to a specific repo that was controlled by the release manager (RM). Notice I didn’t say merge: we pushed our branch up and, using Gitorious (a local install), made a merge request.

From there, we switched branches and worked on something else. When a set of tickets was ready for testing, the RM merged in all the tickets required for that particular QA release and pushed to the QA servers. QA did their thing, and if there were issues, we would switch to that branch, make changes and push the branch back up, and the cycle repeated until it was cleared. Once it was verified and pushed to production, the RM merged those changes back into master and we would pull and chug right along. We would also remove that particular branch from our local repos, as it was no longer needed.

Someone coming from the ‘traditional’ way of doing things would read that and think it was a maintenance nightmare, but it really wasn’t. Even with 40+ developers it worked pretty smoothly. It also meant that two devs, usually a back-end dev (like me) and a front-end/HTML dev, could work on a ticket together and push back and forth between the two without anyone else being affected. That is the distributed nature of Git at work. It was not uncommon for any one dev to have half a dozen or more local branches going at any one time. It was just so easy to branch, merge and push as needed.

To me, that is real source control and not just source versioning.


Interesting transaction issue

22 October 2009

Tags: grails

I’m working on a new Grails based project and came across a transaction issue. The application is an ecommerce app and as such the placing of the order has a lot going on that should all reside in a transaction.

Being a Grails app, I decided to put the processing inside a transactional service method. So far so good. The flow looks something like this:

1. Create billing address
2. Create credit card
3. Assign billing address to card
4. Create shipping address
5. Loop through the cart items and for each item find stock (inventory) to fulfill each
6. Mark the stock as sold as applied
7. Add each order item to the order
8. Assign order to customer
9. Save order and customer

That is a little oversimplified, but you get the idea. So, what was the problem? It came in during the select for the stock. This select would cause a Hibernate exception about the address being a transient object. That made no sense at all.

At first I started down the road of abandoning the whole transaction thing and doing it manually. Naturally that was ugly and very error prone, so I did some research to find out what the cause might be. Came up empty.

My gut said that a select should not cause a problem with unsaved data. Then I thought that maybe Hibernate was enforcing some sort of isolation level mechanism. Most of us never bother with database isolation levels (and sadly, few even know what they are), but perhaps that was causing it. So what I did was move all the save() calls from the end to the point where the objects were first being used.

In other words I called save on the two addresses and the credit card before the loops on getting and assigning stock.

Worked like a charm. I now have the whole thing in a transaction as it should be.
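To illustrate the fix, here is a hypothetical sketch of such a service (the domain classes and method names are made up, not the app's actual code); the important part is that the addresses and card are saved before the stock selects run:

```groovy
class OrderService {

  static transactional = true  // the whole method runs in one transaction

  def placeOrder( customer, cart, billingInfo, shippingInfo ) {
    // save() these up front, before any selects, so hibernate
    // doesn't complain about transient (unsaved) objects
    def billing  = new Address( billingInfo ).save()
    def card     = new CreditCard( billingInfo ).save()
    card.billingAddress = billing
    def shipping = new Address( shippingInfo ).save()

    def order = new Order( customer: customer, card: card, shipTo: shipping )
    cart.items.each { item ->
      // this is the kind of select that originally threw the transient-object exception
      def stock = Stock.findAllByProductAndSold( item.product, false, [max: item.qty] )
      stock.each { it.sold = true }
      order.addToItems( new OrderItem( product: item.product, qty: item.qty ) )
    }

    customer.addToOrders( order )
    order.save()
    customer.save()
  }
}
```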


Pilot Philosophies

11 July 2007

Tags: humor

The difference between a duck and a co-pilot? The duck can fly.

A check ride ought to be like a skirt – short enough to be interesting, but long enough to cover everything.

Speed is life. Altitude is life insurance.

It only takes two things to fly: airspeed and money.

The three most dangerous things in aviation: A Doctor or Dentist in a Bonanza; Two captains in a DC-9; A flight attendant with a chipped tooth.

Aircraft Identification: If it’s ugly, it’s British. If it’s weird, it’s French. If it’s ugly and weird, it’s Russian.

Without ammunition, the USAF would be just another very expensive flying club.

The three best things in life are a good landing, a good orgasm, and a good bowel movement. A night carrier landing is one of the few opportunities to experience all three at the same time.

The similarity between air traffic controllers and pilots? If a pilot screws up, the pilot dies. If ATC screws up, the pilot dies.

It’s better to break ground and head into the wind than to break wind and head into the ground.

The difference between flight attendants and jet engines is that the engines usually quit whining when they get to the gate.

New FAA motto: “We’re not happy, till you’re not happy.”

A copilot is a knot head until he spots opposite direction traffic at 12 o’clock, after which he’s a goof-off for not seeing it sooner.

If something hasn’t broken on your helicopter, it’s about to.

Basic Flying Rules: 1. Try to stay in the middle of the air. 2. Do not go near the edges of it. 3. The edges of the air can be recognized by the appearance of ground, buildings, sea, trees and interstellar space. It is much more difficult to fly in the edges.

Unknown landing signal officer to carrier pilot after his 6th unsuccessful landing attempt: “You’ve got to land here son. This is where the food is.”


Being a software snob

24 April 2007

Tags: humor

The other day on the way home, after listening to me go on and on about one of the POS vendor apps we have, my wife came to the conclusion that I am a software snob.

At first it caught me off guard. Usually she just says something like “that’s nice dear” or “are you finished?”, but this time she came right out and called it like she saw it. After I laughed at her observation, I came to the conclusion that she is right.

What kind of snob? Well, I don’t get into the debates too often about which language rules over another or which framework is best, yada yada. Sure, I’ll like one over the other, but I don’t get religious about them like some people. What I do get religious about, however, is bad software. I can’t stand it. I don’t care what language it is written in or whether it follows a strict object oriented approach or whatever, just that it is well implemented and has good features for the price.

The ironic thing is that in the software world you usually don’t get what you pay for. There are few products that are actually worth their price tags, and I would say a good 80%-90% of the time you can find a really good open source product (or 3 or 4) that will do the job better, has better features, etc.

Yes, I’ve worked for corporations long enough to know that corps like to buy from a vendor for some perceived benefit of support or whatever, but even then the support is usually horrible and not worth the price that is being paid for it.

I’m not saying that I’m against all vendor or commercial software. Not at all; I’m just against overpriced AND poorly done software. One example is the content management system that we have. It comes in at a low 6 figure price tag, has the most archaic management interface you will ever see, and its whole architecture is a single DLL (written in Delphi, for those that care) running against SQL Server. It doesn’t take a genius to figure out where the bottleneck is in this system. Never mind that there is no server-side validation of anything (can you say SQL injection?), no CAPTCHA support (gotta love spam…) and at first look the templating system seems way overcomplicated. One look at, say, Joomla! or Drupal or one of the many other open source CMS solutions and you can’t help but think WHY?!?


Is Spring becoming what it despised?

09 March 2007

Tags: spring

Spring is a great framework, but it seems like it is turning into exactly what it was trying to solve: a complex, full stack framework. It was created out of frustration with the then J2EE (now JEE, of course) stack of technologies. Spring’s approach was good, fairly simple and got the job done. But is it now too big, and are people jumping ship?

If you take a recent look at the most read items on JavaBlogs, it seems so. From Seam to Guice, it looks like there are competitors out there looking to fight Spring for king of the framework mountain.

To Spring’s credit, it has a large following and won’t go by the wayside anytime soon, but if these new annotation-based frameworks get some traction I would expect pretty good adoption among them. Give it a year or so for a few to come out, then another year or so before the prominent ones take market share from each other and, slowly, from Spring.

The Driving Force

Undoubtedly, annotations are instrumental in leading this change. Just a few years ago XML configs were the norm, but once annotations arrived in Java 5, everyone was tired of those XML files and looking for a better way. Thankfully the core JEE stack started using them and everyone else jumped on that wagon. I’m still not a fan of using them for everything, but in a lot of cases they certainly make sense.
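A toy example of why annotation-driven configuration felt so much lighter than XML: a made-up @Component-style marker (modeled on, but not actually, Spring’s annotation) whose value a container could read back via plain reflection, replacing an external bean definition file.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AnnotationDemo {
    // Hypothetical marker in the spirit of Spring's @Component
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Component {
        String value() default "";
    }

    // The bean name now lives on the class itself, instead of an
    // XML entry like: <bean id="orderService" class="...OrderService"/>
    @Component("orderService")
    static class OrderService {}

    public static void main(String[] args) {
        // A container can discover the name at runtime via reflection
        Component c = OrderService.class.getAnnotation(Component.class);
        System.out.println(c.value()); // prints "orderService"
    }
}
```

The configuration sits next to the code it describes, which is exactly the appeal that pulled people away from sprawling XML files.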

The second major factor is pure size and complexity. Much like the JEE it sought to replace, Spring has become too big. Just like JEE, you don’t have to use the whole bag of tricks, but people don’t look at it like that. They see this huge feature list and potentially complex configuration and look for something simpler. In a way it is a shame, since Spring can be very helpful and non-complex if you just take the time to learn the basics and forget about all the extra features that you can tack on.

Let the framework wars begin…


How often are you set up to fail?

22 February 2007

Tags: career

As developers in any decent-sized corporate environment will tell you, being set up to fail is an all too common occurrence. Sometimes it is the business that wants things too quickly, sometimes management won’t take the tech staff’s warnings to higher levels, and sometimes it is even our own overambition that causes us to fail.

It is one thing to have a high pressure schedule and an altogether different thing to have an impossible, destined-to-fail schedule.

Sometimes we tend to lump the two things together, but there are differences.

High Pressure

The high pressure schedule usually translates into a quick timeline. A quick timeline isn’t always a problem from a practical standpoint. The problem here is a matter of how the application is built. Naturally we developers like to use the coolest toys and techniques when building a new application. It keeps our interest up and expands our knowledge. Some developers even want to incorporate the latest elaborate architectures in their new projects.

That is all good and fine, but wanting to do so will almost always impact the schedule. There is a lot to be said for keeping it simple. I believe it was Einstein who said to make things as simple as possible, but no simpler. That may mean a simple two-tier approach with some simple SQL. I hear the purists falling out of their chairs now: “What, no object model?!? How dare you!” or “Why didn’t you use (insert favorite framework here)?!?”

I know, I used to be one of those who wanted the best design and architecture I could come up with. The problem is that eventually some of us outgrow that and realize that such an approach has its place, just not in every single application. One of the reasons Ruby on Rails has gained so much popularity is the quick turnaround without having to build an elaborate architecture. Yes, it is a code-generating framework to a certain extent, but it has been proven in many cases to just work. Take a look at the products from 37Signals as proof.

So, a high pressure schedule can be met by making choices on how it is implemented. You may not always like those choices, but you can’t like everything.

Impossible & Doomed to Fail

These projects are known to fail right from the start. Any seasoned developer can smell them many cubicles away. You hear the rumors about management’s new product. The sales staff are already selling it, the CIO has already promised a date and then you hear the requirements. You have to build an eBay killer in 3 months.

At first the dev team laughs, praying it is a joke. Then you realize it isn’t a joke and the whole team starts talking amongst themselves about how this will never work. Finally someone gets the nerve to tell their manager that there is no way this can be done. At first the manager tries to encourage everyone, give a pep rally and buy everyone some cookies. After the first month, a couple more people come forward and say the same thing, but the manager won’t take it any higher, knowing that the business has already set the date, the CIO has promised and there is no way he is going to be a messenger of doom.

Everyone starts working way too much overtime and accomplishing far too little due to stress, fatigue and low morale. Month 3 comes along and naturally it isn’t ready. The business is pissed off, the CIO is furious, the lowly manager is wondering what went wrong. The dev team is ready to quit.

Sadly this happens, and I’ve seen it firsthand. Being set up to fail is the worst for me. If I fail on my own I can accept that, but going into a project knowing from the outset that there is no way it will ever come to fruition is a disheartening feeling. It may be the impossible schedule, no resources, or just outrageous goals, but the end result is the same. The business ends up not trusting you anymore, you hate everyone for having put you in that position, and on and on. Unfortunately there isn’t much you can do about it other than keep bringing the issue up again and again, but if management won’t listen you are doomed.

I’m in the midst of trying to avoid one of these now, which is why I wrote this. I’m sure there are others out there going through this pain right now.

Are you being set up to fail?


Are you a programmer or developer?

28 December 2006

Tags: career

The two job descriptions seem to be more or less interchangeable, but more and more I’m starting to think there is a clear difference between the two. When someone asks me what I do for a living and they are not in the IT field, I usually just say that I’m a computer programmer. Even though they really don’t know what that means, it gets the ‘ahh, OK’ response, followed by the usual question of whether I can fix their computer, email or whatever else they can’t figure out how to do. At this point my wife usually chimes in and says that what I do is more than just being a programmer, and while she is correct, most people don’t understand the difference.

The one occurrence that stands out was when I was playing poker one night and one of the guys at the table mentioned that he was a programmer. I asked what kind of programming he did, and when he said that he built custom Excel spreadsheets for his clients I was stunned. I thought maybe I had missed something. Did this guy say he was a programmer and that he builds Excel spreadsheets? OK, I know that you can use VBA to create some really cool spreadsheets, but that is a far cry from what I do. It was then that I realized the distinction between programming and software development.

So what is the difference?

The simplest description that I can come up with is that developers are capable of building the software that programmers use. Think about that for a minute. I just took a look at what Wikipedia had to say about the two, and for the most part I agree with its interpretation. Developers are those who usually build software infrastructure such as frameworks and common services, while programmers are those who use these services. Most developers don’t really like to create customer-facing applications, but they usually do at various times and, more importantly, they can.

This means that developers are the next level beyond programmers. Obviously, to become a developer you have to have been a programmer for some time to acquire the needed skills and experience. Not all programmers make the progression, and that is fine. If all programmers made the progression to developer we would all be in trouble. Talk about too many chefs in the kitchen!

So am I the only one that sees a difference here?

