First of all, please feel free to correct me not only technically but also linguistically; this is part of my English training :) If you can read Spanish you might prefer the Spanish version.

A year and a half ago I began what has been my most complex, important and interesting professional experience: improving the software development technique at an organization. This is a good time to look back.

I'll do it in four chapters:
I: study of the situation at the time (a year and a half ago).
II: selection of alternatives.
III: deployment of the solution.
IV: conclusions.

0. Background

I had been working with J2EE technologies, both professionally and at home. I had previously done some things with PHP, and I liked to keep up to date with industry innovations (RoR, Python...), although I couldn't devote time to them in depth. Everything I did was, more or less, management applications for decision making (I can't talk about real-time systems or high-tech electronics, sorry).
From this experience I built up some axioms that I used to take for granted (although when you work in computing you know the only right answer is 'it depends'):
  • Layer separation is a Good Thing:
    • There should be a dumb data-access layer, with no decision logic, that just communicates with the database. Essentially it needs only the four CRUD methods.
    • The business-rules layer, above data access and below presentation, is where you code the specification, from application data flow to data control ("who can see/edit what, and when").
    • Dependencies point downwards; data flows upwards.
  • Don't Repeat Yourself (DRY).
    • Corollary: don't cut and paste.
  • If you can choose: PHP, RoR or similar for small applications; for complex applications, J2EE.
    • Please don't start a flamewar :) Yes, I know you can do far better things with other languages. Nevertheless, with Java you have great tools, great documentation, great libraries... and it's easier to keep control if you have to manage many people. Yes, of course, 4 great Python programmers will build a far better app than 10 mediocre Java ones, but that wasn't my case, you know what I mean ;).
  • Writing Java snippets in JSPs is bad.
  • Ajax is a Good Thing. If you do it by hand it might be good, but it's much better if a library provides it for you.
    • Corollary: ajaxifying data is (probably) more efficient (from a networking point of view); ajaxifying the interface is (probably) more productive (from a project manager's point of view).
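The "dumb data-access layer" axiom can be sketched in a few lines of Java. This is a minimal illustration, not code from the post: the `Dao` interface and the in-memory implementation are hypothetical names, and a real implementation would talk to the database instead of a map.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// A "dumb" data-access layer: four CRUD methods and no business decisions.
interface Dao<K, T> {
    void create(K key, T entity);
    Optional<T> read(K key);
    void update(K key, T entity);
    void delete(K key);
}

// In-memory stand-in for a real database-backed implementation,
// just to show the shape of the layer.
class InMemoryDao<K, T> implements Dao<K, T> {
    private final Map<K, T> store = new HashMap<>();
    public void create(K key, T entity) { store.put(key, entity); }
    public Optional<T> read(K key) { return Optional.ofNullable(store.get(key)); }
    public void update(K key, T entity) { store.put(key, entity); }
    public void delete(K key) { store.remove(key); }
}
```

The point is that any rule like "who can edit what" lives in the business layer above; the DAO never decides, it only stores and retrieves.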
I. Current Situation Study

It was late 2006. Struts (Action, 1) was beginning to feel old, and 'Ajax' was the term you had to say at an interview if you wanted to catch attention. JSF already enjoyed its bulletproof bad health, just like now. A myriad of web frameworks (Spring Web, Wicket, WebWork, Tapestry...) waited for its actual death to start bragging. Microsoft, after its war against Sun, had published .NET, and every manager's question was "J2EE or .NET?". Dojo was The JavaScript Library, with its "no documentation at all" (or "document just a little, badly") policy. If you wanted to hold your own in a geek conversation, you had to know how cool RoR was, or that Google had broken every convention with GWT.

When I got to the new company I found a situation that was... well... uncommon. The whole world was trying to optimize productivity and improve product quality with Java libraries, but they were developing with plain old servlets plus JSPs, with embedded SQL... We belonged to another company (let's call it 'the supercompany') which imposed some additional problems:
  • Old IDE:
    • Bad CVS client. This was no problem for the supercompany, because they kept the code in shared folders (sad but true), but it was for us, and it forced us to use an external application.
    • Development on a server different from the production one.
  • Proprietary server, badly documented.
  • In-house developed framework:
    • Classes generating HTML (forcing us to do things like new DropDown() in JSPs).
    • One-layer design, with SQL even in the JSPs.
    • Wrappers for existing classes (we couldn't even access the actual Connection object).
    • Unavailability of source code.
    • Outdated documentation.
    • No standard libraries, not even Log4J or any other logging system.
    • Undocumented, hidden dependency on session variables.
    • Internet Explorer only.
These restrictions had led to many antipatterns:
  • Tons of repeated code.
  • JavaScript-only form validation.
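Why is JavaScript-only validation an antipattern? Anyone can bypass the client-side check with a crafted HTTP request, so the same rule must also run on the server. A minimal sketch, with hypothetical names (`OrderFormValidator`, the 1..100 range) that are not from the original application:

```java
// Server-side counterpart of a client-side check. The JavaScript version
// is only a usability aid; this method is the one that actually protects
// the data. The class name and the accepted range are illustrative.
class OrderFormValidator {
    // Returns an error message, or null when the input is acceptable.
    static String validateQuantity(String raw) {
        if (raw == null || raw.trim().isEmpty()) {
            return "quantity is required";
        }
        try {
            int quantity = Integer.parseInt(raw.trim());
            if (quantity < 1 || quantity > 100) {
                return "quantity must be between 1 and 100";
            }
        } catch (NumberFormatException e) {
            return "quantity must be a number";
        }
        return null; // valid
    }
}
```

In a servlet you would call this on the raw request parameter before touching the database, and only use the JavaScript check to give the user faster feedback.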
After doing a first application "the old way", I was given the chance to make a brand-new design, a new architecture, a new framework.

Chrome first impressions

First 5-minute impressions:

  1. It downloaded and installed seamlessly and quickly, importing my Firefox configuration.
  2. I have three tabs open (the start one, the Chrome presentation and this Blogger one), and 4 processes in the task manager; 95 MB in total. Firefox, with the same tabs, uses 60 MB. They had already said the one tab - one process model would have some startup overhead. I don't mind, as long as it cleans up memory as well as it should.
  3. Vista-like window look... but optimized. No right, left or bottom borders, and the tabs are placed on the title bar, smaller than the standard XP ones. Clean UI.
  4. The quick-start page also displays recent searches.
  5. Flickr's Organizr works REALLY fast, and it doesn't make the browser stall at all... JavaScript is the key and the reason for Chrome's existence, that's for sure...
  6. Moving through tabs isn't just fast, it's instantaneous.
  7. Flash works out of the box.
Stay tuned :)

Google Chrome: a web operating system

You all know a browser has almost nothing in common with the traditional concept of an operating system, but if applications keep their current trend of moving to the web, the browser will be our application runtime environment.
This is why Google is (about to) publish Chrome, and they say it in a subtle way on page 4 of the comic: "we're applying the same kind of process isolation you find in modern operating systems".
It also shows some other interesting things:

  • While they build the core, a bot tests it against "millions of pages". Can you imagine testing against the n most-used pages?
  • They've built a JavaScript virtual machine with a JIT compiler that produces native machine code.
  • 'Omnibox': the knowledge they've gained from the search box is applied to the address bar. It seems terribly simple and useful.
  • A silent mode to browse without leaving traces.
  • They criticize the Vista security model, which allows reading upwards in the security stack despite having sensitive information in the middle of it. Chrome isn't based on levels but on a sandbox, where code can only access the information the user explicitly gives it. Again they compare Chrome to an OS rather than to a browser.
  • Plugin isolation in a separate process.
  • Blacklisting.
  • Development improvements will be integrated into Google Gears. This way other browsers can still benefit from Google's improvements, and applications can stay cross-browser compatible. But if Chrome has much higher throughput, Gears will crawl on the other browsers instead of running (IMHO)...
  • On page 36 they state that they believe in open source, not in standards (at least not in their 'unifying' function): "Open standards are one way to help all browsers get better. The team has also done some interesting things with speed, stability and the UI, like the new tab page. Some of them might become standards, some might not. But since it's open source other browser developers can take what they want out of it".
    • IMHO this is true... in part. If you 'de-standardize' what browsers do, pages won't behave the same way everywhere. Nevertheless, also IMHO, this is the right way of thinking. Standards are slow and limited by bureaucracy.
Let's see what it has to offer... Reading the comic has made me quite envious. It must be great to work at Google pushing the limits of the web instead of struggling with its limitations!