Saturday 18 December 2010

The Japanese Daimyo and the Ninja School

About three hundred years ago, Japan was a country fragmented into many small kingdoms or clans, whose leaders were constantly battling among themselves.

Each clan depended on an increasingly sophisticated system to provide the manpower and resources necessary to keep its territory safe and, at the same time, to exploit the weaknesses of its rivals and expand its area of influence.

War lords were looking for creative ways of getting the most out of all the money they spent, and one of their pet peeves was the cost of their armies. Not only was a huge piece of their budget used to pay and feed the army and its suppliers, but keeping a permanent force of trained men just in case you needed them drained resources from other activities in their territories. A soldier who is idle is not harvesting, building, or otherwise producing anything else. Add to that the cost of training, and war was an expensive proposition.

To contain the cost of sustaining an army during peace periods, they tried a few solutions. The simplest one was to treat the whole population as soldiers, but only call them to war when necessary. This soldier-on-demand approach did not look like a bad idea. Most of the time, during peace periods, farmers could farm, craftsmen could work on their craft and artists could create art. If the need arose, those folks could fight for the kingdom as well, and the usually short periods of war were not a significant interruption of their production cycles.

But soon a problem became apparent. Ordinary people, even trained ones, did not put much passion into fighting. They had other interests in life beyond becoming skilled at using weapons or killing enemies. On-demand armies were routinely defeated by much smaller professional armies, composed of skilled individuals specifically trained for the purpose and, more importantly, who wanted to be soldiers in the first place.

But for the war lords, keeping a permanent army had a few problems besides the cost. For one, professional soldiers constantly want to improve their weapons, which costs money. They also want to practice and hone their skills from time to time, so they tended to propose war as the solution to every problem the lord had. The lords did not want their decisions overridden by the military, and the military wanted proper careers where they could climb the command hierarchy based on their merits on the battlefield.

As the armies got bigger and more powerful, the lords also felt that the military were limiting their choices, sometimes even dictating their decisions, based on arcane military strategy concepts impossible to understand. Oh, and the pain of technology upgrades: each time a new weapon was created, they had to pay for expensive retraining. Those who tried to save on training costs found out too late, and in the worst possible place, the battlefield, that those savings were offset by much higher costs later on. And don't forget the pockets of resistance: soldiers rejecting new weapons just because they were used to their classic ones. This group was the worst, for the war lord knew they were going to be killed in the field.

So neither the dedicated army nor the on-demand one was a good solution. The lords, clever as they were, still wanted to improve the cost efficiency of their armies, loosen their dependence on them, and at the same time keep intact their capability of fighting with their neighbours.

It all started with the training. Retired soldiers started to set up small warrior schools that trained the future soldiers. Based on their reputation as former warriors, those Ninja masters established themselves as the reference in combat training excellence. Lords started to send their future soldiers to those schools as a way to save on training.

Soon, they realized that two things were happening. First, every other lord was sending his soldiers to be trained at the Ninja schools, so their armies were on par in terms of skills with their enemies'. But they told themselves that it was their military command that made the difference; after all, fighting skills were already a commodity, and there's little left to learn once you've mastered the martial arts, the sword and the firearms. Second, the training costs were a small part of the overall cost of having an army. They still had to keep their soldiers fed and happy during peace periods.

Until one of them, famous for his forward thinking and proactive attitude, had what seemed to be a good idea: instead of keeping an army just in case we need it, let's hire one on demand when we need it. That had many advantages. The soldiers would be well trained by the masters, so the army would be as powerful as it could be. And best of all: no war, no costs. It was a win-win situation.

At first, he was a bit worried. What if other lords had the same idea? Would that end in the ridiculous situation of a Ninja school engaging in war against itself, paid by two different lords to do it? Nah, Ninja masters were honourable enough not to do that, and the conflict of interest would destroy their reputation. Could he lose complete control over war operations? Nah, he kept a couple of generals in his service who could override the Ninja master's decisions at any time.

It all went well at the beginning. The kingdom enjoyed immediate benefits, as more resources could be allocated to growth and prosperity instead of conflicts with other lords. The Ninja masters charged a reasonable price and even took onto their payroll most of what had been the army.

But then, one day, the lord had an idea for expanding his frontiers. Of course, it would require applying some force to the neighbouring kingdoms. No problem, he thought. He went to the Ninja master and shared his great idea. But the Ninja master shook his head and said, "Sorry sir, we don't have enough soldiers to do that." To which the lord said, "Well, just recruit some more." And the Ninja master answered, "I cannot. Good soldiers are difficult to hire, take a lot of time to train, and I have a good portion of them already engaged in a conflict on the other side of the country. Nothing that affects your security, of course, but I don't have that many resources."

The lord had to back away from his grandiose plans, but he was still confident that his decision had been the right one. At least he could count on having an army as good and as big as the one he had transferred to the Ninja school.

Until one day he was attacked by the lord on his southern border. At the beginning of the conflict, he lost a lot of territory because his on-demand army took a long time to appear in his defence. The Ninja master kindly pointed out that his time to react had been agreed in advance and that he was under no obligation to mobilize his forces sooner than that.

What should have been a short conflict, settled fairly quickly, turned into a long, agonizing war that took ages to come to an end. The lord realized that the Ninja master was not putting all his resources into this war. Instead, he was playing a delicate cost-benefit balance, keeping just enough forces assigned to the lord's defence that he would not lose the war, but unwilling to go the extra mile to resolve the conflict sooner. In fact, the longer the conflict, the higher the profit for the Ninja master. Worse yet, most of the soldiers the lord had transferred to the Ninja school were no longer there. The Ninja master had sent them to other conflicts or dismissed them according to his own interests. There was no loyalty or passion in the fighting. Whatever the lord believed he was saving by externalizing his army was more than lost the very moment he actually needed it.

Worse yet, he was at the mercy of the Ninja master, who now had, in fact, the power to surrender the kingdom to the rival neighbour if he wanted to. The lord had no choice but to keep paying ever higher sums for an ever less capable army, with soldiers whose skills were not up to date. The icing on the cake came when he asked the Ninja master to improve his soldiers' abilities, to which the Ninja master answered, "Well, lord, you know, training is expensive and your fees are not enough for that."

At this point, it was the Ninja schools that actually decided when and where conflicts were going to start. And who would win them. And how much the lords would pay for it. The lords had lost all control over their ability to use force to attain their objectives. The Ninja schools ruled the island, and there was no way for the lords to regain control.

The lesson? They forgot what the purpose of the army was. Blindly lowering its cost without knowing what compromises are being made is not good. Making your war cheaper is useful only as long as you don't sacrifice its purpose.

And, if you've read this far, you are probably wondering: what's the point? Replace in the story the fictional war lord with a modern corporation and the Ninja school with a modern outsourcing software services company.

(N.B.: any historical references that are not completely false are due to sheer luck and not attributable to any documentation or fact-finding work.)

Wednesday 17 November 2010

CAB and TAB: a waste of time for everyone



I've been trying for weeks to write a piece on ITIL that did not sound vindictive or bitter. I really wanted to be fair to ITIL; or, to be more precise, I'm all in favor of having some kind of change control and IT management in place.

But having to go through the ITIL-recommended change management processes a couple of times has revealed to me the true costs and overhead of ITIL. I'm currently immersed in an organization that drank the ITIL Kool-Aid a few years ago, and so far my experience with their ITIL implementation is very, very negative.

A couple of stellar examples of this are the Change Approval Board (CAB) and the Technical Approval Board (TAB). According to ITIL, these boards should review and approve every change made to production environments. In theory, they are composed of the people with the most extensive technology and business knowledge, because they are the last line of defence before anyone can touch live systems and wreak havoc.

So far, the theory sounds good. This small but select team of individuals can detect and sort out incompatible changes, block them if they step into critical business periods, or even deny them completely if they don't fit the technology standards. To make things more convenient for everybody, and unless there is an emergency, the team meets on a weekly basis; otherwise, its members would be unable to do any work other than reviewing and approving changes.

Let's try to determine the composition of these CAB and TAB teams. For a big organization, you'll have attending the CAB and TAB meetings someone with knowledge of the ERP system, right? Oh, yes, and put in some networking guys. And some storage people. With luck, your organization also has a CRM application, so add one more to the mix. And it is far too common, especially in big multinationals, to have localized versions of the ERP and CRM packages as well. You probably also have some customer-facing web sites, don't you? Another one to add. Done? Not yet: there are the desktop support people, and the security and incident response teams probably have something to say about changes too. And don't forget the server maintenance team. Oops, sorry, if your CRM and ERP have a database component, let's add a DBA to the mix, possibly one for each database flavor.

Done? Wait, are there systems related to production facilities? Add one more. And so on. The point is: in big, complex organizations, the technology environment is complex and the knowledge is fragmented across many different teams. Each and every one of them needs to be represented in CAB and TAB meetings.

I see a few problems emerging from all this:

First, while the size of the team is by itself something to worry about, that's nothing compared with the varying degrees of connection between its members. The networking team usually does not talk much with the ERP team, while the DBA team is very close to the storage team but almost completely ignorant of what the networking team does. And so on. For something so focused on efficiency as ITIL, it is surprising that it dictates allocating a lot of valuable time from valuable people just to approve changes. More so when a fair number of those changes are only tangentially interesting to them, if at all. Seriously, if the networking guy wants to add a firewall rule, do you really think the ERP guy is going to stop him and ask whether he's using the right subnet mask?

Second, these approvals hold changes to a weekly cycle. Rare is the project or change that can be executed ahead of schedule, but if that happens, forget about applying your changes earlier than planned unless you've managed to save a whole week and can catch the previous approval meeting.

Third, the cost of validating a change may very well be higher than the impact of the change going wrong. ITIL is full of "everything has to be managed" in the name of finding efficiencies, but little is said about the cost of all this management.

Clearly, ITIL does not see these things as problems, and reading around ITIL, I think I've found out why. Picture this: when did you last see a small group of highly skilled individuals with visibility into, and knowledge of, an entire production environment, whose cost was already sunk into other activities, working in an organization with long, long change cycles?

The mainframe world, of course. ITIL was created for mainframe shops, and it is likely a much better fit for those environments.

For the rest of today's world, CAB and TAB are a waste of time. Any half-baked web-based workflow management system can do better and provide the same level of change control. Plus, it allows the DBA guy to skim over the network changes, and the network guy to just surface-scan the ERP module changes. The problem? It would not be ITIL. The consequences of that are the subject of another post, probably.

But I don't have time to write that post; I need to attend a CAB meeting now...

Sunday 31 October 2010

Unicode explained to the younger developers



I recently had a question from a developer about the different ways Oracle has to declare a string column. He was specifically confused by the differences between VARCHAR2(xx), VARCHAR(xx) and VARCHAR2(xx CHAR).

Short answer: forget about VARCHAR, always use VARCHAR2. The difference between the other two is in how they handle Unicode characters: VARCHAR2(xx) has room for xx bytes, whereas VARCHAR2(xx CHAR) has room for xx characters. If you're writing new code, use VARCHAR2(xx CHAR); if you're maintaining legacy code, use VARCHAR2(xx). The legacy code will have issues dealing with Unicode, but you're fooling yourself if you think that switching to VARCHAR2(xx CHAR) will improve the situation, because other places in the code are likely assuming VARCHAR2(xx) semantics, with some homegrown way of dealing with Unicode or none at all. Either way, when those parts use your tables, they will likely not understand properly what's stored there.
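To make the byte-versus-character distinction concrete, here is a minimal C sketch (the sample string and function name are mine, not Oracle's). It measures the same UTF-8 string both ways: strlen counts the way VARCHAR2(xx) sizes a column, while the code point counter counts the way VARCHAR2(xx CHAR) does.

    #include <stdio.h>
    #include <string.h>

    /* Count Unicode code points in a UTF-8 string: continuation
       bytes (those matching 10xxxxxx) do not start a character. */
    static size_t utf8_strlen(const char *s)
    {
        size_t count = 0;
        for (; *s; s++)
            if (((unsigned char)*s & 0xC0) != 0x80)
                count++;
        return count;
    }

    int main(void)
    {
        const char *name = "Ca\xC3\xB1\xC3\xB3n"; /* "Cañón" in UTF-8 */

        printf("bytes: %zu\n", strlen(name));      /* 7 -> needs VARCHAR2(7)      */
        printf("chars: %zu\n", utf8_strlen(name)); /* 5 -> fits VARCHAR2(5 CHAR)  */
        return 0;
    }

Five characters, seven bytes: declare the column VARCHAR2(5) and the insert fails; declare it VARCHAR2(5 CHAR) and it fits.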

While the question was interesting in itself, he was puzzled by the answer. His reaction was, "That's all well and good, but why are there so many different ways of declaring what is simply a string?"

Here's one of those moments where experience weighs in and you can indulge in a bit of history, while your interlocutor, who feels that your answer at least deserves some gratitude, listens to your explanation anyway. So here it goes.

In the dawn of time, character sets had 7 or 8 bits: ASCII, EBCDIC or some ASCII variant. In those ancient times, 8 bits were enough to store a character. Maybe you had some bits to spare, especially if you were using English characters, but 8 was enough. The only problem for applications was knowing which character set they were using, but that was usually easy to solve. Did I say the only problem? No, there was another, much bigger one: 8 bits were enough to represent most of the Western world's characters, but not enough to represent all characters at the same time.

That meant that if your application had to deal with one character set, you were fine. If your application had to deal with many different languages at once, you were in trouble. What do you do, store alongside each string the character set it is using?

There was no good solution to this problem. But using 8 bits for everything had a lot of advantages. Each character was a byte. Millions of lines of code were written assuming sizeof(char) == 1. Copying, comparing and storing strings all assumed that each character took one byte. The world was a stable place for almost everyone, except for the poor souls who had to maintain applications that worked with languages (Chinese, say) that needed more than one byte to represent a character.

Then came Unicode to save the world. The only problem is that, depending on the representation you choose, you may need more than one byte for each character. In the most popular backward-compatible Unicode encoding, UTF-8, you actually need a variable number of bytes to represent a character. Time to review all your string handling code. You can no longer assume that incrementing a pointer by one will get you the next character. You can no longer assume that a string needs as many bytes in memory as it has characters. You can no longer compare the byte values of each character to determine their sort order.
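That last point is easy to demonstrate. A minimal C sketch (the strings are my own examples): every multi-byte UTF-8 sequence starts with a lead byte of 0xC2 or above, so a byte-wise comparison sorts any plain ASCII string before any string starting with an accented character.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *a = "Zoe";
        const char *b = "\xC3\x81ngel"; /* "Ángel" in UTF-8 */

        /* strcmp compares unsigned byte values: 'Z' (0x5A) is less
           than the 0xC3 lead byte of 'Á', so "Zoe" sorts first,
           the opposite of what a Spanish-speaking user expects. */
        if (strcmp(a, b) < 0)
            printf("\"%s\" sorts before \"%s\"\n", a, b);
        return 0;
    }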

Of course, if you're young enough, or have never had the curiosity to use C/C++ or FORTRAN, you've never seen this problem. Your handy string class provides everything you need to handle Unicode, wrapped in a nice package. The memory sizes of char[] and byte[] are different, but you essentially don't have to care about that.

Oracle, having existed since before the Unicode days, is of course greatly affected by the change. Not because Oracle cannot adapt itself to Unicode, but because of the huge customer codebase it needs to remain backwards compatible with. That's why they invented VARCHAR2(xx CHAR): it is, for them, the best way to support modern Unicode encodings while remaining backwards compatible.

Since VARCHAR2 is not standard SQL to begin with, extending its syntax further is not a big loss anyway. So next time you have to interface with legacy string data, think about how it is encoded.

Friday 15 October 2010

This code is crap



Helping customers get the most out of their systems is a good and interesting job. One gets to know a lot of industry sectors, processes and ways of working, as well as some good people.

Invariably, the job involves reading code. And writing code, usually a tiny fraction of the whole body, because you focus on the parts that provide the most benefit to your customer. Whenever I write code, I always try to make it stand out for its clarity, performance and readability. Experience has taught me that having your code reused or called from a lot of places is the sign of a happy customer, so it makes sense to do my best for the customer.

As you can imagine, one develops a good eye for reading code, and I've seen my fair share of bad code over the years. Very, very bad code. It is sometimes difficult to stay neutral and resist the temptation to think of yourself as some kind of elite coder who can see what other people can't. With enough time and experience, you learn that there are a number of factors, all of them human, that can significantly affect the outcome of your work. Even if you try to deliver your best effort, sometimes you were simply under pressure to meet a deadline and had to rush something out. Or perhaps you had a child with a cold waiting for you at home, which worried you much more than the quality or performance of what you were writing.

But that perception changed when I recently had to look at some code written five years ago.

At first, it did not look that bad: at least the formatting was consistent. But it was pretentious and sophisticated beyond necessity. There were missed opportunities for simplification everywhere. Performance could easily have been doubled with a number of simple, apparently obvious changes. There were comments, but they were essentially useless because they centered on irrelevant parts of the code. Design decisions were not properly explained; there was no summary explaining why on earth a certain algorithm had been chosen or modified.

There were obvious refactoring spots all over the place. The code could have been much shorter, cleaner and more efficient.

I was becoming impatient with that code, and finding it progressively more difficult to sympathize with the original coder.

The only problem was, I was the one who had written that horrible code.

As it turns out, that was what I considered good code five years ago. I was so shocked that I started to dig around other, older fragments of my code. Ten years ago it was even worse. What I wrote fifteen years ago was practically unreadable.

Suddenly, I felt the urge to fix it all. Then I realized that those lines are still happily executing many times every day, processing millions of transactions. And nobody has yet replaced them with something better, probably because nobody needs to touch them.

It took me a while to recover, but I think I learned something. I'm not as good a coder as I think I am. You're probably not as good a coder as you think, either. Yes, there are good coders out there. The club of good coders probably has members whose last names are Knuth, Kernighan, Aho, Torvalds, Duff or Catmull. But I'm not there. Not now, at least.

I'm now a good enough coder to recognize good code. Perhaps over time I'll get better, enough to be considered a good coder myself. In the meantime, what I deliver seems to be good enough.

Wednesday 16 June 2010

The huge gap between geeks and business types

I've never been on the buying side of any of the work-for-hire sites, but I'm sure they offer the potential buyer the option of targeting a concrete job offer at a single person, because I sometimes get job offers that are not visible to the rest of the pool. Forgive me the self-promotion; I can only say in my defence that (a) I'm not making this up and (b) customer loyalty is one of the primary measurements of customer satisfaction, and something I'm quite proud of.

I suspect they also offer the buying side additional options to better target their offer at the individuals who look most likely to be a good fit for the job. It really makes sense, since those sites are crowded with hundreds of thousands of potential candidates, and the prospective buyer knows in advance that most of the bids, while dead cheap, will not provide the services they are looking for. One of those options has to be "send this job offer to the top 1%".

Being in the top ranks of some of those sites, I frequently receive job offers that seem to be targeted at that top 1% but unfortunately (or fortunately, see below) fall outside my main knowledge domain.

The last one I received perfectly illustrates a lot of what is currently wrong with the perception that business types have of software development in general. The job description reads:


"h.264 codec implementation is not following standard specification including for AVCDecoderConfiguration record (you can find it in the specifications for H.264)...please complete that compliance.

if you are comfortable with x.264/ ffmpeg or you are a c++/vc++ expert, this should be a very easy task for you."

My first reading did not trigger any alarms. After all, if you're familiar with the H.264 specs, it should be easy to find the reference for that record. A bit harder would be to find the place in the codec source code where the record is incorrectly generated, and fixing it, with all the testing necessary to make sure you're not breaking anything else, would probably be an order of magnitude more complex.
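For context, the record the posting refers to is, as far as I recall, the AVCDecoderConfigurationRecord defined in ISO/IEC 14496-15 (the MP4 container side of the H.264 ecosystem) rather than in the H.264 spec itself. Here is a rough C sketch of serializing its fixed header, written from memory, so treat the field details as approximate:

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Serialize the fixed header of an AVCDecoderConfigurationRecord
       into buf; the SPS and PPS NAL units follow it as 16-bit
       length-prefixed blobs. Returns the number of bytes written. */
    static size_t write_avcc_header(uint8_t *buf, uint8_t profile,
                                    uint8_t compat, uint8_t level,
                                    unsigned nal_length_size)
    {
        size_t i = 0;
        buf[i++] = 1;        /* configurationVersion, always 1 */
        buf[i++] = profile;  /* AVCProfileIndication           */
        buf[i++] = compat;   /* profile_compatibility          */
        buf[i++] = level;    /* AVCLevelIndication             */
        /* six reserved '1' bits plus the 2-bit lengthSizeMinusOne */
        buf[i++] = (uint8_t)(0xFC | ((nal_length_size - 1) & 0x03));
        /* the SPS count (3 reserved bits + 5-bit count), the SPS
           NAL units, the PPS count and the PPS NAL units go here */
        return i;
    }

    int main(void)
    {
        uint8_t buf[8];
        /* hypothetical values: Baseline profile (66), level 3.0 (30),
           4-byte NAL length prefixes */
        size_t n = write_avcc_header(buf, 66, 0xC0, 30, 4);
        printf("wrote %zu header bytes\n", n);
        return 0;
    }

Knowing where in a codec these few bytes get built, and which of them is wrong, is exactly the kind of knowledge the posting takes for granted.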

Still, if you're familiar with H.264, this should not be too hard. Not "very easy", as the job offer reads, but not too hard. It is much harder, of course, to reach the level of mastery necessary to understand the H.264 standard and be able to program a codec for it. Let's look at some of the barriers that someone who has reached that skill level has had to overcome.

  • H.264 is proprietary; having experience with it means having worked in a media-related technical business. Unlike other fields, where you can write the next Apache killer at home, involvement with H.264 usually means a paid job.
  • H.264 is large and complex. I doubt there is a single individual who understands everything in the standard down to the necessary detail. More likely, there are field experts in each area of the codec pipeline who completely understand their associated section of the standard.
  • The skills necessary to write an optimal video codec in C/C++ go far beyond the intermediate level. It takes years to write a good video codec implementation. Nobody is going to come up with one out of a weekend hobby project.

The resulting profile is someone who has mastered the H.264 standard, which implies experience at a commercial codec vendor, together with C/C++ skills well above average. Probably someone who has devoted the canonical ten years to perfecting and improving their skills in video compression.

Would you trust someone with that profile to fix the codec? I surely would. The next question is: how much is this worth?

This is the point where I usually read the offered amount. According to the job offer, the maximum the buyer was willing to pay for this was $120. According to some sources, that amounts to less than two days flipping burgers in a fast-food restaurant.

Remember: you're not paying for the time, you're paying for the experience.