The Growing IT Mess at Big Banks

There is a very useful and accessible article at the BBC (hat tip Richard Smith) on the information technology hairballs at major banks and the level of future trouble baked into them. The article was prompted by this week’s widespread problems with access to NatWest accounts.

It’s hard to manage large, complex IT installations when they require frequent feature changes and upgrades due to customer and regulatory requirements, plus (on the trading side) product innovation, but it is made vastly worse when it is treated like a stepchild, which is the attitude in most financial firms. Perhaps readers can add to the list, but the only firm I’ve ever worked with that treated IT as a strategic priority was O’Connor & Associates, which in its heyday ran the biggest private Unix network in the world and spent half its budget on technology. Even then, it had the usual trading firm problems of everyone wanting their work done yesterday and not wanting to spend any money on documenting the work the developers did (which would have added 20% to the development costs but lowered lifetime costs).

What is not sufficiently well recognized is that IT failures are a source of systemic risk. We see some recognition of that in the emphasis firms place on having backup facilities that are kept in ready-to-go condition. But firms below the TBTF level can still make costly, even catastrophic, mistakes.

First, systems tend to agglomerate, rather than having data exported out into newer, tidier, faster software:

“There’s been massive underinvestment in technology in banks – it seems to be the case that the whole damn thing is held together by sticking plaster,” he [Michael Lafferty, chairman of the research company Lafferty Group] says only half-jokingly.

“You hear stories of Cobol programmers being dug up and brought back from retirement after 20 years.”

The result of all this agglomeration is either that you lose a clear idea of how things all hang together, or you have people working manually or with kludged programs across systems. The danger with overbuilding is that parts you had built around, and didn’t even know were there any more, can spring back to life in costly, nasty ways:

“Most IT applications carry around dead code – which lies dormant because none of the live modules are using it. When Knight Capital ran an update in its systems, some of the dead code was brought back to life, causing the system to spit out incorrect trades.”
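
To make the mechanism concrete, here is a deliberately toy sketch (in Python; the names and numbers are invented, not taken from Knight’s actual systems) of how a repurposed flag can route orders into long-dormant code when a deployment updates only part of a server fleet:

    # Hypothetical illustration only: a flag is re-used for a new purpose, but one
    # server in the fleet still has the obsolete handler wired to that same flag.

    def retired_handler(order):
        # "Dead" code: logic nobody has exercised in years, which keeps
        # re-sending child orders instead of stopping at the parent quantity.
        return [order["qty"]] * 100            # runaway orders

    def new_handler(order):
        return [order["qty"]]                  # the behaviour everyone expects

    HANDLERS_UPDATED_SERVER = {"reused_flag": new_handler}
    HANDLERS_FORGOTTEN_SERVER = {"reused_flag": retired_handler}

    def route(order, handlers):
        return handlers[order["flag"]](order)

    order = {"flag": "reused_flag", "qty": 10}
    print(len(route(order, HANDLERS_UPDATED_SERVER)))    # 1   -> intended behaviour
    print(len(route(order, HANDLERS_FORGOTTEN_SERVER)))  # 100 -> dormant code springs back to life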

Then you have a more widely recognized problem, that of acquisitions leading to integration failures:

“Because of the banking licences in the UK, when you bring two organisations together, the transition from two systems to one system can take up to 10 years,” he [Ralph Silva, the London-based vice president of Banking Strategy at HfS Research] explains.

“It takes a long time as it all has to be done by the book by the regulators’ rules.

“There’s one big bank in our country that has a total of 50 different mortgage systems as a result of history and mergers and acquisitions.

“That’s insanity. It should have maybe one for retail and one for wholesale. But 50 is ludicrous… it hugely raises the danger.”

Mortgages are an even bigger mess in the US due to changes in product features and considerable variations in state law. But it is made much worse by poor IT management. From what I can tell, the only servicers that have decent platforms are relatively young “combat servicers,” and even they maintain that getting too large will make a hash of their operations.

Another factor that can mess up IT integration is a difference in cultures. For instance, believe it or not, Countrywide’s prize asset was its servicing platform, software it had developed internally. But Bank of America didn’t like or do custom software; it relied as much as possible on vendor packages. It proceeded to upload its customer data and integrate stray systems into the Countrywide platform, and then managed it like a BofA installation, which resulted in it losing the specialists who knew the systems even faster than it would have otherwise.

The BBC piece flags other shortcomings in relying on off-the-shelf programs:

Further complicating matters is the fact many banks have opted to buy in software from third parties, letting them slim down their own IT departments.

“When it’s outsourced you can’t make changes to the [bought] code,” says Ralph Silva…

“You have to make changes to your own code – and that increases the risks.

I’m curious to get the input of IT professionals, since even from my limited experience with financial firm IT, I’ve heard numerous stories of massive projects that run big overruns and are quietly taken out of their misery. My impression is that the track record is so terrible that I wonder whether any large-scale project ever gets done in a manner that could accurately be called a success.

But the other problems I’d add to the list are:

Firms going from decentralized to centralized to decentralized IT. We may be long past traders having any control over their lives, but at least in the 1990s, you’d have firms changing their views on how to manage IT, plus traders stealthily funding their own development of risk modeling tools when they couldn’t stand to wait for the IT officialdom to produce whatever it was they thought they needed. One of the notorious legacies of that was that Salomon Brothers until the later 1990s was running its bond trading risk management on a monster Excel spreadsheet because the traders had built it and had never ceded control over it (well, of course, they finally did).

Maybe this fight between the producers and the service departments is a long-settled issue at the really big firms, but I wonder if it is still an issue at medium-sized players.

Personnel policies ill suited to the customized nature of critical bank software. Even if financial firms were willing to spend enough for developers to document their work well, it’s always best to have the members of a team who built code in the picture to help tweak it. But financial firms, like pretty much all of Corporate America, have long gone for shorter job tenures and, particularly in IT, contract workers. Even if a lot of IT “contractors” wind up working for a particular bank for, say, two or three years, that is far less than the expected life of a lot of code. The old career paths, where a seasoned employee could expect at least five, and ideally fifteen to twenty years at the same employer, are much better suited for managing mission critical yet fragile systems like software. The failure to give more job security to programmers working on critical transactions platforms seems remarkably short-sighted.

Reader comments and corrections very much appreciated!

Comments

  1. Conscience of a Conservative

    I.T. eats profit, at least in the short run. Corzine didn’t focus on it at MF Global. And during the crisis, one of the quips mentioned by Citigroup employees time and time again was the disconnected and dysfunctional technology infrastructure at the bank. I have not heard anyone say this has been addressed since.

  2. Aussie F

    The banks take a sophisticated view of IT development. At the end of much sober analysis the same question is inevitably raised. ‘Why can’t a couple of kids locked in a cage in China do it, wouldn’t that be cheaper?’

  3. Tim Johnson

    Andy Haldane, Head of Financial Stability at the Bank of England, spoke at a workshop, ‘The Credit Crisis Five Years On’, organised by Donald MacKenzie at the University of Edinburgh last June. Dr Haldane spoke on the perils of assuming independence between random events, and at the end of his talk he was asked whether this ‘mathematical’ flaw in understanding was a contributory factor to the Financial Crisis. Haldane’s response was “top five but not top three”.

    The next question was obviously “What was the most significant issue?” The answer stunned the audience: the failing banks could not give the BoE any detailed information on their loan books because they did not have adequate IS systems. The banks and the BoE had no idea what sort of mess they were (or were not) in; it is hardly surprising things went wrong when decisions were being made “blind”.

    1. OMF

      Bankers are professional BSers. Competence is a foreign word in these increasingly dysfunctional entities. The bailouts are perpetuating a culture of uselessness and decay.

      Part of me is happy that all their buzzspeak nonsense is being shown up for the vapid nonsense it always was. But most of me is worried about the consequences and furious that these people are still in charge of their own train wrecks.

  4. vlade

    IT projects are hard, because to get them right you need both good IT people and good business people. A big part of the “good” in each case is the ability to talk to each other and understand that they view the world slightly differently.

    Unfortunately, the more common approach is for the IT people to play with their toys, and for the business people to withhold their money (and even more critically, time).

    All of the above holds not just in banking.

    BTW, I’d not be too hung up on documentation. Or rather, again, documentation is hard. It’s easy to produce tons (literally) of docs, but it’s much harder to produce documentation that’s actually useful. The best way to keep the docs is to keep your IT people around for a long time, and encourage them to interact with and understand the business (and, for the most part, not to hire people who are interested only in IT, as opposed to solving real-world problems).

    The additional problem with docs is that because IT problems are complex (by nature), spending inordinate time on docs (and “analysis”) is the easy way to CYA.

    Agile methods help – but only to an extent. Banking has the unpleasant feature that it’s complex and large (from processing information perspective), and critical at the same time. Moreover, because of the sheer variety you’re in general unable to think of everything (unit/integration test), so the usual iteration cycle of agile is not that great (I don’t think NatWest customers would now appreciate someone saying “well, it was first iteration, bugs are expected”).

    All of this just says that we’re hitting complexities we can’t easily deal with anymore, and we will suffer problems due to that. We can either accept it as the price we pay, or think how to reduce the complexities (which I believe is harder than it sounds).

    1. YankeeFrank

      I think agile practices actually would help banks a lot, but they don’t commit the money to adequately implement anything like it. No QA, no docs of any kind, and mostly coders from China and India that are from the bottom rung as they are hired for their price, not their skill. They are also highly transient, being laid off at the first sign of lower bank profits.

      Add to that how critical IT is to the banks, the constant nature of software changes, the addition of new applications, and how often the bank-side employees are hired and leave, and it’s a massive disaster waiting to happen. Most systems I’ve seen have no reporting capabilities, but at least the “back office” has those, so theoretically, everything flows there and can be reported from there.

      And don’t think this is any better at the big banks honestly. They are likely just as bad if not worse. And the days of traders getting their own software built are hardly over. They come in, demand things, get “something” that may or may not work, and often leave before the changes are even fully implemented. It’s a nightmare, frankly.

      I have yet to work at any tech firm that truly goes slow enough and is careful enough to produce really well-running and stable systems. This includes software firms as well as banks, and “agile” firms as well as “total chaos” firms. Agile is not well-used in many places because it is treated like a religion instead of a set of principles that can and must be modified to fit real world circumstances.

      Even NASA blew up a space shuttle due to a software timing glitch, and they are looked on as the most careful, stable and dedicated team of developers in the world.

      Another reason software is so tricky is that, as someone mentioned above, developers generally love to play with their toys — simplicity is not their goal — fun puzzles are fun. They also love to bring in their favorite tools, libraries, the latest greatest design patterns and accoutrements… all leading to chaos. Top that off with lazy or rushed developers who copy and paste code, unread, from Google searches right into their projects, and the spaghetti code that gets generated is a nightmare.

      It’s an all-around mess, and is worse at some firms than others. The “Tier 1” banks do it better for sure, but even they are at the whim of politics, and great projects can get sidelined for the third-party work of a friend of a hot trader who has a shiny new Wall Street IT firm.

      The only reason these “systems” “work” at all is that there is back office auditing that generally can catch errors in reporting, and that traders know what they trade and can see if something shows up wrong (at least initially). Of course risk calculations of any kind are totally suspect, as most traders wouldn’t know if those were off anyway.

      It’s a huge mess and it’s not going to get fixed, probably ever. The banks will have to fail and collapse, and be rebuilt by better people before that will happen. And we know how that goes…

      Oh, and one more thing I’d like to underline from comments above: this mess is hardly isolated to banking. Our entire society is built on crap code, including hugely important physical infrastructure like our electric grid. Tightly coupled, fragile systems just waiting for an excuse to collapse. Word is the 2003(?) Northeast blackout was caused by just such a glitch. And look at the new military fighters they build: bloated garbage that is unmaintainable and overly-complex (like the Raptor and Joint Strike Fighter). They throw together so many complex systems and lose sight of the big picture (Raptor pilots are now blacking out, they believe due to exhaust fumes getting into the cockpit).

      If we were to use these wonderful tools (like software) responsibly, they would cost ten times what they do now, and do a lot less, but do it all a lot better. But then the IT “revolution” wouldn’t be seen as the cost-cutter and “efficiency-engine” it is viewed as.

      We used to build things to last. Now we slap some garbage together and call it a success. Our culture is in decline and this is just a symptom… a big one, but a symptom all the same.

      I think the reason most people think IT works well is their main experience is the internet, which is heavily tested by the users themselves (and where a problem is quickly spotted because everything is visible), and the fact that most websites are relatively simple and static (and built off standard technology, like WordPress or Zencart) compared to other IT systems.

      1. Tom_B

        As an IT project manager, it has been my experience that these projects are undertaken without the proper understanding of what is being asked… Business describes it one way, and often leaves it as, “Well, just do it and we’ll let you know when it looks right.” That leads to tremendous cost overruns!

        I was at one mortgage firm a few years ago on a project to help document their processes and help realign them to the regulatory edicts. Someone in management finally decided it would save time and money to just send it all to India and have them edit and rewrite it. It was indecipherable when it came back!

        You are right on with your comment that it is not just the banks. Having worked at a large utility company, I know from experience how unprotected our “critical” infrastructure is — and it is something to be worried about!

      2. Garrett Pace

        Isn’t every IT solution just a bridge or patch until the next, “better” one that will fix all the problems? In such an environment who would do more than the bare minimum, even if deadlines weren’t screaming down at them?

    2. Jazzbuff

      I have worked IT with TBTF banks, Fortune companies, startups, and mid-sized firms. Projects do not fail because of technology. They fail because of bad management. The old saying is “you can have it Faster, Better, or Cheaper: pick two.” Management almost always picks Faster and Cheaper.

      Agile is the new technology fad. It can work well if you have the right group of talented people who don’t take shortcuts on documentation and testing, and the business has committed knowledgeable resources and not some expendable group member. Also, there should be some element of uncertainty in the problem being solved (such as user interface design) so that an agile approach can support trial and error. In an environment such as banking where the rules are well defined there is no reason, other than management pressure, to short-change the front end analysis and design. This provides documentation for current development and future system maintenance.

      The biggest problems in large organizations are lack of trust and lack of communication. Managers are worried about their jobs so they want to appear to be in charge but not be responsible for anything. That is the beauty of Matrix Management.

      1. Denise B

        The biggest problem in my experience is that no one knows what all the business rules are. The existing code is the only real repository of this information, and it usually predates almost all the current employees. The rules can be so complex that no one can read the specifications and be able to tell if they’re accurate or complete. Then during testing people come up with all sorts of things that fell through the cracks or were misinterpreted.

        This is one of the reasons that no matter how old and patched together the existing systems are it’s hugely risky to replace them.

        On an insurance project I begged to be allowed to go back to the contracts to get the requirements, but my requests were denied; no one ever did anything that way. Instead, I was given a team of analysts who could not have given me clearly defined business rules if their lives depended on it. What they could give me was examples: if I put this in, this is what I’m supposed to get back.
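
        Those input/output examples can at least be captured directly as a regression net around the old behaviour. A rough sketch of the idea (the function name, fields and figures are all invented for illustration):

            # Turn the analysts' "if I put this in, this is what I should get back"
            # examples into characterization tests for the legacy premium calculation.
            EXAMPLES = [
                # (input record, output the existing system produces today)
                ({"state": "NY", "age": 42, "coverage": 100_000}, 512.40),
                ({"state": "TX", "age": 67, "coverage": 250_000}, 1488.75),
            ]

            def check_against_examples(calc_premium):
                """Return the examples a candidate replacement fails to reproduce."""
                failures = []
                for record, expected in EXAMPLES:
                    got = calc_premium(record)
                    if abs(got - expected) > 0.005:
                        failures.append((record, expected, got))
                return failures   # empty list = the rewrite matches recorded behaviour

        It is not a specification, but it at least pins down what “the same as before” means when nobody can state the rules.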

        I don’t think it’s true that IT doesn’t want to give the clients what they want. It’s just really hard to gain a clear understanding of exactly what that is.

  5. reason

    Yves, thanks for this. Yes, the attitude of management to IT in information industries is truly amazing. IT design should be a board-level concern. Outsourcing is very often not even effective in the short term – and over the long term it means that your company’s priorities are often in conflict with the priorities of your supplier.

  6. fred

    Here’s the deal.

    You don’t get ahead in IT management (particularly at Banks) by identifying and solving problems.

    You get ahead by agreeing with everybody in meetings – even if they contradict one another.

    People with opinions who contradict someone on an org chart in a meeting will not last long in a company. You can’t say “2 + 2 = 4” unless management already agrees. You can collect a lot of paychecks by praising management for thinking outside the box and “realizing” that “2 + 2 = 5”. The trick is to figure out what the most powerful person in your dept thinks and then parrot it as if you sincerely believe it. I have watched people parrot things they did not even understand because they inferred management had bought into it.

    Rockstars may get top money to triage a crisis, but they are gone ASAP or they devolve into backslapping managers and lose their effectiveness.

    Non-rockstars keep their heads down and do what they’re told.

    Documentation is seen by many as a “How To” manual on getting rid of you. If they know what it is you do, they will fire you and pay someone who can read and implement what you document for half of what they pay you. Not exactly an incentive.

    1. ScottS

      I’m not big on documentation. No one reads it anyway, especially when they can talk to the person who wrote the code.

      The best solution is test-driven development. When you realize that the test plan/procedure is the requirements document, you eliminate the redundancy of writing a requirements document and having people read it, and having people understand it. Developers can be spoon-fed the “requirements” when they are tasked with implementing a new feature and submitting it to QA. “You broke such & such test case by implementing this new feature” the QA manager says. “Okay, I’ll fix it and resubmit” the developer says. Now the developer has learned a new requirement. The tribal knowledge can also be spread during code reviews, so it doesn’t have to be so trial & error.
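
      To make that concrete, a bare-bones sketch of tests-as-requirements (the settlement function and dates are invented for illustration; the test functions are written pytest-style):

          import datetime

          def settle_trade(trade_date, currency):
              # A new developer's naive first attempt: settlement is always T+2 calendar days.
              return trade_date + datetime.timedelta(days=2)

          def test_standard_settlement_is_t_plus_2():
              assert settle_trade(datetime.date(2013, 2, 4), "USD") == datetime.date(2013, 2, 6)

          def test_settlement_skips_weekends():
              # The "requirement" the developer only learns when QA reports this failing:
              # a Friday trade settles on Tuesday, not Sunday.
              assert settle_trade(datetime.date(2013, 2, 1), "USD") == datetime.date(2013, 2, 5)

      The second test fails against the naive implementation, which is exactly the moment the developer “learns a new requirement.”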

      As far as design issues, focusing on scalable APIs is critical. Reusable interfaces can flatten the learning curve, and allow changes to be made “under the hood” without affecting users of the interface. For a deep dive on scalable APIs, read how Amazon got “services” religion here:
      http://highscalability.com/amazon-architecture

      Agile depends on constant, constant testing. So the testing almost necessarily has to be automated, and it’s difficult or impossible to automate the testing of the user interface. But with good APIs and comprehensive automated testing, you can make performance or implementation language changes feasible for everything below the user interface.
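
      And a tiny sketch of the “stable interface, swappable implementation” point (all names invented): callers and tests depend only on the interface, so whatever sits underneath can be replaced without touching them.

          from abc import ABC, abstractmethod

          class PositionStore(ABC):
              # The contract everything above this layer codes against.
              @abstractmethod
              def record(self, account: str, qty: int) -> None: ...
              @abstractmethod
              def balance(self, account: str) -> int: ...

          class InMemoryStore(PositionStore):
              # One implementation; a faster or vendor-supplied one can be dropped in later.
              def __init__(self):
                  self._data = {}
              def record(self, account, qty):
                  self._data[account] = self._data.get(account, 0) + qty
              def balance(self, account):
                  return self._data.get(account, 0)

          def check_store(store: PositionStore):
              # The same automated check runs against every implementation, old or new.
              store.record("ACC1", 5)
              store.record("ACC1", -2)
              assert store.balance("ACC1") == 3

          check_store(InMemoryStore())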

      1. ScottS

        And don’t EVER use contractors for projects that are in your core competency. Would you pay regular employees a bonus to leave and take all that crucial knowledge with them 50% through a project? 50% of the software lifecycle is maintenance, so if you pay a contractor to develop something and then leave, you will have to keep crawling back to that contractor assuming they are available, and if it is a body shop the original developers can and probably will be gone. And you’ll pay extra for the privilege!

        Software overhead (communication) goes up exponentially with the number of people on the project. Find a few passionate developers and keep them! The productivity will be incredibly higher with a small cohesive team. Anyone who hasn’t read Fred Brooks’ The Mythical Man Month is operating with only half a brain:
        https://en.wikipedia.org/wiki/The_Mythical_Man-Month
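
        Brooks’s arithmetic on communication overhead is easy to reproduce; strictly speaking the pairwise paths grow quadratically, n(n-1)/2, which is bad enough:

            # Pairwise communication paths grow as n*(n-1)/2, which is why adding
            # people to a project adds coordination cost faster than it adds output.
            for n in (3, 10, 30, 100):
                print(n, "people ->", n * (n - 1) // 2, "communication paths")
            # 3 -> 3, 10 -> 45, 30 -> 435, 100 -> 4950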

  7. reason


    Even if a lot of IT “contractors” wind up working for a particular bank for, say, two or three years, that is far less than the expected life of a lot of code. The old career paths, where a seasoned employee could expect at least five, and ideally fifteen to twenty years at the same employer, are much better suited for managing mission critical yet fragile systems like software. The failure to give more job security to programmers working on critical transactions platforms seems remarkably short-sighted.”

    Yves, this hits the nail on the head. I’ve worked permanent and worked contract. And one big difference is the difference in incentive when you are writing code that you will maintain and when you are writing code that someone else will maintain. If you are writing code that someone else will maintain, depending on management, there may well be more documentation (though documentation still needs to be understood), but robustness is a low priority. If you are likely to be rung up at 3:00 in the morning if something breaks down, believe me, robustness is a high priority.

    1. Jason Boxman

      While I agree, I think the military industrial complex is a unique case, since everyone is feeding at the trough. It seems nothing ever needs to be completed or working. I have friends that work for defense contractors in IT and everyone gets an insane amount of money.

  8. Jesper

    So it is very difficult to merge the IT-systems of merged banks and therefore it has not been done?

    Would that not be yet another reason to split up some merged banks?

    If the systems aren’t merged then there really can’t be much of a problem to separate them.

  9. ambrit

    Ma’am;
    IT troubles are also leaping onto the retail floor. The DIY Boxxstore I perambulate about is in the process of “upgrading” its shop floor computer systems. This has been put off twice now, no explanations worthy of the name being supplied. At present, two different systems with significant overlap are used by store retail workers. Added to this mess is one of the lamest public portals this admittedly backwards neo-Luddite has ever encountered.
    The bottom line is that this tangled train wreck of a retail IT system is driving customers away.
    When the public company retail web site doesn’t indicate which products are actually ‘on the shelf’ versus available from the regional warehouse, you end up with lots of pissed off ex-customers. (Many people come in to the Boxxstore expecting to put their hands on a product because the web site declared, “Order online and pick it up today.” The web site lied. The product is in the central warehouse and is subject to overnight shipping.)
    The company’s IT department te

  10. Whiskey

    Yes, 80% of the cost of software is in maintenance, and having a dozen different developers maintaining undocumented code is a large portion of that. There are, however, techniques for reducing these long term costs, in particular code refactoring.
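
    For readers who haven’t seen the term, refactoring is just this sort of thing boiled down: restructuring code so the next change is cheap, without changing behaviour. A toy before-and-after with invented fee logic:

        # Before: near-duplicate fee logic, copy-pasted and maintained in two places.
        def wire_fee(amount):
            if amount > 10_000:
                return amount * 0.001 + 25
            return amount * 0.001 + 10

        def ach_fee(amount):
            if amount > 10_000:
                return amount * 0.0005 + 25
            return amount * 0.0005 + 10

        # After: the shared rule is factored out once, so the next change happens in one place.
        def fee(amount, rate):
            surcharge = 25 if amount > 10_000 else 10
            return amount * rate + surcharge

        def wire_fee_refactored(amount):
            return fee(amount, 0.001)

        def ach_fee_refactored(amount):
            return fee(amount, 0.0005)

        # Behaviour is unchanged, which is the whole point of a refactoring.
        assert wire_fee(20_000) == wire_fee_refactored(20_000)
        assert ach_fee(5_000) == ach_fee_refactored(5_000)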

    The real problem is that IT developers are disincentivized from adding long term value that is not immediately apparent. If you are a contract worker, why would you devote extra unpaid effort, especially if the end result, easily maintained code, actually makes it less likely they will need you in the future? The incentives extend beyond the work itself, because the business end is generally reluctant to pay for better tools or training, but they are more willing to buy more disk space and more servers. This leads to bloated, poorly thought out data systems rather quickly. If you work in a business with an implied discount rate approaching 100%, the culture inevitably shifts towards short term, get ‘er done fixes, exactly the wrong culture for large IT systems.

    You also have many IT projects championed by business people. In many cases, these champions are not merely non-technical, but anti-technical, in that their natural instincts and decision making are inclined toward the wrong decision. For example, in one project, I observed three senior business people express the view that two versions of a calculation should be implemented by copying and pasting into existing code and toggling back and forth. Good IT developers are mostly useless in such an environment.

  11. Can't Help It

    Totally, not to mention the current trend of proliferation of project-specific project managers. All of them are experts at producing Gantt charts, etc., but they have zero understanding of software development.

    Good developers know the following: if a concept is important, make sure it becomes a first-class citizen in the system, be it a function, an Order, a Stock, etc. Organizations should take note: if you want to be IT driven, then IT should also be a first-class citizen, not just seen as a cost center. Honestly though, what organization today can compete without IT? If it’s so important, then invest in it; if not, then go back to paper and pencils.

  12. Brooklin Bridge

    Mission critical systems, and anything approaching that, for un-sexy banking and insurance are considered by many, and with good reason, to be a real career graveyard unless you’re at the top and like working with suits.

    A lot has been done for industry by developing “easy/safe” languages, and as much or more still by the extraordinary advances in economical super-performing fail-safe hardware, but not without cost in terms of the quality of your average developer. The conflict between types and attitudes in suits vs. code-heads still exists and must be costly as one drills down into all the subtleties and nuances of making, integrating and maintaining reliable and useful complex systems.

    1. Brooklin Bridge

      Incidentally, a lot of effort is being expended now on software that writes software, particularly for on-the-fly solutions, but also for integration and even systems built from templates, and I imagine that is making/will make slow incremental inroads in financial software.

  13. Bob Morris (@polizeros)

    California has tried at least twice now to migrate its ancient payroll system to a modern platform and failed miserably. Apparently it’s a rat’s nest of dozens, if not hundreds of systems, with roots going back to COBOL written decades ago. The federal government has the same problem with much of its software too.

    I convert ancient DOS-based databases to Windows so have some insight here. The main problem is that you can’t simply start from scratch and write a new system. The old data has to be brought into the new system, a process which can be maddeningly complicated. Plus, the new system generally needs to have the same functionality as the old system and, since many such systems are mission-critical, the new system has to work perfectly and quickly, a daunting task.

    1. DiracMan

      I am currently a Senior Java Software Engineer but I also came from the long march from when Bill Gates was just a start-up. I have seen it all and been through it all. If you don’t have your IT right in 2013 in ANY business, you are dead. I must say that the potential incredible power in every design situation and the low cost tools that exist today are phenomenal. It is a joy to come to work each day. But along the lines of this article, I heard back in 2008 that one of the major massive Credit Default Swaps derivatives databases in one of the key TBTF players (J.P. Morgan was it?) was in DOS dBase/Clipper! Did anyone else hear this? It was a post on Huffington Post as I remember. That was a great system up until 1993-1995. But in 2008 with the potential to detonate the entire World? Wow!

  14. G guy

    30 years ago banks had software to handle routine day-to-day transactions. Since then, every new feature, every new service, and most of on-line banking is a “bolt-on”. Bolt-ons require extensive customization to fit in with existing software.

    Today, if a bank wants to add a new feature or service they have to make it work with the base system and 20-30 bolt-ons. The cost and time for the testing required often exceeds the cost of the coding.

    A total rebuild is inevitable at some stage but would be a major risk and a major hit to earnings and cash. Most bank CEO’s will try and get through their 5-10 years without having to face the inevitable to preserve the share value and their legacy. It will take a very long term oriented CEO, or an accident, to force a change to new fully integrated systems.

    Most banks have a hell of a time managing their own IT, managing merged banks IT would be a sink-hole.

    1. reason

      “The cost and time for the testing required often exceeds the cost of the coding. ”

      Should read:
      The cost and time for the testing ALWAYS exceeds the cost of the coding.

  15. SBG1

    Being about 18 months removed from a failed software dev project (moving from a DOS/Windows environment into a completely browser-based platform), I learned a few things:

    1) If a particular process is complex now, bet on it remaining complex in a new environment.

    2) Design structure vs. costs. Harder to explain, but the more costs associated with the tools required for development of the basic system structure (back end of the system, languages, user accessibility, etc.), the more compromises one has to make (“You’re 25% over budget due to all the money spent, and you’re only at 30% implemented”). That caused us to move all our new (subsequent) development efforts to be substantially open source based.

    3) “Mission Creep”. We all know what that is.

    4) Testing/data validation. You can’t possibly test enough, and you have to remember, idiots are soooo ingenious. They will look at things and want to do things that you would never even consider in a million years. But, also remember that existing data validation is extremely critical. Can’t tell you how many times along the process we found problems with existing data due to ‘glitches’ in the existing systems being replaced, and all the resulting finger pointing played a substantial role in the eventual failure of the project.

    5) Documentation. When you write documentation, realize that the way you wrote it is not the way they are going to read it. Sounds stupid, but true. I can tell you that after having hours long discussion over the meaning of specific sentences in documentation, less time got spent on documentation.

    6) Excel is not your friend. See 4) above. “We’re not using the new system because the numbers are coming out wrong”. And if you tell them their Excel numbers are wrong, you better be able to prove it. And then apologise profusely. And then grovel and beg for forgiveness. And you get to go through that almost every single time.

    Yeah, I’m just a touch sensitive on this topic. As a result, everything these days is development of various small modules in a small, open source environment (that’s how you end up with 40-50 different applications within an extremely large organization). And then I get to listen to the complaints about applications that don’t do everything users want. Oh, well……

    1. reason

      I learned the hard way (well the easy way actually – let’s say I watched from a close distance), that you never use new tools on a major project. Always do a proof of concept on something simpler first. If the project takes several years, scrap it. The tools you are using will be out of date before you begin to code. Do something smaller then snowball it.

    2. Voltron

      >> 6) Excel is not your friend. See 4) above. “We’re not using the new system because the numbers are coming out wrong”.

      The first version I release ALWAYS matches the existing system. I have a fix ready and give the customer an estimate of the impact, but I don’t switch to the fixed version until after the parallel test period and the customer is ready.

      trust me, it goes MUCH smoother that way.

      1. SBG1

        Yes, I’m in agreement with you on that one. But when it’s compliance data (quarterly and annual reporting) and it’s not been correct, well, there’s no easy way getting around that. Particularly when you are dealing with ‘dueling’ Excel spreadsheets coming from three different departments, where the numbers between the three different Excel spreadsheets don’t work.

        Herding cats….

    3. Abe, NYC

      In one of the systems I did, it took the customer a year to migrate data from the old system, because my system had far more stringent integrity constraints and lots of datasets would simply be rejected. But I was lucky in that this was recognized and accepted by the customer without questioning. I provided validations to help identify the issues at source. Eventually it all worked. But this was at an in-house IT unit, where budgeting was not a concern. I shudder to think what would happen if this was outsourced.
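
      The validations in question can be as simple as running the new system’s integrity rules over the legacy extract before anything is loaded, and handing the rejects back to the business. A minimal sketch with made-up rules and data:

          from datetime import datetime

          LEGACY_ROWS = [
              {"id": 1, "account": "ACC1", "opened": "2001-03-14", "balance": 120.50},
              {"id": 2, "account": "",     "opened": "1999-13-40", "balance": None},  # bad row
          ]

          def violations(row):
              # The new system's integrity constraints, applied ahead of migration.
              problems = []
              if not row["account"]:
                  problems.append("missing account id")
              if row["balance"] is None:
                  problems.append("missing balance")
              try:
                  datetime.strptime(row["opened"], "%Y-%m-%d")
              except ValueError:
                  problems.append("invalid open date")
              return problems

          for row in LEGACY_ROWS:
              probs = violations(row)
              if probs:
                  print("reject row", row["id"], ":", "; ".join(probs))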

  16. McMike

    Zombie and orphaned software projects are not limited to banking. But banking, by its nature, has them particularly bad.

    There is nearly as much ink spilled trying to sort out the IT-Business-Budget conundrum as there is bad/useless/broken code resulting from getting it wrong. And no solution in sight….

    IT people need to think more like business people, and vice versa, of course. But I blame mainly the business people. If you think of the code as an automobile assembly line, the software guys are building the machinery for the line. If the business keeps changing the auto design in massive and fundamental ways, and if the business people want to have endless options that they can change and add on the fly all the way to the end of production, then you are going to get Rube Goldberg software.

    The IT people tend to not be helpful though, as they retreat into black box arrogance, and never quite stop thinking of themselves as gamers and hackers disassembling old PCs in their garage.

    One fundamental change that occurred in the early 2000s is that technology stopped being an obstacle (as in, we want to do something but there is no technology to do it). There is now, I think almost literally, no obstacle to what you can do with the tech. So the business brains need to learn how to get clear and disciplined about figuring out what it is they want to do.

    But yeah, as long as banks view tech as a cost center (mainly for annoying customers, more annoying compliance, or boring backwards-looking data), then you are going to see them treat it like a union workforce in a hog processing plant.

    1. Whiskey

      There is no incentive for IT workers to be open or helpful. If you are open and honest about design compromises or changes, then the most likely response from the business end is to get rid of you, not have you be more productive. IT mystique is a form of rent collection, and most IT shops above a certain size look for ways to collect rent to stay employed. One way is to make everything harder than it needs to be. Another is to insist that all processes go through a centralized system controlled by them, which adds to massively coupled messes consisting of separate IT fiefdoms.

      1. McMike

        My job for a while was to be a liaison between IT and business units. It was an experiment in progressive management.

        It sucked.

        But I did manage to roll back a few stones and let some daylight in. On both sides.

        My experience as an IT user in finance, meanwhile, is that management treats software implementations like magic diet wands (the vendor sales reps may have something to do with that expectation). You just wave the magic software wand, and your fat and congested workflows magically become lean and fast.

  17. Mickey Marzick in Akron, Ohio

    The law of diminishing returns with regard to the division of labor?

    IT in banking is just the tip of the iceberg…

  18. Alice

    OT maybe. FYI. I googled this site a few minutes ago and it would not come up. I was redirected to a site that stated I changed my settings on IE to exclude the word ‘Naked’. I have never changed anything but I wonder how many people are now unable to access this site because their security settings have been changed by someone other than them. This all happened so quickly and I was caught off guard. I will need to try to find the page and see what else “I” excluded from my search engine.

    1. mk

      Check your search settings on whatever search engine you’re using. I use startpage.com for my searches, there is a “settings” link, under that link is a setting “Try not to display web/picture results containing adult language” – there are three choices, 1. filter all results, 2. filter depending on search, 3. do not filter results

      1. Alice

        Thank you MK. I found the page and it was exactly as you said. I do need to correct my post about the search engine, it was AOL, not google. I really would like to know why AOL decided to change my settings because I certainly didn’t and the page stated I did. I come to this site most every day and yesterday was the first time that ever happened. I just wonder how many other users have had their settings changed by AOL.

  19. Deloss

    At Citibank–whence I was let go five years ago, in some sort of financial crisis–they fired all the contractors at once–I first documented CDS systems (whee!) and then was sent to the bond department to document the system that traders relied on. I could not do it. The system had been in existence for ten years, and had never had, I don’t think, an architect, but modules were added in a hurry when they were needed. Every link I clicked on led me into a gigantic, multi-purpose program, and I documented what looked like the main pieces, but for the entire thing–nope, impossible.

    Of the software I was able to document, even the manager didn’t know what it did, and in working on Continuity of Business, I had to contradict him: the system was not sent the value, the system had to fetch the value, and I was only able to find out and document that because the programmer who had written the code was, I found out, a couple of aisles away.

    But all this is water under the bridge, literally, because the Citibank building where I worked, 111 Wall Street, was unusable after Sandy, and I understand from a high executive that the building has been condemned. All this was a result of global warming, and nobody’s paying any attention to that, either, to which I can only say, Whee! again.

    1. SBG1

      I can’t tell you the number of times we got “Just write documentation telling us what we need to do”.

      Step 1: Locate very high bridge
      Step 2: Jump off bridge.

      For some reason, that particular response was frowned upon.

  20. Dorn Williams

    Having worked for automotive ‘captive finance,’ I find railroad maintenance a good metaphor for banking IT. This year you can cut or underfund without too much of a hit on your bottom line, but 5 or 10 years from now, watch out.

  21. Timo

    As a software developer who has spent almost a decade working in banking but currently works more on the fringes of financial software development, a few things always stood out:

    * Software projects at banks suffer more or less from the same problems as software projects everywhere else – understaffing; unrealistic deadlines (in this case often with someone else’s massive bonus riding on meeting said deadline), resulting in intense pressure to cut corners; and often a large codebase shrouded in lore and half-truths that are hard to tell apart from what is really going on. Because of short job tenures – people tend to get moved around to fight other internal fires, or leave – and because there is no documentation (that would cost money and take time away from applying further band-aids to fix prior rushed changes), there is no real “memory” of a system.

    * The general work environment and work climate in banking is not really an ideal environment for software developers – it’s very much like software development in gaming with often massive overtime, lots of weekend work and a general attitude that babies *can* be delivered in less than nine months if only one yells at developers louder and maybe dangle a little more money in front of them. Very good developers more often than not are not really motivated by money once they have reached the “comfortable” stage, but more by intellectual curiosity and a desire to “make things better”. I know people laugh at “developers and their toys” but more often than not this suspected polishing or “gold plating” of the work is actually someone thinking ahead a bit and trying to make something more maintainable in the long run. The constant corner-cutting necessary to meet the “OMG we have this regulatory deadline in three months that we have been ignoring for a year and it’s two years worth of work” also both tends to make the mess worse, lead to even more unmaintainable systems and often has the more gifted developers leaving pretty quickly. Developers want to be proud of what they build and something put together from duct tape and toilet paper rolls doesn’t qualify.

    * Banks – especially of the TBTF variety – were on the forefront of outsourcing, preferably to a company that was cheaper than everybody else. I’ve generally got a fairly jaded view of this type of outsourcing for pure cost reasons, but one of the big changes that came with it is that developer turnover overall, including amongst the outsourced developers, increased. I’ve been in a situation more than once where a very good software developer suddenly got replaced within a couple of months of working on a project because they found another job that paid a little more money. Of course the ones that tended to hang around were the ones that would test everybody’s patience to the maximum and beyond and generally didn’t exactly qualify for the “most competent developer of the month” awards. It’s an open secret that a team of top-notch developers is often an order of magnitude more productive than an average team, but in these cases the reverse can be true, when the people with the longest tenure on the team are the ones who don’t have any other options. Not to mention that having to replace even good developers every 3-4 months makes for anything but a stable codebase.

  22. mk

    seems the next big thing in banking is mobile banking on smartphones, which even before reading this post I was unwilling to try due to not trusting that these institutions care about IT security. Remember the Carrier IQ issue, that whatever you type on your keypad can be saved by the phone company? That’s when I stopped checking my account balances on my cell phone, because you have to enter your account numbers with PINs.

    too big to manage…

  23. MrColdWaterOfRealityMan

    Quite frankly, MBAs run management in the USA and what passes for MBA “education” in the USA has become dysfunctional.

    IT is not an afterthought. IT is as critical to business as the fuel system is to a car. Your average MBA, however, is clueless and still has the attitude of “Just hire some of those tech servants and don’t bother me with the details.”

    “Don’t bother me with the details,” I expect, will be the epitaph on the tombstone of western capitalism. Do you think Chinese CEOs don’t want to be bothered with details?

    What happens next is quite formulaic. Treat IT management like dirt. Ignore infrastructure. Eventually you lose the business. Of course, many MBAs follow the IBGYBG principle (I’ll Be Gone. You’ll Be Gone) and could care less. Long term thinking is actively discouraged – another American MBA trait that will soon elevate the economy of the USA to the level of say, Kazakhstan.

    1. NotTimothyGeithner

      “Do you think Chinese CEOs don’t want to be bothered with details?”

      Between empty skyscraper cities and pollution, my guess is they are just like their American counterparts. Oh sure, they may lack the boisterous nature of Americans, but this long term strategy of the Chinese is just propaganda nonsense. I’m sure the Chinese had a long term view when they started to let the British have special rights in their port cities. Some things work better for a time, and economics don’t make it sensible to replace existing infrastructure whenever something new comes out, which explains a great deal of relocation to China. When a country comes up with a more attractive industrial policy, the multinationals/merchant class will jump ship for a more attractive environment like they always do and have always done.

      The Chinese CEO probably does have a more rigorous education than some dipshit economics major/MBA type, but the large scale changes matter.

      1. Nathanael

        The mere fact that the Chinese CEO actually understands something about how his business operates — that makes a huge difference.

        In Veblen’s _Theory of the Leisure Class_, he discusses the degenerative path which goes from industrialists to financiers. (I would extend it one step further, from financiers to upper-class twits.)

        The industrialist is a greedy, callous person who *understands where his wealth comes from*: he doesn’t want to DO the factory work, but he knows HOW to; he knows how the factory works. The financier doesn’t know how the factory works and doesn’t want to; he just manipulates paper.

        The upper-class twit doesn’t even know how the legal and financial paperwork works — “I’ll hire someone to do that” — and doesn’t do ANYTHING except looting. This is what our current bank executives are like, with their failure to actually file the correct paperwork before seizing people’s houses!

  24. Paul

    A couple years ago someone, BusinessInsider I think, published a list of the system development requirements in place at Amazon. There were about 15 as I recall and they read like a laundry list of all the evil shortcuts programmers can and will take. According to the article they also hired a real “Mad Dog” ex-military guy to make sure the rules got followed.

    It makes an interesting contrast.

  25. Whiskey

    There is a reason firms will alternate between centralized and de-centralized IT, and that is because there are two major types of problems in the Big Ball of Mud:

    1. A large number of systems doing the same thing
    2. A smaller number of systems doing a large number of entirely unrelated tasks

    The focus is always on the first problem, because solving it dovetails nicely with feel-good buzzwords like synergy. It’s also easy for everyone to understand that duplication of effort is wasteful.

    It’s the second problem, though, that’s the real killer. In the financial world, the bias is always towards size and centralization, and this mindset includes the CTO. How many of these multi-year Death March projects are attempts to implement some grand unification of systems that, in retrospect, were a great deal more heterogeneous than it first appeared? The end result is a more tightly coupled system by fiat, even if it would be more productive to break up the system, and perhaps also the firm, into smaller, more modular pieces.

    This will not go away any time soon. Senior management loves it because their comp is proportional to firm size. IT loves it, because you get multi-year commitments and opportunities to carve out fiefdoms. TBTF has only made this worse, as the culture doubles down on size, even though the costs of tightly coupled systems are non-linear.

  26. dw

    Thinking that IT is the same no matter what business, government or other entity is using it, they are looking for the lowest cost IT software. But it has to be reliable, and hopefully do most of the important work (though this tends to be whatever some users of the application deem it to be, and sometimes it’s what management deems it to be, which may not really be what they need). But I do agree that the better developers aren’t just tech junkies, but actually get what the business does (and sometimes can make up for what the business side doesn’t contribute to the project). And sometimes management sets up a big conversion project to get to their favorite platform, only to have that fail because that isn’t really what can do the work. But they will try to do that over and over again (seems like the biggest waste of money, but it happens all the time). And management is enamored of offshore help; they tend to ignore that that ‘help’ ends up requiring finishing (or total rewrites) to make it functional, but they saved money (notice they almost never really audit that). And I have seen some managers want to run their business off a favorite platform, till they need the actual data to do that. And when that data changes they scream, even if it was their idea to do it.

  27. Ted R.

    Thank you for this post. I work at a large credit union and we have experienced, or are currently experiencing, most of the problems you described. We are currently struggling through a redesign of our on-line account access software that is very late and over budget, mainly because management felt we needed to outsource the products and development as much as possible.

    I also agree with others that say documentation is over-rated. Well written and commented code should be more than enough, at least at the developer level. Unless you have a dedicated team on a specific set of applications that can write clear consistent documentation, you will usually end up having a hodge-podge of disconnected documentation that is more trouble than it’s worth and fails to adequately describe a system.

    One other thing that drives me and others where I work crazy is the current fad of adopting ITIL to govern the IT department. While there are many good features of ITIL, can anybody explain why a system of IT governance created by the British government should be used in an organization that has a goal of developing and deploying new applications and features as fast as possible??

    1. Mickey Marzick in Akron, Ohio

      Ted,

      Pin making for IT. That’s how I summed up ITIL for the course instructor… Of course, I doubt if he had ever read Adam Smith’s description of pin-making [the division of labor] and/or its negative consequences.

      But gotta get back to making “bytes” …

  28. Brick

    Seems to me a number of people have skirted around the real reasons in the comments.
    G guy says: Most bank CEO’s will try and get through their 5-10 years without having to face the inevitable to preserve the share value and their legacy.

    Think about how you would sell a single replacement system for those 50 mortgage systems Yves mentions to a CEO. The new system would mostly just do what the old systems did and the benefits to the business in terms of profit would be quite small without adding major functionality. There are risks as well since most likely a new system will have glitches, since I bet that CEO will not pay for a test system which completely replicates the operating environment. You also have to consider that a whole load of business people need to be taken away from business activities to work with IT on functionality and testing.

    SBG1 says “Mission Creep”. We all know what that is.

    This is often a key driver for the failure of significant IT projects. Every time there is a significant change in functionality requirements, service level requirements or the volume of transactions expected to be processed, you almost need to start the process again. It’s quite difficult for the business to think of every impact of a business process change when it occurs across multiple functions, and IT just compounds the problems by choosing inappropriate technologies.

    SBG1 says Testing/data validation. You can’t possibly test enough, and you have to remember, idiots are soooo ingenious.

    10,000 lines of code probably has at least 50 conditional if statements, with at least 10 input data items, each with at least 3 variations of data. That works out to an astronomical number of combinations to test if you want to be absolutely confident the code works. Code that adapts or is flexible in its choices is even more complex to test. Unless you work in the nuclear industry, it’s always a question of balancing cost and time against the number of bugs. With CEOs setting deadlines, you know where the compromises are going to be.
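
    However you count it, the combinatorics run away very quickly. A back-of-envelope illustration (illustrative numbers; the exact count depends on how the branches interact):

        # If each of 10 inputs can take 3 meaningful values, and 50 independent
        # conditionals each split the flow two ways, exhaustive coverage would need:
        input_combinations = 3 ** 10    # 59,049 input combinations
        branch_paths = 2 ** 50          # roughly 1.1e15 distinct paths in the worst case
        print(input_combinations, branch_paths)
        # which is why real test plans sample the space rather than cover it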

    Bottom line is that IT software handles 70% of the business yet usually makes up less than 5% of total costs. Short-term profit views, bad planning, poorly thought out business strategies, compartmentalised business functionality and increasing business and IT complexity mean that until customers start to walk away because of bad service, not much will change.

    1. Anon

      “unless you work in the nuclear industry”

      Funny you should mention that. There’s a parallel with the retired COBOL programmers being brought in to deal with legacy code.

      Because it happened with Y2K for nuke plants, huge piles of cash flung at those in retirement because no one was quite sure whether Y2K would have an adverse effect or not on the plants’ functioning.

    2. ambrit

      Dear Brick;
      That last line is what we’re seeing now at the DIY Boxxstore I slave away in. Poor design of the public web site leads potential customers to come into the store expecting instant gratification when the items they want are in the central warehouse and a day or two away. As far as home repairs and such go, a small local inventory heavy brick and mortar outlet is a viable business model. Try telling that to the short attention span management class.

  29. swendr

    You know, it’s funny how often IT gets metaphorically represented in construction and architectural terms, and yet nobody bothers to ask why there isn’t a legally enforced “building code” for software construction. The Randroid developers might scream bloody murder, but in my view once your code is handling other people’s money, you should no longer be allowed to wing it. It should be required to pass inspection by a disinterested third party before it is licensed to run in public.

    1. SBG1

      No. Bad idea. Really bad idea. Sorry, but the last thing we need is ‘code police’. It would probably be outsourced to somebody like M$, or worse, if possible.

      When you say a “disinterested third party”, good luck with that. Anybody who has the smarts to do the work of validating software dev tools isn’t going to be “disinterested”. Not in this business.

      You’d see a case of ‘regulatory capture’ like you can’t even imagine.

      1. Brooklin Bridge

        I think Swendr is talking about performance, not code reviews. Testing transactional integrity (ACID), scalability, performance, redundancy and other fail-safe characteristics – the core features one expects of any financial software – does not require looking at code.

        Swendr’s idea is excellent. I would have assumed it a complete no-brainer for banking software and am fairly sure something along those lines is required (at least internally) by all banking institutions for anything that actually handles money.
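
        As a rough illustration of that kind of black-box check, here is a minimal sketch using SQLite; the schema, amounts and the deliberately failing transfer are hypothetical stand-ins, not anything from a real banking system:

          # Black-box check of transactional integrity: a failed transfer must
          # not change the total amount of money. Everything here is hypothetical.
          import sqlite3

          db = sqlite3.connect(":memory:")
          db.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
          db.executemany("INSERT INTO accounts VALUES (?, ?)",
                         [("alice", 1000), ("bob", 500)])
          db.commit()

          def transfer(conn, src, dst, amount):
              with conn:  # commits on success, rolls back on any exception
                  conn.execute("UPDATE accounts SET balance = balance - ?"
                               " WHERE id = ?", (amount, src))
                  if amount > 600:
                      raise RuntimeError("simulated crash mid-transfer")
                  conn.execute("UPDATE accounts SET balance = balance + ?"
                               " WHERE id = ?", (amount, dst))

          total_before = db.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
          try:
              transfer(db, "alice", "bob", 700)
          except RuntimeError:
              pass
          total_after = db.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
          assert total_before == total_after, "money was created or destroyed"

        The point is that the check exercises the promise (no money appears or disappears) without ever reading the implementation.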

      2. swendr

        OK, I don’t buy the argument that since corruption exists, we should abandon the idea of regulation. Good regulation and its thoughtful application are pretty much the only legitimate weapons that we can bring to bear on those who in selfishness or ignorance are damaging the commons beyond recognition. Other than that, there’s violent revolution, and you know the plutocrats are salivating at the chance to bring on some real repression when that time comes.

        I imagine there was a time when buildings burned down or collapsed much more often than they do now because of a lack of adherence to standards, and it’s a safe bet builders whined that it’d never be possible to regulate their industry either. With all the talk of code patched together with duct tape and toilet paper rolls, unrealistically compressed development schedules, and misaligned incentives for pretty much everyone involved in the software life cycle, maybe it’s time for the rest of us to hold them accountable before they do us in.

        1. vlade

          A practical question. How do you police the code? And, quis custodiet ipsos custodes? To check a complex system fully you need a system that is at least as complex as the system you’re checking. That’s a (provable) fact, not a random exclamation.
          A very profitable part of the IT industry in the last 20-30 years was selling quality control. Yet I class it together with selling bridges and snake oil – not because people get corrupted, but because it’s fundamentally impossible.

          The reason people tend to do it is that the architectural metaphor is actually very wrong. The architect draws the house, then the builder builds it, and we equate the coders with the builders. But code is NOT the building. Code is in fact the blueprint! It’s the executable that is the building, and the process of converting the blueprint into a building is, in IT, so efficient that we tend to think it away entirely.

          In fact, the code is not even the blueprint in the classical sense – the code is the formalization of the requirements (much more so than an architectural drawing is a formalization of someone’s building needs). You can’t solve the problem of checking the requirements by creating yet another system to formalize them and checking against that – it’s not a real solution; you’re always stuck with “soft” requirements vs. hard ones. Agile talks about unit tests being the formalization of requirements – it’s better, as it helps to tease out the vagueness to an extent, but ultimately it’s still just a translation between the human mind and some sort of formal rules – and on a small scale. The interplay between the rules is often impossible to predict and capture.

          About the most formalizable rules are in satellite/military systems etc. – but they have the advantage that all the rules are nature’s laws, so formalizing them is just a rewrite of bits of maths. Formalizing user behaviour is impossible.
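
          To see why a unit test is still just a translation rather than a proof, consider a minimal sketch – the overdraft rule, the limit and the names here are hypothetical:

            # A requirement and its tests are both translations of the same English
            # sentence; either can diverge from what the business actually meant.
            OVERDRAFT_LIMIT = 500  # hypothetical

            def withdrawal_allowed(balance, amount):
                # Coder's reading: allow a withdrawal so long as the account
                # does not go more than OVERDRAFT_LIMIT into the red.
                return balance - amount >= -OVERDRAFT_LIMIT

            # "Unit tests as formalized requirements":
            assert withdrawal_allowed(balance=100, amount=400)      # within limit
            assert not withdrawal_allowed(balance=100, amount=700)  # beyond limit
            # The business may have meant "strictly under the limit", or a limit
            # per day rather than per transaction; the tests cannot tell you.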

  30. Anon

    Is banking IT affected by the need for frequent updates in software, imposed by both proprietary and open-source providers?

    I am beginning to lose functionality for a lot of things right now that I’ve traditionally run on my laptop, simply because I don’t want to upgrade to newer versions of programs I use or OS.

    I’m not that time-poor, but I know from experience that making changes to tried-and-tested apps can take much, much longer than anticipated.

    Plus, the main reason I don’t want to upgrade is that I like what I’ve got – what I have works for me – but I’m finding I can no longer use a lot of stuff now because “this version is no longer supported”.

    Built-in obsolescence in software is a real, and growing, pain in the b*** IMHO.

    I think swendr ^ just caused a Scanners-type incident among the ‘roidians, but it’s an interesting point.

    How can such critical business infrastructure be left to market forces? We all know those don’t work.

    I would go so far as to mandate boards of directors to produce and implement five-year plans to ensure critical software/hardware infrastructure is subject to permanent, interrogative oversight, in what is after all plainly an industry (IT) in a state of permanent revolution.

    1. Brooklin Bridge

      I’m near positive there are already various rigid protocols that software must follow and standards that it must meet when money is involved. I believe a general crack-down on wild-west check and card processing occurred in 2005 (or around there) regarding that.

      1. swendr

        Right on. I’m sure you’re right. I’m just thinking out loud, so to speak. What agency is it, I wonder, that handles the crack-down duty you speak of? The FDIC? I’d have more confidence if there was a specific agency for IT standards in general perhaps not so closely related to finance. I mean, there are other bits of code that have disastrous potential out there.

      2. Nathanael

        Nope; money is still handled wild-west style. Keep your paper statements.

        There are rigid protocols for new software used by the military. (They *don’t*, however, apply to some of the major classes of military software, which is scary.)

  31. Abe, NYC

    Off the top of my head I can name several issues with IT design and development:

    • Pressure on developers (largely resume boosting, or maybe sheer pleasure) to use the latest platforms, which change so frequently that practically every project is done in a new way. In established IT units, this increases both training and maintenance costs, and results in developers not having truly in-depth knowledge of any one platform.
    • By contrast, lack of pressure to provide good (or any) documentation.
    • Difficulties of integrating technologies. E.g. there are very efficient technologies for Web development, database processing, and data transformation, but their concurrent usage introduces complex dependencies in the chain of execution and makes it far more difficult to maintain coherence.
    • Difficulty of properly testing data processing. Testing transaction operations is fairly simple in my environment, but I’ve yet to see an efficient and scalable approach to testing set-based operations (and I’m not sure it’s even possible). This is a biggie; it can easily lead to, e.g., misleading reports or worse. (A rough sketch of one partial approach follows this list.)
    • What my colleagues and I call “no version 2”. In a typical life cycle, a lot of decisions need to be made early on, when your knowledge of the domain is poorest. Suboptimal decisions at this stage are very expensive to correct later on. By contrast, your knowledge of the problem domain is greatest at the end of the project, i.e. when you least need it. Ideally, after the end of a project the same development team should go back to the drawing board and design a version 2, using all the experience gained and making corrections. But that rarely happens in real life and runs contrary to the project-based approach. It’s more likely that when an upgrade is finally approved, many years down the line, there will be new people on both the customer and developer sides, who will make the same mistakes all over again.
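
    On the set-based testing point, one partial technique is differential testing: run the set-based operation and an independently written row-by-row reference over generated data and compare the results. A minimal sketch, with a hypothetical interest-accrual rule and SQLite standing in for the real database:

      # Differential test of a set-based operation against a row-by-row oracle.
      # The accrual rule, data volumes and schema are hypothetical.
      import random
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE balances (account INTEGER, balance INTEGER)")
      rows = [(i, random.randint(-5000, 50000)) for i in range(1000)]
      db.executemany("INSERT INTO balances VALUES (?, ?)", rows)

      # Set-based version under test: 1% interest on positive balances only.
      set_based = dict(db.execute(
          "SELECT account, balance + CASE WHEN balance > 0"
          " THEN balance / 100 ELSE 0 END FROM balances"))

      # Row-by-row reference, written independently of the SQL above.
      reference = {a: b + (b // 100 if b > 0 else 0) for a, b in rows}

      mismatches = [a for a in reference if reference[a] != set_based[a]]
      print("mismatching accounts:", len(mismatches))

    Of course it only catches the cases your generated data happens to cover, which is exactly the scalability problem the comment points at.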

  32. sierra7

    The one major lesson I learned in my short stint (about 8 years) during a major grocery chain’s changeover to “IT” in its checkout systems was, when anything of any consequence happened on the retail floor (system crashes etc.), to “….lock the manager in his office and not let him/her touch anything”!
    It worked for me!

  33. Buckaroobanzai

    I have been working in and around Banking IT for 30 years; both internally and with software vendors.

    One of the biggest reasons, if not the biggest, for this IT disaster is outsourcing (and H-1B guest worker (indentured servant) programs).

    People need to understand that the value of a Senior Bank IT person is not their coding ability. It’s the industry knowledge, product knowledge, and production system experience that they have built up over many years.

    The joke at the TBTF bank I used to work at was that the guy who supported the Bond system knew much more about the Bond Market than anyone in the Front Office. If the bank tried to introduce a funky new product, it was this guy who confirmed whether it could work.

    At my bank, all of these people lost their jobs in 3 massive purges in 2003, 2007 and 2009. The jobs were transferred to well-meaning but completely inexperienced kids in India. The results were predictable and disastrous.

    Not that long ago you used to have a kind of working synergy between the users, the designers and the developers. Developers and End-users could look each other in the face.

    Now, all of the developers work in crazy time zones or they are in the US for 6 to 12 month stints. The full-time US folks are now just project managers and administrators. It’s very inefficient, to the point that management would prefer to let things fester rather than try and fix something. (Forget about innovating!!!)

    1. Dan

      I have worked in IT at large NYC banks for over 25 years. The software is indeed getting worse. IMHO, the biggest reason for this is the rampant outsourcing of jobs and the replacement of existing qualified programmers with short-term Indian programmers on H-1B or L1 visas. At some banks there are very few local, long-tenured programmers. While this is OK when the bank buys software packages and relies on the vendor to make changes and support the software, it can introduce real peril when the bank decides to write software on its own. While most of this software is glue connecting vendor packages, some is new logic. The programmers they insource are not real good, chosen by price point and the ability to control them rather than by quality, and the churn rate is pretty high. Generally, the quality of the software is awful, and catastrophic failures are pretty much a certainty.

  34. lark

    For complex problems there are no short cuts.

    In IT, that means that firing experienced people and outsourcing to folks who don’t know your system or problem domain is fraught with risks. Actually – it is almost certainly doomed to failure.

    Another point. I had a small consulting business (software) and the Americans were by far the best to work with. I worked with software people. I found American software developers were interested in fixing the problem and completely open to communicating honestly in order to do so. I found this level of communication impossible to achieve elsewhere, especially with Indian programmers. I think there is a cultural difference there.

  35. Chauncey Gardiner

    Thank you for this post, Yves. As our collective experience over the past six years has amply demonstrated, the individuals who have been selected to run these large corporate organizations are in many instances very poorly able to do so.

    A friend who is deeply knowledgeable about IT (I am not), and bank IT systems in particular, shared the following in response to the BBC article you linked. I value his view and believe this is an extremely important issue that has flown under the radar for far too long. We have been very fortunate.

    … “so nothing has changed. What makes software complex is poor software design. It is like anything else. If you break a complex task into simple steps and code each step into a standalone module with a flexible interface, the implementation and management of the complexity becomes much easier and less error-prone. It is the old monolithic versus modular approach.”
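
    A toy illustration of the monolithic-versus-modular point, with a hypothetical payment-file example (the format and rules are invented for the sketch):

      # The same step written monolithically and then as small modules with
      # narrow interfaces. The payment-file format here is hypothetical.

      # Monolithic: parsing, validation and selection tangled together.
      def process_payments_monolithic(lines):
          posted = []
          for line in lines:
              parts = line.strip().split(",")
              if len(parts) == 3 and parts[2].isdigit() and int(parts[2]) > 0:
                  posted.append((parts[0], parts[1], int(parts[2])))
          return posted

      # Modular: each step stands alone, so it can be tested and replaced
      # separately without re-reading everything else.
      def parse_line(line):
          src, dst, amount = line.strip().split(",")
          return src, dst, int(amount)

      def is_valid(payment):
          return payment[2] > 0

      def process_payments(lines):
          return [p for p in (parse_line(l) for l in lines) if is_valid(p)]

      sample = ["A001,B002,250", "A003,B004,-10"]
      assert process_payments(sample) == process_payments_monolithic(sample)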

  36. Brooklin Bridge

    Recently, the web interface to the software for the accounts at my credit union was upgraded. It went from being a wonderfully functional web application that had clear, easily readable and printable (but not too sexy) tables of data to a clunky third-party interface (shared by multiple banks) where one has to keep drilling down to get at the same data. Some of the functionality of the old interface is missing completely, and slick-looking, colorful but crowded widgets have taken its place. The main page went from a great at-a-glance table of all accounts to a scrunched-up, hard-to-read list moved over to one side so they could advertise, on the other side, all the services they are trying to push.

  37. Jessica

    The Lament of the Skilled IT Person

    The litany above provides good insight into the inherently contradictory position of knowledge workers in general in the actually existing knowledge economy. (And thank you to those who posted for all the information and insight.)
    It is so common for knowledge workers to experience tension, sometimes severe, between how to do their work right and what their job requires of them. This is a manifestation of a deeper contradiction.
    The current stage of capitalism, dominated by monopoly and rent seeking*, can not run a true knowledge economy. To do so requires both appropriately compensating those doing the real work and turning knowledge loose. Instead, the knowledge sector of the economy, which has been the main source of profits for decades, is assigned one and only one main task: to prop up the dominance of the elite. Many knowledge workers work in the many branches of the meta-propaganda enterprise. Others are tasked with the misuse of technology.
    The elite in turn no longer have any historical function other than to prolong their own domination. They are so morally corrupted by the lack of collective historical purpose that they are no longer even capable of simply maintaining what earlier generations already built.
    This is different from the elite of the Gilded Age or China in recent decades. They were and are vicious but they also served a useful function. Even if China were to now have a lost decade and even if a substantial portion of its infrastructure turns out to be so low quality that it must be scrapped, still what would be left is a huge step forward compared to where China was in the late 70s. An overbuilt airport or train system actually has some utility. A pile of viciously clever molten derivatives does not.
    Knowledge workers are generally paid better, but their jobs themselves are often crazy-making. If it is not what is posted above in IT, it is the health care worker who knows what the patient needs and what the hospital or insurance company demands. This kind of tension will only grow more intense.
    I am not saying that this will lead to a political uprising. It clearly hasn’t until now. But this inherent tension, and the various ways that people try to resolve it without getting fired are an unseen driving force behind many social phenomena. From left-wing (progressive/liberal) politics that are long on posture and short on actual impact to the growing sophistication of mood self-management (meditation, yoga, some sports).

    *Rent seeking: To make money not by providing a competitive product or service but by getting people into a position where they have to give you money regardless. Microsoft Windows and bank fees are good examples. Much of current corporate behavior centers on creating those “gotcha” moments/gateways when they can fleece their “customers”.

    1. Nathanael

      You’re describing the contrast between the industrial stage of capitalism and the financial stage, which Veblen and those who followed in his footsteps discussed.

  38. Not in Kansas anymore

    I spent most of my career in high tech and the nature of the business(es) seemed to require and foster skilled software developers and IT support staff. I moved into the financial services industry and for the first time ever encountered meh, just-meet-the-bar, average IT. It was as though a bunch of bankers had hired the technical people (ya think?)

    Nice people; that’s not the problem, but to this day, year after year, the servers regularly grind to a halt because no one remembers to renew the certificates or intervene when the hard-drives warn of low disk space.
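
    For what it’s worth, the checks being skipped here are the routine kind – a minimal sketch of what such a watchdog might look like, with a hypothetical host name and thresholds:

      # Minimal sketch of the routine checks that get forgotten: low disk
      # space and TLS certificates close to expiry. Host and thresholds are
      # hypothetical.
      import shutil
      import socket
      import ssl
      from datetime import datetime, timezone

      def disk_low(path="/", min_free_fraction=0.10):
          usage = shutil.disk_usage(path)
          return usage.free / usage.total < min_free_fraction

      def cert_days_left(host, port=443):
          ctx = ssl.create_default_context()
          with socket.create_connection((host, port), timeout=5) as sock:
              with ctx.wrap_socket(sock, server_hostname=host) as tls:
                  not_after = tls.getpeercert()["notAfter"]
          expires = datetime.fromtimestamp(
              ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
          return (expires - datetime.now(timezone.utc)).days

      if disk_low():
          print("WARNING: less than 10% disk space free")
      if cert_days_left("example.com") < 30:   # hypothetical host
          print("WARNING: certificate expires within 30 days")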

    This level of skill or motivation exists up and down the IT food chain, and, like any group, they can only hire (it seems) what they know, and from their own industry. D.C. al coda.

  39. H. Alexander Ivey

    My two bits:

    First, there is no such thing as “software maintenance”, if you mean that there is a wearing out, as when water pipes get corroded and rust out. Software maintenance usually means adding new functionality, not repairing “broken” pipes. So, if software is a building, “software maintenance” is adding on new rooms, new toilets, new windows. And the idea of certifying software, akin to having public buildings pass local, state, and federal building standards, is a great idea.

    Second, what most people totally miss is the role of, and relationship between, the software developer and the end user. What usually happens is that the developer builds a software program to do what he thinks the business problem to be solved is. What the end user gets is something that does not fit into the end user’s procedures and processes used to tackle the business problem. I’ve been in many a meeting where the developer poses as the “expert” in some business procedure while the real end user sits and stares in amazement at having the developer tell him (the end user) how he does his job.

  40. Howard Beale IV

    Based on the quick posts here, that article looks awfully Euro-specific, and specific to securities processing rather than bread-and-butter core deposit and lending systems, to me; also, it makes some statements that are plain flat-out wrong. I’ll cook something up and shoot it to you via email in the next few days.

    Just because a system may be new to the market doesn’t necessarily mean it’s better than the one it replaces.
