Business Press Waking Up to Bank Legacy IT Risk

After nearly two years of our writing about the ticking time bomb of bank legacy systems written in COBOL, which depend on a shrinking pool of aging programmers to baby them, Reuters has taken up the issue. Chuck L flagged a Reuters story, Banks scramble to fix old systems as IT ‘cowboys’ ride into sunset, which makes some of the points we’ve been making but frustratingly misses other key elements.

Here’s what Reuters confirmed:

Banks and the Federal government are running mission-critical core systems on COBOL, and only a small number of older software engineers have the expertise to keep the systems running. From the article:

In the United States, the financial sector, major corporations and parts of the federal government still largely rely on it because it underpins powerful systems that were built in the 70s or 80s and never fully replaced…

Experienced COBOL programmers can earn more than $100 an hour when they get called in to patch up glitches, rewrite coding manuals or make new systems work with old.

For their customers such expenses pale in comparison with what it would cost to replace the old systems altogether, not to mention the risks involved.

Here’s what Reuters missed:

Why young coders are not learning COBOL. Why, in an era when IT grads find it hard to get entry-level jobs in the US, are young programmers not learning COBOL as a guaranteed meal ticket? Basically, it’s completely uncool and extremely tedious to work with by modern standards. Given how narrow-minded employers are, if you get good at COBOL, I would bet it’s assumed you are only capable of doing grunt coding and would never get into the circles to work on the fantasy of getting rich by developing a hip app.

I’m sure expert readers will flag other issues, but the huge shortcoming of COBOL is that it has no equivalent of modern editing and refactoring tools. Every line of code in a routine must be inspected and changed line by line.

How banks got in this mess in the first place. The original sin of software development is failure to document the code. In fairness, the Reuters story does allude to the issue:

But COBOL veterans say it takes more than just knowing the language itself. COBOL-based systems vary widely and original programmers rarely wrote handbooks, making trouble-shooting difficult for others.

What this does not make quite clear is that given the lack of documentation, it will always be cheaper and lower risk to have someone who is familiar with the code baby it, best of all the guy who originally wrote it. And that means any time you bring someone in, they are going to have to sort out not just the code that might be causing fits and starts, but the considerable interdependencies that have developed over time. As the article notes:

“It is immensely complex,” said [former chief executive of Barclays PLC Anthony] Jenkins, who now heads startup 10x Future Technologies, which sells new IT infrastructure to banks. “Legacy systems from different generations are layered and often heavily intertwined.”

I had the derivatives trading firm O’Connor & Associates as a client in the early 1990s. It was widely recognized as one of the two best IT shops on all of Wall Street at the time. O’Connor was running the biggest private-sector Unix network in the world back then. And IT was seen as critical to the firm’s success; half of O’Connor’s expenses went to it.

Even with it being a huge expense, and my client, the CIO, repeatedly telling his partners that documenting the code would save 20% over the life of the software, his pleas fell on deaf ears. Even with the big commitment to building software, the trading desk heads felt it was already taking too long to get their apps into production. Speed of deployment was more important to them than cost or long-term considerations.1 And if you saw this sort of behavior at a firm where software development was a huge expense for partners who were spending their own money, it’s not hard to see how managers at a firm where the developers were much less important, and management was fixated on short-term earnings targets, would blow off tradeoffs like this entirely.

Picking up sales patter from vendors, Reuters is overstating banks’ ability to address this issue. Here is what Reuters would have you believe:

The industry appears to be reaching an inflection point, though. In the United States, banks are slowly shifting toward newer languages taking cue from overseas rivals who have already made the switch-over.

Commonwealth Bank of Australia, for instance, replaced its core banking platform in 2012 with the help of Accenture and software company SAP SE. The job ultimately took five years and cost more than 1 billion Australian dollars ($749.9 million).

Accenture is also working with software vendor Temenos Group AG to help Swedish bank Nordea make a similar transition by 2020. IBM is also setting itself up to profit from the changes, despite its defense of COBOL’s relevance. It recently acquired EzSource, a company that helps programmers figure out how old COBOL programs work.

The conundrum is that the more new routines banks pile on top of legacy systems, the more difficult a transition becomes. So delay only makes matters worse. Yet the incentive of everyone outside the IT areas is to hope they can ride it out and make the legacy system time bomb their successor’s problem.

If you read carefully, Commonwealth is the only success story so far. And its situation is vastly less complex than that of many US players. First, it has roughly A$990 billion, or $740 billion, in assets now. While that makes it #46 in the world (and Nordea is of similar size at #44 as of June 30, 2016), JP Morgan and Bank of America are three times larger. Second, and perhaps more important, the big US banks are the product of many more bank mergers; Commonwealth has acquired only four banks since the computer era began. Third, many of the larger banks are major capital markets players, meaning their transaction volume relative to their asset base and their product complexity are also vastly greater than for a Commonwealth. Finally, it is not impossible that, as a government-owned bank prior to 1990 and thus not profit-driven, Commonwealth had software jockeys who documented some of the COBOL, making a transition less fraught.

Add to that that the Commonwealth project was clearly a “big IT project”; anything over $500 million comfortably falls into that category. The failure rate on big IT projects is over 50%; some experts estimate it at 80% (costly failures are disguised as well as possible, and some big IT projects that go off the rails are terminated early).

Mind you, that is not to say that it is impossible to move off legacy platforms. The issue is the time and cost (as well as risk). One reader, I believe Brooklyn Bridge, recounted a prototypical conversation with management in which it became clear that the cost of a migration would be three times a behemoth bank’s total profit for three years. That immediately shut down the manager’s interest.

Estimates like that don’t factor in the high odds of overruns. And even if that figure is too high for some banks by a factor of five, it’s still too big for most to stomach until they are forced to act. So the question then becomes: can they whack off enough increments of the problem to make it digestible from a cost and risk perspective? The flip side is that the parts that are easier to isolate and migrate are likely not the most urgent to address.

____
1 The CIO had been the head index trader and had also helped build O’Connor’s FX derivatives trading business, so he was well aware of the tradeoff between trading a new instrument sooner versus software life-cycle costs. He was convinced his partners were being short-sighted even over the near term and had some analyses to bolster that view. So this was not empire-building or special pleading. This was an effort at prudent management.


81 comments

  1. Clive

    I got to the bit which said:

    Accenture is also working with software vendor Temenos Group AG to help…

… and promptly spurted my coffee over my desk. “Help” is the last thing either of these two ne’er-do-wells will be doing.

    Apart from the problems ably explained in the above piece, I’m tempted to think industry PR and management gullibility to it are the two biggest risks.

    Reply
    1. Marina Bart

      As someone who used to do PR for that industry (worked with Accenture, among others), I concur that those are real risks.

      Reply
    2. skippy

Heaps of IT upgrades have gone a bit wonky over here of late – health care payroll, ATO, Centrelink, Census – all assisted by private software vendors and consultants, after, drum roll….. PR management did an “efficiency” drive [by].

Of course after legacy systems [people] were retrenched or shown the door in making government more efficient MBA-style, some did hit the jackpot as consultants and made more than that on the public dime…. but the Gov balance sheet got a nice one-time blip.

      disheveled…. nice self licking icecream cone thingy… and its still all gov fault…. two’fer

      Reply
      1. Colonel Smithers

        Thank you, Skippy.

        It’s the same in the UK as Clive knows and can add.

        In the government, projects “helped” by Siemens, especially at the Home and Passport Offices, cost billions and were abandoned.

        At my former employer, an eagle’s nest, it was Deloittes. At my current employer, which has lost its passion to perform, it’s KPMG and EY helping.

        What I have read / heard is that the external consultants often cost more and will take longer to do the work than internal bidders. The banks and government(s) run an internal market and invite bids.

        Reply
        1. Clive

          Oh, where to start!

          My personal favourite is Accenture / British Gas. But then you’ve also got the masterclass in cockups Raytheon / U.K. Border Agency. Or for sheer breadth of failure, there’s the IT Programme That Helped Kill a Whole Bank Stone Dead (Infosys / Co-op).

They keep writing books on how to avoid this sort of thing. Strangely enough, none of them ever tell CEOs or CIOs to pay people decent wages, not treat them like crap, and to train up new recruits now and again. They also fail to highlight that though you might like to believe you can go into the streets of Mumbai, Manila or Shenzhen waving a dollar bill and have dozens of experienced, skilled and loyal developers run to you like a cat smelling catnip, that may only be your wishful thinking.

          Just wait ’til we get started trying to implement Brexit…

          Reply
          1. Raj

Oh man, if you had a look at the kind of graduates Infosys hires en masse, and the state of graduate programmers coming out of universities here in India, you’d be amazed we still haven’t had massive hacks. And now the government, so confident in the Indian IT industry’s ability to build big IT systems, is pushing for the universal ID system (Aadhaar) to be made mandatory even for booking flight tickets!

            So would you recommend graduates do learn COBOL to get good jobs there in the USA?

            Reply
            1. Clive

              I’d pick something really obscure, like maybe MUMPS — yes, incredibly niche but that’s the point, you can corner a market. You might not get oodles of work but what you do get you can charge the earth for. Getting real-world experience is tricky though.

              Another alternative, a little more mainstream is assembler. But that is hideous. You deserve every penny if you can learn that and be productive in it.

              Reply
              1. visitor

                Is anybody still using Pick? Or RPG?

                Regarding assembler: tricky, as the knowledge is tied to specific processors — and Intel, AMD and ARM keep churning new products.

                Reply
              2. Synoia

I am an assembler expert. I have never seen a job advertised, but I did not look very hard.

                Send me your work!!!

                IBM mainframe assembler…

                Reply
        2. visitor

          What about Computer Associates? For quite a while they proudly maintained the worst reputation amongst all of those consultancy/outsourcing firms.

          How does Temenos compare with Oracle, anyway?

          Reply
  2. MoiAussie

For a bit more on why Cobol is hard to use, see Why We Hate Cobol. To summarise, Cobol is barely removed from programming in assembler, i.e. at the lowest level of abstraction, with endless details needing to be taken care of. It dates back to the punched-card era.

    It is particularly hard for IT grads who have learned to code in Java or C# or any modern language to come to grips with, due to the lack of features that are usually taken for granted. Those who try to are probably on their own due to a shortage of teachers/courses. It’s a language that’s best mastered on the job as a junior in a company that still uses it, so it’s hard to get it on your CV before landing such a job.

    There are potentially two types of career opportunities for those who invest the time to get up-to-speed on Cobol. The first is maintenance and minor extension of legacy Cobol applications. The second and potentially more lucrative one is developing an ability to understand exactly what a Cobol program does in order to craft a suitable replacement in a modern enterprise grade language.

    Reply
      1. MartyH

        Well, COBOL’s shortcomings are part technical and part “religious”. After almost fifty years in software, and with experience in many of the “modern enterprise grade languages”, I would argue that the technical and business merits are poorly understood. There is an enormous pressure in the industry to be on the “latest and greatest” language/platform/framework, etc. And under such pressure to sell novelty, the strengths of older technologies are generally overlooked.

        @Yves, I would be glad to share my viewpoint (biases, warts and all) at your convenience. I live nearby.

        Reply
    1. vlade

      “It is particularly hard for IT grads who have learned to code in Java or C# or any modern language to come to grips with”

which tells you something about the quality of IT education these days, where “mastering” a language is often more important than actually understanding what goes on and how.

My old boss used to say that a good programmer can learn a new language and be productive in it in the space of weeks (and this was at the time when Object Oriented was the huge new paradigm change). A bad programmer will write bad code in any language.

      Reply
      1. craazyboy

        IMHO, your old boss is wrong about that. Precisely because OO languages are a huge paradigm change and require a programmer to nearly abandon everything he/she knows about programming. Then get his brain around OOP patterns when designing a complex system. Not so easy.

        As proof, I put forth the 30% success rate for new large projects in the latter 90s done with OOP tech. Like they say, if it was easy, everyone would be doing it.

More generally, on the subject of Cobol vs Java or C++/C#: in the heyday of OOP’s rollout in the early 90s, corporate IT spent record amounts on developing new systems. As news of the Y2K problem spread, they very badly wanted to replace old Cobol/mainframe legacy systems. As things went along, many of those plans got rolled back due to perceived problems with viability, cost and trained personnel.

Part of the reason was that existing Cobol IT staff took a look at OOP, then at their huge pile of Cobol legacy code, and their brains melted down. I was around lots of them and they had all the symptoms of Snow Crash [Neal Stephenson]. I hope they got better.

        Reply
        1. Marco

It never occurred to me that the OOP-lite character of the newer “hipster” languages (Golang/Go or even plain old JavaScript) is a response to OOP run amok.

          Reply
      2. Arizona Slim

A close friend is a retired programmer. In her mind, knowing how to solve the problem comes first.

        Reply
          1. Tom

A lot of coders I know are all caught up in the technicalities of their language of choice and seem to lose sight of the fact that they are there to solve a problem. Don’t get me wrong, there’s a place for the tech-heads, but there is a greater need for proficient coders with domain knowledge who can just “get shit done”.

            Reply
      3. Mel

        In the university course I took, we were taught Algol-60. Then it turned out that the univ. had no budget for Algol compiles for us. So we wrote our programs in Algol-60 for ‘publication’ and grading, and rewrote them in FORTRAN IV to run in a cheap bulk FORTRAN execution system for results. Splendid way to push home Turing’s point that all computing is the same. So when the job needed COBOL, “Sure, bring it on.”

        Reply
      4. rfdawn

My old boss used to say – a good programmer can learn a new language and be productive in it in the space of weeks (and this was at the time when Object Oriented was the huge new paradigm change). A bad programmer will write bad code in any language.

        Yes. Learning a new programming language is fairly easy but understanding existing patchwork code can be very hard indeed. It just gets harder if you want to make reliable changes.

        HR thinking, however, demands “credentials” and languages get chosen as such based on their simple labels. They are searchable on L**kedIn!

        A related limitation is the corporate aversion to spending any time or money on employee learning of either language or code. There may not be anyone out there with all the skills needed but that will not stop managers from trying to hire them or, better still, just outsourcing the whole mess.

        Either choice invites fraud.

        Reply
      5. reslez

        Your boss was correct in my opinion — but also atypical. Most firms look for multi-years of experience in a language. They’ll toss your resume if you don’t show you’ve used it extensively.

        Even if a new coder spent the time to learn COBOL, if he wasn’t using it on the job or in pretty significant projects he would not be considered. And there aren’t exactly many open source projects out there written in COBOL to prove one’s competence. The limiting factor is not whether you “know” COBOL, or whether you know how to learn it. The limiting factor is the actual knowledge of the system, how it was implemented, and all the little details that never get written down no matter how good your documentation. If your system is 30+ years old it has complexity hidden in every nook and cranny.

As for the language itself, COBOL is an ancient language from a much older paradigm than what students learn in school today. Most students skip right past C; they don’t learn structured programming. They expect to have extensive libraries of pre-written routines available for reuse. And they expect to work in a modern IDE (development environment), a software package that makes it much easier to write and debug code. COBOL doesn’t have tools at this level.

        When I was in the Air Force I was trained as a programmer. COBOL was one of the languages they “taught”. I never used it, ever, and wouldn’t dream of trying it today. It’s simply too niche. I would never recommend anyone learn COBOL in the hopes of getting a job. Get the job first, and if it happens to include some COBOL get the expertise that way.

        Reply
    2. d

having seen the ‘high level code’ in C++, not sure what makes it ‘modern’. its really an outgrowth of C, which is basically the assembler language of Unix, which itself is no spring chicken. mostly what is called ‘modern’ is just the latest fad, has the highest push from vendors. and sadly what we see in IT is that the IT trade magazines are more into what they sell than what companies need (maybe because of advertising?)

as to why schools tend to teach these languages rather than others? mainly cause its hip. its also cheaper for the schools, as they dont have much in the way of infrastructure to teach them (kids bring their own computers). course teachers are as likely to be influenced by the latest ‘shiny’ thing as any one else

      Reply
      1. craazyboy

C++ shares most of the core C spec, but that’s it [variables and scope, datatypes, functions sorta, math and logic operators, logic control statements]. The reason you can read high-level C++ is because it uses objects that hide the internal code and are given names that describe their use, which, if done right, makes the code somewhat readable and self-documenting, along with a short comment header.

Then at high level most code is procedural and/or event driven, which makes it appear to function like C or any other procedural language – without the goto statements and subroutines, because that functionality is now encapsulated within the C++ objects (which are a datatype that combines data structures with the related functions that act on that data).
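A toy sketch of the encapsulation point above, in Python rather than C++ for brevity (the class and method names are invented for illustration): the object hides its internal bookkeeping, so the high-level calling code reads almost like a procedural script.

```python
class Account:
    """Hides its internal representation (integer cents) from callers."""

    def __init__(self, balance_cents=0):
        self._balance_cents = balance_cents  # internal detail, not exposed

    def deposit(self, cents):
        # All the bookkeeping lives inside the object.
        self._balance_cents += cents

    def balance(self):
        # Callers never see the cents representation.
        return self._balance_cents / 100

# High-level code: descriptive names make intent readable at a glance.
acct = Account()
acct.deposit(2500)
acct.deposit(499)
print(acct.balance())  # 29.99
```

The calling code never touches `_balance_cents` directly, which is the "objects hide the internal code" effect described above.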

        Reply
        1. JTFaraday

          So, my babyboom parents were programmers. I’m not, although I took a class or two long long ago. What I hear you saying is that today’s geniuses are using programming languages without really needing to understand the technical precedents that are embedded within them. Just sort of skimming around on the surface, as it were.

          This is a good thing? Do they at least know this? I at least knew that whatever I was ostensibly working in (and C was one of them), it wasn’t “machine language” for example.

          I don’t see how we even get to a place where we can, with a straight face, make a statement like this one:

“Given how narrow-minded employers are, if you get good at COBOL, I would bet it’s assumed you are only capable of doing grunt coding and would never get into the circles to work on the fantasy of getting rich by developing a hip app.”

          It seems pretty apparent to me that most apps are canned cheez wiz, technically speaking, and one of the reasons Uber, for example, has no real intrinsic value. Knowing how all these coding ecosystems, for lack of a better term, work together– now, that’s value.

          Reply
          1. flora

I would bet it’s assumed you are only capable of doing grunt coding

            That tells you a lot about management’s blind spots and failures to understand whole systems and what it is they manage.

            Reply
    3. ChrisPacific

Well put. I was going to make this point. Note that today’s IT grads struggle with Cobol for the same reason that modern airline pilots would struggle to build their own airplane. The industry has evolved and become much more specialized, and standard ‘solved’ problems have migrated into the core toolsets and become invisible to developers, who now work at a much higher level of abstraction. So, for example, a programmer who learned using BASIC on a Commodore 64 probably knows all about graphics coding by direct addressing of screen memory, which modern programmers would consider unnecessary at best and dangerous at worst. Not to mention it’s exhausting drudgery compared to working with modern toolsets.

      The other reason more grads don’t learn COBOL is because it’s a sunset technology. This is true even if systems written in COBOL are mission critical and not being replaced. As more and more COBOL programmers retire or die, banks will eventually reach the point where they don’t have enough skilled staff available to keep their existing systems running. If they are in a position where they have to fix things anyway, for example due to a critical failure, they will be forced to resort to cross-training other developers, at great expense and pain for all concerned, and with no guarantee of success. One or two of these experiences will be enough to convince them that migration is necessary, whatever the cost (if their business survives them, which isn’t a given when it comes to critical failures involving out of date and poorly-understood technology). And while developers with COBOL skills will be able to name their own price during those events, it’s not likely to be a sustainable working environment in the longer term.

      It would take a significant critical mass of younger programmers deciding to learn COBOL to change this dynamic. One person on their own isn’t going to make any difference, and it’s not career advice I would ever give to a young graduate looking to enter IT.

      I am an experienced developer who has worked with a lot of different languages, including some quite low level ones in my early days. I don’t know COBOL, but I am confident that I could learn it well enough to perform code archaeology on it given enough time (although probably nowhere near as efficiently as someone who built a career on it). Whether I could be convinced to do so is another question. If you paid me never-need-to-work-again money, then maybe. But nobody is ever going to do that unless it’s a crisis, and I’m not likely to sign up for a death march situation with my current family commitments.

      Reply
  3. Steve

    “Experienced COBOL programmers can earn more than $100 an hour”

    Then the people hiring are getting them dirt cheap. This is a lot closer to consulting than contracting–a very specialized skill set and only a small set of people available. The rate should be $200-300/hour.

    Reply
    1. reslez

      I wonder if it has something to do with the IRS rules that made that guy fly a plane into an IRS office? Because of the rules, programmers aren’t allowed to work as independent consultants. Since their employer/middleman takes a huge cut the pay they receive is a lot lower. Coders with a security clearance make quite a bit but that requires an “in”, getting the clearance in the first place which most employers won’t pay for.

      Reply
    2. d

not any place i know of. maybe in an extreme crunch. cause today most COBOL jobs have been offshored. and maybe thats why kids dont learn COBOL.

      Reply
    3. ChrisPacific

      I had the same thought. Around here if you want a good one, you would probably need to add another zero to that.

      Reply
    1. ejf

you’re right. I’ve seen it in clunky databases at a clothing firm in NY State, a seed and grain distribution facility in Minnesota, and a bank in Minneapolis. They’re horrible, and Yves is right – documentation is completely ABSENT

      Reply
      1. flora

One teensy little point: COBOL, for many original coders, was thought of as self-documenting (pause for eye rolls) because the variable names could be descriptive of exactly what the variable represented, assuming the coders used properly descriptive names. That worked fine as long as the original systems analysts and coders were around. They knew that “X-ytocoverz” meant such-and-such. In that sense, the module code could be seen as self-documenting, as long as the original coders were there to maintain it. But when the original analysts and coders retired or departed? Aye, there’s the rub. Maybe the clear-as-daylight variable names of the original coders were only clear-as-mud to the next people brought on board. And there was no Rosetta Stone to decipher coder-determined variable names into “common language” if the original coders used shorthands of their own devising.

        Little did the original coders know that code they wrote 30-40 years ago would still be in use today, with a new generation of IT people trying to make sense of what they created then. There was no sense of “carved in stone so make it clear to the world” then, is my guess.

        Reply
        1. flora

          And remember: the longer and more descriptive the variable name used, the more typing the coder had to do. (Coding COBOL will teach you to type, if nothing else. ) So the natural inclination was shorthand variable names to reduce the amount of typing required. Each shop devised its own accepted shorthand.

          Reply
      2. bob

        “a seed and grain distribution facility in Minnesota”

I saw one end of that, and what a mess it was. Completely reliant on two people knowing the datasets better than the computer. Sorting by last name… in farm country, there isn’t much diversity in last names. Living on the same road is no guarantee of good relations.

Sorting through invoices to make sure the right Bill Smith got the right bill was, very literally, life-preserving. You can’t send the wrong one – the Smith vs. Smith wars will start up again.

        Reply
    2. d

      in small business, where every penny counts, they dont see the value in documentation. not even when they get big either

      Reply
  4. Disturbed Voter

    No different than the failure of the public sector to maintain dams, bridges and highways. Basic civil engineering … but our business model never included maintenance nor replacement costs. That is because our business model is accounting fraud.

I grew up on Fortran, and Cobol isn’t too different, just limited to two digits to the right of the decimal point. I feel so sorry for these code jockeys who can’t handle a bit of drudgery, who can’t do squat without a gigabyte routine library to invoke. Those languages were used as scripting languages … or report writers, back in the old days.

    Please hire another million Indian programmers … they don’t mind being poorly paid … or the drudgery. Americans and Europeans are so over-rated. Business always complains they can’t hire the right people … some job requires 2 PhDs and we can’t pay more than $30k, am I right? Business needs slaves, not employees.

    Reply
      1. Mel

:) Many decades. 6 or 7, at least. It was when the modern language Pascal MT+ finally came up with an S9(15)V9(4) real type that we could do business computing with it.
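For readers who haven’t met COBOL picture clauses: S9(15)V9(4) declares a signed fixed-point number with 15 digits before an implied decimal point and 4 after. A rough sketch of that behaviour in Python, using the standard decimal module (the function name and the overflow check are just for illustration):

```python
from decimal import Decimal, ROUND_HALF_UP

def pic_s9_15_v9_4(value):
    """Emulate a COBOL PIC S9(15)V9(4) field: signed, 15 integer
    digits, 4 fractional digits, fixed-point rounding."""
    d = Decimal(str(value)).quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)
    if abs(d) >= Decimal(10) ** 15:
        raise OverflowError("value exceeds PIC S9(15)V9(4)")
    return d

print(pic_s9_15_v9_4("1234.56789"))  # 1234.5679
```

The point of Mel’s comment stands: until mainstream languages grew a decimal fixed-point type like this, exact business arithmetic was COBOL’s home turf.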

        Reply
      2. Disturbed Voter

If you use fancy floating point, your accounting can be skimmed a fraction of a cent at a time. Preventing that is why blocks against using more than two decimal places are put in place. Floating point is necessary in engineering, but we have Fortran for that.
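The residue being alluded to is easy to demonstrate: binary floating point cannot represent most decimal fractions exactly, so totals drift by tiny amounts, which is exactly what fixed-point decimal arithmetic avoids. A minimal sketch in Python (not COBOL, but the same principle):

```python
from decimal import Decimal

# Ten entries of $0.10 each. In binary floating point the total is not
# exactly $1.00 -- the sub-cent residue that skimming schemes exploit.
float_total = sum([0.10] * 10)
decimal_total = sum([Decimal("0.10")] * 10, Decimal("0"))

print(float_total)    # 0.9999999999999999
print(decimal_total)  # 1.00
```

COBOL’s fixed-decimal PIC fields behave like the `Decimal` line: every intermediate result lands exactly on a representable two-decimal (or four-decimal) value, leaving no fraction of a cent to sweep up.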

        And I am aware that used-programming-language-salesmen have even come up with object-oriented Cobol. But I wasn’t born yesterday. So no thanks.

        Reply
  5. clarky90

    The ‘Novopay debacle’

    This was a “new payroll” system for school teachers in NZ. It was an ongoing disaster. If something as simple (?) as paying NZ teachers could turn into such a train-wreck, imagine what updating the software of the crooked banks could entail. I bet that there are secret frauds hidden in the ancient software, like the rat mummies and cat skeletons that one finds when lifting the floor of old houses.

    https://en.wikipedia.org/wiki/Novopay

“Novopay is a web-based payroll system for state and state-integrated schools in New Zealand, processing the pay of 110,000 teaching and support staff at 2,457 schools… From the outset, the system led to widespread problems with over 8,000 teachers receiving the wrong pay and in some cases no pay at all; within a few months, 90% of schools were affected…”

    “Many of the errors were described as ‘bizarre’. One teacher was paid for 39 days, instead of 39 hours getting thousands of dollars more than he should have. Another teacher was overpaid by $39,000. She returned the money immediately, but two months later, had not been paid since. A relief teacher was paid for working at two different schools on the same day – one in Upper Hutt and the other in Auckland. Ashburton College principal, Grant McMillan, said the ‘most ludicrous’ problem was when “Novopay took $40,000 directly out of the school bank account to pay a number of teachers who had never worked at the college”.

    Can you imagine this, times 10,000,000????

    Reply
  6. vlade

    “but the huge shortcoming of COBOL is that there are no equivalent of editing programs. Every line of code in a routine must be inspected and changed line by line”
    I’m not sure what you mean by this.

    If you mean that COBOL doesn’t have the flashy new IDEs that can do smart things with “syntactic sugar”, then it really depends on the demand. Smart IDEs can be written for pretty much any language (smart IDEs work by operating on ASTs, which are part and parcel of any compiler. The problem is more what to do if you have externalised functions etc., which is for example why it took so long for those smart IDEs to work with C++ and its linking model). The question is whether it pays – and a lot of old COBOL hands eschew anything except vi (or equivalent) because coding should be done by REAL MEN.
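
    The point that smart IDE features operate on the compiler’s AST can be sketched in a few lines of Python using the standard-library `ast` module. This is a toy rename-refactoring (the function names are invented for illustration); real IDEs do the same walk over a much richer tree:

```python
import ast

# Source to refactor: rename the function 'gross' everywhere it appears.
source = """
def gross(amount, rate):
    return amount * rate

total = gross(100, 1.2)
"""

class Rename(ast.NodeTransformer):
    """Rewrite every definition of and reference to one identifier."""
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):          # call sites and plain references
        if node.id == self.old:
            node.id = self.new
        return node

    def visit_FunctionDef(self, node):   # the definition itself
        if node.name == self.old:
            node.name = self.new
        self.generic_visit(node)
        return node

tree = Rename("gross", "gross_amount").visit(ast.parse(source))
print(ast.unparse(tree))   # definition and call site both renamed
```

Nothing here is COBOL-specific: given a parser that yields an AST, the same transformer pattern works for any language, which is vlade’s point that the obstacle is demand, not feasibility.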

    On the general IT problem. There are three problems, which are sort of related but distinct.

    The first problem is the interconnectedness of the systems. Especially for a large bank, it’s not often clear where one system ends and the other begins, what are the side-effects of running something (or not running), who exactly produces what outputs and when etc. The complexity is more often at this level than cobol (or any other) line-by-line code.

    The second problem is the IT personnel you get. If you’re unlucky, you get coding monkeys, who barely understand _any_ programming language (there was a time I didn’t think people like that got hired. I now know better), and have no idea what analytical and algorithmic thinking is. If you’re lucky, you get a bunch of IT geeks, who can discuss the latest technology till the cows come home, and know the intricate details of what a sequence point in C++ is and how it affects execution, but don’t really care that much about the business. Then you get some possibly even brilliant code, but often also unnecessary technological artifacts and new technologies just because they are fun – even though a much simpler solution would work just as well if not better. TBH, you can get this from the other side too: someone who understands the business but doesn’t know even basic language techniques, which generally means their code works very well for the business but is a nightmare to maintain (a typical population of this group is front office quants).

    If you are incredibly lucky, you get someone who understands the business and happens to know how to code well too. Unfortunately, this is almost a mythical beast, especially since neither IT nor the business encourages people to understand each other.

    Which is what gets me to the third point – the politics of it. And that, TBH, is why most projects fail. Because it’s easier to staff a project with 100 developers and then say all that could have been done was done, than to get 10 smart people working on it but risk that, if it fails, you get told you haven’t spent enough resources. “We are not spending enough money” is paradoxically one of the “problems” I often see here, when the problem really is “we’re not spending money smartly enough”. Because in an organization, budget=power. I have yet to see an IT project with 100+ developers that _really_ succeeded (as opposed to succeeding by redefining what it was to deliver to what was actually delivered).

    Oh, and a last point, on documentation. TBH, documentation of the code is superfluous if a) it’s clear what business problem is being solved, b) there is a good set of test cases, and c) the code is reasonably cleanly written (which tends to be the real problem). Documenting code by anything but example is in my experience just a costly exercise. Mind you, this is entirely different from documenting how systems hang together and how their interfaces work.

    Reply
    1. Yves Smith Post author

      On the last point, I have to tell you I in short succession happened to work not just with O’Connor, but about a year later, with Bankers Trust, then regarded as the other top IT shop on Wall Street. Both CIOs would disagree with you vehemently on your claim re documentation.

      Reply
      1. vlade

        Yes, in the 90s there was a great deal of emphasis on code documentation. The problem with that is that requirements in the real world change really quickly. Development techniques that worked for sending men to the moon don’t really work well on short-cycle, user-driven developments.

        The 90s were mostly the good old waterfall method (which was really based on the NASA techniques), but even as early as the 2000s it started to change a lot. Part of it came from the realization that the “building” metaphor that underpinned a lot of that didn’t really work for code.

        When you’re building a bridge, it’s expensive, so you have to spend a lot of time with blueprints etc. When you’re doing code, documenting it in “normal” human world just adds a superfluous step. It’s much more efficient to make sure your code is clean and readable than writing extra documents that tell you what the code does _and_ have to be kept in sync all the time.

        Moreover, bits like pretty pictures showing the code interaction, dependencies and sometimes even more can now be generated automatically from the code, so again, it’s more efficient to do that than to keep two different versions of what should be the same truth.

        Reply
        1. Yves Smith Post author

          With all due respect, O’Connor and Bankers Trust were recognized at top IT shops then PRECISELY because they were the best, bar none, at “short cycle user driven developments.” They were both cutting edge in derivatives because you had to knock out the coding to put new complex derivatives into production.

          Don’t insinuate my clients didn’t know what they were talking about. They were running more difficult coding environments than you’ve ever dealt with even now. The pace of derivative innovation was torrid then and there hasn’t been anything like it since in finance. Ten O’Connor partners made $1 billion on the sale of their firm, and it was entirely based on the IT capabilities. That was an unheard of number back then, 1993, particularly given the scale of the firm (one office in Chicago, about 250 employees).

          Reply
          1. vlade

            Yves,

            I can’t talk about how good/bad your clients were except in generic statements – and the above were generic statements that in the 90s MOST companies used waterfall.

            At the same time, please do not talk about what programming environments I was in, because you don’t know. That’s assuming it’s even possible to compare coding environments – because a quant library that first and foremost concentrates on processing data (and I don’t even know whether that was the majority of your clients’ code) is a very, very different beast from an extremely UI-complex but computationally trivial project, or from something that has both trivial UI and computation but is very database-heavy, etc.

            I don’t know what specific techniques your clients used. But the fact they WANTED to have more documentation doesn’t mean that having more documentation would ACTUALLY be useful.

            With all due respect, I’ve spent the first half of 00s talking to some of the top IT development methodologists of the time, from the Gang Of Four people to Agile Manifesto chaps, and practicing/leading/implementing SW development methodology across a number of different industries (anything from “pure” waterfall to variants of it to XP).

            The general agreement across the industry was (and I believe still is) that documenting _THE CODE_ (outside of the code) was a waste of time (actually the position ranged from rejecting any design doc to accepting various levels of design doc, depending on who you were talking to).

            Again, I put emphasis on the code – that is not the same as say having a good whitepaper telling you how the model you’re implementing works, or what the hell the users actually want – i.e. capturing the requirements.

            As an aside – implementation of new derivative payoffs can actually be done in a fairly trivial way, depending on how exactly you model them in the code. I’ve written an extensive library that did just that, whose whole purpose was to deal with new products and allow them to be incubated quickly and effectively – and that most likely involved doing things that no one at BT/O’Connor even looked at in the early 1990s (because XVA wasn’t even a gleam in anyone’s eye at that time).

            Reply
            1. Clive

              Well, at my TBTF, where incomprehensible chaos rules, the only thing — and I do mean the only thing — that keeps major disasters averted (perhaps “ameliorated” is putting it better) is that some of the key systems are documented. Most of the core back end is copiously and reasonably well documented and as such can survive a lot of mistreatment at the hands of the current outsourcer du jour.

              But some “lower priority” applications are either poorly documented or not documented at all. And a “low priority” application is only “low priority” until it happens to sit on the critical path. Even now I have half of Bangalore (it seems so, at any rate) sitting there trying to reverse engineer some sparsely documented application — although I suspect there was documentation, it just got “lost” in a succession of handovers — desperate in their attempts to figure out what the application does and how it does it. You can hear the fear in their voices; it is scary stuff, given that a crappy-little-VB6-pile-of-rubbish is now the only way to manage a key business process. Where there are no usable comments in the code and no other application documentation, you are totally, totally screwed.

              Reply
            2. Skip Intro

              It seems like you guys are talking past each other to some degree. I get the sense that vlade is talking about commenting code, and dismissing the idea of code comments that don’t live with the code. Yves’ former colleagues are probably referring to higher level specifications that describe the functionality, requirements, inputs, and outputs of the various software modules in the system.
              If this is the case, then you’re both right. Even comments in the code can tend to get out of date due to application of bug fixes, and other reasons for ‘drift’ in the code, unless the comments are rigorously maintained along with the code. Were the code-level descriptions maintained somewhere else, that would be much more difficult and less useful. On the other hand, the higher-level specifications are pretty essential for using, testing, and maintaining the software, and would sure be useful for someone trying to replace all or parts of the system.

              Reply
              1. Clive

                In my experience you need a combination of both. There is simply no substitute for a brief line in some ghastly nested if/then procedure that says “this section catches host offline exceptions if the transaction times out and calls the last incremental earmarked funds as a fallback” or what-have-you.

                That sort of thing can save weeks of analysis. It can stop an outage from escalating from a few minutes to hours or even days.
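
                As a toy illustration of Clive’s point (Python rather than COBOL, with entirely hypothetical names mirroring his earmarked-funds example), the one comment on the fallback branch is what saves the later weeks of analysis:

```python
class Host:
    """Toy stand-in for a settlement host; names are invented."""
    def __init__(self, online, earmarked):
        self._online = online
        self._earmarked = earmarked

    def online(self):
        return self._online

    def settle(self, amount):
        return ("settled", amount)

    def last_earmarked_snapshot(self):
        return self._earmarked


def settle(amount, host):
    if host.online():
        return host.settle(amount)
    # Host offline: the transaction timed out upstream, so fall back to
    # the last incremental earmarked-funds figure rather than rejecting
    # the payment outright.  This is exactly the branch where, without
    # the comment, an outage analysis starts from zero.
    return ("fallback", min(amount, host.last_earmarked_snapshot()))


print(settle(50, Host(online=False, earmarked=80)))  # takes the fallback path
```

The code alone says *what* happens; only the comment records *why* the fallback is safe, which is the knowledge that evaporates in handovers.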

                Reply
      2. ChrisPacific

        Documentation is a complex issue. I would not say it’s superfluous as good documentation can be worth its weight in gold, especially for systems that are old, don’t change much or that current employees don’t know very much about. That said, a document is an artefact just like code is, and needs to be tested, verified and kept up to date in the same way. Every document you create adds to the size, complexity and maintenance overhead of the system. That means that for every document created, you should be able to say:

        1. What it’s for (including what business function it serves to have it)
        2. How and when it will be updated, and who will do it (includes how it will be managed, budgeted and planned)
        3. How you will verify that it’s accurate and correct
        4. How critical it is (if you are running short of time/money, would you cut the documentation in order to go live, or would you delay/cancel the project?)

        If you have been indoctrinated with a mantra like “documentation is important” then the temptation is to create reams and reams of it (the 300 page design document, for example) and many organizations even have processes that require this. Most of it is a waste of time. Documentation should be targeted in order to support a particular function, should be the minimum required in order to do that, and should be factored into the planning from the start – not just the production of the initial document, but its total lifecycle. Bear in mind that nearly all developers hate documentation, so you are relying on either their professionalism (if you are lucky) or employer coercion (if you aren’t) to get it done at all, let alone to a high standard. And that’s all assuming the employer even prioritizes it in the first place, and it hasn’t been one of the things on the chopping block when things get tight, as it very often is.

        Requirements and specification documents are very often important (e.g. in government settings, where low-level coding decisions can quite literally have policy implications, and it’s important for non-technical people to understand the connection). Architecture documents are often useful. Developer’s guides can be useful if they are high level and geared toward explaining the quirks of the particular application and how to work with it (that said, writing a good one is an art, and not many people do it well). Low-level code documentation is very seldom useful unless it’s done in the code itself, and by and large a developer who doesn’t write good code won’t write good documentation either (although I do agree with Clive that it’s not always possible to write self-evident logic in code, and a well-placed comment goes a long way in those cases). Highly detailed or extensive documents of any kind are probably useful only as a snapshot in time unless there is a plan in place to keep them up to date that is clear, well-funded, and supported by business objectives (so that it’s not likely to get cut in future). Snapshot-in-time documentation can be useful, but can also be dangerous if developers aren’t diligent about confirming it against the actual code. Unless the documentation has a business purpose (e.g. policy compliance) then it’s often more efficient to simply go to the code directly and figure out what is really going on.

        Reply
    2. JTFaraday

      “If you are incredibily lucky, you get someone who understands the business and happens to know how to code well too. Unfortunately, this is almost a mythical beast, especially since neitehr IT nor the business encourage people to understand each other.”

      In the 1980s, my mother was hired as a trained programmer, then learned insurance over the course of her career and was incentivized to do so. THEN they brought in Tata. With today’s resulting job crapification, you can’t get this. No can haz.

      Reply
  7. Mathiasalexander

    They could try building the new system from scratch as a stand alone and then entering all the data manually.

    Reply
    1. Ivy

      There is some problem-solving/catastrophe-avoiding discussion about setting up a new bank with a clean, updated (i.e., this millennium) IT approach and then merging the old bank into that and decommissioning that old one. Many questions arise about applicable software both in-house and at all those vendor shops that would need some inter-connectivity.

      Legacy systems lurk all over the economy, from banks to utilities to government and education. The O’Connor CIO advice relating to life-cycle costing was probably unheard in many places besides The Street.

      Reply
    2. d

      Building them from scratch is usually the most likely to fail, as too many in both IT and business only know parts of the needs. And if a company can’t implement a vendor-supplied package to do the work, what makes us think they can do it from scratch?

      Reply
  8. visitor

    I did learn COBOL when I was at the University more than three decades ago, and at that time it was already decidedly “uncool”. The course, given by an old-timer, was great though. I programmed in COBOL in the beginnings of my professional life (MIS applications, not banking), so I can provide a slightly different take on some of those issues.

    As far as the language itself is concerned, disregard those comments about it being like “assembly”. COBOL already showed its age in the 1980s, but though superannuated it is a high-level language geared at dealing with database records, money amounts (calculations with controlled accuracy), and reports. For that kind of job, it was not that bad.

    The huge shortcoming of COBOL is that there are no equivalent of editing programs.

    While in the old times a simple text editor was the main tool for programming in that language, modern integrated, interactive development environments for COBOL have been available for quite a while — just as there are for Java, C++ or C#.

    And that is a bit of an issue. For, already in my time, a lot – possibly most – COBOL was not programmed manually, but generated automatically, typically from pseudo-COBOL annotations or functional extensions inside the code. Want to access a database (say Oracle, DB2, Ingres) from COBOL, or generate a user interface (for 3270 or VT220 terminals in those days), or perform some networking? There were extensions and code generators for that. Nowadays you will also find coding utilities to manipulate XML or interface with routines in other programming languages. All introduce deviations and extensions from the COBOL norm.
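
    To make the code-generation point concrete: an embedded-SQL precompiler rewrites pseudo-COBOL annotations into plain calls before the real compiler ever sees the source. A toy sketch of the idea in Python (the `SQLRUN` runtime routine and the rewrite format are invented for illustration; real precompilers such as IBM’s DB2 precompiler are far more involved):

```python
import re

# Input: COBOL with an embedded-SQL annotation block.
source = """\
MOVE CUST-ID TO WS-KEY.
EXEC SQL SELECT NAME INTO :WS-NAME FROM CUSTOMERS WHERE ID = :WS-KEY END-EXEC.
DISPLAY WS-NAME.
"""

def precompile(text):
    # Replace each EXEC SQL ... END-EXEC block with a call into a
    # (hypothetical) SQL runtime library, keeping the original statement
    # as a trailing COBOL comment for traceability.
    return re.sub(
        r"EXEC SQL\s+(.*?)\s+END-EXEC\.",
        lambda m: 'CALL "SQLRUN" USING SQL-STMT-1.  *> was: ' + m.group(1),
        text,
        flags=re.DOTALL,
    )

print(precompile(source))  # pure COBOL, no EXEC SQL left
```

This is why maintaining such systems means knowing the whole toolchain, not just the language: the code you debug in production is the generated output, not the annotated source the original team wrote.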

    If, tomorrow, I wanted to apply for a job at one of those financial institutions battling with legacy software, my rusty COBOL programming skills would not be the main problem, but my lack of knowledge of the entire development environment. That would mean knowing those additional code generators, development environments, extra COBOL-geared database/UI/networking/reporting modules. In an IBM mainframe environment, this would probably mean knowing things like REXX, IMS or DB2, CICS, etc (my background is DEC VMS and related software, not IBM stuff).

    So those firms are not holding dear onto just COBOL programmers — they are desperately hoarding people who know their way around in mainframe programming environments for which training (in Universities) basically stopped in the early 1990s.

    Furthermore, I suspect that some of those code generators/interfaces might themselves be decaying legacy systems whose original developers went out of business or have been slowly withdrawing from their maintenance. Correcting or adjusting manually the COBOL code generated by such tools in the absence of vendor support is lots of fun (I had to do something like that once, but it actually went smoothly).

    Original programmers rarely wrote handbooks

    My experience is that proper documentation has a good chance to be rigorously enforced when the software being developed is itself a commercial product to be delivered to outside parties. Then, handbooks, reference manuals and even code documentation become actual deliverables that are part of the product sold, and whose production is planned and budgeted for in software development programmes.

    I presume it is difficult to ensure that effort and resources be devoted to document internal software because these are purely cost centers — not profit centers (or at least, do not appear as such directly).

    That is not to say that it is impossible to move off legacy platforms

    So, we knew that banks were too big to fail, too big to jail, and are still too big to bail. Are their software problems too big to nail?

    Reply
    1. d

      Actually I suspect banks, like the rest of business, don’t really care about their systems till they are down, as they will find the latest offshore company to do it cheaper.

      Reply
    2. Yves Smith Post author

      Why then have I been told that reviewing COBOL programs for Y2K had to be done line by line? Has something changed since then?

      I said documentation, not handbooks. And you are assuming banks hired third parties to do their development. Buying software packages and customizing them, as well as greater use of third party vendors, became a common practice only as of the 1990s.

      Reply
  9. JTMcPhee

    I’m in favor of the “Samson Option” in this area.

    I know it will screw me and people I care about, and “throw the world economy into chaos,” but who effing cares (hint: not me) if the code pile reaches past the limits of its angle of repose, and slumps into some chaotic non-form?

    Maybe a sentiment that gets me some abuse, but hey, is it not the gravamen of the story here that dysfunction and then collapse are very possible, maybe even likely?

    And where are the tools to re-build this Tower of Babel, symbol of arrogant pride? Maybe G_D has once again, per the Biblical story, confounded the tongues of men (and women) to collapse their edifices and reduce them to working the dirt (what’s left of it after centuries of agricultural looting and the current motions toward temperature-driven uninhabitability.)

    Especially interesting that people here are seemingly proud of having taken part successfully in the construction of the whole derivatives thing. Maybe I’m misreading that. But what echoes in my mind in this context is the pride that the people of Pantex, https://en.wikipedia.org/wiki/Pantex_Plant , have in their role in keeping the world right on the ragged edge of nuclear “Game Over.” On the way to Rapture, because they did G_D’s work in preparing Armageddon. http://articles.chicagotribune.com/1986-09-05/features/8603060693_1_pantex-plant-nuclear-weapons-amarillo

    “What a wondrous beast this human is…”

    Reply
  10. ChrisAtRU

    #Memories

    My first job out of uni, I was trained as a MVS/COBOL programmer. After successfully completing the 11-week pass/fire course, I showed up to my 1st work assignment where my boss said to me, “Here’s your UNIX terminal.”

    ;-) – COBOL didn’t strike me as difficult, just arcane and verbose. Converting to SAP is a costly nightmare; that caused me to leave a job once … I had no desire to deal with SAP/ABAP. I’m surprised no one has come up with an acceptable next-gen thing. I remember years ago seeing an ad for Object-Oriented COBOL in an IT magazine and I almost pissed myself laughing. On the serious side, if it’s still that powerful and well represented in banking, perhaps someone should look into an upgraded version of the language/concepts and build something easy to lift and shift … COBOL++?

    Wherefore are ye startup godz

    #OnlyHalfKidding

    #MaybeNot

    Reply
  11. Peewee

    This sounds like an opportunity for a worker’s coop, to train their workers in COBOL and to get back at these banks by REALLY exploiting them good and hard.

    Reply
  12. susan the other

    so… is this why no one is willing to advocate regulating derivatives in an accountable way? i almost can’t believe this stuff. i can’t believe that we are functioning at all, financially. 80% of IT projects fail? and if legacy platforms are replaced at great time and expense, years and trillions, what guarantee is there that the new platform will not spin out just as incomprehensibly as COBOL based software evolved, with simplistic patches of other software lost in translation? And maybe many times faster. Did Tuttle do this? I think we need new sophisticated hardware, something even Tuttle can’t mess with.

    Reply
    1. Skip Intro

      I think it is only 80% of ‘large’ IT projects that fail. I think it says more about the lack of scalability of large software projects, or our (in)ability to deal with exponential complexity growth…

      Reply
  13. flora

    I stand in awe of this comments section. I’ve nothing useful to add. This problem sounds as difficult to resolve as re-laying a ship’s keel… while the ship is at sea and under way.

    Reply
  14. David McClain

    Honestly, while it (the large COBOL codebase) may be a huge risk, I see banks, worldwide, continuing to ignore the greater immediate risk of software security.

    Banks and Insurance companies were known as “training grounds” way back when I began in the late 1960’s. They hired any breathing body to become programmers, and paid as little as possible for their work. And it seems they haven’t changed much over the following decades. They see fit to spend as little as possible protecting the customers (and themselves) – and this COBOL repository is certainly an example of that failure to protect.

    Just recently, a major bank in Brazil was ripped off by some clever hackers. The banking industry’s lax attitudes have contributed greatly to a determined, and hence well paid, community of organized crime. The pickings are so easy for them, and, as a result, the criminals can certainly afford to invest in their own technologies. The criminals will only be getting better at what they do.

    Reply
  15. Wade Riddick

    I think maybe we’ve missed an obvious possible outcome here. At some point the cost of aging software infrastructure for the banks outweighs the cost of Silicon Valley starting its own physical banks. The banks implode under the hack attacks and software companies/disrupters write their own code and pick up the physical infrastructure for pennies. Maybe there’s a Tesla/Detroit dichotomy coming but for banks coming down the road.

    Reply
    1. bob

      Elon against Detroit? It’s a good comparison, but not in the way you conclude.

      Automakers and banks are all about scale. Bigger, to a point, is much, much better. They’re also both capital hungry. Elon makes how many cars a year? Less than 1/10 of one percent of the market? “Detroit” has nothing to worry about.

      The SV guys also won’t ever do Banking. They pay lobbyists a lot for these semantics. Paypal is already a bank in most respects, except that it doesn’t want to have to be responsible to customers (depositors). Calling Paypal a bank makes a bunch of regulations kick in, as well as capital requirements, etc. Paypal has gone out of its way to eschew any regulation, handing over whatever is asked for by law enforcement. Banks can’t do this without a warrant. They’ve also been able to avoid regulation through backroom deals with taxing authorities and, again here, LE.

      The SV people are already neck deep in ‘banking’, as close to ‘physical’ as is possible without using the Bank name, or assuming any responsibility for ‘deposits’. There’s a reason for that.

      There are very real costs to it, which a lot of people just handwave away in the name of Disrupt!

      Why are they spending so much time and effort to distance themselves from the word “bank”?

      Reply
      1. Wade Riddick

        And maybe it’s another regulated business, like ISPs, which moves in. Think about it. They’ve got a network, they’re used to regulation as a “utility” (bit of a joke these days) and they’ve got more of your financial profile in some ways via your web history than the banks do.

        And I don’t think Detroit is scared so much of Tesla as they are of Google. It’s all about what you do with the data… and banks’ IT problems start at the root. How do they ever intend to compete with higher order applications?

        There is another interesting dichotomy in the shadow banking system. Just as banks have shed many of their functions to hedge funds, so too has the software development center in finance shifted. Private front-running, high-frequency trading algorithms get top dollar investment while the rudimentary fundamentals of banking get crumbs. The banking infrastructure is being deliberately hollowed out towards rent-seeking. A.I. in general poses a set of challenges but they’re going to aggravate the social issues by applying A.I. chiefly to their skimming operations.

        Reply
  16. ed

    My cousin worked for some credit card processing company in Atlanta and said everything was in Assembly. I guess that’s preferable to COBOL in a sense, but I’m not sure.

    Reply
    1. ChrisPacific

      Now that is frightening. Writing business logic in Assembly is a little like making a cup of tea by assembling it one atom at a time using a particle accelerator. Even if the guy doing it is a genius and makes the best cup of tea you’ve ever had, you should still be worried (what about when he leaves and the café staff have to do it?)

      Reply
  17. Frit

    Are there not issues of supportable hardware for these legacy systems? Don’t hardware maintenance costs go up and up? I assume there are also specialized hardware bits that are becoming unavailable.

    Reply
