From time to time, we’ve written about how bank IT is a systemic risk waiting to happen. Major financial firms have legacy code at the core of their systems that they can’t migrate off at acceptable costs and risk (numerous banks have had a go at this issue, and projects wind up being shelved; at best, they can port only some products or customers off the aging systems). Readers, even ones who are in IT but not in banking, sometimes scoff at what we have said.
The disaster at TSB should serve as a big wake-up call. The very short version is that a UK bank, TSB, which had been merged into and then many years later spun out of Lloyds Bank, was bought by the Spanish bank Banco Sabadell in 2015. Lloyds had continued to run the TSB systems and was to transfer them to Sabadell over the weekend. It’s turned out to be an epic failure, and it’s not clear if and when this can be straightened out.
It is bad enough that the bank’s IT problems have been so severe and protracted that a major newspaper, The Guardian, created a live blog for them that has now been running for two days.
The more serious issue is that customers still can’t access their online accounts and, even more disconcerting, are sometimes being allowed into other people’s accounts, which says there are massive problems with data integrity. That’s a nightmare to sort out.
Even worse, the fact that this situation has persisted strongly suggests that Lloyds went ahead with the migration without allowing for a rollback. If true, this is a colossal failure, particularly in combination with the other probable planning failure, that of not remotely adequate debugging (while there was a pilot, it is inconceivable that it could have been deemed to be a success if the testing had been adequate).
Let’s turn the mike over to the Telegraph:
Customers of TSB continued to complain of being unable to access their accounts on Tuesday morning as the bank’s IT fiasco dragged on into its fifth day.
TSB confirmed its 1.9m customers were still facing “intermittent” problems when attempting to log in to online services after a bungled switchover from a system the bank had been renting from its old owner Lloyds Banking Group.
Customers had been warned the transfer of 1.3 billion customer records to a new system could affect services from 4pm on Friday to 6pm on Sunday – but the disruption continued overnight and into Monday and Tuesday.
A look at Twitter suggests that “intermittent” is not exactly accurate. These tweets are from this morning:
@TSB Finally get the option to log in and it says my username/password are incorrect! They definitely aren’t ? Is anyone else having this problem or have I been hacked as well as messed about? #TSB #TSBdown
— Lola Moreno LaVey (@Lucky_Lola_) April 24, 2018
#tsbdown this is what im getting, hope im a zillionaire when i finally log in! Its beyond a joke now tho, dont know where i am with my money! pic.twitter.com/bbWB8a9p72
— steve-o (@smellyama) April 24, 2018
TSB this is bloody ridiculous. I’ve got important payments to make today with no online and phone access to my money. Two unwell kids to feed and I can’t pull them out of bed to go shopping ?! #tsbdown #fixit
— Laura Meres (@meres_laura) April 24, 2018
This is the only thing working so far.#tsbdown pic.twitter.com/q1gElV9OBQ
— Eduardo Marin (@emarinuk) April 24, 2018
This one was a mere 15 minutes ago:
It’s my birthday and I have no access to my bank account – ‘well done’, TSB. It’s day number 5 and you are still ‘down for maintenance’… How much longer will that take? :( @TSB #TSB #TSBfail #TSBdown #happybirthday #JokeoftheDay pic.twitter.com/gkj1ZJmnuI
— Sandra Kitlinska (@sandrakitlinska) April 24, 2018
And before you dismiss these tweets as mere upset customer noise, hard core techies see signs of meltdown-level problems:
Are @TSB developers seriously coding live changes in production?? I’ve found loads of ugly debugging logs in the code which are being spat out in the browser console. I don’t dare to imagine what mess they have in the back-end. #TSB #tsbdown pic.twitter.com/qHh5sbnUIS
— Martin (@apphancer) April 24, 2018
Before we go into the details, it appears that online banking is not working, or working for so few customers as to be effectively not working. Phone banking is swamped but on top of that, customers are reporting on Twitter that their user IDs and logins aren’t working for phone banking either.
Branch banking or ATMs do not seem to be on tilt, but like the person with the sick kids above, many people aren’t in a position to go to the branch as a backup, and if they did go, the branches are likely to have huge lines. And this tweet a mere seven minutes ago suggests some branches are down:
So @TSB your service is down again and that includes in branch (Leeds). My patience with you has expired! Expect @TheFCA involvement! #tsbdown
— Stephanie Hayden (@flyinglawyer73) April 24, 2018
The overview is that Lloyds Bank, one of the four former British “clearing banks,” made a series of acquisitions. Its first large deal was for TSB. As a result of the crisis, it acquired HBOS, which itself was a merger of Halifax Building Society and the Bank of Scotland.
European banking regulators deemed the combined banks to be too big and concentrated. Rather than sell some products or some regional operations, Lloyds TSB decided to demerge TSB. That was already a bit, erm, gutsy, since TSB would presumably be well integrated from an IT perspective by now and therefore not necessarily so easy to hive off. From a 2013 Guardian story:
Twenty years after disappearing from the high street, the TSB bank will reappear in towns across the UK on Monday when more than 630 branches that were Lloyds units on Friday reopen with a new identity…
Lloyds has been forced to split off and rebrand the TSB branches by the EU as a result of the £20bn of taxpayer money pumped into the bank during the 2008 bailouts. It has pledged to turn TSB back to its heritage as a “local” bank. The 631 branches were scheduled to be sold to the Co-operative Bank but the collapse of that deal earlier this year means it is likely the TSB network will be floated on the stock market as a separate bank.
The TSB is being unveiled a week before an industry-wide current account switching service is launched to reduce the hassle of moving a bank account.
One would also assume that any buyer would make sure that the buyer’s and seller’s systems were sufficiently compatible. In the US going back to the 1990s, many a promising-seeming banking deal was scuppered because the integration issues looked too hairy. So as much as it’s easy to point fingers at Lloyds, Sabadell is the one that should have made the call as to whether they could successfully port the data and needed routines from the Lloyds/TSB systems. As indicated, that has long been a major due diligence issue in US bank mergers.
Sabadell appears to have decided to move the TSB customers onto an entirely new system.
As our house IT expert Richard Smith pointed out, based on a Computer Weekly interview with the Sabadell CIO:
If I read this right (and it’s not just hype) the target system is “brand new”, which is to say, untested (or nearly untested). That’s a whole other dimension of extra risk.
“We are in the process of cutting the rope, and the first step is having a core platform up and running. We are now working with Lloyds Bank on the data migration,” Abarca told Computer Weekly. “We have built a new technology platform for TSB, but this is not just a technology refresh or upgrade of an existing core banking system. It is a brand new core banking system.”
They *did* do a pilot, but clearly it wasn’t effective, for reasons we have yet to understand:
Proteo4UK, as the new platform in the UK is known, was rolled out to the bank’s staff in November 2017 with a full range of banking services. It will move to a full roll-out in the first quarter of 2018.
That suggests that Sabadell knew its IT systems were not compatible with TSB’s, and rather than do the sensible and normal thing, which would be to nix the deal, it went ahead based on the naive belief that it could build something new that would work. It’s hard enough to make a “new” retail bank IT system work, let alone roll it out by porting a ton of data from an old banking system into it.
We may have a better sense soon, but there are indications not just of data mapping problems, which potentially may be hard to isolate but not necessarily hard to fix once found, but of data corruption, such as wildly incorrect account information (zero balances, incorrect currencies, massively inflated mortgage amounts, and e-mails saying that there are no records of recent direct debits). If there are indeed problems at the books-and-records level, and, as we suspect, the changes can’t be rolled back, this could produce a world of hurt for customers.
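To make the data-mapping versus data-corruption distinction concrete, here is a minimal, purely hypothetical sketch (Python, with made-up accounts; nothing to do with TSB’s actual systems) of the kind of reconciliation check a migration team runs to catch exactly what customers are reporting: wrong currencies and zeroed balances:

```python
from decimal import Decimal

def reconcile(source_accounts, migrated_accounts):
    """Compare post-migration records against the source system.

    Both arguments map account IDs to (currency, balance) pairs.
    Returns a list of discrepancy descriptions; an empty list means
    the two systems agree on every account checked.
    """
    problems = []
    for acct_id, (src_ccy, src_bal) in source_accounts.items():
        if acct_id not in migrated_accounts:
            problems.append(f"{acct_id}: missing after migration")
            continue
        dst_ccy, dst_bal = migrated_accounts[acct_id]
        if dst_ccy != src_ccy:
            problems.append(f"{acct_id}: currency {src_ccy} -> {dst_ccy}")
        if dst_bal != src_bal:
            problems.append(f"{acct_id}: balance {src_bal} -> {dst_bal}")
    return problems

# The kinds of corruption reported: wrong currency, zeroed balance.
source = {"A1": ("GBP", Decimal("250.00")), "A2": ("GBP", Decimal("1200.50"))}
migrated = {"A1": ("EUR", Decimal("250.00")), "A2": ("GBP", Decimal("0.00"))}
print(reconcile(source, migrated))
```

A systematic mapping error tends to show up the same way on every record, so one fix repairs them all; corruption like the above shows up account by account, which is why it is so much harder to put right.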
Richard Smith found an example he calls “coughing up blood”:
Just want to see my balance and these guys @tsb think I’m robbing a bean factory with a bomb, jesus pic.twitter.com/VRosgbZoML
— Jack Thomson (@thejackthomson_) April 23, 2018
Implies one or both of
a. there wasn’t enough integration testing (the phase of testing where you check that the intra-system interfaces are working correctly).
b. there are corrupt data items, somewhere deep down, that are making the system behave in completely unexpected ways
Needless to say, we’ll be revisiting this topic once the press has more intel on the nature and severity of the IT mess. And as always, reader sightings and observations are of great help. But at least so far, this looks as if this might not be a gunshot wound, as bad as that would be, but gangrene.
Let me guess:
A deadline was imposed long ago. When things started to slip with the project – inevitable with something of this size – instead of the rational thing, which would be to put back the delivery date, the development teams were pushed to work faster and work longer hours.
Which means more mistakes made. It also means, probably, a shift in attitude. If you are coding this sort of absolutely-must-work stuff, progress should be based on ‘all software is assumed to be broken unless proved otherwise’. But under pressure, this can change to ‘all software is assumed to work unless our test scripts show otherwise’ – which sounds similar but is much, much weaker.
So by the time your code is going out of the door – after a series of 14-hour days by exhausted, half-asleep devs – it may technically pass all tests, but will fall over at the first hint of ‘out of test’ data.
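A toy illustration of that point (hypothetical code, not anything from TSB): a routine can pass every scripted test and still fall over on the first piece of “out of test” live data:

```python
def parse_amount(text):
    """Naive parser: handles only the formats the test script happened to cover."""
    return float(text.replace(",", ""))

# The scripted cases all pass...
assert parse_amount("1,250.00") == 1250.0
assert parse_amount("99.99") == 99.99

# ...but the first real-world value blows up: live data carries a currency symbol.
try:
    parse_amount("£1,250.00")
    survived = True
except ValueError:
    survived = False
print(survived)
```

“All software is assumed to work unless our test scripts show otherwise” would sign this off; “all software is assumed to be broken unless proved otherwise” would have asked what happens to input the scripts never imagined.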
That assumes you have test scripts.
It could be “if your testing passes, it’s OK”, and oh, we just outsourced our testing to India (who have no clue as to what should be tested because they don’t understand the system).
Yep. A few years ago I worked on a Dodd-Frank regulatory reporting system for a TBTF bank that’s been in the news recently. The core system that processed the transactions through the decision tree that decided whether a trade needed to be reported and to whom (which regulatory bodies) was written in a rush (lots of overnights on no sleep) with no documentation and no tests. By the time I got there the db schema I had to code against was so unfit for purpose it made a nightmare out of what should’ve been easy work writing a trade report monitoring tool. Classic case of getting it done instead of getting it done right. To this day I don’t understand why such calamitous stupidity goes on in IT. There is no respect for the design, architecture and testing (and potential re-architecting) processes in many IT groups. Fast, cheap and out of control, to steal a line from Errol Morris.
The introduction of ‘Agile Development’ (should be ‘fragile’) as the current software engineering fetish has basically made a virtue of the ‘market necessity’ of skipping architecture, documentation, and other ‘artifacts’ in favor of just shipping something and letting the users iteratively improve the application.
On top of this, systems sold by evangelists for this development style, generate bogus metrics with fancy graphs to allow an extra layer of managers to confabulate reports of progress, suitability, stability, etc.
“It is never almost finished until it ships.”
As someone who has been with Agile since about 2000, I disagree.
The problem is that Agile became a fad – a silver bullet if you will. I’d really like to write a book called ‘There ain’t no silver bullet’ taking apart one fad after the other, showing that things that work with a set of assumptions were taken by management consultants (or others who make money off selling crud) and sold to clients as working always.
The base assumption of agile is that you have, or extremely diligently work to have, a highly self-disciplined team. My comparison is that agile teams are like special forces, while normal development teams are your rank-and-file army.
It’s impossible to turn a 1m strong army into special forces. It just can’t and never will happen.
But you can have the markings and trappings of the special forces (hell, a lot of civilians like to pretend they are SF). But it alone never makes it so. Similarly, a lot of IT depts adopted the trappings of agile, without understanding why it’s there and that it’s (in most cases), really just a means to express the self-discipline that drives it all.
Arguably, most dev teams _could_ become agile, but it goes extremely strongly against most corporate cultures, as it means devolving decision making, trusting your team implicitly (although they have to work to earn that), and, most importantly, keeping the team together and letting it gel. Oh, and it also means a willingness to fire people who are smart but don’t fit the team, while retaining the maybe-not-so-smart who work as a team – again, in the same way as a special forces team can’t afford to carry around lone wolves who want to be Stallones, Arnies etc. How many of the above are workable in most existing organizations?
For example, in a true agile development, the TSB debacle would never become public, as the automated tests (high-coverage automated tests are one thing I don’t consider a trapping of agile) would fail left, right and centre, so it would never build and deploy – even into a test environment, not to mention prod. No way that so much debugging code would make it to prod (among other things, it’s a potential massive security breach, as it exposes the inner workings of the code). Etc. etc.
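As a sketch of the kind of gate described above (entirely hypothetical; nothing to do with TSB’s actual pipeline), a build step can simply refuse to ship anything that still contains debugging leftovers:

```python
import re

# Patterns are illustrative; a real gate would use the team's own conventions.
DEBUG_PATTERNS = [re.compile(p) for p in (r"console\.debug\(", r"TODO remove", r"DEBUG:")]

def gate_release(files):
    """Fail the build if any shipped file still contains debug leftovers.

    `files` maps filenames to their contents. Returns (ok, offenders).
    """
    offenders = [name for name, body in files.items()
                 if any(p.search(body) for p in DEBUG_PATTERNS)]
    return (not offenders, offenders)

build = {
    "app.js": "login(); console.debug('raw response', resp);",
    "util.js": "export function fmt(x) { return x.toFixed(2); }",
}
ok, offenders = gate_release(build)
print(ok, offenders)  # a failing gate blocks the deploy
```

The value is not the pattern list but the policy: a red result stops the release pipeline, so debug output physically cannot reach production.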
Unfortunately, someone wrote it before you (I suspect you already know, but others may not have read it):
No Silver Bullet
One of the problems with Agile is that teams in the beginning as far as I understood it, were meant to go to Agile once they had jelled and worked together for a long time, having already earned the trust from outsiders and got rid of any “dross” inside the team.
The corporate perception that people can be thrown together in a bunch (called a team or whatnot) and be immediately 100% productive and Agile compliant, without any impact on the existing policies and processes and without taking individuals into account, is, as you mention, somewhat optimistic.
I would argue, however, that we will need to take this into account going forward and not argue that Agile cannot fail, only be failed. Agile is a tool, a methodology often misapplied and one that in my opinion cannot be applied by most companies. Therefore it shouldn’t be applied by (or to) them. While I will not go so far as to go back to the Waterfall methodology (no matter that most large companies seem to follow that one for most actual decision making), it may be time to assign costs to not following the Agile parts (not having a customer available to the team, having stakeholders without actual influence, having stakeholders without accountability, etc.) and roll with it. Of course, I don’t expect that, as most of the costs would mean that the answer to a lot of questions becomes “don’t know” (is it ready? how much will it cost? when will it be done? can we just do this teeny tiny change the day before we go live, no it won’t affect anything, promise! but don’t mention that I wanted it done!).
As an aside, I am getting leery of high-coverage automated tests, mainly because they so easily become a siren lure, making team members generate coverage rather than functionality, due to the risk of breaking something (sometimes the high-coverage automated tests themselves) and then having to fix it or, worse, having to admit they broke it. High coverage can be good, but should come out of having good tests. I’m getting sidetracked, though, so best cut this short.
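A toy example of that distinction (hypothetical code): two tests with identical 100% line coverage over the function, only one of which would ever catch a regression:

```python
def apply_interest(balance, rate):
    # No interest accrues on an overdrawn account.
    if balance < 0:
        return balance
    return balance * (1 + rate)

def test_coverage_only():
    """Executes every line of apply_interest (100% coverage)
    but asserts nothing, so any bug would still 'pass'."""
    apply_interest(100.0, 0.25)
    apply_interest(-50.0, 0.25)

def test_behaviour():
    """Identical coverage, but actually pins the behaviour down."""
    assert apply_interest(100.0, 0.25) == 125.0
    assert apply_interest(-50.0, 0.25) == -50.0

test_coverage_only()
test_behaviour()
print("both pass")
```

A coverage report scores both tests the same; only the second one encodes any knowledge of what the function is supposed to do.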
I meant to write it not just IT (that would be just a sub-section ;) ).
I absolutely agree with you on teams – that was my point of the self-discipline. The various artifacts of agile are usually tools that someone used to get there – and I’d argue, that once they got there, they could even drop it w/o any problems. Unfortunately, that’s where it breaks down, and to be fair, on both sides. Corporates think that the trappings are the result, and a lot of the agile evangelist think that what worked for them will work for everyone, while IMO everyone has to find their own way (and it’s not easy)
I also agree on the coverage tests, but again, it’s a tool (not a trapping, a difference), and one has to find where the tool fits (i.e. not a “hammer, will smash”). That said, if I had a choice of no tests and high-coverage tests, I know what I’d choose :). Another point there is that people tend to put faith in automation, but automation is again but a tool – w/o understanding it, it’s worthless.
yea I’ve seen the not so smart (slow on the uptake indeed but capable enough I guess) kept on in agile teams. Sucks for the smart person who has to look for a job and explain why they have a firing black mark due to “team fit” though.
‘No true scotsman…’
Your points are well made, but they cast the necessary conditions for ‘true agile development’ as so rare and magical that it is not a methodology that can simply be adopted. The fact that it is so widely adopted, and fails so often, is a weakness of the concept, not merely a problem with implementation. With a good team of good developers, trusted by management and in continuous communication with stakeholders, most methodologies would succeed.
Agile Fetishists do not reveal that Agile (a.k.a. Scrum, an unintentionally apt analogy, taken from the part of rugby where 2 teams huddle together to fight in the mud for a ball, fists and elbows hidden by the chaotic crowd) has little chance to work in any given real-world situation. As a result, it is widely adopted (or aped), and software is produced without design, architecture, documentation, or long-term planning.
It may not be sold as a silver bullet, but no one who sells it admits you need a stable of unicorns for it to work.
I disagree. If you think Scrum is the only agile, then you need to update your knowledge. There was a large number of various approaches (XP was very in at some time.. ). Scrum seems to be popular now, as it was one that most people could get a reasonably good grasp of w/o going to the extremes such as XP.
Agile, in a way, is how you get to that place where you say “team of good developers, trusted by management and in continuous communication with stakeholders”. It does not miraculously get you there just by being adopted. It’s hard work, and even staying there requires hard work. Being very good in your field of work requires hard work.
As I wrote, you cannot have a 1m army of special-forces-like soldiers. That’s why it’s called special forces – it requires commitment way beyond what even most professional soldiers are willing to give.
Agile is a bit better than that, but it still requires a high level of commitment on all sides. Which is rare.
Saying that agile is mythical is like saying that running a marathon under 4 hours is mythical: most people could do it, but getting there would be extremely painful, because it requires a level of fitness they just don’t have. Now, if you’d say that most people do not wish to, or cannot afford (timewise or otherwise), the effort to do that, that’s a different story.
Which gets me back again to the point that you cannot have a field army full of spec-forces level soldiers.
I love Agile and I think it’s fundamentally better than waterfall when done well. However, it’s not without its flaws. I like to say that waterfall is a good match for how business works while Agile is a good match for how software development works. They each do a poor job of modelling processes in the other space. Agile is a bit better in that if you can get the business to adapt and operate in the way that Agile requires, it can work very well. Waterfall in contrast requires you to believe a bunch of things that flat out aren’t true (requirements can be fully known in detail up front and will never change, you can create a detailed design before writing a line of code and expect it to be correct, and so on). But some of the things Agile requires businesses to do are almost as bad, and can often be equally incompatible with reality depending on the constraints of the business in question. The Agile answer is that you’re doing it wrong and need to change, but that’s often impossible or infeasible. If you’re not careful you can end up trying to run an Agile project but with waterfall deliverables due to client approval and process requirements, which ends up being the worst of both worlds. A lot of organizations that claim that Agile is no good turn out to be operating in this space.
Yes, and your armed forces analogy is apropos. The tactical and operational plans are sound in appropriate contexts, but at the strategic level the policy makers glom onto a particular plan as a cure-all, which nothing is. Think about the Dilbert cartoon where Pointy-Haired Boss hands Dilbert a 3.5″ floppy and tells him to download the Internet onto it. Think about the policy decisions that led to the Challenger explosion. Think about Vietnam, Afghanistan, Iraq, etc., etc., etc. The mistakes are made at the top (with ignorance and secrecy) and then covered up after (with malicious intent and collusion) so that it looks like the cause of the failure was at the execution level.
There are software project that agile is appropriate for and those for which it is not.
Agile is for people who don’t like to do project plans.
Agile is for people who think they can have it all (cheap, good, and fast), but who think no project plans and no meetings = fast but still good.
Or, to cannibalize an old joke, agile is for people who think that if one woman can have one baby in nine months, that nine women can have one baby in one month.
There are probably industries in which it would be fine, but it’s more for developing a Facebook or the latest app. Banking would pretty much never be an obvious industry for it.
I have successfully done significant projects in banks with agile, and a number of them to my knowledge operate even now, seven years since I handed over the last one.
The last one had, for the roughly seven years from when it was productionised until I left (and I handed it over three years in), three production incidents, with a 14-day release cycle. It was a project of a few hundred thousand SLOC IIRC – not huge, but a good size – and it was critical to running a business worth 40-50 million pounds of revenue a year. Oh, and it was subject to regulatory reporting, as it was producing numbers that the regulator required (among other things).
“Agile” is a tool, nothing more nothing less. To blame the methodology for failures is like blaming your screwdriver for not being a good fit to drive a nail when you use it like a hammer. Enterprise-size orgs can adopt the methodology and get value but only if they adopt the culture change and paradigm shift concurrently. If you only do the first part or the second part you fail. Every. Time.
As for how this got screwed up…this is true command and control, waterfall leadership. Set a date and ship regardless of the health of the solution. Ignore warning signs, damn the torpedoes, we got a plan and we’re sticking to it. “Everyone has a plan until they get punched in the face.”
Because playing the IT game is like buying an option: it is cheap to buy, and there are almost zero consequences and infinite (profit) opportunities when the stars are right.
In many “real” engineering professions there are licensing and professional qualification requirements; some fields prone to explosion and fire – like electrical engineering, f.ex. – in addition have liability insurance requirements.
With IT … Anyone can participate and everyone gets at least paid for some months no matter how useless their efforts are. “Nobody” understands code enough to see if it is good or bad, like they do with physical plant.
With IT …. The expensive IT professionals, who know what they are doing and why, rarely come into a project “up front”; usually, they come in at the trailing edge, having to clean up the brown trail left by the cowboys’ demented spawn and somehow stitch that pile of garbage together just enough so that it will run for a while.
With IT …. Rarely does anyone deal with the IT garbage in the proper way – bag it, bin it, and off to the incinerator; instead, either the new team of happy cowboys or the real professionals hired on the usual short-term “Make work NOW” contract will wrap it in good code (the pros) or garbage (the cowboys). Entropy always increases in IT.
IT is basically not a real profession yet!
That’s actually more like how it’s managed, not how the work is done, because we all know management works to cost and to when it has to be done. Not how well.
Wow, look at that “BeanCreationException” – that’s one giveaway right there. For an extensive look at that kind of exception look at:
That link goes into what it could be. And with that kind of framework, it should be a RARE exception. I used to test this very stuff. And that means that these guys did NOT test this handover properly.
I’ve seen the early bad testing that used to come out of India, but you woulda thunk that by now Indian developers would want their stuff to pass muster.
Then again, as you said below, vlade, there’s really “not option to roll-back” and “‘if your testing passes, its ok’, and oh, we just outsourced our testing…”.
And sure enough, YankeeFrank, there’s “no documentation and no tests” at least for a massive handover of data and system like TSB.
Yowzers. It makes me wonder what responsibility TSB has to maintain backup records. Perhaps if ATMs and most branches work, the customer data will largely be fine and it’s just the online interface calling from legacy systems that is the problem.
TSB confirmed its 1.9m customers were still facing “intermittent” problems when attempting to log in to online services after a bungled switchover from a system the bank had been renting from its old owner Lloyds Banking Group.
Customers had been warned the transfer of 1.3 billion customer records to a new system could affect services from 4pm on Friday to 6pm on Sunday – but the disruption continued overnight and into Monday and Tuesday.
What’s a few extra zeros to a bank anyhow? We are the foam on the runway.
customers != customer records
The records are, for example, all your transactions, for each product you have. That number gives a ratio of about 700 records per client, which is perfectly believable. I easily generate about 500 transactions a year on my accounts, and have a way longer history than that with my banks.
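For reference, the back-of-envelope arithmetic from the figures quoted above (1.3 billion records, 1.9m customers):

```python
records = 1_300_000_000   # customer records migrated, per the Telegraph excerpt
customers = 1_900_000     # TSB customers

per_customer = records / customers
print(round(per_customer))  # → 684, i.e. roughly 700 records per customer
```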
Friendly reminder, vlade, that not everybody reading this knows that “!=” means “not equal”. Thanks for your informative comments, though.
Thanks, Angie. I was one of the ignoramuses – thought it meant “emphatically equals.”
But anyone reading this gets used to jargon. some of the financial stuff is far more impenetrable.
We all know that here, actually. This is a savvy bunch. Just a friendly reminder.
Good post and agree. A few pertinent points based on first hand past experience:
RBS had a legendary outage 4-5 years back that sent an army of regulators into the British banks to examine change, conversion and every flavor of DR processes. The wake up call was then.
It’s a bad day (or days) for everyone in British banking IT. Every politician, regulator, board member and fleets of executives will be asking 999 out of 1000 impertinent questions of IT staff.
This is why banks have legacy technology. No one wants to knowingly take on these major conversions by choice.
Your point on US M&A is spot on. I worked on DD and M&A teams for a decade in a top 5 TBTF. The entire M&A methodology centered on a 2-year migration process to single core systems. Much more advanced than the Brits, who I also worked for for years. I guess that’s the benefit of having the bigger empire to consolidate. Cross-country regulations and multi-language/custom complexities are more to deal with in Europe as well.
Oh, and legit lol at “coughing up blood”
RBS nixed selling W&G after it had spent >£1bn on a brand new IT system for it, precisely to head off the sort of problems Yves wrote about above (although I have heard that the IT system was a good sacrificial goat for this, and that RBS actually nixed it for different reasons – not sure how good that source was, though).
Some informed comments scrolling by on The Guardian story.
Seems like it is too late to roll back so the way out is to fix forward. It’s exciting!
And there’s this:
Best case is it is just the customer website that is toast and the backend systems are more-or-less sane. The website security/authentication problems may be fixable but definitely not the kind of thing that can/should be fixed in any hurry i.e. without more testing than has been done so far. So the website may just be AWOL for some time and it will be back to the precious few humans at the branches to deal with everything. Queues.
That roach Pester (the CEO) is doing the media rounds (having been forced to take to the airwaves and social media by politicians getting harrumphy) and saying something interesting, albeit revealing if you stop and think about it rather than parsing it as PR-speak.
Which is that customers who have issues with their account details should contact TSB who will rectify the problem.
Ah-hem. No. For one thing, you will have severe difficulties getting through on the telephone. And branches will be busy and/or similarly affected by system availability woes.
But the bigger picture which is obscured by the “helping us to help you” advice from the CEO is — excuse me — WTF is a bank which cannot maintain an accurate ledger or record of customer product holdings? And Pester’s statement is tantamount to an admission that, for some correcting entries, they are going to have to rely on a customer telling them what they (TSB) need to go away and put right rather than the bank being able to reconstruct a valid account history, apply misposted transactions or reunite orphaned product holdings with their rightful owners.
I for one have a rough idea about what should or should not have been posted to my account — but I don’t keep a comprehensive shadow set of records. If TSB has so broken its CRM and accounting data that it needs customers to tell it what’s missing then customers will have lost money for sure — or else have credits on their accounts or be assigned ownership of products which aren’t really theirs which they might inadvertently draw on only to find out much later down the line they have to reimburse the bank because they weren’t entitled to the funds.
> Best case is it is just the customer website that is toast and the backend systems are more-or-less sane.
Assuming some clever opportunist isn’t using the front-end issues to hack into the back-end…
Which they very well could be. The error messages we’re seeing reported are exposing a great deal more of the internal workings than any security audit worth the name would be comfortable signing off on.
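For what it’s worth, the standard defensive pattern here (sketched in Python; TSB’s stack is presumably Java/Spring given the BeanCreationException, so this is an illustration, not their code) is to log the full exception server-side and show the user only a generic message with a reference ID:

```python
import logging
import uuid

log = logging.getLogger("backend")

def safe_error_response(exc):
    """Log the full exception internally; show the user only a generic
    message plus a reference ID support staff can correlate with the log."""
    ref = uuid.uuid4().hex[:8]
    log.error("ref=%s %r", ref, exc)   # full detail stays server-side
    return {"message": "Something went wrong. Please quote reference "
                       f"{ref} if you contact support."}

resp = safe_error_response(RuntimeError("BeanCreationException: dbPool wiring failed"))
print(resp["message"])  # no stack trace or class names leak to the browser
```

The error messages in the screenshots do the opposite: they hand an attacker framework names, bean identifiers and internal structure for free.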
One would hope that is the case. As they say: “I say that we take off and nuke the entire site from orbit – It’s the only way to be sure.” :)
I have been involved in wholesale (read derivatives and inter-bank payments) systems for a decade or so in my previous lives. Their complexities are different from retail, some easier, some worse (wholesale systems in my experience may have less data, but often more complex and live in much more complex ecosystems than retail, but YMMV).
Just a version migration of the same system was easily a one-to-two-year programme. A new-system introduction to replace a legacy system was easily 3-5 years (and my employer at the time, for the fun of it, migrated the legacy system not just to one but to two different new systems… fun!)
I was also involved in a post-sale migration at the time, and it was also an 18-month-plus programme, even though it was rather simple (as in maybe a few thousand clients and a few tens of thousands of transactions being moved).
I’ve also seen a replacement of a retail system done (though I wasn’t directly involved), and again, it took a good few years. The customer base would have been about the same size as TSB’s.
It can be done, but it’s a major major pain. It costs a lot, and cannot, I repeat, cannot, be hurried along – that’s just asking for a disaster.
The further problem with the TSB debacle is that it sounds to me like the project was a death march (i.e. some smart cookie set a hard deadline which the team knew was unachievable).
Which, together with the current situation, puts enormous pressure on the teams’ morale, which means the teams leak people like a sieve. The best people are likely to leak fastest, as they have the best resumes etc. In turn, that makes a bad situation worse.
If TSB really has no option to roll back (which becomes less and less viable as time goes by, as transactions are made, so what might have been viable yesterday morning may not be viable by today’s close of business), they are done for, and depending on the structure, possibly their parent too.
One more thing happens in situations like this: when rushed, some employees get so tied up in helping get things done ASAP that a few end up sick and in the hospital, unable to help at all. I have seen this first hand.
Some things cannot be rushed as has been said earlier.
I think that means some projects will never be finished, which is the other nightmare. We’ve seen major IT projects fail outright: “Sorry, can’t be done,” which is the reason so many are still using legacy systems.
Sometimes there is a reason for not migrating. One is size: you get to a size where just the conversion would run for days or weeks, or even months. And considering the likelihood that there was no full system test (a dry run with all the data being converted), the migration was bound to be a nightmare even if it completed on time. No system test and no dry run means the disaster was going to happen, not “if” but “when”, and they were going to a new system that was for the most part untested with the data it was going to have. Now I am guessing that someone high up at the new parent wanted it done (to reduce the cost Lloyds was charging for its work to support TSB), plus it was an opportunity to prove to their IT staff that it could be done (likely a project a consultant actually sold to the decider-in-chief). Now that it has failed, the chief decider is trying to figure out how to blame someone, like the CIO.
As bad as project cancellation sounds, it’s usually orders of magnitude less serious than going live with a solution that’s not fit for purpose. As I suspect TSB is about to find out.
But very, very expensive and a lawyer’s wet dream.
The underlying question:
>The overview is that Lloyds Bank, one of the former four British “clearing banks,” made a series of acquisitions. Its first large deal was for TSB. As a result of the crisis, it acquired HBOS, which itself was a merger of Halifax Building Society and the Bank of Scotland
…is simply “Why?” And we know the answer: people that run things like this are basically incapable of leaving anything well enough alone. There is little advantage to most (yeah, it’s easier for a high-flying yuppie to get money in Mongolia than it was in my day) of this consolidation, even, as we see, for the organizations themselves. It’s just the mindless “we do what we do because we have to do stuff” trajectory of financial capitalism. And yeah, the bigger the pyramid, the more self-impressed you can be by the view from your top. Robert doesn’t want to see that he’s eye-level with Jacques when they are both sitting on their pyramids, thus he does whatever he can to make his bigger.
A pox on all these people.
“Growth for growth’s sake is the same ideology as cancer.”
Wow, what a clusterfsck. As it happens I was involved in a UK database upgrade project just recently and the very first thing we planned was the rollback strategy and how long we could go before rollback wouldn’t be feasible (just a few hours). We also did numerous test runs of course. What the hell was TSB doing here?
I had an account with them decades ago when they were indeed the Trustee Savings Bank and known for good customer care. Then they were gobbled up as described and what was a nice little TSB was destroyed by greed and Big Banking.
Off to read the Grauniad’s live blog :)
It’s not the only British bank seemingly having these problems. Ulster Bank, part of RBS, is suffering from vanishing accounts.
“Lloyds went ahead with the migration without allowing for a rollback”. What the hell, man. I don’t know how they do things in IT but I would have thought that the best way would be to create a mirror site with the new systems and new code to test, import the raw data fields to test to destruction, then switch over systems from the old legacy site to the new validated site after you have made sure that there are no problems.
Did TSB renege on their promised bonuses to their BOFH or something?
More likely the new owner didn’t want to pay the old service charges for Lloyds servicing the accounts, and also didn’t want to pay for full-size testing. It also wouldn’t surprise me if a consultant sold the deal and a contractor did the work, and the shiny new system has never been used before either. If they are lucky and it’s just the frontend that’s bad, they will be able to fix it, but if it’s a shiny new back end (in the cloud?), then they are toast.
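[Ed.: the “test to destruction” approach suggested above can be made concrete as a reconciliation pass: export account balances from both systems and diff them, repeating until the diff is empty, before anyone flips a switch. A minimal sketch in Python; the CSV layout and field names here are invented for illustration, not anything TSB actually used.]

```python
import csv

def load_balances(path):
    """Read a mapping of account_id -> balance (in pence) from a CSV export."""
    with open(path, newline="") as f:
        return {row["account_id"]: int(row["balance_pence"])
                for row in csv.DictReader(f)}

def reconcile(legacy_path, new_path):
    """Return accounts that are missing, orphaned, or whose balances diverge."""
    legacy = load_balances(legacy_path)
    new = load_balances(new_path)
    problems = []
    for acct, bal in legacy.items():
        if acct not in new:
            problems.append((acct, "missing in new system"))
        elif new[acct] != bal:
            problems.append((acct, f"balance {new[acct]} != {bal}"))
    # Accounts that appear only in the new system are just as alarming.
    for acct in new.keys() - legacy.keys():
        problems.append((acct, "orphaned in new system"))
    return problems
```

A migration sign-off would demand this list be empty across every product type, not just current accounts, and re-run after each trial conversion.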
When IT goes bad it gets very bad indeed. See Phoenix system.
Yep…some worse than others.
Then there was that one in Australia-
And let us not forget the Obamacare roll-out debacle.
I’ve mentioned this one before; it’s from a book called “Fleecing the Lambs,” about Wall St. decades ago, about the period when financial IT was being introduced. One brokerage (which shall remain nameless because I don’t remember) was convinced to put everything in electronics and….throw away the paper. You know, “the paperless office.” And you already know the rest: the system crashed and the brokerage ceased to exist. Not sure what that cost the “lambs.”
So this stuff’s been going on for a long time.
Currently still (15:00 Tuesday afternoon UK time) the lead news story on BBC so doesn’t appear to be adhering to the getting-back-to-normal spin from the bank.
“Online banking” is as much of an oxymoron as “online dentistry” or “online gardening”.
When there’s no one in line, I love to exchange gossip and recipes with the ladies at the teller windows while depositing checks and withdrawing cash which is the only thing I spend with local merchants and service providers.
Proud to say that I have never used online services or an ATM card in my life. Paper statements in a three ring binder are the antidote to online “outages” and “systems closed for maintenance.”
Don’t give a crap about “efficiency”-“convenience” or bank profits. No-fee paid off every month credit cards are used to pay corporate bills.
Am teaching a tag team life-skills course in a private high school and am hopefully convincing the kids to do the same as above and shove it to the Overworld.
How people store and spend their money is something they can control to affect the world around them.
I ALWAYS make deposits at the counter, because I want that piece of paper that commits the bank (credit union, in my case). Fortunately, the tellers are mostly very nice people, so it’s a pleasant transaction, and the CU is retail-oriented, so the lines are rarely excessive (late Fri. afternoon is not a good time to bank). ATMs are fine for withdrawals, not so much for deposits. And the nature of my business is that I deposit a lot of checks.
Moi aussi, and I always tell them that I am depositing in person b/c “humans should stick up for one another”. I get knowing and appreciative looks, although there is often a furtive look around (not useful, as there are CC cameras everywhere — it is a bank, after all).
I phrase it a little differently:
“You know the corporate managers would love to fire you to save money. That’s why I’m here.” To hell with furtive, I make sure the bank manager hears me and so do any nearby customers.
How to make friends and allies fast.
I’m surprised they haven’t blamed it on Russia. I’m sure HMG will help TSB roll out that accusation.
Banco Sabadell successfully completes TSB technology migration [Sabadell press release, Archived version]
When bad things happen to good MBA word salad.
Sadly, unlike Karl Rove, Mr. Oliu was unable to create his own reality.
I know the reason this was done this way: TSB’s new parent wrote the new system.
Wasn’t IT supposed to make this sort of thing EASIER? Instead, it creates huge new barriers.
One possible factor, as seen from the outside: non-standardization. On-paper bank accounting, after centuries, must have been pretty standardized, so one bank’s system easily translated into another’s. Obviously this is not true of IT; they’re all different, and not compatible.
The obvious solution is to standardize banking software, probably by fiat from the Fed (in the US). However, I can imagine the problems THAT IT project would run into, and the things that could go wrong – like massive backdoors. Fun times. Another good reason to move your account to a local credit union.
If you looked at the paper systems of banks, you would likely find none of them were all that alike, which is why you see so many variations in how the business operates, and why the electronic systems are so different today. And no two banks ever operate the same way, sometimes to your benefit, but usually not.
easy solution to systemic risk.
no bank in America or the EU can have more than $200 billion in assets (only 10 banks or so exceed $200b in the US). Otherwise it gets slapped with a surcharge on assets.
not holding my breath though. banks have friends in high places and the public would rather argue about the latest identity politics flub at starbucks
This post and Clive’s earlier comments are correct. This is more widespread than is made public. At my bank, BB&T, the computer system went down for two days a couple of months ago. My check for the electric bill bounced even though there was plenty of money in the account. I didn’t know about it until PEPCO threatened to cut off my electricity. After ticked-off telephone calls, the electricity stayed on but I still was charged a late fee and a stop payment fee that both companies said would be free. Since my pension is automatically deposited, I can’t change banks easily. The alternatives aren’t that great i.e. Wells Fargo.
De-regulation is coming home to roost.
Well when your banking relies on a platform, you have no banking ……
The 3 release software delivery system:
Release 1 has errors, and is incomplete.
Release 2 fixes the errors, but is still incomplete.
Release 3 works and has 90% of required functions,
and from the world of migrating complex systems:
One change at a time.
Flash cuts always fail.
For something completely different.
Imagine looking at a blueprint for something that does not yet exist, which you know will change in due course, whilst basing your tender on imaginary linear and square metres, as if every imaginary measurement were the same.
At the same time throw a bunch of imaginary people sourced ad hoc into a development group premised on the same theory of efficiency and then to top it all off wrap in a bunch of financial demands – wheeeeeeeee….
Anywho as an old friend used to say during the Boulder Colorado tech boom in the 90s – you don’t make the big bucks writing code, you do by fixing it for others.
Wish I could say this is only an IT thingy; sadly I see it in heaps of critical infrastructure and building sectors. Oh, and I don’t think the scrum analogy is the best. Clive might pipe in, but methinks ‘The Wall’ would better suit.
“You make it, (the big bucks,) fixing it (code) for others.” That is if the originators of the project value functionality past a certain point in time. This looks like the natural end game of an IBGYBG scenario.
Several years ago I approached one of the local plumbing shops in an effort to procure employment. I’m a decently skillful problem solver in my trade. When I asked the company ‘decider’ why she was offering an insultingly low wage for repair work, she replied: “Because we can get young guys cheap to do this.” I retorted: “Yeah, and do it badly.” Her reply: “As long as the check clears, who cares?” That company is now at the bottom of the local ‘public trust’ heap. (This from a number of voluntary comments from people ‘on the street’ and acquaintances.) The decider is still there, driving her BMW around town. The company still has trucks on the road, full of young guys smoking cigarettes and gesturing wildly as they argue with someone on the phone.
Gresham should be made a Saint in some Systems Theory Cult, somewhere.
Oh yes. “Hitting the Wall” about sums it up.
On a “Tinfoil Hat” note: if this is the result of an inadvertent screw-up, what happens when some bright young things decide to purposely mess up the IT? The veneer of civilization, as we know it, is pretty d–n thin.
Oh boy are you in for a treat…
Eton Wall Game (1956)
Um on the other stuff I would say income flows due to current market dynamics shape the work and not the other way around.
Looking at the every-bank-has-its-own-setup issue Yves describes, it’s 100% clear to me that at some point – sooner or later – every bank is going to have what in essence amounts to two options:
1. Bite the bullet and spend the very large amounts of time and money required to migrate everything to a modern software infrastructure.
2. Keep putting on band-aids until something inevitably breaks so badly that you face an existential crisis.
I haven’t heard of a single major bank going for Door #1, so to me this cries out for government intervention/support in order to force the issue. It’s also the kind of thing that strikes me as something that should be done globally, via a consortium of the world’s developed nations, organizing the world’s best bank-IT minds. The ultimate aim should be the creation of a common bank-IT infrastructure, a kind of “open source standard for banking”, which, like an open-source OS, should be designed with robustness, data security and continual upgradability in mind. At least to my admittedly-non-bank-IT-expert thinking, while various kinds of banks have varying specialties, it’s not like there’s a bazillion completely different kinds of things they are engaged in. Basic retail banking is all pretty much the same, as is commercial lending. Even with regard to crafting of bespoke financial products, my guess is that the number of different ‘menu items’ going into those is not huge. The end result should be an “if you wish to participate in the global banking system, you must use this common software infrastructure” migration, not all at once, but with a few ‘test canary’ banks trying things out first. Obvious requirements include:
o An open-source API and codebase, maintained by expert staff paid from system-usage ‘utility’ fees, but open to inspection by anyone;
o Protection of customer privacy [to the level required/permitted by local law] and data;
o Able to easily accommodate agreed-upon crypto protocols;
o Flexible enough to accommodate different local-law regulatory setups and privacy/data-protection standards;
o Able to accommodate a reasonable range of custom financial products, and extensible enough to allow for evolution of same.
o Banks would be subject to regular software audits to make sure they are not misusing the system, creating custom software forks, using an out-of-date version, etc. Participants would also be subject to minimal-hardware requirements. No 30-year-old compute infrastructure allowed, and upgrading one’s HW should be an ongoing no-downtime thing, just as every major modern server farm manages to do.
But I’m a neophyte w.r.t. bank IT, so I’d like to ask the experts in that around here – is such a thing at all realistic?
And as with open-source OSes like Linux not being a magic bullet there are of course major concerns, e.g. “if everyone is using the same OS and someone finds an exploit…”. To which I would point out two mitigating factors:
1. Open-source OSes like Linux, developed and maintained by a global community, have a far better track record as far as vulnerability to software exploits is concerned, than walled-garden systems like Windows;
2. Sophisticated data backup (including verified-compliant offsite storage) and data recovery protocols would be built in from Day 1.
If there are sound reasons why such a thing is a pipe dream, don’t worry, I won’t take it personally to have it pointed out that “you are full of shite, sir!” :)
And nifty little NSA backdoors.
You thought they were NOT doing commercial espionage? Why were they spying on the Brazilian state oil company?
I want to go back to your point #1.
A long time ago, one of our bank IT readers (maybe Brooklyn Bridge) said that it would cost all of a bank’s profits for 3 years to do a migration. And that charitably assumes it actually gets done. The failure rate for large IT projects is over 50%.
Yes, that’s why it isn’t being done (TSB’s present experience being the other reason). His proposal is to have an overarching agency do the development, so all the banks share that initial cost. Or is that JUST for the migration, without the cost to develop the system. I’m getting confused.
I proposed the same thing, much less thoughtfully, up above, then immediately thought of some of the pitfalls.
Then what is the solution? It’s only going to get worse, I assume, without expensive, time consuming effort, until a really TBTF bank goes poof.
You could simplify it a heck of a lot by focusing on the situation where you’re starting from scratch. If we could start new banks providing all the services that people needed and scale them up rapidly, using all the techniques you describe, the legacy problems of the incumbents would be much less of a concern. Either they’d survive or they wouldn’t. Either way, we’d have banking services.
Of course the incumbents have many powerful tools at their disposal to block insurgent competition of this type, but most of those go well beyond IT (interconnectedness, the revolving door, the shadow banking system…) and are probably best addressed together as a category.
Not very often. Netscape (anyone heard of Netscape?) blew its first-mover position by becoming a poster child for this situation. See: https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/
Back in the noughties, I did an internal presentation on the merits of just using the crufty TCP/IP stack in Linux versus rewriting it, for similar reasons. The problem is that almost every piece of “old cruft” and strange code in “legacy projects” represents a bug that someone found, characterised and worked around.
Software like the Linux TCP/IP stack and, I would assume, banking software has accumulated many thousands of fixes and “saves” for hundreds of odd-ball interpretations of the (IETF) standards which some systems one must interoperate with will never, ever fix.
Rewriting the code erases all of that knowledge, and then it has to be recreated from scratch – except now it is your own proprietary project and you are all alone against the universe.
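[Ed.: the point about cruft encoding hard-won knowledge is worth illustrating. The snippet below is a made-up example, not real banking code: the “workaround” branch is exactly the kind of line a rewrite would discard as ugly, silently reintroducing the bug it fixed.]

```python
def parse_sort_code(raw: str) -> str:
    """Normalise a UK sort code to the form 'NN-NN-NN'.

    The special case below is the sort of accumulated "cruft" being
    described: it encodes a quirk someone once found in a counterparty's
    output.  (This particular quirk is invented for illustration.)
    """
    digits = "".join(ch for ch in raw if ch.isdigit())
    # Workaround: one (hypothetical) legacy feed zero-pads codes to 7 digits.
    # Delete this "pointless" branch and that feed breaks again.
    if len(digits) == 7 and digits.startswith("0"):
        digits = digits[1:]
    if len(digits) != 6:
        raise ValueError(f"unparseable sort code: {raw!r}")
    return f"{digits[0:2]}-{digits[2:4]}-{digits[4:6]}"
```

Multiply that one branch by thousands of peer systems and twenty years, and you have the knowledge a greenfield rewrite throws away.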
Banks could do well to follow the semi-open “telecom software development model”, where they have a consortium of teams from competing companies internally sharing and testing common infrastructure components against each others implementations and quite aggressively contribute code “up-stream” to “Open Source” (so they don’t have to maintain all “their” infrastructure code on their own).
When I did this, teams would meet every three months to have a 2-3 days “interoperability workshops”.
After all, nobody cares for “core services” when they work. It’s the little “extras” where the margins are.
It doesn’t have to be “government”; in my opinion “government” has too many political hang-ups to be facilitating collaboration. Just imagine: “You mean them eye-ranians can use our code?”.
We have international bodies that handle common infrastructure. In telecoms there are the ITU and IETF, which define the standards and protocols to use, so that we have the miracle of a single computer of any brand, or an entire telephone network, just working almost everywhere, with everything that connects to it (Japan is special, the US too).
As you say, Banking could do the same. SWIFT is one example that it is possible and the global credit card networks another. The banks probably haven’t realised that there are valuable networking effects in having standardised services and interfaces. Maybe they – like the telecoms before the internet – believe that if they can build some walls high enough around their services, then no-one can leave and no competition can take them on?
During the next 20 years, I think we will see this and a lot of “custom micro-banking”-shops riding on top of rented and managed banking services provided by some of the “big 5”, like we have with mobile telephony.
So you think the banks should all follow what TSB did with a new core banking system? I suspect that after hearing about this they will be even less likely to change this decade.
Nobody wants to make the news as the bank where customers can’t find out how much money they have, where deposits disappear, and where checks get paid that may or may not be theirs.
I for one will not make such an accusation. If that “you are a…” were true, then you would be a ‘shyster,’ in the original sense of the word. That attribution I firmly deny.
What is fun is to realize that all major ‘reforms’ in human social relations and systems were originally labeled as “pipe dreams,” if only by those whose rice bowls were going to be broken by the said ‘reforms.’
I’m afraid that this is a canary in the coal mine moment. “Creative destruction” is making itself known. To get to our ‘shining bank on a hill,’ we will have to see the downfall of the present chaotic systems. Clear away the wreckage, and build anew. Pitchforks, torches, angry mobs and guillotines will play their parts.
Well, there is the IT nightmare, and then there is: no electricity, no cell phones, no water, no cash, no banking at all, no card transactions, no escape by plane or boat, mayor drunk and disorderly, cannot phone the police, cannot phone the ambulance, pitch black at night, no news of any kind, no contact with family, no contact with any central government agencies, no gasoline, no propane, no diesel, hospital destroyed, emergency management center destroyed, hundreds of residences destroyed in a population of 8,000 persons, and then 90 days of torrential rains just for extra aggravation. Think about that. It happens. It happened to us just 7 months ago. And we are far from any war zones.
Now think about life in Syria and Yemen. Yes, IT failure is something you want to avoid. But have you considered hurricanes, earthquakes, Carrington events, tsunamis, and of course the dread bolide events? When those disasters occur, there will not be anywhere to go tweet about them. All that is left is preindustrial existence, if you are lucky and not resting in peace. Yet beyond all that is the outbreak of hostilities. Missiles, bombs, invasions, cholera. All quite possible and all much worse than IT failure. Always look on the bright side.
My takeaway from the ‘Puerto Rico’ scenario you describe is that while the overwhelming majority of modern banking is using bits and bytes (and no going back there), there will *always* be a need for some physical cash, which is portable (in personal-needs amounts), self-validating (with reasonable built-in anticounterfeiting measures), low-tech (except for the getting from-an-ATM aspect) and entirely private.
But when we talk about bank IT we are talking about the bits-and-bytes part.
A thought that occurred to me in reading the comments here. When Brexit finally happens, will the banks in the UK have to make any major changes to their systems at all such as reporting or whatever because of this? I have no idea how tightly the UK banks are meshed with the EU and whether transactions can continue as normal or whether adjustments will have to be made.
That debugging code dumps and Java* errors are making it through to the user is really, really bad. Needless to say, the system cannot have been tested properly (or they saw those errors in testing, and said “What the heck! Let’s go for it!”)
* Of course…..
What have you got against Java, Lambert?
Java has some ancient evil baked into it that, with enough exposure, taints the souls of even competent developers and compels them to summon monstrosities.
It is a “consultant’s first language”, a generator of fees while promising results all the way till the project is canceled. Like communism, it’s all very fine in theory and yet fails on execution everywhere it is tried :)
I have, right here on my work computer in front of me, a brand-new PLM system with 1990s-style client-side Java clocking up to about 1 GB of memory requirements, needing an ancient Firefox of a specific version to run, and which can actually only upload, download, approve and version files while sporting 25 different views of these 4 operations and colourful graphs with statistics on them. As if I, or anyone, actually gives a shit about bumf like how many files are revised per month.
Only with Java would it be possible for a team of up to 15 people to work on this for five years and be achieving so little in terms of results (apart from early retirement in southern France on the consultancy fees, well I Hope, I hate to see waste!)
If Oracle killed all of the “oh so 1980s” binary-compatibility-and-code-reuse-for-classes garbage, they could also kill most of the secure class loader stuff that nobody can get to work properly anyway.
What is left, in the end probably only the Java grammar itself and a small runtime, might be quite neat to use and people might even want to port some legacy Java code into that space, crowded as it is:
Python, C# and later Rust and Clojure all learned from Java, “grabbed the good parts” and ran in the opposite direction as fast as possible.
I hate Java, BTW.
Dear God. I’m sorry you have to endure that. That said, I question whether Java is the problem there and not your developers – nobody in this day and age has any business using client side Java for a purpose like that, and any halfway decent developer ought to know that. In fact it’s a design issue as the developers should never have been asked to do it in the first place. In brief, you need better consultants (or at least to fire the ones you have).
Disclosure: I am a consultant and have been working with Java for most of my career. I take your point on memory leaks and VM model issues, but I’ve learned to work with it and I like to think I have a number of counterexamples I could offer to your “fails everywhere it is tried” point. Perhaps if I had more experience in the other languages you mention I’d agree with you, but most of the clients I work with are wedded to Java for the foreseeable future, so knowing how to do things well in Java and deal with its limitations is useful. If nothing else I hope I can help keep the world a bit safer from monstrosities like the one you describe.
Yves, you mentioned that it’s hard for banks to move off legacy systems. Can one of your expert friends please elaborate on this in article form if they have the time?
Also, is this a case for governments to establish from the ground up some kind of smaller banks/payment institutions based on newer technologies and design methodologies? It looks as if these banks are too old or bloated to adopt newer systems but are too important to be allowed to fail, even from an IT perspective! And it seems they don’t even have redundant systems that customers could be switched to in case of failures! It seems it’s high time governments started seriously working on alternatives to current banks.
Well, guessing, in some cases size makes it difficult if not impossible. Consider that some of the TBTF banks have hundreds of millions of customers, plus lots of workers who would have to be retrained. Consider just moving the accounts for that many, with say 50,000 or more additional records per account: just how long would it take to transfer all that data? Several months, maybe? Then retraining their entire staff? Plus all of the add-on systems that work off the core systems, like online banking and the ATM network. Then you would have to add all the loan processing systems and all of the systems related to those, plus credit cards, investments, etc.
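[Ed.: the scale point above is easy to make concrete with back-of-envelope arithmetic. All the figures below are hypothetical inputs, not data about any real bank, and this is a lower bound: transform, validation and reconciliation work usually dominate raw transfer time.]

```python
def migration_days(customers, records_per_customer, bytes_per_record,
                   throughput_mb_s):
    """Rough lower bound on bulk-transfer time for a core migration,
    ignoring transformation and validation, which usually dominate."""
    total_bytes = customers * records_per_customer * bytes_per_record
    seconds = total_bytes / (throughput_mb_s * 1e6)
    return seconds / 86400  # convert seconds to days

# Hypothetical TBTF-scale inputs: 100m customers, 50,000 records each,
# 200 bytes per record, a sustained 100 MB/s pipe.  That is a petabyte
# of data, and the raw copy alone runs well over a hundred days.
```

Which is why big-bang weekend cutovers at this scale are fantasy; real plans phase the data by product or customer cohort.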
Read long ago:
Whenever launching a major new computerized system, keep whatever old system was doing the job for two months and run them both in parallel. Do not cold switch.
If you can’t afford to do that, do not launch the new system. You risk your company entire.
Robert Townsend, a successful executive with no computer expertise, wrote that (paraphrased) in “Up The Organization” – in 1969.
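[Ed.: Townsend’s parallel-run advice translates directly into a nightly job: post each day’s transactions to both systems and diff the outputs before trusting the new one. A minimal sketch; the two “systems” here are just placeholder callables standing in for the real posting engines.]

```python
def parallel_run(transactions, old_system, new_system):
    """Post each transaction to both systems and report any divergence.

    During a parallel run, the old system's answer remains the system of
    record; cut over only after enough consecutive clean days.
    """
    divergences = []
    for txn in transactions:
        old_result = old_system(txn)
        new_result = new_system(txn)
        if old_result != new_result:
            divergences.append((txn, old_result, new_result))
    return divergences
```

In practice the comparison covers posted balances, fees, interest accruals and statement output, not just one number per transaction, but the principle is the same: zero divergence or no cutover.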
I’m not an Engineer but I work in the industry. As someone who is based in the USA it’s unfathomable that a major bank would be allowed to be offline for 7 hours, let alone 7 days. If BoA, Wells Fargo, US Bank, Chase, etc. were offline hell would be unleashed by regulators, politicians, etc. That the UK authorities have been relatively quiet on this is amazing. This is a failure of epic proportions and I’m struggling to recall a similar meltdown of this scale.
The root cause won’t be known for a while, but after reading posts, tweets, error messages reported by users and LinkedIn profiles, and joining a couple of dots, I can’t imagine that using IBM – aka “I’m By Myself” – to fix this is going to help. This smells like a failure of architecture/design and having the wrong developer expertise.
Below is TSB’s new cloudy stack (from a LinkedIn profile of a TIBCO employee who is assigned to Sabadell). I suspect this project lacked Engineers with heavy-duty Microservices expertise and, instead, they used old-school J2EE guys who looked at the cloud architecture through their old-school monolithic eyes. I also suspect they did not employ true SDETs for testing purposes and used traditional QA’ers. i.e. QTP & Selenium playback QA monkeys when for something of this scale they needed Amazon/Netflix/Google quality SDETs.
“Component of the BancSabadell Architecture team in the TSB project (TSB Bank in UK acquired by BancSabadell group) for the definition and implementation of a new banking platform based on the latest technologies and methodologies and oriented to a hybrid infrastructure between on-premises and public cloud
-PaaS (TIBCO SilverFabric)
-Micro services (Spring Cloud Netflix)
-SOA (TIBCO AMX Service Grid, TIBCO BusinessWorks, TIBCO API Exchange Gateway)
-Single Page Application (AngularJS)
-Asynchronous Messaging (TIBCO EMS)
-APM (Application Performance Monitoring)
-Distributed Search & Analytics (ElasticSearch)”
It has also been posted elsewhere that Sabadell tried to port the existing code for their Spanish bank. Said code was garbage with hard-coded values for server IP addresses. They used Netflix OSS Microservices from GitHub where the copyright header was changed but not references to Netflix in their error messages. Also claimed that upper management decided that load testing assuming 500 simultaneous users was sufficient.
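[Ed.: if the hard-coded-IP report is true, it is exactly what environment-based configuration exists to prevent: endpoints belong in deployment config, not source code. A sketch of the standard pattern; the variable and function names are invented for illustration.]

```python
import os

def service_endpoint(name):
    """Resolve a backend endpoint from the environment.

    Failing loudly at startup beats silently connecting to whatever
    host happened to be baked into the source at build time.
    """
    key = f"{name.upper()}_ENDPOINT"
    value = os.environ.get(key)
    if not value:
        raise RuntimeError(f"missing required config: {key}")
    return value

# Wrong: ledger = connect("10.1.2.3:8080")   # hard-coded, environment-blind
# Right: ledger = connect(service_endpoint("ledger"))
```

The payoff is that the same build runs unchanged in test, staging and production, which is also what makes the 500-user load test versus millions-of-customers production gap so inexcusable: the environments should differ only in config.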
I can’t see how TSB can recover from this. They’re toast.