This post is the second half of Part III in our Bank of America foreclosure review whistleblower series. Part III focuses on how the confusion and high cost of the foreclosure reviews weren’t simply the result of overly ambitious targets and poor design, oversight, and implementation of the reviews. These reviews never could have been done properly due to significant gaps and inaccuracies in the borrower records at Bank of America. That meant the only possible course of action was a cover-up.
Here we’ll discuss:
“Garbage in-garbage out” problem of unintegrated, unreliable records
“Fire, aim, ready” approach to launching the tests
“Garbage in-garbage out” problem of unintegrated, unreliable records
The foreclosure review revealed one of the root problems of the foreclosure crisis: unreliable, difficult to use, and in too many cases incomplete records.
Let’s start by understanding the difficulty of the task even if everything had been in good order. We’ve taken this snapshot from the Excel training model for the E and F tests, which were on fees (see here to access the full model on Scribd, or scroll down to the embedded version later in this post). This shows the top part of the computer screen reviewers would use to perform their work.
Each of the blue boxes is a separate system. To complete any of the B through G tests, a reviewer typically needed to access seven to ten of these systems. There were no subsets of borrowers for whom all the information could be retrieved via one system.
Having to go across so many systems to get basic information about a borrower is a sign of poorly integrated and managed computer systems. This problem alone made it costly and difficult to do the reviews, and greatly increased the odds that important information would be overlooked or not captured properly from one of the systems and entered into CaseTracker, a separate package used to perform the reviews.
The former Countrywide systems were the backbone of the Bank of America servicing platform. There were often gaps, even gaping chasms, in the information available on borrowers who did not have Countrywide-originated loans. For instance, comparatively little information could be retrieved on some mortgages Bank of America serviced as a result of acquiring other servicers, such as First Franklin (via Merrill Lynch), Wilshire*, and Nationspoint. As one reviewer stated, “on these files, proper review definitely could not be done.” That meant that Promontory and Bank of America would reject those borrowers’ allegations of harm, since the reviewers would not be able to find the relevant information.
Even for legacy Bank of America loans, important information could not be retrieved via the systems shown above. The data on old Bank of America borrowers had been uploaded into the Countrywide systems in early October 2008, a few months after the Countrywide acquisition closed. As one reviewer described it:
Prior to the merger we were not able to access Bank of America servicing information. We could request a payment history from the old system, MSP, but we didn’t get everything and we could not verify it.
This was the same for the companies that were acquired by Bank of America. There was a large amount of information missing. There were times with these files, legacy Bank of America, Wilshire, First Franklin, etc., where we could not get servicing information, modification documents, invoices from attorneys, and at times we could not get the HUD-1 settlement statement that was used to close the loan, or the mortgage.
A third reviewer described how these information gaps undermined his work:
This created major problems in determining timelines. If a foreclosure started in 2008 I could not determine if procedures were followed at all. If modifications started prior to 2009 I was just out of luck to know if documents were received, signed, how and what modifications were offered. I had one case in which my notes actually started at the rescission of the sale and ended with the sale. I couldn’t determine anything about what had happened. Eventually that file was removed from my queue, because I had requested all notes from customer service, all notes on mods, all documents prior to the foreclosure sales deed and so on. The request became overwhelming.
That is not to say that reviewers were always fatally stymied when they requested information from the old Bank of America system, MSP. But it could take anywhere from a day or two to over a month for information to come back, when it came back at all. The output was PDFs, meaning images, from a DOS-prompt system. This sort of output was hard to work with and increased the odds of reviewer error.
Even where the records were complete and understandable, it’s not clear they had integrity. That matters because the position Bank of America took in the reviews was that the major Countrywide system, AS400 (the same data was also available in a more user-friendly system called LAMP), was beyond question. As a reviewer described it:
We were told that AS400 was the system of record and it trumped everything. I had one file with a modification the borrower agreed to and made payments on but it was not in the system so I spoke to my manager at Bank of America.
I was told “That’s the system of record. That’s what you use.”
I asked: “You are telling me if I have the promissory note and it shows a $100,000 mortgage and a 6% interest rate, and I can see it was entered into the system as $150,000 at an 8% rate, I’m supposed to use that?”
“Yes, that’s the system of record.”
Incorrect or questionable information was not a hypothetical issue. The information in AS400 had been input manually. Not only did the system not contain images of the records from which the data entries had been made, but reviewers also could not locate images of documents stored or created by Countrywide elsewhere.
One major issue that illustrates the problems with relying on the data in the systems was the reclassification of fees. Individuals working on the fee tests attest that fees were reclassified on every single file they saw. Much of the time there were multiple reclassifications, from the borrower to the investor and back again, and on many files more than one fee was involved. The reclassifications were so widespread that the training materials included sections on how to handle them, as the example below illustrates:
One of the reviewers commented on the training example:
Not all reclassed fees were that simple. Some were reclassed and reapplied 4 or 5 times so they were very difficult to trail. (Borrower owed -> Investor -> Non Borrower -> Back to Borrower, etc.) We had to learn how to trail the fees for Test F because any fee that was owed by the borrower more than once in that trail would look like a duplicate due to backdating when it actually wasn’t.
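To make the reviewer’s point concrete, here is a minimal sketch in Python of why a reclassified fee can masquerade as a duplicate charge. The ledger format, field names, and fee IDs are invented for illustration only; the actual servicing records were spread across many systems and far messier than this.

```python
from collections import defaultdict

def flag_apparent_duplicates(entries):
    """entries: list of (fee_id, date, classification) tuples in ledger order.
    Returns the fee_ids that appear as borrower-owed more than once -- the
    entries a reviewer would have to trail manually to rule out a true
    duplicate charge."""
    borrower_hits = defaultdict(int)
    for fee_id, date, classification in entries:
        if classification == "borrower":
            borrower_hits[fee_id] += 1
    return {fid for fid, n in borrower_hits.items() if n > 1}

# Hypothetical trail matching the reviewer's description:
# borrower -> investor -> non-borrower -> back to borrower
ledger = [
    ("BPO-001", "2009-03-01", "borrower"),
    ("BPO-001", "2009-05-12", "investor"),
    ("BPO-001", "2009-08-02", "non-borrower"),
    ("BPO-001", "2009-11-20", "borrower"),   # same fee reclassed back, not a new charge
    ("ATTY-002", "2009-04-15", "borrower"),  # genuinely charged once
]
print(flag_apparent_duplicates(ledger))  # {'BPO-001'}
```

A naive duplicate check like this one flags BPO-001 even though only one fee was ever assessed, which is exactly why reviewers on Test F had to trace each fee’s full reclassification trail by hand rather than trust the raw ledger.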
Important information was also difficult to interpret:
Reviewer A: Well they simply gave us the notes that were in the system, which were often incomplete, sporadic – I mean, there were notes that came from India and incredibly difficult to understand, there were system notes from Countrywide, and it was incredibly difficult to decipher a lot of times simply because of the shorthand people were using – the Bank of America employees and the Countrywide employees were making up their own shorthand and sometimes it was very difficult to interpret exactly what was going on.
Yves Smith: Mmhmm.
RA: We would sometimes get together in little groups and say, “Hey, you read this and tell me what you think it says.” So. The notes came from everywhere and the systems were kind of a jumble.
Notice also that Bank of America relied on the Lender Processing Services Desktop system for its attorney information. A consultant to the foreclosure reviews said via e-mail:
At least at the B of A review, they went through the charade of assessing permissible/non-permissible fees… at other reviewed servicers they didn’t even bother… reviewers were asked to accept LPS system generated invoices (as input by conflicted LPS network attorneys) at face value and accept them as legit with no questions asked. No coincidence that settlement negotiations ramped up about the same time (November) that reviewers with integrity were calling to review support behind inflated third party non-legal cost billings (service of process, title charges, publications costs, etc.), not to mention evaluating the attorney fees themselves against state and GSE limitations.
It was commonplace to see “third party” (not) legal costs billed through the LPS system at multiples of market rates, given the blatant conflicts of interest that existed between servicers, LPS, their network attorneys, and the entities providing the “non-legal” foreclosure related services.
Why do you think David Stern and others were eventually delisted by Freddie Mac? And it should not go unnoticed that the excessive charges were borne by far more than those foreclosed upon… they were borne by those who had their loans modified, reinstated, or may even have paid them off… or by taxpayers, when sale proceeds at foreclosure did not exceed GSE loan balances. All of it, swept under the rug with the settlement, courtesy of our bank-captured OCC. So just why is there an OCC? Main Street needs to demand an end to an agency that has long since ignored its interests.
Finally, one reviewer saw the person sitting next to her add a note to a Countrywide borrower record by mistake. Even though that was likely an isolated case, it should have been impossible for it to occur, again indicating major system deficiencies.**
When Bank of America acquired Countrywide, Countrywide was considered to have the best systems of any major servicer. And as stunning as it sounds, Bank of America apparently comported itself better than most of its peers did in these reviews. Consider what the reviewers at the other banks must have encountered.
“Fire, aim, ready” approach to launching the tests
The implementation of the foreclosure reviews was even more confused than the underlying systems. Promontory and Bank of America ramped up well before the tests were ready to go into production, which meant the temps, after being trained, sat around processing borrower records in a training mode for weeks, and for some, months, before the tests went into production.*** From a reviewer who joined at the end of January:
During the initial training week we were informed of tests A, B, C, D, E, F, and G. In that period we actually trained on the systems using test A to get us used to maneuvering in some of the systems. We were told that B and C were in the beta testing phase and would be ready for us to work shortly after we got to the floor. For the first few weeks we trained and then were certified on Test A. Then we Level IIIs began training on Test C and Level IIs then trained on Test B. Once we were certified on C we went live, so by early April I was live on C.
This was about as efficient as it ever got, and even this example was not that efficient. With five weeks at most needed to train and certify a reviewer on a set of tests, this end-of-January hire could have been processing files in early March but wasn’t until early April. He was in the first group to work on Test C, so other Level IIIs hired with or before him had even more unproductive time.**** In addition, some of the Level II reviewers were moved back and forth between the B and D tests (one set of assignments) and the E and F tests (another set), which meant they wound up being trained on all four, resulting in two to three weeks of superfluous training and certification.
The Level IIs tasked to the E and F tests faced even more make-work. From a different contractor hired at the end of January:
Yves Smith: When did they add the new tests? They added the E and F tests roughly when?
Reviewer B: I want to say about April.
RB: They – it was already in the system but none of us really got a chance to see it. We were told that the resource guides that we were to follow by the book, question by question, every single time we opened it, we were told that one for E and F had not been created yet. And then when it was released it was very vague and was not even something that we could use that would give us any guidance.
RB: So, you know, we, I sat from probab– well, I didn’t actually get into a live environment until August. So I literally sat for seven months waiting, you know, for these resource guides to be completed and waiting for the tests to be approved, I guess from everyone, so that we could actually go live.
YS: What ____ [crosstalk]…What were you doing for those seven months?
RB: Looked at – well, we could still pull loans in CaseTracker, we were just in a training environment, and so whatever work you did, every night at midnight would just be wiped out and disappear when you were in a training environment. So we would have to pull loans and kind of go through the test questions and get familiar with, you know, what type of questions were being asked. And that’s really about it. They never came around to say, you know, to provide more training –
YS: So you did no useful work for seven months?
RB: Yes. It was miserable.
One of the reasons E and F tests were so slow to be launched was that they were revised numerous times. These questions from E Test are part of the tenth iteration; see the last two worksheet tabs of the Excel model workbook below:
The G test, which was on modifications, one of the most likely sources of valid homeowner complaints given the widespread reports of HAMP modification abuses, also did not go into production until August. From a Level III reviewer (recall that Level IIIs worked only on Tests C and G):
C [test for certain dual tracking and certain default-to-sale timeline violations of the mortgage] was in production first. As for test G, it went live in August. Up to that point, once C was basically over, everyone was either doing busy work and completely worthless spread sheets supposedly for G test. Everyone with any underwriting experience knew these spread sheets had no relevance to the task at hand. But it kept everyone busy.
Another reason the tests were delayed considerably was that the reviewers, who were apparently more diligent than the bank anticipated, asked for guidance on the often inexact questions and deficient resource guides. These queries were escalated first to Bank of America (problematic, particularly since the unit leaders were from Countrywide) and sometimes to Promontory. Since several reviewers often raised the same issue separately, they could and often did get conflicting answers to basic questions, such as “Does a modification cure default?” In other cases, questions led to further changes in the tests or test instructions.
And these problems were common, as one reviewer recounted:
The tests were delayed for a couple of reasons, one was some questions were ambiguous. For example: Was the borrower in a permanent modification at the time of foreclosure?
While that question does not sound ambiguous, the instructions for answering it made it that way. Example: the original instructions told us that a payment on the modification had to have been made to be a valid modification. However, different modification program documents said different things about what made a modification valid. Some said that a payment must be sent with the signed and notarized modification agreement. Others simply said that for the modification to be in effect, signed and notarized documents had to be returned. The instructions changed constantly. At one point it was that a payment needed to have been received on time, then it became a payment had to be received and accepted, then it became fully executed (borrower and bank) and a payment, then it was an offer had to have been made (that may not be chronological), so the instructions made the question ambiguous.
Other questions that became ‘ambiguous’ were questions regarding publication dates, that is, newspaper publications of sale information and then the postponement of a sale. Some states required repeated publication of sale information after a postponement, others required only the notice just before sale, and some required only a filing of notice of sale with the court. So in determining a foreclosure timeline the question could lead to all kinds of rabbit trails (as well it should) as to whether the timeline was proper. Again, instructions changed repeatedly, and more often than not the PC [proficiency coach] simply decided whether or not they wanted you on a particular rabbit trail and what was or was not a proper timeline.
Another issue that made a question ‘ambiguous’ was state and program specifics. A question in G (and I am paraphrasing) asked if the modification offered was one that the borrower was actually eligible for based on the specifics of his/her loan type. That question led to state-specific information [e.g.: Hardest Hit Programs], since Michigan had certain criteria but so did all the other states, so that question became very time consuming, since we needed to know what Fannie Mae/Freddie/FHA/VA/conventional/state issues had to be covered but we did not necessarily have all the resources [resource guides] to cover that.
Moreover, it appears that completed reviews were not reopened in light of problems exposed later. A whistleblower explains:
The question of rescore/correction was a big issue. This was especially true of questions on Test C regarding trial mods/permanent mods/forbearance. The instructions changed often, and many times we complained that our answers on previous cases were now wrong. I cannot remember ever getting a test back because the details on how to answer the question had now changed and were clarified, rendering our previous determinations on that question incorrect. We were simply told not to worry about it.
Consider what these sorry accounts show:
For many of the foreclosures included in the review, records were woefully incomplete, had critical information that was scattered and hard to interpret and integrate, and some of the information, such as attorney charges from LPS, was suspect.
Bank of America and Promontory routinely had trouble providing directions and coming up with consistent answers to questions about the loans, sometimes extremely basic ones like “does a modification cure default?”
The implications of these failings go well beyond the aborted foreclosure reviews. They demonstrate how the most important asset most families ever own, a home that they acquire using a mortgage, winds up in the hands of servicers whose records are all too often inaccurate and incomplete, and where the servicers, even years after the fact, are unsure of how they should have operated to satisfy regulatory and contractual requirements. That’s before we get to how the servicers game systems. We discuss that issue in the context of the foreclosure reviews in our next offering, Part IV, on how Bank of America sought to minimize evidence of damage to borrowers.
*Wilshire was acquired by IBM on March 1, 2010. However, Wilshire was not a party to the OCC consent orders but Bank of America was. So if a borrower was foreclosed on in 2009, which was while Wilshire was owned by Bank of America, and asked Bank of America for a review, Bank of America would have had to obtain the records from IBM. The reviewers in contact with the members of the “non-I-series” team tasked to these problem children were not aware of measures to obtain information from third parties.
** The reviewers in Tampa Bay were handling only complaints involving completed foreclosures, so even if this lapse was material, it would undermine the integrity of the review but not result in other harm to the borrower. We did not hear of any gaffes that affected payment records. If this sort of mistake was possible and took place, meaning reviewers were accidentally adding credits or charges to a post-sale account, it would be a trailing charge or credit to a GSE or private label trust.
*** The reviewers speculated that some of their work in training mode may have been used to refine the tests. Even so, any debugging could easily have been accomplished with the 120 to 150 Bank of America employees from the shuttered correspondent lending department in Tampa Bay who were also assigned to this effort.
**** We refer to all reviewers in this series as male irrespective of gender. In some interviews, another interviewer participated with YS; for simplicity, and at the request of the other interviewer, all interviewers are referred to as YS.