In this case, alleging that New York City’s affordable housing program has a disparate impact on racial minorities, the plaintiffs challenged the City’s implementation of court-ordered technology-assisted review (TAR). The court found no flaws in the City’s overall TAR training process. Even so, it ordered the City to provide random samples of nonprivileged, nonresponsive documents to validate the results.
The plaintiffs, Janell Winfield, Tracey Stewart and Shauna Noel, sued New York City for policies that they claimed “perpetuate racial segregation in the City” in violation of the Fair Housing Act. The plaintiffs “sought wide-ranging discovery, which the City has resisted vigorously.” The court eventually ordered the City to use TAR software to speed up the discovery process.
In this motion, the plaintiffs argued that the City had been too liberal in designating documents as privileged. They also claimed that the City applied an “impermissibly narrow” responsiveness standard. The plaintiffs contended that these deficiencies were so severe that the City must have “improperly trained” its TAR software.
As to privilege, the court ordered the City to produce a privilege log for a sample of 80 documents it had marked as privileged. In responding, the City determined that only 20 of those 80 documents were actually privileged.
The court noted that “traditionally, courts have not micromanaged parties’ internal review processes.” This is in part because “attorneys, as officers of the court, are expected to comply with” the federal rules in providing complete discovery. Nor do courts require perfection in e-discovery. Instead, “a producing party must take reasonable steps to identify and produce relevant documents.” Therefore, the court focused on narrowing the plaintiffs’ requests to “information that is most relevant to this litigation” and “proportional to the needs of the case.”
The court reviewed the City’s “predictive coding process and training” in camera. It concluded that “the City appropriately trained and utilized” its review software. The court found nothing “inherently defective” in the City’s TAR process. It therefore attributed any deficiencies to “human error in categorizing a small subset of documents.” The court noted again that its guiding principles must be “reasonableness and proportionality, not perfection and scorched-earth.”
After debating “the degree of transparency required by the producing party as to its predictive coding process,” the court determined that a validation set would be appropriate. Allowing the plaintiffs to review a sample set of documents that the City characterized as nonprivileged and nonresponsive would “increase transparency,” which would be “not unreasonable” in light of the volume of documents and the examples that the plaintiffs had identified.
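The kind of random-sample validation the court ordered is simple in principle: draw a random subset of the documents coded as nonresponsive and nonprivileged, then let the requesting party review it for misses. The sketch below is only illustrative; the document IDs, sample size, and helper name are hypothetical, and real TAR platforms provide their own sampling and validation tooling.

```python
import random

def draw_validation_sample(doc_ids, sample_size, seed=42):
    """Draw a simple random sample of documents for validation review.

    Hypothetical helper: doc_ids is the population of documents the
    producing party coded as nonresponsive/nonprivileged. A fixed seed
    makes the sample reproducible, which helps if the sampling method
    itself is later questioned.
    """
    if sample_size > len(doc_ids):
        raise ValueError("sample size exceeds population")
    rng = random.Random(seed)
    return rng.sample(doc_ids, sample_size)

# Example: draw an 80-document sample (the same size as the privilege-log
# sample in this case) from a hypothetical population of 10,000 documents.
population = [f"DOC-{i:05d}" for i in range(10_000)]
validation_set = draw_validation_sample(population, 80)
print(len(validation_set))  # 80
```

A sample drawn this way only supports the “increased transparency” the court describes if it is genuinely random over the whole excluded set, which is why the sketch samples from the full population rather than any curated subset.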
Takeaways on using TAR for e-discovery
Don’t insist on nitpicky perfection in discovery. If you believe there is a persistent pattern of incomplete production, frame your argument in terms of reasonableness and proportionality rather than the technical completeness of a data set.