Technology Assisted Review

On October 1, 2018, a new rule of the Commercial Division Rules (specifically, a new subdivision to existing Rule 11-e) will go into effect.

Rule 11-e governs Responses and Objections to Document Requests.  The new subdivision, promulgated by administrative Order of Chief Administrative Judge Lawrence K. Marks, governs the use of technology-assisted review (“TAR”) in the discovery process. 

The new subdivision (f) states:

The parties are encouraged to use the most efficient means to review documents, including electronically stored information (“ESI”), that is consistent with the parties’ disclosure obligations under Article 31 of the CPLR and proportional to the needs of the case.  Such means may include technology-assisted review, including predictive coding, in appropriate cases…

In addition to implicitly recognizing the cost attendant to e-discovery, the rule promotes cooperation by encouraging parties in commercial cases “to confer, at the outset of discovery and as needed throughout the discovery period, about [TAR] mechanisms they intend to use in document review and production.”  And so, the new Commercial Division Rule appears to bring New York State Commercial Division expectations closer in line with those set forth in the Federal Rules, specifically Rule 26(f), which encourages litigants (with an eye toward proportionality) to discuss preservation and production of ESI.

Questions about technology assisted review?  Please contact kcole@farrellfritz.com.

Traditional document review can be one of the most variable and expensive aspects of the discovery process.  The good news is that there are numerous analytic tools available to empower attorneys to work smarter, thereby reducing discovery costs and allowing attorneys to focus sooner on the data most relevant to the litigation.  And, while various vendors offer “proprietary” tools with catchy names, those tools all seek to achieve the same result:  smarter, more cost-effective review in a way that is defensible and strategic.

Today’s blog post discusses one of those various tools – predictive coding.  The next few blog posts will focus on other tools such as email threading, clustering, conceptual analytics, and keyword expansion.

Predictive Coding

Predictive coding is a machine-learning process in which software takes keyword searches and coding logic entered by human reviewers to find responsive documents, and applies that logic to much larger datasets to reduce the number of irrelevant and non-responsive documents that must be reviewed manually.  While each predictive algorithm will vary in its actual methodology, the process at a very simplistic level involves the following steps:

  1.  Data most likely relevant to the litigation is collected, and traditional filtering and de-duplication are applied.  Then, human reviewers identify a representative cross-section of documents, known as a “seed set,” from the remaining (de-duplicated) population of documents that need to be reviewed.  The number of documents in that seed set will vary, but it should be sufficiently representative of the overall document population.
  2.  Attorneys most familiar with the substantive aspects of the litigation code each document in the seed set as responsive or non-responsive, as appropriate. Mind you, much of the predictive coding software available allows users to perform classification for multiple issues simultaneously (e.g., responsiveness and confidentiality).  These coding results are then input into the predictive coding software.
  3.  The predictive coding software analyzes the seed set and creates an internal algorithm to predict the responsiveness of other documents in the broader population.  It is critically important after this step that the review team who coded the seed set spend time sampling the results of the algorithm on additional documents and refine the algorithm by continually coding and inputting sample documents until desired results are achieved.  This “active learning” is important to achieve optimal results.  Simply stated, active learning is an iterative process whereby the seed set is repeatedly augmented by additional documents chosen by the algorithm and manually coded by a human reviewer. (This differs from “passive learning,” which is an iterative process that uses totally random document samples to train the machine until optimal results are achieved).

Once the team is comfortable with the results being returned, the software applies the refined algorithm to the entire review set and codes all remaining documents as responsive or unresponsive.
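The iterative workflow described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not any vendor's actual algorithm: the word-count scoring, the uncertainty-based document selection, and all of the document text are hypothetical stand-ins for the far more sophisticated models that real review platforms use.

```python
from collections import Counter

def train(coded_docs):
    """Learn per-word weights from human-coded (text, label) pairs."""
    responsive, non_responsive = Counter(), Counter()
    for text, label in coded_docs:
        target = responsive if label == "responsive" else non_responsive
        target.update(text.lower().split())
    return responsive, non_responsive

def score(model, text):
    """Positive score => the model predicts the document is responsive."""
    responsive, non_responsive = model
    return sum(responsive[w] - non_responsive[w] for w in text.lower().split())

def active_learning_round(model, unreviewed, human_code, batch_size=2):
    """Route the documents the model is least certain about to a human
    reviewer, so the new decisions can be folded back into the seed set."""
    ranked = sorted(unreviewed, key=lambda t: abs(score(model, t)))  # most uncertain first
    newly_coded = [(t, human_code(t)) for t in ranked[:batch_size]]
    return newly_coded, ranked[batch_size:]

# Steps 1-2: a tiny, hypothetical seed set coded by attorneys.
seed_set = [
    ("draft merger agreement with acquirer", "responsive"),
    ("fantasy football league standings", "non-responsive"),
]
model = train(seed_set)

# Step 3: one active-learning iteration, then retrain on the augmented seed set.
unreviewed = ["merger closing checklist", "football tailgate plans", "lunch order"]
new_codes, remaining = active_learning_round(
    model, unreviewed,
    human_code=lambda t: "responsive" if "merger" in t else "non-responsive",
)
model = train(seed_set + new_codes)

# Final pass: the refined model codes everything left in the review set.
predictions = {t: ("responsive" if score(model, t) > 0 else "non-responsive")
               for t in remaining}
```

In a real deployment the loop in step 3 would repeat until quality-control sampling shows the model's predictions are acceptable; only then is the model applied to the full population.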

In February 2012, federal Magistrate Judge Andrew J. Peck (S.D.N.Y.) issued a seminal decision in Da Silva Moore v. Publicis Groupe & MSL Group, 11 Civ. 1279 (Feb. 24, 2012).  In that ruling, Judge Peck sent a message that predictive coding and computer-assisted review are appropriate tools that should be “seriously considered for use” in large data-volume cases, and that attorneys “no longer have to worry about being the ‘first’ or ‘guinea pig’ for judicial acceptance of computer-assisted review.”  Judge Peck went on to encourage parties to cooperate with one another and to consider disclosing the initial “seed” sets of documents.  In doing so, he recognized that sharing of seed sets is often frowned upon by counselors, who argue that these sets often contain information wholly unrelated to the action, much of which may be confidential or sensitive.  Specifically, Judge Peck stated: “This Court highly recommends that counsel in future cases be willing to at least discuss, if not agree to, such transparency [with seed sets] in the computer-assisted review process.”

Since Da Silva, many cases have successfully employed various forms of TAR to limit the scope of documents actually reviewed by attorneys.  The upside of utilizing TAR is well recognized: it makes document review a more manageable and affordable task.  Moreover, courts routinely embrace TAR for document review.  See, e.g., Rio Tinto PLC v. Vale S.A., No. 14 Civ. 3042 (RMB)(AJP) (S.D.N.Y. Mar. 3, 2015) (“the case law has developed to the point that it is now black letter law that where the producing party wants to utilize TAR for document review, courts will permit it”).

In Rio Tinto, Judge Peck revisited his Da Silva decision.  And, while most of Rio Tinto discusses the merits of transparency and cooperation in the development of seed sets, Judge Peck notes there is no definitive answer on the extent of transparency and cooperation required.  Citing his opinion in Da Silva and other cases, Judge Peck makes clear that he “generally believe[s] in cooperation” in connection with seed set development.  Nevertheless, Judge Peck notes there is no absolute requirement of transparent cooperation.  Rather, “requesting parties can insure that training and review was done appropriately by other means, such as statistical estimation of recall at the conclusion of the review as well as by whether there are gaps in the production, and quality control review of samples from the documents categorized as non-responsive.” (emphasis added)
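One of the alternative validation routes Judge Peck mentions, statistical estimation of recall, is straightforward to illustrate. In this simplified sketch (the sample data is hypothetical), recall is the fraction of the truly responsive documents in a randomly drawn, human-coded validation sample that the TAR process actually flagged as responsive:

```python
def estimated_recall(validation_sample):
    """validation_sample: (predicted_responsive, truly_responsive) boolean
    pairs, drawn at random from the population and coded by a human reviewer."""
    truly_responsive = [predicted for predicted, actual in validation_sample if actual]
    if not truly_responsive:
        return None  # the sample happened to contain no responsive documents
    return sum(truly_responsive) / len(truly_responsive)

# Hypothetical sample: the process flagged 3 of the 4 documents
# that human reviewers found to be responsive.
sample = [(True, True), (True, True), (False, True),
          (True, True), (True, False), (False, False)]
```

Here `estimated_recall(sample)` is 0.75. A sufficiently high estimated recall gives the requesting party statistical assurance that the production is reasonably complete, without any need to see the producing party's seed set.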

The decision goes on to emphasize that courts and litigants should not hold predictive coding to a so-called “higher standard” than keyword searches or linear review. Such a standard could very well dissuade counsel and clients from using predictive coding, which would be a step backward for discovery practice overall.