Webinar: How to Evaluate NLP Tools for Entity Extraction

April 30, 2020 • Online, Worldwide

[Apple | Organization] and [Oranges | Fruit]

You have a set of documents and you want to extract information from them, but how do you decide which NLP tool or library to use? Comparing NLP tools is not a straightforward exercise: each tool may extract a different set of entity types, classify the same entity slightly differently, or produce output in a different format, all of which prevents direct comparison.
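
To make the format problem concrete, here is a minimal sketch (all tool names, output shapes, and type labels are hypothetical) of normalizing two tools' differing outputs into one shared representation so their entities can be compared at all:

```python
# Hypothetical outputs from two NLP tools for the same text: different
# type taxonomies and different output shapes. All names are illustrative.
tool_a_output = [
    {"text": "Basis Technology", "label": "ORGANIZATION", "start": 0, "end": 16},
]
tool_b_output = [("Basis Technology", "ORG", (0, 16))]

# Map each tool's labels onto one shared tag set.
LABEL_MAP = {"ORGANIZATION": "ORG", "ORG": "ORG", "PERSON": "PER", "PER": "PER"}

def normalize_a(entities):
    """Convert tool A's dict format to (start, end, shared_label) tuples."""
    return {(e["start"], e["end"], LABEL_MAP[e["label"]]) for e in entities}

def normalize_b(entities):
    """Convert tool B's tuple format to the same representation."""
    return {(span[0], span[1], LABEL_MAP[label]) for _, label, span in entities}

print(normalize_a(tool_a_output) == normalize_b(tool_b_output))  # True
```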

Basis Technology often evaluates disparate NLP tools. We want to show you how we overcome these challenges to produce meaningful scores that make tools directly comparable.
This webinar will also introduce best practices for annotating a test data set and selecting a gold standard, as well as common ways to measure the accuracy of both the annotations and the extracted entities.
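
As a rough illustration of that kind of scoring, below is a minimal sketch of entity-level precision, recall, and F1 against a gold standard, using exact span-and-label matching; this is one common convention, not necessarily the exact methodology Basis Technology uses:

```python
def entity_prf(gold, predicted):
    """Entity-level scores: a predicted entity counts as correct only if
    its span and label exactly match a gold-standard annotation."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)  # true positives: exact matches
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Entities as (start, end, label) tuples; illustrative data only.
gold = {(0, 16, "ORG"), (20, 25, "PER")}
pred = {(0, 16, "ORG"), (30, 35, "LOC")}
print(entity_prf(gold, pred))  # (0.5, 0.5, 0.5)
```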

A Q&A session will follow the 40-minute presentation.

April 30, 2020 at 11:30 am EDT
8:30 am PDT
11:30 am EDT
4:30 pm London
6:30 pm Tel Aviv


Speakers

Gil Irizarry

VP Engineering, Text Analytics

Basis Technology

Gil leads the engineering team responsible for text analytics, including existing products and new technology initiatives. He has nearly 30 years of experience developing software and leading engineering teams, including work at Curl (now part of Sumitomo Corporation), GTECH (now part of IGT PLC), and Constant Contact. Gil holds a BS in computer science from Cornell University, an MA in liberal arts from Harvard University, and a certificate in management from MIT’s Sloan School of Management.