LRCT: Labeling the Relation between Callout and Target

This project contains code to extract training examples from raw data and to train a classifier stack on top of ELMo, ULMFiT, the Google Universal Sentence Encoder, and BERT.
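
For the non-BERT encoders, the approach boils down to encoding each callout and target with a frozen sentence encoder and training a classifier on the concatenated vectors. The snippet below is a minimal, self-contained sketch of that idea using the Universal Sentence Encoder from TF Hub and scikit-learn; the example pairs, labels, and feature layout are illustrative assumptions, not the repository's actual data format or pipeline.

```python
import numpy as np
import tensorflow_hub as hub
from sklearn.linear_model import LogisticRegression

# Load the Universal Sentence Encoder (TF2 SavedModel from TF Hub).
encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

# Hypothetical (callout, target, relation label) triples.
pairs = [
    ("I disagree with your claim.", "Vaccines cause autism.", "disagree"),
    ("That is a fair point.", "We should fund public transit.", "agree"),
]
callouts = [c for c, _, _ in pairs]
targets = [t for _, t, _ in pairs]
labels = [y for _, _, y in pairs]

# Encode both sides (512-d each) and concatenate into one feature vector.
features = np.concatenate(
    [encoder(callouts).numpy(), encoder(targets).numpy()], axis=1)

# A lightweight classifier on top of the frozen encoder.
clf = LogisticRegression(max_iter=1000)
clf.fit(features, labels)
print(clf.predict(features))
```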

The BERT model was forked from the original BERT repository; the fine-tuning process is defined in run_classifier.py.
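
To give an idea of how a callout/target pair would be wired into that script, the sketch below defines a task processor in the style of the DataProcessor classes in run_classifier.py; the processor name, TSV layout, and label set here are assumptions, while DataProcessor and InputExample are classes from the BERT code itself.

```python
# Hypothetical processor for callout/target relation data, following the
# DataProcessor pattern used by BERT's run_classifier.py. The TSV layout
# (callout \t target \t label) and the label set are assumptions.
import os

from run_classifier import DataProcessor, InputExample


class LrctProcessor(DataProcessor):
    """Reads callout/target pairs and feeds them to BERT as sentence pairs."""

    def get_train_examples(self, data_dir):
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_labels(self):
        return ["agree", "disagree", "other"]  # assumed relation labels

    def _create_examples(self, lines, set_type):
        examples = []
        for i, line in enumerate(lines):
            guid = "%s-%d" % (set_type, i)
            # text_a = callout, text_b = target; BERT encodes them as a pair.
            examples.append(InputExample(
                guid=guid, text_a=line[0], text_b=line[1], label=line[2]))
        return examples
```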

The other transfer learning frameworks are implemented in train_elmo_umifit.py.
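
For the ELMo side, a rough sketch of pulling frozen sentence vectors from TF Hub is shown below (assuming TensorFlow 1.x, which the BERT code here also targets); train_elmo_umifit.py presumably feeds representations like these, and their ULMFiT counterparts, into the classifier stack.

```python
# Illustrative only, not the repository's exact code.
import tensorflow as tf          # TensorFlow 1.x assumed
import tensorflow_hub as hub

elmo = hub.Module("https://tfhub.dev/google/elmo/2", trainable=False)

sentences = ["I disagree with your claim.", "That is a fair point."]
outputs = elmo(sentences, signature="default", as_dict=True)
# "default" is a mean-pooled 1024-d sentence vector; "elmo" gives
# per-token contextual embeddings.
sentence_vectors = outputs["default"]

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    vectors = sess.run(sentence_vectors)
    print(vectors.shape)  # (2, 1024)
```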

Related publications:

@InProceedings{wacholder2014,
  author    = {Wacholder, Nina and Muresan, Smaranda and Ghosh, Debanjan and Aakhus, Mark},
  title     = {Annotating Multiparty Discourse: Challenges for Agreement Metrics},
  booktitle = {Proceedings of the 8th Linguistic Annotation Workshop},
  month     = {August},
  year      = {2014},
}

@InProceedings{ghosh-EtAl:2014:W14-21,
  author    = {Ghosh, Debanjan and Muresan, Smaranda and Wacholder, Nina and Aakhus, Mark and Mitsui, Matthew},
  title     = {Analyzing Argumentative Discourse Units in Online Interactions},
  booktitle = {Proceedings of the First Workshop on Argumentation Mining},
  month     = {June},
  year      = {2014},
  address   = {Baltimore, Maryland},
  publisher = {Association for Computational Linguistics},
  pages     = {39--48},
  url       = {http://www.aclweb.org/anthology/W14-2106},
}
