FAIR: Filtering of Automatically Induced Rules
Divya Jyoti Bajpai, Ayush Maheshwari, Manjesh Kumar Hanawal, Ganesh Ramakrishnan
Main: Efficient Low-resource Methods in NLP (Oral Paper)
Session 3: Efficient Low-resource methods in NLP (Oral)
Conference Room: Carlson
Conference Time: March 18, 14:00-15:30 (CET) (Europe/Malta)
Abstract:
The availability of large annotated datasets can be a critical bottleneck in training machine learning algorithms successfully, especially when they are applied to diverse domains. Weak supervision offers a promising alternative by accelerating the creation of labeled training data using domain-specific rules. However, it requires users to write a diverse set of high-quality rules to assign labels to the unlabeled data (e.g., Snorkel~\cite{bach2019snorkel}). Automatic Rule Induction (ARI) approaches such as Snuba~\cite{snuba} circumvent this problem by automatically creating rules from features on a small labeled set and then filtering a final set of rules from them. In the ARI approach, the crucial step is to filter a high-quality, useful subset of rules from the large set of automatically created rules. In this paper, we propose FAIR (Filtering of Automatically Induced Rules), an algorithm that selects a subset from a large number of automatically induced rules using submodular objective functions that account for the collective precision, coverage, and conflicts of the rule set. We experiment with three ARI approaches and five text classification datasets to validate the superior performance of our algorithm with respect to several semi-supervised label aggregation approaches. We show that our approach achieves statistically significant improvements over existing rule-filtering approaches. The anonymized source code is available at \url{https://anonymous.4open.science/r/FAIR-LF-Induction-9B60}.
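The abstract describes FAIR only at a high level. As a rough illustration of how greedy selection under a submodular-style objective can filter rules, below is a minimal Python sketch that scores candidate rule subsets on the small labeled set by coverage and pooled precision, with a penalty for conflicting labels. The function name, the weights alpha and beta, and the exact objective are hypothetical assumptions for illustration, not the paper's actual formulation.

import numpy as np

def greedy_rule_filter(rule_labels, gold_labels, k, alpha=1.0, beta=1.0):
    """Greedily select up to k rules trading off precision, coverage, and conflicts.

    rule_labels: (n_rules, n_points) int array; -1 means the rule abstains.
    gold_labels: (n_points,) gold labels for the small labeled set.
    Illustrative sketch only -- not the exact FAIR objective from the paper.
    """
    n_rules, n_points = rule_labels.shape
    selected = []

    def objective(subset):
        if not subset:
            return 0.0
        fired = rule_labels[subset] != -1                # where each chosen rule fires
        covered = fired.any(axis=0)                      # points labeled by >= 1 rule
        correct = (rule_labels[subset] == gold_labels) & fired
        precision = correct.sum() / max(fired.sum(), 1)  # pooled precision of the subset
        conflicts = 0                                    # points given > 1 distinct label
        for j in np.where(covered)[0]:
            votes = rule_labels[subset, j]
            votes = votes[votes != -1]
            conflicts += int(len(set(votes.tolist())) > 1)
        return covered.mean() + alpha * precision - beta * conflicts / n_points

    for _ in range(k):
        gains = [(objective(selected + [r]) - objective(selected), r)
                 for r in range(n_rules) if r not in selected]
        if not gains:
            break
        best_gain, best_rule = max(gains)
        if best_gain <= 0:                               # stop once no rule adds value
            break
        selected.append(best_rule)
    return selected

For monotone submodular objectives, greedy selection of this kind enjoys the classic (1 - 1/e) approximation guarantee; note that the conflict penalty in this sketch can break strict submodularity, so it should be read as illustrative rather than as the paper's method.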