The heads hypothesis: A unifying statistical approach towards understanding multi-headed attention in BERT
Mitesh M. Khapra, Pratyush Kumar, Preksha Nema, Madhura Pande
arXiv: Computation and Language, 2021. Cited by 7.

Untangle: Critiquing Disentangled Recommendations
Filip Radlinski, Preksha Nema, Alexandros Karatzoglou
2021.

Diversity driven Attention Model for Query-based Abstractive Summarization
Preksha Nema, Mitesh M. Khapra, Anirban Laha, Balaraman Ravindran
Annual Meeting of the Association for Computational Linguistics, vol. 1, pp. 1063-1072, 2017. Cited by 158.

A Mixed Hierarchical Attention based Encoder-Decoder Approach for Standard Table Summarization
Parag Jain, Anirban Laha, Karthik Sankaranarayanan, Preksha Nema
North American Chapter of the Association for Computational Linguistics, vol. 2, pp. 622-627, 2018. Cited by 24.

Towards a Better Metric for Evaluating Question Generation Systems
Preksha Nema, Mitesh M. Khapra
Empirical Methods in Natural Language Processing, pp. 3950-3959, 2018. Cited by 80.

Analyzing user perspectives on mobile app privacy at scale
Preksha Nema, Pauline Anthonysamy, Nina Taft, Sai Teja Peddinti
SMPTE Journal, pp. 112-124, 2022. Cited by 1.

On the importance of local information in transformer based models
Madhura Pande, Aakriti Budhraja, Preksha Nema, Pratyush Kumar
arXiv preprint arXiv:2008.05828, 2020. Cited by 1.

Towards transparent and explainable attention models
Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra
arXiv preprint arXiv:2004.14243, 2020. Cited by 88.

Let's ask again: Refine network for automatic question generation
Preksha Nema, Akash Kumar Mohankumar, Mitesh M. Khapra, Balaji Vasan Srinivasan
arXiv preprint arXiv:1909.05355, 2019. Cited by 58.

Generating descriptions from structured data using a bifocal attention mechanism and gated orthogonalization
Preksha Nema, Shreyas Shetty, Parag Jain, Anirban Laha
arXiv preprint arXiv:1804.07789, 2018. Cited by 37.

ElimiNet: A model for eliminating options for reading comprehension with multiple choice questions
Soham Parikh, Ananya B. Sai, Preksha Nema, Mitesh M. Khapra
arXiv preprint arXiv:1904.02651, 2019. Cited by 28.

Towards interpreting BERT for reading comprehension based QA
Sahana Ramnath, Preksha Nema, Deep Sahni, Mitesh M. Khapra
arXiv preprint arXiv:2010.08983, 2020. Cited by 21.

A Framework for Rationale Extraction for Deep QA models
Sahana Ramnath, Preksha Nema, Deep Sahni, Mitesh M. Khapra
arXiv preprint arXiv:2110.04620, 2021.

Frustratingly Poor Performance of Reading Comprehension Models on Non-adversarial Examples
Soham Parikh, Ananya B. Sai, Preksha Nema, Mitesh M. Khapra
arXiv preprint arXiv:1904.02665, 2019.

ReTAG: Reasoning Aware Table to Analytic Text Generation
Deepanway Ghosal, Preksha Nema, Aravindan Raghuveer
The 2023 Conference on Empirical Methods in Natural Language Processing, 2023.

T-STAR: Truthful style transfer using AMR graph as intermediate representation
Anubhav Jangra, Preksha Nema, Aravindan Raghuveer
arXiv preprint arXiv:2212.01667, 2022. Cited by 4.

Disentangling Preference Representations for Recommendation Critiquing with β-VAE
Preksha Nema, Alexandros Karatzoglou, Filip Radlinski
30th ACM International Conference on Information and Knowledge Management (CIKM), 2021. Cited by 31.

On the weak link between importance and prunability of attention heads
Aakriti Budhraja, Madhura Pande, Preksha Nema, Pratyush Kumar
Empirical Methods in Natural Language Processing (EMNLP), 2020. Cited by 9.

STOAT: Structured Data to Analytical Text With Controls
Deepanway Ghosal, Preksha Nema, Aravindan Raghuveer
CoRR, 2023.