PyTorch Implementation of "Monotonic Chunkwise Attention" (ICLR 2018) (Python; updated Apr 2, 2018)
Transliteration via sequence-to-sequence transduction with hard monotonic attention, based on our EMNLP 2018 paper
📝 Streamline text processing in Arabic and English with ChunkWise, a library offering 31 chunking strategies for NLP and RAG systems.
Pure PyTorch implementation of the loss described in "Online Segment to Segment Neural Transduction" https://arxiv.org/abs/1609.08194
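Several of the repositories above build on hard monotonic attention (Raffel et al., 2017), the mechanism that monotonic chunkwise attention extends. As a rough, hypothetical sketch of the core idea (not taken from any of the listed implementations), a greedy test-time decoding step scans the memory left to right from the previously attended position and stops at the first entry whose selection probability passes 0.5:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hard_monotonic_attend(energies, prev_index):
    """One greedy hard monotonic attention step (test-time decoding).

    Scans memory entries left to right starting at the previously
    attended index `prev_index`; returns the index of the first entry
    whose selection probability sigmoid(energy) exceeds 0.5, or None
    if attention runs off the end of the memory.
    """
    for j in range(prev_index, len(energies)):
        if sigmoid(energies[j]) > 0.5:
            return j
    return None

# Toy energies for one decoder step over a 5-entry memory.
energies = np.array([-2.0, -1.0, 0.5, 3.0, -1.0])
print(hard_monotonic_attend(energies, prev_index=1))  # -> 2
```

Because each step resumes from the previous index, alignments are monotonic by construction; chunkwise variants then attend softly over a small window ending at the selected position.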