InsNet (documentation) is a neural network library for building instance-dependent computation graphs. It is designed to support padding-free dynamic batching, allowing users to focus on building the model for a single instance. This design brings several advantages.
To summarize, we believe padding-free dynamic batching is a feature that NLP practitioners will readily adopt, yet it is surprisingly unsupported by today's deep learning libraries.
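The idea behind padding-free dynamic batching can be sketched as follows. This is an illustrative Python toy, not InsNet's actual C++ API: it shows how operations from per-instance graphs of different lengths can be merged and executed as batched groups without padding.

```python
# Illustrative sketch of padding-free dynamic batching -- NOT InsNet's
# actual API. The idea: build a computation graph per instance, then
# group identical operations across all instances and run each group
# as one batched kernel call, so short sequences never need padding
# up to the longest one.
from collections import defaultdict

def build_graph(sentence):
    # One hypothetical "lookup" op per token of this single instance.
    return [("lookup", token) for token in sentence]

def dynamic_batch(graphs):
    # Merge ops of the same type across instance graphs of any length.
    groups = defaultdict(list)
    for graph in graphs:
        for op, arg in graph:
            groups[op].append(arg)
    return groups

sentences = [["the", "cat"], ["a", "dog", "runs", "fast"]]
batches = dynamic_batch([build_graph(s) for s in sentences])
# All six lookups can run as a single batched op; the 2-token sentence
# is never padded to length 4.
assert len(batches["lookup"]) == 6
```

In a real system the grouping key would also include tensor shapes and the executor would dispatch each group to one fused GPU kernel, but the user-facing benefit is the same: you write the model for one instance and batching happens automatically.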
In addition, InsNet offers a number of other features.
We maintain a list of studies using InsNet and look forward to enriching it.
InsNet is released under the Apache 2.0 license, allowing you to use it in any project. If you use InsNet for research, please cite the paper below and note that it describes an early version of InsNet, since the InsNet paper itself is not yet complete:
```
@article{wang2019n3ldg,
  title={N3LDG: A Lightweight Neural Network Library for Natural Language Processing},
  author={Wang, Qiansheng and Yu, Nan and Zhang, Meishan and Han, Zijia and Fu, Guohong},
  journal={Beijing Da Xue Xue Bao},
  volume={55},
  number={1},
  pages={113--119},
  year={2019},
  publisher={Acta Scientiarum Naturalium Universitatis Pekinenis}
}
```
Due to incorrect Git operations, the very early history of InsNet was erased, but it can still be found in another repository.
If you have any questions about InsNet, feel free to open an issue or send me an email: chncwang@gmail.com
See the documentation for more details.