InsNet documentation

InsNet (documentation) is a powerful neural network library for building instance-dependent computation graphs. It is designed to support padding-free dynamic batching, allowing users to focus on building the model for a single instance. This design has at least four advantages:

  1. It batches not only operators across a mini-batch but also operators within the same instance. For example, it can batch two parallel Transformers from the same instance.
  2. It makes it easy to build NLP models with instance-dependent computation graphs, such as tree-LSTMs and hierarchical Transformers, and execute them in batch.
  3. It relieves users of the intellectual burden of manual batching, since InsNet efficiently takes over all batching procedures. As such, users need not even know the concept of a tensor, only matrices and vectors (a vector being a one-column matrix), nor the concept of padding.
  4. It significantly reduces memory usage, since no padding is needed and lazy execution can release useless tensors immediately.

To summarize, we believe that padding-free dynamic batching is a feature NLP practitioners will embrace, yet it is surprisingly absent from today's deep learning libraries.

Besides, InsNet has the following features:

  1. It is written in C++14 and built as a static library.
  2. For GPU computation, almost all CUDA kernels are written by hand, enabling efficient parallel computation on matrices of unaligned shapes.
  3. Both lazy and eager execution are supported; the former enables automatic batching, while the latter facilitates debugging.
  4. At present, it provides more than twenty operators with both GPU and CPU implementations, sufficient for building modern NLP models for sentence classification, sequence tagging, and language generation. It also provides NLP modules such as attention, RNNs, and the Transformer, built from these operators.

Studies using InsNet are listed below, and we look forward to enriching this list:

InsNet uses the Apache 2.0 license, so you can use it in any project. If you use InsNet for research, however, please cite the paper below and declare it as an early version of InsNet, since the InsNet paper is not yet complete:

```bibtex
@article{wang2019n3ldg,
  title={N3LDG: A Lightweight Neural Network Library for Natural Language Processing},
  author={Wang, Qiansheng and Yu, Nan and Zhang, Meishan and Han, Zijia and Fu, Guohong},
  journal={Beijing Da Xue Xue Bao},
  volume={55},
  number={1},
  pages={113--119},
  year={2019},
  publisher={Acta Scientiarum Naturalium Universitatis Pekinenis}
}
```

Due to incorrect Git operations, the very early history of InsNet was erased, but you can still see it in another repo.

If you have any questions about InsNet, feel free to post an issue or send me an email: chncwang@gmail.com

See the documentation for more details.
