This was my first time taking part in a Tianchi competition. The semi-final code is open-sourced: the final_commit folder contains the "duck filling" (patch-pasting) augmentation code. The first version computes patch similarity before pasting; in the second version we randomly enlarged the small-object classes (1-4) by a factor of 3-5.

1. Semi-final rank: 34/100
2. Offline mAP: about 60%; online mAP: about 40%
**Model Selection**

**Training and Tuning**

**Submission Results**
When we started, we first produced a baseline result. At the beginning I worked alone and went straight to faster-rcnn-r101: 26% mAP, which was a bit frustrating. After all, at that time I was still busy writing a (mediocre) paper. Later a cascade-rcnn-r50 model was open-sourced on the forum, reaching 52% mAP in the preliminary round. Starting from that baseline, I swapped the backbone to r101 and got 54% mAP, which put us straight into the top 60 or so.
- The anchor design is very important. The default anchor settings in mmdetection are generally a poor match for this dataset's characteristics, so tuning them is an easy way to gain a few points.
- DCN on the FPN layers (dropped: it uses too much GPU memory because of the many offset fields).
- OHEM (online hard example mining).
- Soft-NMS does not improve things much, at most about one point (more likely only a few tenths of a percent).
- TTA: the old version of mmdet does not have multi-scale TTA testing; the new version does.
- Duck filling (patch pasting) and use of normal (defect-free) samples. This is similar to the small-sample augmentation I wrote about in my paper, but it easily introduces structured noise, so you need to compute the similarity of the patch blocks before pasting; just search Baidu/Google a little and use something ready-made.
- GN + WS (Group Normalization + Weight Standardization).
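On the anchor point above: the exact field names depend on the mmdetection version, but in the 1.x-era configs the anchors were controlled by `anchor_scales`/`anchor_ratios`/`anchor_strides`. A minimal sketch of the kind of tweak meant here (the specific values are illustrative, not the ones used in the competition):

```python
# Sketch of a 1.x-era mmdetection anchor tweak (field names from that API
# generation; values here are illustrative). A smaller scale and extra
# aspect ratios help cover the small, elongated defect classes.
anchor_cfg = dict(
    anchor_scales=[4, 8],                       # default is often just [8]
    anchor_ratios=[0.25, 0.5, 1.0, 2.0, 4.0],   # cover elongated defects
    anchor_strides=[4, 8, 16, 32, 64],          # one stride per FPN level
)

# Anchors generated per FPN location = scales x ratios
num_anchors = len(anchor_cfg['anchor_scales']) * len(anchor_cfg['anchor_ratios'])
print(num_anchors)  # 10
```

More anchors per location costs memory and speed, so the usual trade-off is to add ratios only for the classes whose boxes the defaults miss.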
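For the duck-filling similarity check, the write-up does not say which metric was used; a cheap stand-in is comparing grayscale histograms of the defect patch's original surroundings and the candidate paste region, and only pasting when they match. A minimal sketch (function name and threshold choice are my own):

```python
import numpy as np

def patch_similarity(patch_a, patch_b, bins=32):
    """Cosine similarity of grayscale histograms of two patches, in [0, 1].

    A cheap stand-in for the similarity check before pasting a defect
    patch: only paste where the target background resembles the patch's
    original surroundings, to avoid introducing structured noise.
    """
    ha, _ = np.histogram(patch_a, bins=bins, range=(0, 256), density=True)
    hb, _ = np.histogram(patch_b, bins=bins, range=(0, 256), density=True)
    denom = np.linalg.norm(ha) * np.linalg.norm(hb)
    return float(ha @ hb / denom) if denom > 0 else 0.0

# Identical patches score 1.0; in practice you would paste only above
# some threshold, e.g. 0.9 (threshold is an assumption, not from the repo).
p = np.random.default_rng(0).integers(0, 256, (64, 64))
print(round(patch_similarity(p, p), 3))  # 1.0
```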
- I planned to crop the images: testing cascade-r50 on 512*512 crops seems to help a bit for classes 1-4, though it may be a visual illusion.
- The classification model was done relatively late; a 20-layer network reached only 40-something accuracy.
Follow the mmdetection install.md.
In my experiments, the COCO/VOC dataset class definitions have to be changed by yourself!
**COCO pretrained model transfer**
The transfer code edits the checkpoint so that num_class matches your own class count. Notice!!! If the anchor scales and ratios are changed, I use a simple concatenation ("cat") trick to fit the model's parameter shapes and avoid parameter-initialization problems.
python3 tools/train.py configs/cascade_rcnn_r101_fpn_1x_with_coco.py --gpus 1 --work_dir XXXXXX (your path to save the model and training log)
Thanks to Tianchi for holding this competition: https://tianchi.aliyun.com/

Thanks to OpenBayes for providing computing-power support: https://openbayes.com/

Thanks to my team members Chen and Li for their valuable advice.