This was my first Tianchi competition. The semi-final code is open-sourced: the final_commit folder contains the code for "duck filling" (copy-paste augmentation). The first version computed the similarity of patch blocks before pasting; in the second version we additionally enlarged the small-target classes (1-4) by a random factor of 3-5x.

Rank of semi-final: 34/100

Offline mAP: about 60%; online mAP: about 40%
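The duck-filling step above depends on computing the similarity of patch blocks so that pasted defects do not introduce structured noise. Below is a minimal sketch of such a similarity scan, assuming grayscale numpy images; the function names and the cosine-similarity choice are our illustration, not the exact code in final_commit:

```python
import numpy as np

def patch_similarity(patch_a, patch_b):
    """Cosine similarity between two equally sized grayscale patches."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def best_paste_position(defect_patch, background, stride=32):
    """Scan the background image and return the top-left corner whose
    local window is most similar to the defect patch, plus the score.
    Assumes the background is at least as large as the patch."""
    ph, pw = defect_patch.shape
    bh, bw = background.shape
    best_score, best_pos = -1.0, (0, 0)
    for y in range(0, bh - ph + 1, stride):
        for x in range(0, bw - pw + 1, stride):
            score = patch_similarity(defect_patch, background[y:y+ph, x:x+pw])
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

Pasting at the highest-similarity location keeps the inserted defect consistent with the surrounding fabric texture.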
**Feature engineering**

**Model selection**

**Training and tuning**

**Submission of results**
When we started, we first produced a baseline result. At the beginning I worked alone and went straight to faster-rcnn-r101: 26% mAP, a bit frustrating. After all, at that time I was still busy writing a (mediocre) paper. Later, a cascade-rcnn-r50 model was open-sourced in the forum, reaching 52% mAP in the preliminary round. Based on this baseline, I swapped the backbone to r101 and got 54% mAP; with that we went straight into the 60s.
- The design of the anchors is very important. mmdetection's defaults generally do not match the characteristics of this data, so this is a place to gain points.
- DCN in the FPN layers (a gripe: it uses too much video memory, because many offsets are computed).
- OHEM (online hard example mining).
- Soft-NMS does not improve much, about one point (more likely just 0.x% ha~).
- TTA: the old version of mmdet does not have multi-scale TTA testing; the new version does.
- Duck filling, making use of normal samples. This is similar to what I wrote about in my paper on small-sample augmentation, but it easily introduces structured noise. You need to compute the similarity of the patch blocks here, so just search Baidu (Google) a bit and use something ready-made.
- GN + WS (Group Normalization + Weight Standardization).
- RPN tuning.
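As a rough illustration of the anchor-design and soft-NMS points above, here is an mmdetection-v1-style config fragment; all concrete values are examples, not the exact settings we used:

```python
# Illustrative mmdetection-v1 style config fragments (values are
# examples, not the competition settings).

# Anchor design: widen the ratio set so elongated fabric defects
# (long thin scratches, broken yarns) are covered by some anchor.
rpn_head_update = dict(
    anchor_scales=[8],
    # mmdetection's default is [0.5, 1.0, 2.0]; fabric defects are
    # often extremely long and thin, so add more extreme ratios.
    anchor_ratios=[0.05, 0.1, 0.5, 1.0, 2.0, 10.0, 20.0],
    anchor_strides=[4, 8, 16, 32, 64],
)

# Soft-NMS at test time instead of hard NMS (worth roughly 0.x mAP).
test_cfg_rcnn_update = dict(
    score_thr=0.05,
    nms=dict(type='soft_nms', iou_thr=0.5, min_score=0.001),
    max_per_img=100,
)
```

These fragments would be merged into the `rpn_head` and `test_cfg.rcnn` sections of the config file.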
- I planned to cut the images into tiles: testing cascade-r50 on 512*512 crops seemed slightly effective for classes 1-4, though it may have been a visualization illusion.
- A classification model, which we started rather late; a 20-layer network only reached accuracy in the 40s.
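The tiling scheme mentioned above can be sketched as follows, assuming the image is at least one tile in each dimension; the tile size and overlap are illustrative:

```python
def tile_image(height, width, tile=512, overlap=64):
    """Return the top-left (y, x) corners that cover an image with
    fixed-size, partially overlapping tiles (a common crop-and-test
    scheme).  Assumes height >= tile and width >= tile."""
    step = tile - overlap
    ys = list(range(0, height - tile + 1, step))
    xs = list(range(0, width - tile + 1, step))
    # Make sure the last row/column of tiles reaches the image border.
    if ys[-1] + tile < height:
        ys.append(height - tile)
    if xs[-1] + tile < width:
        xs.append(width - tile)
    return [(y, x) for y in ys for x in xs]
```

Detections from each tile would then be offset back to full-image coordinates and merged with NMS.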
Install
Follow the mmdetection install.md.
Data format
My experiments use the COCO format; convert from VOC yourself if needed!
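For reference, a minimal sketch of the COCO-style annotation structure that mmdetection's CocoDataset expects; the file name, ids, image size, and 20-category split here are illustrative:

```python
import json

# Minimal COCO-style annotation skeleton.  All concrete values are
# placeholders, not real competition data.
coco_skeleton = {
    "images": [
        {"id": 1, "file_name": "example_fabric.jpg",
         "width": 2048, "height": 1024},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [100.0, 200.0, 50.0, 30.0],  # [x, y, w, h], not xyxy
            "area": 50.0 * 30.0,
            "iscrowd": 0,
        },
    ],
    "categories": [{"id": i, "name": str(i)} for i in range(1, 21)],
}

annotation_json = json.dumps(coco_skeleton)  # what gets written to disk
```

Each VOC xml box would become one entry in `annotations`, with the xmin/ymin/xmax/ymax converted to the COCO `[x, y, w, h]` form.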
COCO pretrained model transfer
The transfer code modifies num_class in the checkpoint to match your class count. Notice!!! The anchor scales and ratios changed, so I use a simple cat (concatenation) method to fit the model's parameter shapes and avoid parameter initialization problems!
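A minimal sketch of the "simple cat" idea, with numpy standing in for torch tensors (the real transfer code operates on the checkpoint's state dict; the function name and init scheme here are illustrative):

```python
import numpy as np

def adapt_cls_weight(coco_weight, num_new_classes, rng=None):
    """Resize a (num_old_classes, feat_dim) classifier weight matrix to
    num_new_classes rows.  Rows present in the COCO checkpoint are kept;
    extra rows are freshly initialized and concatenated on, so loading
    the checkpoint never hits a shape mismatch."""
    rng = rng or np.random.default_rng(0)
    num_old, feat_dim = coco_weight.shape
    if num_new_classes <= num_old:
        # Fewer classes than COCO: just truncate the pretrained rows.
        return coco_weight[:num_new_classes].copy()
    extra = rng.normal(0.0, 0.01, size=(num_new_classes - num_old, feat_dim))
    return np.concatenate([coco_weight, extra], axis=0)
```

The same treatment applies to the matching bias vectors and, when anchor scales/ratios change, to the RPN head's per-anchor output channels.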
python3 tools/train.py configs/cascade_rcnn_r101_fpn_1x_with_coco.py --gpus 1 --work_dir XXXXXX (your path to save the model and training log)
python3 cascade_rcnn_r101_fpn_1x_test_coco.py
Thanks to Tianchi for holding this competition: https://tianchi.aliyun.com/
Thanks to OpenBayes for providing computing power support: https://openbayes.com/
Thanks to team members Chen and Li for their valuable advice.
Email:iloveitre@gmail.com