easyAi

About this package:

  • This package was originally named imageMarket; it was renamed easyAi when the natural-language module was added
  • It is a lightweight, beginner-friendly framework for training, recognizing, segmenting, and locating objects in images; its features are still being extended
  • It also classifies the intent category of Chinese input sentences; this functionality is still being extended
  • If you want a feature added, please join the group and suggest it; common scenarios will be added over time. Technical discussion QQ group: 561433236

Detailed video tutorial:

Test material download

Link: https://pan.baidu.com/s/1Vzwn3iMPBI-FAXBDrCSglg
Password: 7juj

Framework demo results:

  • Since this is a framework without a graphical interface, the demo results are just data printed to the console, so they can only be shown on video; to see them, watch the tutorial videos

Console output demos

  • The demos only show output; for details see the video tutorials linked above. The following are console output screenshots.
  • Recognition test on the training material: click to view
  • Chinese language classification results: click to view

Current features

  • Recognize a single object in a single image
  • Recognize and locate multiple objects in a single image
  • Classify the semantics of Chinese sentences, i.e. determine what the user means and wants to do
  • If you want a feature added, please join the group and suggest it; common scenarios will be added over time. Technical discussion QQ group: 561433236

Goals

  • Low hardware cost: learning and inference run quickly on a CPU. Aimed at Java developers: simple API calls are enough to implement object recognition and localization in images, Chinese language classification, and more
  • Provide AI solutions for small and medium-sized businesses
  • Technical discussion QQ group: 561433236

Characteristics

Low barrier to entry, simple configuration, quick to get started

Why this package was made

  • Lowering the barrier: with the rise of artificial intelligence, many applications need developers to add AI features, but algorithm talent is scarce in second- and third-tier cities, where most developers are Java business programmers. The author is a third-tier-city programmer himself and knows this well. So I believe there is a need for a package that is simple to deploy, requires no algorithm knowledge, works through the simplest possible API calls, targets Java programmers (the largest group of developers), and can cover most common AI business scenarios.
  • Target users: Java business developers in cities where algorithm talent is scarce, who have never worked with algorithms but want to build AI features
  • Simple deployment: every low-level function and the math library are hand-written in Java by the author, with no third-party dependencies. Download this package, build it into a JAR, add it to your POM, and all features work standalone.
  • Features are still being extended: the functionality of this package keeps growing
  • Error handling is not yet complete; if you hit an exception, join the group (561433236) and I will help locate the problem
  • If you have more complex AI needs that the open-source features cannot cover (including but not limited to image recognition and natural language), contact the author (WeChat: thenk008) for custom development based on easyAi (i.e. Java AI development)

HELLO WORLD notes:

  • The following is the minimal API documentation; every optional parameter uses the engine's default value
  • Note that with the minimal API and default parameter values, accuracy will be far from its best

Minimal API for image learning:

        Training
        Picture picture = new Picture();//image parsing class
        Config config = new Config();//configuration
        config.setTypeNub(2);//number of classes to train
        config.setBoxSize(125);//approximate object size in pixels, i.e. a 125*125 rectangle
        config.setPictureNumber(5);//number of training images per class; every class must have the same number of images
        config.setPth(0.7);//confidence threshold, between 0 and 1; a result is trusted only if it exceeds this value
        config.setShowLog(true);//print data while learning
        Distinguish distinguish = new Distinguish(config);//create the recognition class
        distinguish.setBackGround(picture.getThreeMatrix("E:\\ls\\fp15\\back.jpg"));//set the background image to recognize against (this API assumes a fixed background)
        List<FoodPicture> foodPictures = new ArrayList<>();//collection of training templates
        for (int i = 1; i < 3; i++) {
            FoodPicture foodPicture = new FoodPicture();//training template for one image class
            foodPictures.add(foodPicture);//add this class's template to the collection
            List<PicturePosition> picturePositionList = new ArrayList<>();//training collection for this class's template
            foodPicture.setId(i + 1);//set the class id of these images
            foodPicture.setPicturePositionList(picturePositionList);
            for (int j = 1; j < 6; j++) {//five training images per class; must match pictureNumber in the config
                String name;
                if (i == 1) {//file-name prefix of this class's images
                    name = "a";
                } else {
                    name = "b";
                }
                PicturePosition picturePosition = new PicturePosition();
                picturePosition.setUrl("E:\\ls\\fp15\\" + name + j + ".jpg");//path of the j-th image of this class
                picturePosition.setNeedCut(false);//whether cropping is needed; if the object fills the whole training image, pass false
                picturePositionList.add(picturePosition);//add it
            }
        }
        distinguish.studyImage(foodPictures);//learn
        System.out.println(JSON.toJSONString(distinguish.getModel()));//save the model by serializing the model entity to JSON
       ///////////////////////////////////////////////////////////////////////
       Initialization
        Picture picture = new Picture();//image parsing class
        Config config = new Config();//configuration
        config.setTypeNub(2);//number of classes
        config.setBoxSize(125);//object size in pixels
        config.setPictureNumber(5);//number of training images per class
        config.setPth(0.7);//confidence threshold; a result is trusted only if it exceeds this value
        config.setShowLog(true);//print data while learning
        Distinguish distinguish = new Distinguish(config);//recognition class
        distinguish.insertModel(JSONObject.parseObject(ModelData.DATA, Model.class));//deserialize the model saved during training back into the entity class and inject it
        After this, hold the Distinguish instance as a singleton; that completes initialization at system startup
        ///////////////////////////////////////////////////////////////////////
        Recognition
        Distinguish distinguish; //this is the singleton initialized at system startup; do NOT "new" it for each recognition
         for (int i = 1; i < 8; i++) {
            System.out.println("i====" + i);
            ThreeChannelMatrix t = picture.getThreeMatrix("E:\\ls\\fp15\\t" + i + ".jpg");//convert the image to recognize into a matrix
            Map<Integer, Double> map = distinguish.distinguish(t);//recognition result
            for (Map.Entry<Integer, Double> entry : map.entrySet()) {
                System.out.println(entry.getKey() + ":" + entry.getValue());//print the result
            }
        }
        ////////////////////////////////////////////////////////////////////////////////////
        Explanation of the printed results (for the test images provided with this package; in the training above, orange was given id 2 and apple id 3)
        i====1  //image 1: orange. "2:" is the class id; 0.8874306751020916 is that class's weight. The higher the weight, the more objects of that class appear in the image, or the larger their area.
        2:0.8874306751020916  image 1 contains an orange with weight 0.8874306751020916
        i====2
        2:0.8878192183606407
        i====3
        3:0.7233916245920673  image 3 contains an apple with weight 0.7233916245920673
        i====4
        2:0.9335699571468958  image 4 contains an orange with weight 0.9335699571468958
        3:0.7750825597199661  image 4 contains an apple with weight 0.7750825597199661
        i====5
        3:0.8481590575557582
        i====6
        2:0.7971025523095067
        i====7
        2:1.5584968376080388  image 7 contains an orange with weight 1.5584968376080388
        3:0.8754957897385587  image 7 contains an apple with weight 0.8754957897385587
        The sample code for this demo is at src/test/java/org/wlld/ImageTest.java
        The training images for this demo are at src/test/image
        Note: in the sample above, every training image is completely filled by the object (look at the orange and apple training images: the object fills the whole frame). If you reuse this sample code, also use training images that are filled by the object; the API changes for training images that are not.
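The training sample above only prints the model JSON to the console, and the initialization sample reads it back from a hard-coded ModelData.DATA constant. A minimal sketch of saving that JSON to a file and reloading it at startup is shown below; it assumes the JSON/JSONObject calls in the sample are Alibaba fastjson, and the model.json path is just a placeholder (the IO calls throw IOException, so handle or declare it in your own method):

        //after distinguish.studyImage(...): persist the model JSON to disk
        String json = JSON.toJSONString(distinguish.getModel());
        Files.write(Paths.get("model.json"), json.getBytes(StandardCharsets.UTF_8));
        //at startup: read the JSON back and inject it instead of a hard-coded constant
        String saved = new String(Files.readAllBytes(Paths.get("model.json")), StandardCharsets.UTF_8);
        distinguish.insertModel(JSONObject.parseObject(saved, Model.class));
        //java.nio.file.Files/Paths and java.nio.charset.StandardCharsets are part of the JDK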

Minimal API for natural-language classification:

        //create the template reader
        TemplateReader templateReader = new TemplateReader();
        //read the language template; the first parameter is the template path, the second is the text encoding
        //this is also the learning step
        templateReader.read("/Users/lidapeng/Desktop/myDocment/a1.txt", "UTF-8");
        //after learning, get the model parameters
        //WordModel wordModel = WordTemple.get().getModel();
        //to skip learning, inject previously saved model parameters instead
        //WordTemple.get().insertModel(wordModel);
        Talk talk = new Talk();
        //feed a sentence for recognition; punctuation splits it into separate list elements
        //each value in the returned list is the classification of the clause before each punctuation mark
        List<Integer> list = talk.talk("帮我配把锁");
        System.out.println(list);
        //special note: do not use "0" as a classification id. By convention, a returned 0 means the meaning could not be understood, i.e. classification failed
        //the usual cause is that the template is too small, or the user's sentence is outside the scope of your classification training
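As the comments above note, a returned 0 means the clause could not be understood. A small sketch of dispatching on the returned list follows; the meaning of each non-zero id depends entirely on how you labelled your own training template, so the printed messages here are only placeholders:

        List<Integer> types = talk.talk("帮我配把锁");//"help me get a lock fitted"
        for (int type : types) {
            if (type == 0) {
                //reserved value: this clause was not understood; add more template sentences
                //or check that the intent is covered by your training data
                System.out.println("unrecognized clause");
            } else {
                //dispatch to your own business logic by classification id
                System.out.println("clause classified as id " + type);
            }
        }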

Minimal API for the neural network

     //create a DNN neural network manager
     NerveManager nerveManager = new NerveManager(...);
     //constructor parameters
     //sensoryNerveNub: number of sensory neurons, i.e. the number of input features
     //hiddenNerverNub: number of neurons in each hidden layer
     //outNerveNub: number of output neurons, i.e. the number of classes
     //hiddenDepth: depth of the hidden layers, i.e. the learning depth
     //activeFunction: activation function
     //isDynamic: whether the neuron count is dynamic (keep it static unless you have a special need; dynamic requires expert knowledge)
     public NerveManager(int sensoryNerveNub, int hiddenNerverNub, int outNerveNub, int hiddenDepth, ActiveFunction activeFunction, boolean isDynamic)
     nerveManager.getSensoryNerves()//get the collection of sensory neurons
     //eventId: event ID
     //parameter: input feature value
     //isStudy: whether this is a learning pass
     //E: feature label
     //OutBack: callback class
     SensoryNerve.postMessage(long eventId, double parameter, boolean isStudy, Map<Integer, Double> E, OutBack outBack)
     //every output is returned to the callback class; fetch the result through the callback and match it to its event by eventId
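The sketch below only wires together the constructor and the postMessage signature documented above. The ActiveFunction implementation and the OutBack callback are passed in by the caller because their concrete classes are not shown in this section, and it assumes getSensoryNerves() returns a typed collection of SensoryNerve:

     //feed one sample (one feature value per sensory neuron) into the network;
     //label maps an output-neuron index to its target value and only matters when isStudy is true
     static void feedSample(ActiveFunction activeFunction, OutBack outBack, double[] features,
                            Map<Integer, Double> label, long eventId, boolean isStudy) throws Exception {
         NerveManager nerveManager = new NerveManager(
                 features.length, //sensoryNerveNub: one sensory neuron per input feature
                 6,               //hiddenNerverNub: neurons per hidden layer
                 2,               //outNerveNub: number of output classes
                 2,               //hiddenDepth: learning depth
                 activeFunction,  //activation function implementation
                 false);          //isDynamic: keep the neuron count static
         int i = 0;
         for (SensoryNerve sensoryNerve : nerveManager.getSensoryNerves()) {
             //each sensory neuron receives one feature value of the same event;
             //results come back through outBack and are matched to the event by eventId
             sensoryNerve.postMessage(eventId, features[i++], isStudy, label, outBack);
         }
     }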

Minimal API for the random forest

        //create an in-memory data table
        DataTable dataTable = new DataTable(column);
        //the constructor parameter is the set of column names
        public DataTable(Set<String> key)
        //choose the table's key column from the column-name set
        dataTable.setKey("point");
        //create a random forest
        RandomForest randomForest = new RandomForest(7);
        //the constructor parameter is the number of trees in the forest
        public RandomForest(int treeNub)
        //wake up the trees in the forest
        randomForest.init(dataTable);
        //insert the data entities into the forest one by one
        randomForest.insert(Object object);
        //let the forest learn
        randomForest.study();
        //feed a feature record for the forest to judge its final classification
        randomForest.forest(Object object);
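Putting the calls above together end to end; the extra column names ("height", "weight"), the Row entity, and the trainingRows/newRow variables are only placeholders for your own data:

        Set<String> columns = new HashSet<>(Arrays.asList("point", "height", "weight"));
        DataTable dataTable = new DataTable(columns);//in-memory table over these columns
        dataTable.setKey("point");//mark the key column
        RandomForest randomForest = new RandomForest(7);//a forest of 7 trees
        randomForest.init(dataTable);//bind the forest to the table definition
        for (Row row : trainingRows) {//entities whose field names match the columns
            randomForest.insert(row);
        }
        randomForest.study();//learn from the inserted records
        randomForest.forest(newRow);//classify one new record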

How to improve accuracy

  • Configure the template class not to ignore precision
 TempleConfig templeConfig = new TempleConfig(false, true);

Use the TempleConfig() constructor with arguments. The first parameter is still always false for now (the related feature is unfinished); the second is a boolean that controls whether precision is ignored.
Java floating-point arithmetic loses precision. If you pass true, the big-number math classes are used and there is no precision loss (the default is false), at the cost of running more than ten times slower.
The accuracy gap between ignoring precision and using it averages 5-6 percentage points, so decide according to your own needs whether to enable precise computation.

Pros: accuracy improves by 5-6 percentage points; combined with the DNN classifier it can exceed 99%

Cons: computation is more than ten times slower, going from hundreds of milliseconds to one or two seconds
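The precision loss mentioned above is ordinary Java floating-point behaviour rather than anything specific to easyAi; whether the package uses BigDecimal internally is not shown here, but the following standard-library lines illustrate the kind of error the big-number mode avoids:

 System.out.println(0.1 + 0.2);//prints 0.30000000000000004 - double arithmetic drifts
 System.out.println(new java.math.BigDecimal("0.1").add(new java.math.BigDecimal("0.2")));//prints 0.3 - exact, but far slower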

  • Choose a classifier to suit your business scenario
 TempleConfig templeConfig = new TempleConfig(false, true);
 templeConfig.setClassifier(Classifier.DNN);

Choose the classifier with setClassifier() before calling init() on the TempleConfig class.

public class Classifier {//classifiers
    public static final int LVQ = 1;//LVQ classification
    public static final int DNN = 2;//DNN classification
    public static final int VAvg = 3;//classification by feature-vector mean
}

easyAi currently provides three classifiers; pick one and set it in the configuration template class.

Characteristics of the three classifiers

  1. LVQ: if you have few training images, say only one or two hundred per class, this classifier gives the best recognition rate achievable under those conditions
  2. VAvg: if you have few training images and also very few classes, say only two or three object types, VAvg gives the best recognition rate
  3. DNN: if you have many training images, around 1,500 per class, use this classifier; accuracy reaches 98%+
  4. If no classifier is set, the framework defaults to VAvg

If you use DNN, the API differs slightly

 //create the image parsing class
 Picture picture = new Picture();
 //create a static singleton configuration template
 static TempleConfig templeConfig = new TempleConfig();
 //use the DNN classifier
 templeConfig.setClassifier(Classifier.DNN);
 //the third and fourth parameters are the width and height of the training images; keep all training images the same size for stable training
 templeConfig.init(StudyPattern.Accuracy_Pattern, true, 640, 640, 2);
 //pass the configuration template into the operation class through its constructor
 Operation operation = new Operation(templeConfig);
 //phase one: loop over the different training images
 for (int i = 1; i < 1900; i++) {
     //read images from local paths and convert them into matrices
     Matrix a = picture.getImageMatrixByLocal("/Users/lidapeng/Desktop/myDocment/picture/a" + i + ".jpg");
     Matrix c = picture.getImageMatrixByLocal("/Users/lidapeng/Desktop/myDocment/picture/c" + i + ".jpg");
     //feed each matrix in for learning: the first parameter is the image matrix, the second is the class label id, the third is always false in the first learning pass
     operation.learning(a, 1, false);
     operation.learning(c, 2, false);
 }
 for (int i = 1; i < 1900; i++) {
     Matrix a = picture.getImageMatrixByLocal("D:\\share\\picture/a" + i + ".jpg");
     Matrix c = picture.getImageMatrixByLocal("D:\\share\\picture/c" + i + ".jpg");
     operation.normalization(a, templeConfig.getConvolutionNerveManager());
     operation.normalization(c, templeConfig.getConvolutionNerveManager());
 }
 templeConfig.getNormalization().avg();
 for (int i = 1; i < 1900; i++) {
     //read images from local paths and convert them into matrices
     Matrix a = picture.getImageMatrixByLocal("D:\\share\\picture/a" + i + ".jpg");
     Matrix c = picture.getImageMatrixByLocal("D:\\share\\picture/c" + i + ".jpg");
     //feed the image matrices and labels in for the second learning pass (Accuracy_Pattern mode)
     //in the second pass the third parameter must be true
     operation.learning(a, 1, true);
     operation.learning(c, 2, true);
 }

As you can see, when DNN is used there is an extra block of code between the phase-one and phase-two learning passes:

 for (int i = 1; i < 1900; i++) {
     Matrix a = picture.getImageMatrixByLocal("D:\\share\\picture/a" + i + ".jpg");
     Matrix c = picture.getImageMatrixByLocal("D:\\share\\picture/c" + i + ".jpg");
     operation.normalization(a, templeConfig.getConvolutionNerveManager());
     operation.normalization(c, templeConfig.getConvolutionNerveManager());
 }
 templeConfig.getNormalization().avg();
  • Increase the depth of the DNN neural network
templeConfig.setDeep(int deep);

If it is not set, the default depth is 2. Greater depth means higher accuracy, but also that you need more training images.
Each additional layer of depth multiplies the required number of training images by at least 3 (e.g. 1,500 images per class at depth 2 becomes at least 4,500 at depth 3); otherwise accuracy will drop rather than rise.

Pros: with more depth, accuracy can get arbitrarily close to 100%; as long as the training set is large enough you can keep going deeper

Cons: more depth means a geometrically growing amount of training data, and a higher risk of overfitting

  • Tune the adjustable parameters
        //choose the classifier
        templeConfig.setClassifier(Classifier.DNN);
        //set the expectation matrix width
        templeConfig.setMatrixWidth(5);
        //choose the regularization mode
        templeConfig.setRzType(RZ.L2);
        //set the DNN depth
        templeConfig.setDeep(1);
        //set the learning rate
        templeConfig.setStudyPoint(0.05);
        //set the DNN hidden-layer width
        templeConfig.setHiddenNerveNub(6);
        //set the regularization coefficient
        templeConfig.setlParam(0.015);//0.015

What kinds of images seriously hurt accuracy

  • Training or recognition images with large occlusions, dim lighting, shadows, or blur seriously interfere with both recognition and training
  • Training images have fairly strict requirements: if an object has no background, cut it out cleanly; if it has a background, keep one consistent background and do not vary it - do not train a single object against many different backgrounds
  • If the training object needs a background, put a sheet of white paper behind it; pure white and solid colors interfere least, and pure white is best
  • The device taking the recognition photos should have a fill light so the object has no large dark areas - unless you also train on dark images, which requires far too much training data and is neither recommended nor necessary
  • If you use the localization and multi-object recognition feature, cut the training object's background out cleanly, or make sure the background is pure white, i.e. RGB 0xFFFFFF
  • If two object types look almost identical, with only tiny or subtle differences, they cannot be told apart - just as you would struggle to tell identical twins apart

Common errors

  • error: Wrong size setting of image in templateConfig
  • Cause: the image width and height set in the configuration template differ too much: templeConfig.init(StudyPattern.Accuracy_Pattern, true, width, height, 1);

Final notes

  • TempleConfig(): the configuration template class must be held statically in memory for the long term; do not "new" it for every detection - keep using the same instance.
  • Operation(): the operation class can be shared for learning, but should be "new"ed for every detection. Learning is single-threaded, so sharing is fine there, but detection is multi-threaded, and sharing one operation instance can cause thread-safety problems.

Speed mode vs. accuracy mode

  • Speed mode learns quickly but detects slowly: on a dual-core i3, detecting a single object in one 12-megapixel image takes about 800 ms, and learning 1,000 such images takes 1-2 hours.
  • Accuracy mode learns slowly but detects quickly: on a dual-core i3, detecting a single object in one 12-megapixel image takes about 100 ms, and learning 1,000 such images takes 5-7 hours.
Modifications this package makes to the AI algorithms for performance

  • Natural language: sentences are split by the built-in tokenizer, the tokens are numbered by their order of appearance into discrete features, and those features are classified by the random forest
  • The image AI algorithms have been modified for CPU deployment.
  • The convolution layers no longer use weights for the final output; instead the feature matrices are separated into clearly distinct layers.
  • The fully connected layer after the convolutional network is replaced by the LVQ algorithm, which learns and clusters quantized feature vectors; classification is decided by the Euclidean distance between the convolution output and the LVQ prototype vectors.
  • Object bounding boxes are obtained by multiple linear regression on the convolved feature vectors. Candidate regions are not produced by image segmentation (segmentation is extremely slow on a CPU); instead the Frame class lets the user define a prior box size and the step by which the box moves per detection, and repeated detections are merged by their IoU to decide whether they belong to the same object (see the IoU sketch after this list).
  • So in localization mode the user specifies the Frame size and step, replacing a segmentation-based region-proposal algorithm.
  • Speed mode convolves with fixed edge operators and then force-fits the result with a multi-layer BP network (it learns quickly precisely because it only learns the weights and thresholds of the fully connected layer and does not learn the convolution kernels).
  • Detection uses a single grayscale channel, i.e. RGB is reduced to a grayscale image (computing all three RGB channels is too heavy for a CPU).
  • If you still have questions about using this package, look at the HelloWorld test cases in the test package, or join the group and ask: 561433236
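For reference, the IoU mentioned in the Frame bullet above is the standard intersection-over-union ratio between two boxes. The sketch below is generic geometry, not easyAi's internal Frame implementation:

    //IoU of two axis-aligned boxes given as (x, y, width, height);
    //two detections of the same object from nearby prior-box positions give a high IoU
    static double iou(int x1, int y1, int w1, int h1, int x2, int y2, int w2, int h2) {
        int iw = Math.min(x1 + w1, x2 + w2) - Math.max(x1, x2);//overlap width
        int ih = Math.min(y1 + h1, y2 + h2) - Math.max(y1, y2);//overlap height
        if (iw <= 0 || ih <= 0) {
            return 0;//the boxes do not overlap at all
        }
        double inter = (double) iw * ih;
        double union = (double) w1 * h1 + (double) w2 * h2 - inter;
        return inter / union;
    }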
