elasticsearch-api

A wrapper around RestHighLevelClient for working with Elasticsearch. Forked from sol / elasticsearch-api. License: Apache-2.0.

I. Elasticsearch

1. Introduction to Elasticsearch


1.1. Overview of Elasticsearch

The Elastic Stack includes Elasticsearch, Kibana, and Logstash (it is also known as the ELK Stack). It can securely and reliably ingest data from any source in any format, then search, analyze, and visualize that data in real time. Elasticsearch, abbreviated ES, is an open-source, highly scalable, distributed full-text search engine and the core of the Elastic Stack. It stores and retrieves data in near real time, scales well to hundreds of servers, and can handle petabyte-scale data (1 PB = 1024 TB).

Stop words (words an analyzer discards because they carry little value for search):
English: a, an, the, of, etc.
Chinese: 的, 了, 着, 是, punctuation marks, etc.
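Once ES is running (see section 2), stop-word filtering is easy to see with the _analyze API. A minimal sketch using the built-in english analyzer, which applies an English stop-word filter and stemming: the stop word "The" is dropped and "foxes" is reduced to "fox", leaving the tokens quick, brown, fox.

GET /_analyze
{
  "analyzer": "english",
  "text": "The quick brown foxes"
}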


1.2. Full-Text Search Engines

Site search such as Google or Baidu builds an index from the keywords in web pages; when we type a keyword, every page whose index matches it is returned. Searching application logs in our own projects is a similar task. Relational databases do not support this kind of unstructured text well. Full-text search in a traditional database tends to be half-hearted, partly because few people store large text fields in a database at all; a full-text query has to scan the whole table, and with a large data volume even careful SQL tuning gains little. You can build an index on the text, but it is painful to maintain, since every insert and update rebuilds it.

Based on the above, conventional search performs very poorly in production environments where:

  1. The data being searched is a large volume of unstructured text.
  2. The number of records reaches hundreds of thousands, millions, or more.
  3. A large number of interactive, text-based queries must be supported.
  4. Very flexible full-text queries are required.
  5. There are special requirements for highly relevant results that no available relational database can satisfy.
  6. The need for different record types, non-text data operations, or secure transaction processing is relatively small.

1.3. Elasticsearch Use Cases

  1. GitHub: in early 2013 it abandoned Solr and adopted Elasticsearch for petabyte-scale search. "GitHub uses Elasticsearch to search 20 TB of data, covering 1.3 billion files and 130 billion lines of code."
  2. Wikipedia: runs its core search architecture on Elasticsearch.
  3. SoundCloud: "SoundCloud uses Elasticsearch to deliver instant, precise music search to 180 million users."
  4. Baidu: uses Elasticsearch widely for text data analysis, collecting metrics and user-defined data from all of its servers and analyzing them across multiple dimensions to help locate instance-level or business-level anomalies. It covers more than 20 internal business lines (including cloud analytics, ad network, prediction, Wenku, Zhida, Wallet, and risk control), with up to 100 machines and 200 ES nodes in a single cluster, importing 30 TB+ of data per day.
  5. Sina: uses Elasticsearch to analyze and process 3.2 billion real-time log entries.
  6. Alibaba: uses Elasticsearch to build its log collection and analysis system.
  7. Stack Overflow: the all-English Q&A site where programmers discuss and resolve bugs.

2. Installing Elasticsearch

2.1. Download

Official Elasticsearch site: https://www.elastic.co/cn/

ELK download mirror for China: https://elasticsearch.cn/download/

Versions used: CentOS 7 + Java 8 + Elasticsearch 7.8.0

2.2. Install Elasticsearch

2.2.1. Create an ES user
# Note: ES must not be started as root; create a regular user first
#1. Create a new group in the Linux system
groupadd es

#2. Create a new user es and put it in the es group
useradd es -g es 
#3. Set the es user's password
passwd es
#4. Give the es user ownership of the install directory
chown -R es:es /home/elk/
#5. Switch to the es user
# Upload elasticsearch-7.8.0-linux-x86_64.tar.gz to /home/elk
# Extract it inside /home/elk
tar -zxvf elasticsearch-7.8.0-linux-x86_64.tar.gz
2.2.2. Layout of the extracted directory
# Go into the bin directory and run ./elasticsearch to start ES
- bin          executable binaries
- config       configuration files
- lib          runtime dependency libraries
- logs         runtime log files
- modules      runtime modules
- plugins      official and third-party plugins are installed here

# Run the following command to test client access
# Note: port 9300 is the transport port used for communication between cluster
# nodes; port 9200 is the HTTP RESTful port used by browsers and clients.
- curl http://localhost:9200

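If the node is up, curl returns a small JSON document similar to the following (a sketch with abridged fields; name, cluster_uuid, and the build details will differ on your machine):

{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "<22-character id>",
  "version" : {
    "number" : "7.8.0"
  },
  "tagline" : "You Know, for Search"
}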

2.2.3. Enable remote access
#1. On your own server, simply turn off the firewall as root || on a cloud server, open ports 9200 and 9300 in the security group
#   (disable only keeps it from starting at boot; run `systemctl stop firewalld.service` to stop it immediately)
- systemctl disable firewalld.service
- systemctl status firewalld.service
2. Enable remote access in ES
- vim elasticsearch.yml and change the network setting to:
  network.host: 0.0.0.0
3. Go into the bin directory and run ./elasticsearch to start
4. Open http://<server-ip>:9200/ ; if you see the cluster info JSON shown above, the installation succeeded.


2.2.4. Exception 1: max file descriptors too low


Solution:

# Append the following at the end of the file
# limits.conf controls per-user system resource limits
- vim /etc/security/limits.conf	
  *   soft    nofile          65536
  *   hard    nofile          65536
  *   soft    nproc           4096
  *   hard    nproc           4096
# Log in again, then verify the new limits with the four commands below
- ulimit -Hn
- ulimit -Sn
- ulimit -Hu
- ulimit -Su


2.2.5. Exception 2: max number of threads for the es user too low


Solution:

vim /etc/security/limits.d/20-nproc.conf 
Change the nproc entry so that the user starting ES is allowed 4096 processes: <startup user>   nproc   4096


2.2.6. Exception 3: vm.max_map_count is too low


Solution:

# As root, edit the system configuration and add the line below at the end of the file
# vm.max_map_count limits how many VMAs (virtual memory areas) a process may own
- vim /etc/sysctl.conf 
 vm.max_map_count=655360
# Reload and check that the setting took effect
sysctl -p


2.2.7. Exception 4: the default discovery settings are unsuitable for production use


Solution:

vim config/elasticsearch.yml
 
# Add the following settings ("node-1" must match this node's node.name)
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["node-1"]


3. Elasticsearch Concepts

An Index in ES can be thought of as a database, a Type as a table, and a Document as a row in that table. The concept of Type has been gradually phased out: in Elasticsearch 6.x an index may contain only one type, and in Elasticsearch 7.x the Type concept has been removed.

3.1. Inverted Index

Elasticsearch uses a structure called an inverted index, which is well suited to fast full-text search.

Further reading: https://blog.csdn.net/qq_36551991/article/details/108889571

A comic-style explanation: https://www.cnblogs.com/shwang/p/12074470.html

Forward index

ES uses a data structure called an inverted index. Before getting to it, let's look at its counterpart, the forward index. With a forward index, the search engine assigns each document to be searched a document ID, and at search time the keyword is matched against the content stored under each ID.

The InnoDB storage engine, for example, implements its indexes with a B+Tree:

  1. Non-leaf nodes store only key values.
  2. All leaf nodes are linked to each other by pointers.
  3. The data records are stored in the leaf nodes.


Inverted index

An inverted index works the other way around: every term produced by analyzing the documents is mapped to the list of document IDs that contain it, so a keyword lookup jumps straight to the matching documents instead of scanning every document, as sketched below.

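A minimal sketch (hypothetical index name articles): index two documents, then search for a term that only one of them contains.

PUT /articles/_doc/1
{ "content": "quick brown fox" }

PUT /articles/_doc/2
{ "content": "quick red squirrel" }

# After analysis the inverted index looks roughly like:
#   quick    -> [1, 2]
#   brown    -> [1]
#   fox      -> [1]
#   red      -> [2]
#   squirrel -> [2]
# so the match query below only has to read the posting list for "brown"
# and returns document 1.
GET /articles/_search
{
  "query": { "match": { "content": "brown" } }
}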

Trade-off between the two: a forward index is simple to build, but every query has to scan the documents one by one; an inverted index costs more to build and maintain, but a query only touches the posting lists of its own terms.

3.2. Field Types

- Official reference: https://www.elastic.co/guide/en/elasticsearch/reference/7.x/mapping-types.html
1. Core data types
(1) Strings: text, keyword
(2) Numbers: long, integer, short, byte, double, float, half_float, scaled_float
(3) Dates: date
(4) Dates with nanosecond precision: date_nanos
(5) Boolean: boolean
(6) Binary: binary
(7) Ranges: integer_range, float_range, long_range, double_range, date_range

2. Complex data types
(1) Object: object (for single JSON objects)
(2) Nested: nested (for arrays of JSON objects)

3. Geo data types
(1) Geo-point: geo_point (for lat/lon points)
(2) Geo-shape: geo_shape (for complex shapes like polygons)

4. Specialized data types
(1) IP: ip (IPv4 and IPv6 addresses)
(2) Completion: completion (to provide auto-complete suggestions)
(3) Token count: token_count (to count the number of tokens in a string)
(4) mapper-murmur3: murmur3 (to compute hashes of values at index time and store them in the index)
(5) mapper-annotated-text: annotated_text (to index text containing special markup, typically used for identifying named entities)
(6) Percolator: accepts queries from the query DSL
(7) Join: defines a parent/child relation for documents within the same index
(8) Alias: defines an alias to an existing field
(9) Rank feature: records a numeric feature to boost hits at query time
(10) Rank features: records numeric features to boost hits at query time
(11) Dense vector: records dense vectors of float values
(12) Sparse vector: records sparse vectors of float values
(13) Search-as-you-type: a text-like field optimized for queries that implement as-you-type completion
5. Arrays
        In Elasticsearch there is no dedicated array data type; by default any field may hold one or more values, as long as every value matches the field's declared data type (the sketch after this list shows an array field).
6. Multi-fields
        Multi-fields are typically used to index the same field in different ways or for different purposes. For example, a string field can be mapped as text for full-text search and as keyword for sorting or aggregations; likewise, the same text field can be indexed with the standard analyzer, the english analyzer, and the french analyzer. A mapping sketch follows this list.
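A minimal mapping sketch for the two points above (hypothetical index name products; the ik sub-field assumes the IK plugin from section 3.6 is installed):

PUT /products
{
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "fields": {
          "raw": { "type": "keyword" },
          "ik":  { "type": "text", "analyzer": "ik_max_word" }
        }
      },
      "tags": { "type": "keyword" }
    }
  }
}

# Full-text search uses "name", sorting/aggregations use "name.raw", and
# Chinese analysis uses "name.ik". "tags" is declared once, yet a document
# may supply either a single value or an array of values:
PUT /products/_doc/1
{ "name": "Elasticsearch in Action", "tags": ["search", "elasticsearch"] }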

3.3. Index, Type, Mapping, Document

Elasticsearch is a document-oriented database: a piece of data is a document. To make this easier to grasp, we can compare how Elasticsearch stores documents with how a relational database such as MySQL stores data.

Elasticsearch            MySQL
Index                    Database
Type (removed in 7.x)    Table
Document                 Row
Field                    Column

3.3.1. Index

An index is a collection of documents that share somewhat similar characteristics. For example, you might have one index for customer data, another for a product catalog, and another for order data. An index is identified by a name (which must be all lowercase), and that name is used whenever you index, search, update, or delete documents in it. An index is roughly equivalent to a Database in a relational database. You can define as many indexes in a cluster as you like.

3.3.2. Type

Within an index you can define one or more types. A type is a logical category/partition of your index whose semantics are entirely up to you; usually a type is defined for documents that share a common set of fields. For example, suppose you run a blogging platform and store all of your data in a single index: you could define one type for user data, another for blog posts, and yet another for comments. A type is roughly equivalent to a Table in a relational database.

3.3.3. Mapping

Mapping is an important concept in ES. It is similar to a table schema in a traditional relational database and defines the structure of the data in an index (and, in older versions, a type). In ES you can create the type (the counterpart of a table) and its mapping (the counterpart of a schema) manually, or rely on the default behaviour: with the default configuration, ES creates the type and its mapping automatically from the documents you insert. A mapping mainly covers field names, field data types, and how each field is indexed.
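A quick way to see this dynamic mapping at work (a sketch; hypothetical index name demo): index a document into an index that does not exist yet, then look at the mapping ES generated for it. ES infers age as long, created as date, and name as text with a keyword sub-field; for production indexes an explicit mapping (see 3.4.4) is still preferred.

PUT /demo/_doc/1
{ "name": "tom", "age": 18, "created": "2021-04-10" }

GET /demo/_mapping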

3.3.4. Document

**A document is the basic unit of information that can be indexed, similar to a row in a table.** For example, you can have a document for a single employee or a document for a single product. Documents are expressed in JSON (JavaScript Object Notation), a lightweight data-interchange format.

3.4. Index Operations

3.4.1 List indexes
GET /_cat/indices?v 
3.4.2 Create an index
PUT /dangdang/   
3.4.3 Delete an index
DELETE /indexName
3.4.4 Create an index with an explicit mapping

PUT /test_index
{
  "mappings": {
    "properties": {
      "id":         { "type": "long" },
      "title":      { "type": "text" },
      "name":       { "type": "text" },
      "age":        { "type": "integer" },
      "createTime": { "type": "date" }
    }
  }
}

3.5. Document Operations

3.5.1. Create a document
PUT /ems/_doc/1   # /<index>/_doc/<id>
{
  "name":"赵小六",
  "age":23,
  "bir":"2012-12-12",
  "content":"这是一个好一点的员工"
}
3.5.2. Delete a document
DELETE /ems/_doc/1
3.5.3. Update a document
 POST /sys_menu/_update/1339550467939639303
 {
      "doc": { "menu_name": "Jane Doe" }
}	
3.5.4. Get a document by ID
GET /sys_menu/_doc/1339550467939639303

3.6. Additional Analyzers

The most commonly used are the Chinese analyzer (IK Analyzer) and the pinyin analyzer (Pinyin Analyzer).


Extract the downloaded analyzer archive into the elasticsearch/plugins directory and restart Elasticsearch.
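After the restart you can confirm the plugins were picked up (a sketch; run it from Kibana Dev Tools or via curl). The component column of the output lists the installed plugin names, e.g. analysis-ik:

GET /_cat/plugins?v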


3.6.1. IK Analyzer

The IK plugin provides two analyzers for tokenizing documents: ik_max_word and ik_smart.

GET /_analyze
{
  "analyzer": "ik_smart",
  "text": "中华人民共和国"
}

ik_smart produces the coarsest-grained split; here the text is kept as a single token, 中华人民共和国.

GET /_analyze
{
  "analyzer": "ik_max_word",
  "text": "中华人民共和国"
}

ik_max_word produces the finest-grained split, emitting overlapping tokens such as 中华人民共和国, 中华人民, 中华, 华人, 人民共和国, 人民, 共和国, 共和, 国.

3.6.2. IK Analyzer Extension Words and Stop Words

IK supports custom **extension dictionaries** and **stop word dictionaries**. An extension dictionary holds words that the built-in dictionary does not recognize as terms but that you still want ES to index and search on. A **stop word dictionary** holds words that are terms but that, for business reasons, you do not want to be searchable; put such words in the stop word dictionary.

To define extension and stop word dictionaries, edit the file IKAnalyzer.cfg.xml in the IK plugin's config directory.

The dictionaries must be encoded in UTF-8, otherwise they will not take effect.

1. Edit the file: vim IKAnalyzer.cfg.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
    <properties>
        <comment>IK Analyzer extension configuration</comment>
        <!-- Configure your own extension dictionaries here -->
        <entry key="ext_dict">ext_dict.dic;extra_single_word.dic</entry>
        <!-- Configure your own extension stop word dictionaries here -->
        <entry key="ext_stopwords">ext_stopword.dic</entry>
    </properties>

2. In the IK plugin's config directory, create the file ext_dict.dic (the encoding must be UTF-8 or it will not take effect)
	vim ext_dict.dic and add the extension words, one per line

3. In the IK plugin's config directory, create the file ext_stopword.dic 
	vim ext_stopword.dic and add the stop words, one per line

4. Restart ES for the changes to take effect
5. In production you can instead point IK at a file on a remote server; this supports hot updates (when the file on the server changes, the IK dictionary is reloaded) 

6. Edit the file: vim /myselfFile/elasticsearch/plugins/ik/config/IKAnalyzer.cfg.xml 
Here nginx serves the remote dictionary file (setting up nginx itself is not covered here):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- Configure your own extension dictionaries here -->
    <entry key="ext_dict"></entry>
    <!-- Configure your own extension stop word dictionaries here -->
    <entry key="ext_stopwords"></entry>
    <!-- Configure a remote extension dictionary here -->
    <entry key="remote_ext_dict">http://ip:port/ik/ik.txt</entry>
    <!-- Configure a remote extension stop word dictionary here -->
    <entry key="remote_ext_stopwords"></entry>
</properties>

7. Create the IK dictionary file served by nginx
cd /yourLocalPath/nginx/html
mkdir ik
cd ik
vim ik.txt
3.6.3. pinyin Analyzer
GET /_analyze
{
  "analyzer": "pinyin",
  "text": "刘德华"
}

With the default plugin settings the result typically contains the single-character pinyin tokens (liu, de, hua) plus the first-letter abbreviation (ldh).

II. Kibana

1. Introduction to Kibana

Kibana is an open-source data analysis and visualization platform. It is part of the Elastic Stack and is designed to work with Elasticsearch. With Kibana you can search, view, and interact with the data stored in Elasticsearch indexes, and analyze and present it with charts, tables, and maps.

Kibana makes big data easy to understand. Its simple, browser-based interface lets you quickly build and share dynamic dashboards that track changes in Elasticsearch data in real time.

Setting up Kibana is very simple: you can install it and start exploring your Elasticsearch indexes within minutes, with no code and no additional infrastructure.

2. Install Kibana

# 1. Extract the tarball
tar -zxvf  kibana-7.8.0-linux-x86_64.tar.gz
# 2. Edit the Kibana configuration file
vim ./kibana/config/kibana.yml
# 3. Change the following settings
- server.host: "192.168.202.200"                          # address Kibana itself listens on
- elasticsearch.hosts: ["http://192.168.202.200:9200"]    # ES server address
- i18n.locale: "zh-CN"                                    # switch the UI language to Chinese
# 4. Start Kibana; once it is up, exit the shell and it keeps running in the background
./kibana &  
# Open http://ip:5601

III. Logstash

1. Introduction to Logstash

Logstash is a powerful data processing tool. It can transport data, parse and transform formats, and emit formatted output, and it has a rich plugin ecosystem; it is commonly used for log processing.


Advantages:

  1. Scalability: Beats should be load-balanced across a group of Logstash nodes, and at least two Logstash nodes are recommended for high availability. Deploying a single Beats input per Logstash node is common, but multiple Beats inputs can also be deployed per node to expose separate endpoints for different data sources.
  2. Resilience: the Logstash persistent queue protects against failures across nodes. For disk-level resilience, disk redundancy matters: on premises it is recommended to configure RAID; in cloud or containerized environments, use persistent disks with a replication strategy that reflects your data SLA.
  3. Filtering: general transformations can be applied to event fields; you can rename, remove, replace, and modify fields in events.
  4. An extensible plugin ecosystem with more than 200 plugins, plus the flexibility to create and contribute your own.

Disadvantages:

Logstash is resource-hungry and uses a lot of CPU and memory while running. It also has no message-queue buffer in front of it, so there is a risk of data loss.

2. Install Logstash

Download: https://artifacts.elastic.co/downloads/logstash/logstash-7.8.0.tar.gz

tar -zxvf logstash-7.8.0.tar.gz

3. Logstash Sync Scenario

3.1. MySQL => Elasticsearch

  1. Upload the MySQL JDBC driver

  2. Write the Elasticsearch mapping template (note: the old string type no longer exists in ES 7, so id is mapped as keyword below):

    {
    	"template": "api_news_tpl",
    	"settings": {
    		"index.refresh_interval": "1s"
    	},
    	"index_patterns": [
    		"api_news"
    	],
    	"mappings": {
    		"properties": {
    			"id": {
    				"type": "keyword"
    			},
    			"title": {
    				"type": "text",
    				"analyzer": "standard",
    				"fields": {
    					"max_word": {
    						"type": "text",
    						"analyzer": "ik_max_word"
    					},
    					"suggest": {
    						"type": "completion",
    						"analyzer": "ik_max_word"
    					},
    					"standard": {
    						"type": "completion",
    						"analyzer": "standard"
    					},
    					"pinyin": {
    						"type": "completion",
    						"analyzer": "pinyin"
    					}
    				}
    			},
    			"category": {
    				"type": "text",
    				"analyzer": "standard",
    				"fields": {
    					"max_word": {
    						"type": "text",
    						"analyzer": "ik_max_word"
    					},
    					"suggest": {
    						"type": "completion",
    						"analyzer": "ik_max_word"
    					},
    					"standard": {
    						"type": "completion",
    						"analyzer": "standard"
    					},
    					"pinyin": {
    						"type": "completion",
    						"analyzer": "pinyin"
    					}
    				}
    			},
    			"author_name": {
    				"type": "text",
    				"analyzer": "standard",
    				"fields": {
    					"max_word": {
    						"type": "text",
    						"analyzer": "ik_max_word"
    					},
    					"suggest": {
    						"type": "completion",
    						"analyzer": "ik_max_word"
    					},
    					"standard": {
    						"type": "completion",
    						"analyzer": "standard"
    					},
    					"pinyin": {
    						"type": "completion",
    						"analyzer": "pinyin"
    					}
    				}
    			},
    			"news_url": {
    				"type": "keyword"
    			},
    			"thumbnail_pic": {
    				"type": "keyword"
    			},
    			"is_content": {
    				"type": "keyword"
    			},
    			"date": {
    				"type": "date"
    			},
    			"create_time": {
    				"type": "date"
    			},
    			"update_time": {
    				"type": "date"
    			},
    			"is_delete": {
    				"type": "integer"
    			}
    		}
    	}
    }

    3. Write the sync pipeline configuration file:

    input {
      jdbc {
        # MySQL JDBC connection settings
        jdbc_connection_string => "jdbc:mysql://localhost:3306/elasticsearch-api?autoReconnect=true&useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=CONVERT_TO_NULL&useSSL=false&serverTimezone=Asia/Shanghai&nullCatalogMeansCurrent=true"
        jdbc_user => "root"
        jdbc_password => "root"
        # Path to the MySQL JDBC driver jar; it can be downloaded from: https://dev.mysql.com/downloads/connector/j/
        jdbc_driver_library => "D:\DevZip\logstash-7.9.2\config\mysql-connector-java-8.0.21.jar"
        # the name of the driver class for mysql
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        # Whether to enable paging
        jdbc_paging_enabled => true
        jdbc_page_size => "50000"
        jdbc_default_timezone =>"Asia/Shanghai"
    
        # The SQL statement can be written inline, as below; :sql_last_value marks the sync boundary
        statement => "SELECT id,title,date,category,author_name,news_url,thumbnail_pic,is_content,create_time, update_time,is_delete  FROM  api_news where update_time >= :sql_last_value"
        # The SQL can also be kept in a separate file, e.g.:
        #statement_filepath => "./config/jdbc.sql"
    
        # Cron-like schedule for the sync, here once per minute (minute hour day-of-month month day-of-week)
        schedule => "* * * * *"
    
        # Whether to record the last run; if true, the last value of the tracking_column is saved to the file given by last_run_metadata_path
        record_last_run => true
    
        # Whether to track a specific column's value; if record_last_run is true you can name the column to track here, otherwise the timestamp of the last run is tracked
        use_column_value => true
    
        # Required when use_column_value is true: the database column to track; it must be monotonically increasing (often the MySQL primary key; here the update_time timestamp is used)
        tracking_column => "update_time"
        
        tracking_column_type => "timestamp"
        # Defaults to starting from 1970-01-01 08:00:00
        last_run_metadata_path => "./api_news_last_id"
    
        # Whether to clear the record in last_run_metadata_path; if true, every run starts from scratch over all database records
        clean_run => false
        # Whether to lowercase column names
        lowercase_column_names => false
    	plugin_timezone => "local"
      }
    }
    
    
    filter {
    	ruby { 
    		code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*60*60)" 
    	}
    	ruby {
    		code => "event.set('@timestamp',event.get('timestamp'))"
    	}
        ruby {
            code => "event.set('date', event.get('date').time.localtime + 8*60*60)" 
        }
        ruby {
            code => "event.set('update_time', event.get('update_time').time.localtime + 8*60*60)" 
        }
        ruby {
            code => "event.set('create_time', event.get('create_time').time.localtime + 8*60*60)" 
        } 	
    	mutate {
    		remove_field => ["timestamp"]
    	}
    }
    
    output {
      elasticsearch {
        hosts => "localhost:9200"
        index => "api_news"
        # Field linking database rows to ES documents; here the database id column is used as the ES document _id
        document_id => "%{id}"
        template => "D:\DevZip\logstash-7.9.2\template\api_news_mappings.json"
        template_name => "api_news_tpl"
        template_overwrite => true
      }
    
      # Debug output; comment this out when running in production
      stdout {
          codec => json_lines
      } 
    }
    

    4. Start it: bin/logstash -f "config_path", then verify the sync as sketched below.
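After the first scheduled run you can check the sync from Kibana Dev Tools (a sketch; the template and index names come from the configuration above). The first request shows the template Logstash installed, and _count should report the number of rows pulled from the api_news table so far:

GET /_template/api_news_tpl

GET /api_news/_count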

