<?xml version="1.0" encoding="utf-8"?>
<search>
<entry>
<title>Nginx日志按天自动切割</title>
<link href="/2021/11/19/4e544b17c3ac.html"/>
<url>/2021/11/19/4e544b17c3ac.html</url>
<content type="html"><![CDATA[<blockquote><p>使用Nginx做反向代理,代理了3台服务器,由于所有的访问都经过Nginx,发现access日志增长的很快,时间长了,日志文件难免会变得很大,打开查看很不方便,而且会影响服务器性能,因此有必要对nginx日志进行每日切割保存,减小文件大小,同时也更方便查看日志。</p></blockquote><h1 id="实现思路"><a href="#实现思路" class="headerlink" title="实现思路"></a>实现思路</h1><p>shell脚本+定时任务+nginx信号控制,完成日志定时切割。</p><h1 id="具体步骤"><a href="#具体步骤" class="headerlink" title="具体步骤"></a>具体步骤</h1><p>在需要保存日志或者是其他目录,新建一个shell脚本。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">vim /usr/local/openresty/nginx/logs/log_rotate.sh</span><br></pre></td></tr></table></figure><p>编辑内容如下:</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">#!/bin/bash</span><br><span class="line">#安装目录下日志文件</span><br><span class="line">base_log_path=&#x27;/usr/local/openresty/nginx/logs/access.log&#x27;</span><br><span class="line">base_error_path=&#x27;/usr/local/openresty/nginx/logs/error.log&#x27;</span><br><span class="line"></span><br><span class="line">#需要保存的目录位置</span><br><span class="line">log_path=&#x27;/data_lytdev_dir/nginx/logs/&#x27;</span><br><span class="line"></span><br><span class="line">#获取月份</span><br><span class="line">log_month=$(date -d yesterday +&quot;%Y%m&quot;)</span><br><span class="line"></span><br><span class="line">#获取前一天日期 (第二天凌晨备份,即保存的日志就是当天时间的日志)</span><br><span class="line">log_day=$(date -d yesterday +&quot;%d&quot;)</span><br><span class="line"></span><br><span class="line">#在指定位置创建文件夹</span><br><span class="line">mkdir -p $log_path/$log_month</span><br><span class="line"></span><br><span class="line">#将安装目录下的日志文件,移动到指定存储位置</span><br><span class="line">mv $base_log_path $log_path/$log_month/access_$log_day.log</span><br><span class="line">mv $base_error_path $log_path/$log_month/error_$log_day.log</span><br><span class="line"></span><br><span class="line">#再使用信号控制切割日志</span><br><span class="line">#USR1 表示nginx信号控制,切割日志</span><br><span class="line">kill -USR1 `cat /usr/local/openresty/nginx/logs/nginx.pid`</span><br><span class="line"></span><br><span class="line"></span><br><span class="line">#每天凌晨1点切割日志</span><br><span class="line">* 1 * * * /usr/local/openresty/nginx/logs/log_rotate.sh</span><br></pre></td></tr></table></figure><p>赋予脚本可执行权限:</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">chmod u+x log_rotate.sh</span><br></pre></td></tr></table></figure><p>执行测试:</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">./log_rotate.sh</span><br></pre></td></tr></table></figure><p>发现备份的目录有日志文件,并且原来的access.log变的很小,说明脚本执行成功。</p><p>图片</p><p>设置定时任务:</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">crontab -e</span><br></pre></td></tr></table></figure><p>添加如下内容:</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">#每天凌晨1点切割日志</span><br><span class="line">* 1 * * * /usr/local/openresty/nginx/logs/log_rotate.sh</span><br></pre></td></tr></table></figure><p>大功告成,日志按天自动切割的功能就完成了!</p>]]></content>
<categories>
<category> 运维 </category>
</categories>
<tags>
<tag> Nginx </tag>
<tag> 运维 </tag>
</tags>
</entry>
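A quick way to wire up and verify the rotation described above, sketched under the same paths used in the post (adjust them for your installation); the crontab line matches the daily 01:00 schedule:

```bash
# Install the daily 01:00 rotation job non-interactively (paths follow the post; adjust as needed).
( crontab -l 2>/dev/null; echo '0 1 * * * /usr/local/openresty/nginx/logs/log_rotate.sh' ) | crontab -

# After the first run, access.log should be small again and yesterday's logs archived:
ls -lh /usr/local/openresty/nginx/logs/access.log
ls -lh "/data_lytdev_dir/nginx/logs/$(date -d yesterday +%Y%m)/"
```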
<entry>
<title>ClickHouse新特性之SQL化用户配置</title>
<link href="/2021/11/18/b4485f4970ce.html"/>
<url>/2021/11/18/b4485f4970ce.html</url>
<content type="html"><![CDATA[<p>在之前的ClickHouse版本中,我们只能通过修改users.xml文件来配置用户及相关的参数(权限、资源限制、查询配额等),不是很方便。好在从20.5版本起,ClickHouse开始支持SQL化的用户配置(如同MySQL、PostgreSQL一样),易用性增强了很多。</p><p>SQL化用户配置默认是关闭的,要启用它,需要在users.xml中的一个用户(一般就是默认的default)下添加:</p><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="tag">&lt;<span class="name">access_management</span>&gt;</span>1<span class="tag">&lt;/<span class="name">access_management</span>&gt;</span></span><br></pre></td></tr></table></figure><p>另外还需要在config.xml中配置权限管理数据的存储位置:</p><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="tag">&lt;<span class="name">access_control_path</span>&gt;</span>/data1/clickhouse/access/<span class="tag">&lt;/<span class="name">access_control_path</span>&gt;</span></span><br></pre></td></tr></table></figure><p>这样我们就可以用default用户管理其他用户了。ClickHouse采用与MySQL近似的RBAC(Role-based access control,基于角色的访问控制)机制,所以语法也很像。下面简要列举几个操作,更加详细的内容可以参见官方文档:<a href="https://clickhouse.tech/docs/en/operations/access-rights/%E3%80%82">https://clickhouse.tech/docs/en/operations/access-rights/。</a></p><ul><li>创建一个单次查询只允许使用4个线程、4G内存的只读profile。</li></ul><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">CREATE</span> SETTINGS PROFILE low_mem_readonly SETTINGS max_threads <span class="operator">=</span> <span class="number">4</span>, max_memory_usage <span class="operator">=</span> <span class="number">4000000000</span> READONLY;</span><br><span class="line"></span><br></pre></td></tr></table></figure><ul><li> 基于上述profile创建一个名为accountant的角色,并授予该角色查询account_db数据库中所有表的权限。</li></ul><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">CREATE</span> ROLE accountant SETTINGS PROFILE <span class="string">&#x27;low_mem_readonly&#x27;</span>;</span><br><span class="line"><span class="keyword">GRANT</span> <span class="keyword">SELECT</span> <span class="keyword">ON</span> account_db.<span class="operator">*</span> <span class="keyword">TO</span> accountant;</span><br></pre></td></tr></table></figure><ul><li>创建一个允许内网访问的用户lmagic,设定密码,并指定它的默认角色为accountant,lmagic用户就具有了accountant角色的权限。</li></ul><p>注意新创建的用户没有任何权限,我们也可以通过GRANT语句直接给用户授予权限。 </p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">CREATE USER lmagic@&#x27;192.168.%.%&#x27; IDENTIFIED WITH sha256_password BY &#x27;lm2020&#x27; DEFAULT ROLE accountant;</span><br><span class="line"></span><br><span class="line">GRANT OPTIMIZE ON datasets.hits_v1 TO lmagic@&#x27;192.168.%.%&#x27;;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>观察上面设定的access_control_path目录,可以发现创建的用户配置都被保存了下来。</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">-rw-r----- 1 clickhouse clickhouse 135 Sep 4 21:51 034f455a-aeec-f9d0-3f76-b026318de1bd.sql</span><br><span class="line">-rw-r----- 1 clickhouse clickhouse 107 Sep 4 21:45 934198fc-bd87-90bb-2062-f0c165f54cae.sql</span><br><span class="line">-rw-r----- 1 clickhouse clickhouse 293 Sep 4 21:58 b643aee4-c4d4-5dd8-2e63-2dd13bf42e90.sql</span><br><span class="line">-rw-r----- 1 clickhouse clickhouse 1 Aug 23 01:14 quotas.list</span><br><span class="line">-rw-r----- 1 clickhouse clickhouse 48 Sep 4 21:52 roles.list</span><br><span class="line">-rw-r----- 1 clickhouse clickhouse 1 Aug 23 01:14 row_policies.list</span><br><span class="line">-rw-r----- 1 clickhouse clickhouse 54 Sep 4 21:46 
settings_profiles.list</span><br><span class="line">-rw-r----- 1 clickhouse clickhouse 56 Sep 4 21:59 users.list</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>如果需要更改用户配置,使用ALTER、DROP、REVOKE语法即可。例如:</p><ul><li>修改lmagic用户的密码,并改为在任何地址都可访问。</li></ul><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">ALTER</span> <span class="keyword">USER</span> lmagic@<span class="string">&#x27;192.168.%.%&#x27;</span> RENAME <span class="keyword">TO</span> lmagic@<span class="string">&#x27;%&#x27;</span> IDENTIFIED <span class="keyword">WITH</span> sha256_password <span class="keyword">BY</span> <span class="string">&#x27;lm2021&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><ul><li>删除刚才创建的profile。 </li></ul><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">DROP</span> SETTINGS PROFILE low_mem_readonly;</span><br><span class="line"></span><br></pre></td></tr></table></figure><ul><li>撤销lmagic用户的OPTIMIZE权限。 </li></ul><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">REVOKE</span> OPTIMIZE <span class="keyword">ON</span> datasets.hits_v1 <span class="keyword">FROM</span> lmagic@<span class="string">&#x27;%&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>最后需要注意,上述配置语句的作用域都是在单个节点上的。如果想要在集群所有节点上都生效的话,使用老生常谈的ON CLUSTER语法即可。例如:</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">CREATE</span> <span class="keyword">USER</span> lmagic@<span class="string">&#x27;192.168.%.%&#x27;</span> <span class="keyword">ON</span> CLUSTER sht_ck_cluster_pro1 IDENTIFIED WITH......</span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> ClickHouse </tag>
</tags>
</entry>
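To confirm that the SQL-driven entities took effect, the standard SHOW introspection statements can be run from clickhouse-client; a minimal sketch, assuming the profile, role and user created in the post and a default user with access_management enabled:

```bash
# Inspect the users, profiles and grants created through SQL (run as the admin-enabled default user).
clickhouse-client --user default --query "SHOW USERS"
clickhouse-client --user default --query "SHOW SETTINGS PROFILES"
clickhouse-client --user default --query "SHOW CREATE ROLE accountant"
clickhouse-client --user default --query "SHOW GRANTS FOR accountant"
```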
<entry>
<title>MergeTree引擎的固定/自适应索引粒度</title>
<link href="/2021/11/18/6d7187a575ff.html"/>
<url>/2021/11/18/6d7187a575ff.html</url>
<content type="html"><![CDATA[<h1 id="前言"><a href="#前言" class="headerlink" title="前言"></a>前言</h1><blockquote><p>我们在刚开始学习ClickHouse的MergeTree引擎时,建表语句的末尾总会有<code>SETTINGS index_granularity = 8192</code>这句话(其实不写也可以),表示索引粒度为8192。在每个data part中,索引粒度参数的含义有二:</p></blockquote><ul><li>每隔index_granularity行对主键组的数据进行采样,形成稀疏索引,并存储在primary.idx文件中;</li><li>每隔index_granularity行对每一列的压缩数据([column].bin)进行采样,形成数据标记,并存储在[column].mrk文件中。</li></ul><p>index_granularity、primary.idx、[column].bin/mrk之间的关系可以用下面的简图来表示。</p><p><img src="https://oscimg.oschina.net/oscnet/up-8479e16ac3738d1e8243838c11c9e96aba9.png"></p><p>但是早在一年前的ClickHouse 19.11.8版本,社区就引入了自适应(adaptive)索引粒度的特性,并且在当前的版本中是默认开启的。也就是说,主键索引和数据标记生成的间隔可以不再固定,更加灵活。下面通过实例来讲解固定索引粒度和自适应索引粒度之间的不同之处。</p><h1 id="固定索引粒度"><a href="#固定索引粒度" class="headerlink" title="固定索引粒度"></a>固定索引粒度</h1><p>利用Yandex.Metrica提供的hits_v1测试数据集,创建如下的表。</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">CREATE</span> <span class="keyword">TABLE</span> datasets.hits_v1_fixed</span><br><span class="line">(</span><br><span class="line"> `WatchID` UInt64,</span><br><span class="line"> `JavaEnable` UInt8,</span><br><span class="line"> `Title` String,</span><br><span class="line"> <span class="comment">-- A lot more columns...</span></span><br><span class="line">)</span><br><span class="line">ENGINE <span class="operator">=</span> MergeTree()</span><br><span class="line"><span class="keyword">PARTITION</span> <span class="keyword">BY</span> toYYYYMM(EventDate)</span><br><span class="line"><span class="keyword">ORDER</span> <span class="keyword">BY</span> (CounterID, EventDate, intHash32(UserID))</span><br><span class="line">SAMPLE <span class="keyword">BY</span> intHash32(UserID)</span><br><span class="line">SETTINGS index_granularity <span class="operator">=</span> <span class="number">8192</span>, </span><br><span class="line"> index_granularity_bytes <span class="operator">=</span> <span class="number">0</span>; <span class="comment">-- Disable adaptive index granularity</span></span><br></pre></td></tr></table></figure><p>注意使用<code>SETTINGS index_granularity_bytes = 0</code>取消自适应索引粒度。将测试数据导入之后,执行<code>OPTIMIZE TABLE</code>语句触发merge,以方便观察索引和标记数据。</p><p>来到merge完成后的数据part目录中——笔者这里是<code>201403_1_32_3</code>,并利用od(octal dump)命令观察primary.idx中的内容。注意索引列一共有3列,Counter和intHash32(UserID)都是32位整形,EventDate是16位整形(Date类型存储的是距离1970-01-01的天数)。</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -i -j 0 -N 4 primary.idx </span></span><br><span class="line"> 57 <span class="comment"># Counter[0]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -d -j 4 -N 2 primary.idx </span></span><br><span class="line"> 16146 <span class="comment"># EventDate[0]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -i -j 6 -N 4 primary.idx </span></span><br><span class="line"> 78076527 <span class="comment"># intHash32(UserID)[0]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -i -j 10 -N 4 primary.idx </span></span><br><span class="line"> 1635 <span class="comment"># Counter[1]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -d -j 14 -N 2 primary.idx </span></span><br><span class="line"> 16149 <span class="comment"># EventDate[1]</span></span><br><span class="line">[root@ck-test001 
201403_1_32_3]<span class="comment"># od -An -i -j 16 -N 4 primary.idx </span></span><br><span class="line"> 1562260480 <span class="comment"># intHash32(UserID)[1]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -i -j 20 -N 4 primary.idx </span></span><br><span class="line"> 3266 <span class="comment"># Counter[2]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -d -j 24 -N 2 primary.idx </span></span><br><span class="line"> 16148 <span class="comment"># EventDate[2]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -i -j 26 -N 4 primary.idx </span></span><br><span class="line"> 490736209 <span class="comment"># intHash32(UserID)[2]</span></span><br></pre></td></tr></table></figure><p>能够看出ORDER BY的第一关键字Counter确实是递增的,但是不足以体现出index_granularity的影响。因此再观察一下标记文件的内容,以8位整形的Age列为例,比较简单。</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -l -j 0 -N 320 Age.mrk</span></span><br><span class="line"> 0 0</span><br><span class="line"> 0 8192</span><br><span class="line"> 0 16384</span><br><span class="line"> 0 24576</span><br><span class="line"> 0 32768</span><br><span class="line"> 0 40960</span><br><span class="line"> 0 49152</span><br><span class="line"> 0 57344</span><br><span class="line"> 19423 0</span><br><span class="line"> 19423 8192</span><br><span class="line"> 19423 16384</span><br><span class="line"> 19423 24576</span><br><span class="line"> 19423 32768</span><br><span class="line"> 19423 40960</span><br><span class="line"> 19423 49152</span><br><span class="line"> 19423 57344</span><br><span class="line"> 45658 0</span><br><span class="line"> 45658 8192</span><br><span class="line"> 45658 16384</span><br><span class="line"> 45658 24576</span><br></pre></td></tr></table></figure><p>上面打印出了两列数据,表示被选为标记的行的两个属性:第一个属性为该行所处的压缩数据块在对应bin文件中的起始偏移量,第二个属性为该行在数据块解压后,在块内部所处的偏移量,单位均为字节。由于一条Age数据在解压的情况下正好占用1字节,所以能够证明数据标记是按照固定index_granularity的规则生成的。</p><h1 id="自适应索引粒度"><a href="#自适应索引粒度" class="headerlink" title="自适应索引粒度"></a>自适应索引粒度</h1><p>创建同样结构的表,写入相同的测试数据,但是将index_granularity_bytes设为1MB(为了方便看出差异而已,默认值是10MB),以启用自适应索引粒度。</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">CREATE</span> <span class="keyword">TABLE</span> datasets.hits_v1_adaptive</span><br><span class="line">(</span><br><span class="line"> `WatchID` UInt64,</span><br><span class="line"> `JavaEnable` UInt8,</span><br><span class="line"> `Title` String,</span><br><span class="line"> <span class="comment">-- A lot more columns...</span></span><br><span class="line">)</span><br><span class="line">ENGINE <span class="operator">=</span> MergeTree()</span><br><span class="line"><span class="keyword">PARTITION</span> <span class="keyword">BY</span> toYYYYMM(EventDate)</span><br><span class="line"><span class="keyword">ORDER</span> <span class="keyword">BY</span> (CounterID, EventDate, intHash32(UserID))</span><br><span class="line">SAMPLE <span class="keyword">BY</span> intHash32(UserID)</span><br><span class="line">SETTINGS index_granularity <span class="operator">=</span> <span class="number">8192</span>, </span><br><span class="line"> index_granularity_bytes <span class="operator">=</span> <span class="number">1048576</span>; <span class="comment">-- Enable adaptive index 
granularity</span></span><br></pre></td></tr></table></figure><p>index_granularity_bytes表示每隔表中数据的大小来生成索引和标记,且与index_granularity共同作用,只要满足两个条件之一即生成。</p><p>触发merge之后,观察primary.idx的数据。</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -i -j 0 -N 4 primary.idx </span></span><br><span class="line"> 57 <span class="comment"># Counter[0]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -d -j 4 -N 2 primary.idx </span></span><br><span class="line"> 16146 <span class="comment"># EventDate[0]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -i -j 6 -N 4 primary.idx </span></span><br><span class="line"> 78076527 <span class="comment"># intHash32(UserID)[0]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -i -j 10 -N 4 primary.idx</span></span><br><span class="line"> 61 <span class="comment"># Counter[1]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -d -j 14 -N 2 primary.idx</span></span><br><span class="line"> 16151 <span class="comment"># EventDate[1]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -i -j 16 -N 4 primary.idx</span></span><br><span class="line"> 1579769176 <span class="comment"># intHash32(UserID)[1]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -i -j 20 -N 4 primary.idx</span></span><br><span class="line"> 63 <span class="comment"># Counter[2]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -d -j 24 -N 2 primary.idx</span></span><br><span class="line"> 16148 <span class="comment"># EventDate[2]</span></span><br><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -i -j 26 -N 4 primary.idx</span></span><br><span class="line"> 2037061113 <span class="comment"># intHash32(UserID)[2]</span></span><br></pre></td></tr></table></figure><p>通过Counter列的数据可见,主键索引明显地变密集了,说明index_granularity_bytes的设定生效了。接下来仍然以Age列为例观察标记文件,注意文件扩展名变成了mrk2,说明启用了自适应索引粒度。</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@ck-test001 201403_1_32_3]<span class="comment"># od -An -l -j 0 -N 2048 --width=24 Age.mrk2</span></span><br><span class="line"> 0 0 1120</span><br><span class="line"> 0 1120 1120</span><br><span class="line"> 0 2240 1120</span><br><span class="line"> 0 3360 1120</span><br><span class="line"> 0 4480 1120</span><br><span class="line"> 0 5600 1120</span><br><span class="line"> 0 6720 1120</span><br><span class="line"> 0 7840 352</span><br><span class="line"> 0 8192 1111</span><br><span class="line"> 0 9303 1111</span><br><span class="line"> 0 10414 1111</span><br><span class="line"> 0 11525 1111</span><br><span class="line"> 0 12636 1111</span><br><span class="line"> 0 13747 1111</span><br><span class="line"> 0 14858 1111</span><br><span class="line"> 0 15969 415</span><br><span class="line"> 0 16384 1096</span><br><span class="line"><span class="comment"># 略去一些</span></span><br><span class="line"> 17694 0 1102</span><br><span class="line"> 17694 1102 1102</span><br><span class="line"> 17694 2204 1102</span><br><span class="line"> 17694 3306 1102</span><br><span class="line"> 17694 4408 1102</span><br><span class="line"> 17694 5510 1102</span><br><span class="line"> 17694 6612 
956</span><br><span class="line"> 17694 7568 1104</span><br><span class="line"><span class="comment"># ......</span></span><br></pre></td></tr></table></figure><p>mrk2文件被格式化成了3列,前两列的含义与mrk文件相同,而第三列的含义则是两个标记之间相隔的行数。可以观察到,每隔1100行左右就会生成一个标记(同时也说明该表内1MB的数据大约包含1100行)。同时,在偏移量计数达到8192、16384等8192的倍数时(即经过了index_granularity的整数倍行数),同样也会生成标记,证明两个参数是协同生效的。</p><p>最后一个问题:ClickHouse为什么要设计自适应索引粒度呢?当一行的数据量比较大时(比如达到了1kB甚至数kB),单纯按照固定索引粒度会造成每个granule的数据量膨胀,拖累读写性能。有了自适应索引粒度之后,每个granule的数据量可以被控制在合理的范围内,官方给定的默认值10MB在大多数情况下都不需要更改。</p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> ClickHouse </tag>
</tags>
</entry>
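Besides dumping primary.idx and the mark files with od, the effect of the two granularity settings can be compared at a glance from system.parts; a sketch assuming the two tables created above:

```bash
# Average rows per granule per active part: roughly 8192 for the fixed table, far lower for the adaptive one.
clickhouse-client --query "
  SELECT table, name, rows, marks, round(rows / marks) AS avg_rows_per_granule
  FROM system.parts
  WHERE database = 'datasets'
    AND table IN ('hits_v1_fixed', 'hits_v1_adaptive')
    AND active"
```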
<entry>
<title>无缝更改ClickHouse物化视图SELECT逻辑的方法</title>
<link href="/2021/11/18/23f3f36da036.html"/>
<url>/2021/11/18/23f3f36da036.html</url>
<content type="html"><![CDATA[<p>在我司的ClickHouse DWS层有一张ReplicatedAggregatingMergeTree引擎的物化视图,为近线推荐业务提供关键用户行为的计数值。该物化视图的底表(即所谓“inner”表)有两张,分别从不同的事实表聚合数据,如下图所示。 <img src="https://oscimg.oschina.net/oscnet/up-ac406843f42bfac15c16815aeb376bca9ad.png"><br>Q:算法同学希望在物化视图中增加一些用户行为,如何在保证不影响线上业务(不删表)的前提下把这些行为加进去? A:以mv_recommend_access_stat为例。</p><ol><li> 使用<code>DETACH TABLE</code>语句将底表解挂,即暂时屏蔽掉它的元数据</li></ol><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line">DETACH <span class="keyword">TABLE</span> rtdw_dws.mv_recommend_access_stat <span class="keyword">ON</span> CLUSTER sht_ck_cluster_pro;</span><br><span class="line"></span><br></pre></td></tr></table></figure><ol start="2"><li> 定位到它的元数据文件所在的路径,可以从系统表查询。</li></ol><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">SELECT</span> metadata_path <span class="keyword">FROM</span> system.tables <span class="keyword">WHERE</span> name <span class="operator">=</span> <span class="string">&#x27;mv_recommend_access_stat&#x27;</span>;</span><br><span class="line">┌─metadata_path─────────────────────────────────────────────────────────┐</span><br><span class="line">│ <span class="operator">/</span>data1<span class="operator">/</span>clickhouse<span class="operator">/</span>data<span class="operator">/</span>metadata<span class="operator">/</span>rtdw_dws<span class="operator">/</span>mv_recommend_access_stat.sql │</span><br><span class="line">└───────────────────────────────────────────────────────────────────────┘</span><br><span class="line"></span><br></pre></td></tr></table></figure><ol start="3"><li> 打开上述SQL文件,修改SELECT语句如下。</li></ol><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line">ATTACH MATERIALIZED <span class="keyword">VIEW</span> mv_recommend_access_stat <span class="keyword">TO</span> rtdw_dws.mv_recommend_events_stat</span><br><span class="line">(</span><br><span class="line"> `ts_date` <span class="type">Date</span>,</span><br><span class="line"> `groupon_id` Int64,</span><br><span class="line"> `merchandise_id` Int64,</span><br><span class="line"> `event_type` String,</span><br><span class="line"> `event_count` AggregateFunction(count, UInt64)</span><br><span class="line">) <span class="keyword">AS</span> <span class="keyword">SELECT</span> </span><br><span class="line"> ts_date,</span><br><span class="line"> groupon_id,</span><br><span class="line"> merchandise_id,</span><br><span class="line"> event_type,</span><br><span class="line"> countState(toUInt64(<span class="number">1</span>)) <span class="keyword">AS</span> event_count</span><br><span class="line"><span class="keyword">FROM</span> rtdw_dwd.analytics_access_log</span><br><span class="line"><span class="keyword">WHERE</span> event_type <span class="keyword">IN</span> (<span class="string">&#x27;openDetail&#x27;</span>, <span class="string">&#x27;addCart&#x27;</span>, <span class="string">&#x27;buyNow&#x27;</span>, <span class="string">&#x27;share&#x27;</span>, <span class="string">&#x27;openAllEvaluations&#x27;</span>)</span><br><span class="line"><span class="keyword">GROUP</span> <span class="keyword">BY</span> ts_date, groupon_id, merchandise_id, event_type</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>然后将修改过的文件通过scp等方式分发到集群中所有节点(重要!) 4. 
使用<code>ATTACH TABLE</code>语句将底表重新挂载回去</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line">ATTACH <span class="keyword">TABLE</span> rtdw_dws.mv_recommend_access_stat <span class="keyword">ON</span> CLUSTER sht_ck_cluster_pro;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>观察物化视图,出现了新加入的事件计数,大功告成。</p><p>来源:<a href="https://www.jianshu.com/p/27e450817e32">https://www.jianshu.com/p/27e450817e32</a></p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> ClickHouse </tag>
</tags>
</entry>
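Step 3's distribution of the edited metadata file to every node can be scripted so no replica is missed before re-attaching; a rough sketch with hypothetical host names, using the metadata path found in step 2 and the ATTACH statement from step 4:

```bash
# Push the edited materialized-view DDL to every node, then re-attach on the whole cluster.
MV_SQL=/data1/clickhouse/data/metadata/rtdw_dws/mv_recommend_access_stat.sql
for host in ck-node01 ck-node02 ck-node03; do   # hypothetical host names
  scp "$MV_SQL" "${host}:${MV_SQL}"
done
clickhouse-client --query "ATTACH TABLE rtdw_dws.mv_recommend_access_stat ON CLUSTER sht_ck_cluster_pro"
```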
<entry>
<title>物化视图简介与ClickHouse中的应用示例</title>
<link href="/2021/11/18/fec2d399fcb4.html"/>
<url>/2021/11/18/fec2d399fcb4.html</url>
<content type="html"><![CDATA[<p>前言</p><p>最近在搞520大促的事情,忙到脚不点地,所以就写些简单省事的吧。<br>物化视图概念</p><p>我们都知道,数据库中的视图(view)是从一张或多张数据库表查询导出的虚拟表,反映基础表中数据的变化,且本身不存储数据。那么物化视图(materialized view)是什么呢?英文维基中给出的描述是相当准确的,抄录如下。</p><blockquote><p> In computing, a materialized view is a database object that contains the results of a query. For example, it may be a local copy of data located remotely, or may be a subset of the rows and/or columns of a table or join result, or may be a summary using an aggregate function.</p></blockquote><blockquote><p>The process of setting up a materialized view is sometimes called materialization. This is a form of caching the results of a query, similar to memoization of the value of a function in functional languages, and it is sometimes described as a form of precomputation. As with other forms of precomputation, database users typically use materialized views for performance reasons, i.e. as a form of optimization.</p></blockquote><p><img src="https://upload-images.jianshu.io/upload_images/195230-70832933a196c44b.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/428"></p><p>物化视图是查询结果集的一份持久化存储,所以它与普通视图完全不同,而非常趋近于表。“查询结果集”的范围很宽泛,可以是基础表中部分数据的一份简单拷贝,也可以是多表join之后产生的结果或其子集,或者原始数据的聚合指标等等。所以,物化视图不会随着基础表的变化而变化,所以它也称为快照(snapshot)。如果要更新数据的话,需要用户手动进行,如周期性执行SQL,或利用触发器等机制。</p><p>产生物化视图的过程就叫做“物化”(materialization)。广义地讲,物化视图是数据库中的预计算逻辑+显式缓存,典型的空间换时间思路。所以用得好的话,它可以避免对基础表的频繁查询并复用结果,从而显著提升查询的性能。它当然也可以利用一些表的特性,如索引。</p><p>在传统关系型数据库中,Oracle、PostgreSQL、SQL Server等都支持物化视图,作为流处理引擎的Kafka和Spark也支持在流上建立物化视图。下面来聊聊ClickHouse里的物化视图功能。<br>ClickHouse物化视图示例</p><p>我们目前只是将CK当作点击流数仓来用,故拿点击流日志表当作基础表。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">CREATE TABLE IF NOT EXISTS ods.analytics_access_log</span><br><span class="line">ON CLUSTER sht_ck_cluster_1 (</span><br><span class="line"> ts_date Date,</span><br><span class="line"> ts_date_time DateTime,</span><br><span class="line"> user_id Int64,</span><br><span class="line"> event_type String,</span><br><span class="line"> from_type String,</span><br><span class="line"> column_type String,</span><br><span class="line"> groupon_id Int64,</span><br><span class="line"> site_id Int64,</span><br><span class="line"> site_name String,</span><br><span class="line"> main_site_id Int64,</span><br><span class="line"> main_site_name String,</span><br><span class="line"> merchandise_id Int64,</span><br><span class="line"> merchandise_name String,</span><br><span class="line"> -- A lot more other columns......</span><br><span class="line">)</span><br><span class="line">ENGINE = ReplicatedMergeTree(&#x27;/clickhouse/tables/&#123;shard&#125;/ods/analytics_access_log&#x27;,&#x27;&#123;replica&#125;&#x27;)</span><br><span class="line">PARTITION BY ts_date</span><br><span class="line">ORDER BY (ts_date,toStartOfHour(ts_date_time),main_site_id,site_id,event_type,column_type)</span><br><span class="line">TTL ts_date + INTERVAL 1 MONTH</span><br><span class="line">SETTINGS index_granularity = 8192,</span><br><span class="line">use_minimalistic_part_header_in_zookeeper = 1,</span><br><span class="line">merge_with_ttl_timeout = 86400;</span><br><span class="line">w/ SummingMergeTree</span><br></pre></td></tr></table></figure><p>如果要查询某个站点一天内分时段的商品点击量,写出如下SQL语句。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">SELECT toStartOfHour(ts_date_time) AS ts_hour,</span><br><span class="line">merchandise_id,</span><br><span class="line">count() AS pv</span><br><span class="line">FROM 
ods.analytics_access_log_all</span><br><span class="line">WHERE ts_date = today() AND site_id = 10087</span><br><span class="line">GROUP BY ts_hour,merchandise_id;</span><br></pre></td></tr></table></figure><p>这是一个典型的聚合查询。如果各个地域的分析人员都经常执行该类查询(只是改变ts_date与site_id的条件而已),那么肯定有相同的语句会被重复执行多次,每次都会从analytics_access_log_all这张大的明细表取数据,显然是比较浪费资源的。而通过将CK中的物化视图与合适的MergeTree引擎配合使用,就可以实现预聚合,从物化视图出数的效率非常好。</p><p>下面就根据上述SQL语句的查询条件创建一个物化视图,请注意其语法。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">CREATE MATERIALIZED VIEW IF NOT EXISTS test.mv_site_merchandise_visit</span><br><span class="line">ON CLUSTER sht_ck_cluster_1</span><br><span class="line">ENGINE = ReplicatedSummingMergeTree(&#x27;/clickhouse/tables/&#123;shard&#125;/test/mv_site_merchandise_visit&#x27;,&#x27;&#123;replica&#125;&#x27;)</span><br><span class="line">PARTITION BY ts_date</span><br><span class="line">ORDER BY (ts_date,ts_hour,site_id,merchandise_id)</span><br><span class="line">SETTINGS index_granularity = 8192, use_minimalistic_part_header_in_zookeeper = 1</span><br><span class="line">AS SELECT</span><br><span class="line"> ts_date,</span><br><span class="line"> toStartOfHour(ts_date_time) AS ts_hour,</span><br><span class="line"> site_id,</span><br><span class="line"> merchandise_id,</span><br><span class="line"> count() AS visit</span><br><span class="line">FROM ods.analytics_access_log</span><br><span class="line">GROUP BY ts_date,ts_hour,site_id,merchandise_id;</span><br></pre></td></tr></table></figure><p>可见,物化视图与表一样,也可以指定表引擎、分区键、主键和表设置参数。商品点击量是个简单累加的指标,所以我们选择SummingMergeTree作为表引擎(上述是高可用情况,所以用了带复制的ReplicatedSummingMergeTree)。该引擎支持以主键分组,对数值型指标做自动累加。每当表的parts做后台merge的时候,主键相同的所有记录会被加和合并成一行记录,大大节省空间。</p><p>用户在创建物化视图时,通过AS SELECT …子句从基础表中查询需要的列,十分灵活。在默认情况下,物化视图刚刚创建时没有数据,随着基础表中的数据批量写入,物化视图的计算结果也逐渐填充起来。如果需要从历史数据初始化,在AS SELECT子句的前面加上POPULATE关键字即可。需要注意,在POPULATE填充历史数据的期间,新进入的这部分数据会被忽略掉,所以如果对准确性要求非常高,应慎用。</p><p>执行完上述CREATE MATERIALIZED VIEW语句后,通过SHOW TABLES语句查询,会发现有一张名为.inner.[物化视图名]的表,这就是持久化物化视图数据的表,当然我们是不会直接操作它的。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">SHOW TABLES</span><br><span class="line"></span><br><span class="line">┌─name─────────────────────────────┐</span><br><span class="line">│ .inner.mv_site_merchandise_visit │</span><br><span class="line">│ mv_site_merchandise_visit │</span><br><span class="line">└──────────────────────────────────┘</span><br></pre></td></tr></table></figure><p>基础表、物化视图与物化视图的underlying table的关系如下简图所示。</p><p><img src="https://upload-images.jianshu.io/upload_images/195230-462138b5b8e6e815.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/697"></p><p>当然,在物化视图上也可以建立分布式表。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">CREATE TABLE IF NOT EXISTS test.mv_site_merchandise_visit_all</span><br><span class="line">ON CLUSTER sht_ck_cluster_1</span><br><span class="line">AS test.mv_site_merchandise_visit</span><br><span class="line">ENGINE = Distributed(sht_ck_cluster_1,test,mv_site_merchandise_visit,rand());</span><br></pre></td></tr></table></figure><p>查询物化视图的风格与查询普通表没有区别,返回的就是预聚合的数据了。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">SELECT ts_hour,</span><br><span class="line">merchandise_id,</span><br><span class="line">sum(visit) AS visit_sum</span><br><span class="line">FROM test.mv_site_merchandise_visit_all</span><br><span class="line">WHERE ts_date = today() AND site_id = 10087</span><br><span class="line">GROUP BY 
ts_hour,merchandise_id;</span><br><span class="line">w/ AggregatingMergeTree</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>SummingMergeTree只能处理累加的情况,如果不只有累加呢?物化视图还可以配合更加通用的AggregatingMergeTree引擎使用,用户能够通过聚合函数(aggregate function)来自定义聚合指标。举个例子,假设我们要按各城市的页面来按分钟统计PV和UV,就可以创建如下的物化视图。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">CREATE MATERIALIZED VIEW IF NOT EXISTS dw.main_site_minute_pv_uv</span><br><span class="line">ON CLUSTER sht_ck_cluster_1</span><br><span class="line">ENGINE = ReplicatedAggregatingMergeTree(&#x27;/clickhouse/tables/&#123;shard&#125;/dw/main_site_minute_pv_uv&#x27;,&#x27;&#123;replica&#125;&#x27;)</span><br><span class="line">PARTITION BY ts_date</span><br><span class="line">ORDER BY (ts_date,ts_minute,main_site_id)</span><br><span class="line">SETTINGS index_granularity = 8192, use_minimalistic_part_header_in_zookeeper = 1</span><br><span class="line">AS SELECT</span><br><span class="line"> ts_date,</span><br><span class="line"> toStartOfMinute(ts_date_time) as ts_minute,</span><br><span class="line"> main_site_id,</span><br><span class="line"> sumState(1) as pv,</span><br><span class="line"> uniqState(user_id) as uv</span><br><span class="line">FROM ods.analytics_access_log</span><br><span class="line">GROUP BY ts_date,ts_minute,main_site_id;</span><br></pre></td></tr></table></figure><p>利用AggregatingMergeTree产生物化视图时,实际上是记录了被聚合指标的状态,所以需要在原本的聚合函数名(如sum、uniq)之后加上”State”后缀。</p><p>创建分布式表的步骤就略去了。而从物化视图查询时,相当于将被聚合指标的状态进行合并并产生结果,所以需要在原本的聚合函数名(如sum、uniq)之后加上”Merge”后缀。-State和-Merge语法都是CK规定好的,称为聚合函数的组合器(combinator)。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">SELECT ts_date,</span><br><span class="line">formatDateTime(ts_minute,&#x27;%H:%M&#x27;) AS hour_minute,</span><br><span class="line">sumMerge(pv) AS pv,</span><br><span class="line">uniqMerge(uv) AS uv</span><br><span class="line">FROM dw.main_site_minute_pv_uv_all</span><br><span class="line">WHERE ts_date = today() AND main_site_id = 10029</span><br><span class="line">GROUP BY ts_date,hour_minute</span><br><span class="line">ORDER BY hour_minute ASC;</span><br></pre></td></tr></table></figure><p>我们也可以通过查询system.parts系统表来查看物化视图实际占用的parts信息。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">SELECT </span><br><span class="line"> partition, </span><br><span class="line"> name, </span><br><span class="line"> rows, </span><br><span class="line"> bytes_on_disk, </span><br><span class="line"> modification_time, </span><br><span class="line"> min_date, </span><br><span class="line"> max_date, </span><br><span class="line"> engine</span><br><span class="line">FROM system.parts</span><br><span class="line">WHERE (database = &#x27;dw&#x27;) AND (table = &#x27;.inner.main_site_minute_pv_uv&#x27;)</span><br><span class="line"></span><br><span class="line">┌─partition──┬─name───────────────┬─rows─┬─bytes_on_disk─┬───modification_time─┬───min_date─┬───max_date─┬─engine─────────────────────────┐</span><br><span class="line">│ 2020-05-19 │ 20200519_0_169_18 │ 9162 │ 4540922 │ 2020-05-19 20:33:29 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree │</span><br><span class="line">│ 2020-05-19 │ 20200519_170_179_2 │ 318 │ 294479 │ 2020-05-19 20:37:18 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree │</span><br><span class="line">│ 2020-05-19 │ 20200519_170_184_3 │ 449 │ 441282 │ 2020-05-19 20:40:24 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree 
│</span><br><span class="line">│ 2020-05-19 │ 20200519_170_189_4 │ 696 │ 594995 │ 2020-05-19 20:47:40 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree │</span><br><span class="line">│ 2020-05-19 │ 20200519_180_180_0 │ 40 │ 33416 │ 2020-05-19 20:37:58 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree │</span><br><span class="line">│ 2020-05-19 │ 20200519_181_181_0 │ 70 │ 34200 │ 2020-05-19 20:38:44 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree │</span><br><span class="line">│ 2020-05-19 │ 20200519_182_182_0 │ 83 │ 35981 │ 2020-05-19 20:39:32 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree │</span><br><span class="line">│ 2020-05-19 │ 20200519_183_183_0 │ 77 │ 35786 │ 2020-05-19 20:39:32 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree │</span><br><span class="line">│ 2020-05-19 │ 20200519_184_184_0 │ 81 │ 35766 │ 2020-05-19 20:40:19 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree │</span><br><span class="line">│ 2020-05-19 │ 20200519_185_185_0 │ 42 │ 32859 │ 2020-05-19 20:41:54 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree │</span><br><span class="line">│ 2020-05-19 │ 20200519_186_186_0 │ 83 │ 35750 │ 2020-05-19 20:43:30 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree │</span><br><span class="line">│ 2020-05-19 │ 20200519_187_187_0 │ 79 │ 34272 │ 2020-05-19 20:46:45 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree │</span><br><span class="line">│ 2020-05-19 │ 20200519_188_188_0 │ 75 │ 33917 │ 2020-05-19 20:46:45 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree │</span><br><span class="line">│ 2020-05-19 │ 20200519_189_189_0 │ 81 │ 35712 │ 2020-05-19 20:47:35 │ 2020-05-19 │ 2020-05-19 │ ReplicatedAggregatingMergeTree │</span><br><span class="line">└────────────┴────────────────────┴──────┴───────────────┴─────────────────────┴────────────┴────────────┴────────────────────────────────┘</span><br></pre></td></tr></table></figure><p>后记:</p><blockquote><p>如果表数据不是只增的,而是有较频繁的删除或修改(如接入changelog的表),物化视图底层需要改用CollapsingMergeTree/VersionedCollapsingMergeTree;<br>如果物化视图是由两表join产生的,那么物化视图仅有在左表插入数据时才更新。如果只有右表插入数据,则不更新。</p></blockquote><p>来源:<a href="https://www.jianshu.com/p/3f385e4e7f95">https://www.jianshu.com/p/3f385e4e7f95</a></p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> ClickHouse </tag>
</tags>
</entry>
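The -State/-Merge combinator pairing described above can be tried without any of the tables involved; a self-contained sketch using the numbers() table function:

```bash
# uniqState produces partial aggregation states; uniqMerge combines them into the final value (10 here),
# which equals: SELECT uniq(number % 10) FROM numbers(1000)
clickhouse-client --query "
  SELECT uniqMerge(u)
  FROM (SELECT uniqState(number % 10) AS u FROM numbers(1000) GROUP BY number % 3)"
```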
<entry>
<title>深入浅出学习Hive</title>
<link href="/2021/11/13/3f0a31ee0039.html"/>
<url>/2021/11/13/3f0a31ee0039.html</url>
<content type="html"><![CDATA[<blockquote><p>本文是基于CentOS 7.9系统环境,进行hive的学习和使用</p></blockquote><h1 id="一、Hive的简介"><a href="#一、Hive的简介" class="headerlink" title="一、Hive的简介"></a>一、Hive的简介</h1><h2 id="1-1-Hive基本概念"><a href="#1-1-Hive基本概念" class="headerlink" title="1.1 Hive基本概念"></a>1.1 Hive基本概念</h2><p><strong>(1) 什么是hive</strong></p><p>Hive是用于解决海量结构化日志的数据统计工具,是基于Hadoop的一个数据仓库工具,可以将结构化的数据文件映射为一张表,并提供类SQL查询功能<br><strong>(2) Hive的本质</strong></p><p>Hive的本质就是将HQL转化成MapReduce程序</p><h2 id="1-2-Hive优缺点"><a href="#1-2-Hive优缺点" class="headerlink" title="1.2 Hive优缺点"></a>1.2 Hive优缺点</h2><p><strong>(1) 优点</strong></p><ol><li> 操作接口采用类SQL语法,提供快速开发的能力(简单、容易)</li><li> 避免写MapReduce程序,减少开发人员的学习成本</li><li> Hive优势在于处理大数据,常用于数据分析,适用于实时性要求不高的场景</li><li> hive支持用户自定义函数,用户可以根据自己的需求来实现自己的函数</li></ol><p><strong>(2) 缺点</strong></p><ol><li> Hive执行延迟比较高,对于处理小数据没有优势</li><li> hive的HQL表达能力有限(迭代式算法无法表达;数据挖掘方面不擅长,由于MapReduce数据处理流程的限制,效率更高的算法却无法实现)</li><li> hive的效率比较低(hive自动生成的MapReduce作业,通常情况下不够智能化;hive调优比较困难,粒度较粗)</li></ol><h2 id="1-3-Hive架构"><a href="#1-3-Hive架构" class="headerlink" title="1.3 Hive架构"></a>1.3 Hive架构</h2><p><img src="https://img-blog.csdnimg.cn/20200529093005305.png"></p><ul><li>Client 用户接口 CLI(command-line interface)、JDBC/ODBC(jdbc访问hive)、WEBUI(浏览器访问hive)</li><li>Metastore 元数据包括:表名、表所属的数据库(默认是default)、表的拥有者、列/分区字段、表的类型(是否是外部表)、表的数据所在目录等;默认存储在自带的derby数据库中,推荐使用MySQL存储Metastore</li><li>SQL Parser 解析器 对SQL语句进行解析,转换成抽象语法树AST,并进行语法分析和检查</li><li>Physical Plan 编译器 将抽象语法树AST编译成逻辑执行计划</li><li>Query Optimizer 优化器 对逻辑执行计划进行优化</li><li>Execution 执行器 将逻辑执行计划转换成可以运行的物理计划,也就是MR任务</li></ul><h2 id="1-4-Hive工作机制"><a href="#1-4-Hive工作机制" class="headerlink" title="1.4 Hive工作机制"></a>1.4 Hive工作机制</h2><p>Hive通过给用户提供的一系列交互接口,接收到用户的指令(SQL),使用自己的Driver,结合元数据(MetaStore),将这些指令翻译成MapReduce,提交到Hadoop中执行,最后,将执行返回的结果输出到用户交互接口。<br><img src="https://img-blog.csdnimg.cn/20200529133302731.png"></p><h2 id="1-5-Hive和数据库比较"><a href="#1-5-Hive和数据库比较" class="headerlink" title="1.5 Hive和数据库比较"></a>1.5 Hive和数据库比较</h2><p>由于 Hive 采用了类似SQL 的查询语言 HQL(Hive Query Language),因此很容易将 Hive 理解为数据库。其实从结构上来看,Hive 和数据库除了拥有类似的查询语言,再无类似之处。本文将从多个方面来阐述 Hive 和数据库的差异。数据库可以用在 Online 的应用中,但是Hive 是为数据仓库而设计的,清楚这一点,有助于从应用角度理解 Hive 的特性。</p><ul><li>查询语言 由于SQL被广泛的应用在数据仓库中,因此,专门针对Hive的特性设计了类SQL的查询语言HQL。熟悉SQL开发的开发者可以很方便的使用Hive进行开发。</li><li>数据存储位置 Hive 是建立在 Hadoop 之上的,所有 Hive 的数据都是存储在 HDFS 中的。而数据库则可以将数据保存在块设备或者本地文件系统中。</li><li>数据更新 由于Hive是针对数据仓库应用设计的,而数据仓库的内容是读多写少的。因此,Hive中不建议对数据的改写,所有的数据都是在加载的时候确定好的。而数据库中的数据通常是需要经常进行修改的,因此可以使用 INSERT INTO … VALUES 添加数据,使用 UPDATE … SET修改数据。</li><li>执行 Hive中大多数查询的执行是通过 Hadoop 提供的 MapReduce 来实现的。而数据库通常有自己的执行引擎。</li><li>执行延迟 Hive 在查询数据的时候,由于没有索引,需要扫描整个表,因此延迟较高。另外一个导致 Hive 执行延迟高的因素是 MapReduce框架。由于MapReduce 本身具有较高的延迟,因此在利用MapReduce 执行Hive查询时,也会有较高的延迟。相对的,数据库的执行延迟较低。当然,这个低是有条件的,即数据规模较小,当数据规模大到超过数据库的处理能力的时候,Hive的并行计算显然能体现出优势。</li><li>可扩展性 由于Hive是建立在Hadoop之上的,因此Hive的可扩展性是和Hadoop的可扩展性是一致的(世界上最大的Hadoop 集群在 Yahoo!,2009年的规模在4000 台节点左右)。而数据库由于 ACID 语义的严格限制,扩展行非常有限。目前最先进的并行数据库 Oracle 在理论上的扩展能力也只有100台左右。</li><li>数据规模 由于Hive建立在集群上并可以利用MapReduce进行并行计算,因此可以支持很大规模的数据;对应的,数据库可以支持的数据规模较小。</li></ul><h1 id="二、Hive的安装"><a href="#二、Hive的安装" class="headerlink" title="二、Hive的安装"></a>二、Hive的安装</h1><h2 id="2-1-Hive下载"><a href="#2-1-Hive下载" class="headerlink" title="2.1 Hive下载"></a>2.1 Hive下载</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">apache-hive-1.2.1-bin.tar.gz</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="2-2-Hive解压"><a href="#2-2-Hive解压" class="headerlink" title="2.2 
Hive解压"></a>2.2 Hive解压</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">tar -xzvf apache-hive-1.2.1-bin.tar.gz -C /opt/module</span><br><span class="line"><span class="built_in">cd</span> /opt/module</span><br><span class="line">mv apache-hive-1.2.1-bin hive</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="2-3-配置环境变量"><a href="#2-3-配置环境变量" class="headerlink" title="2.3 配置环境变量"></a>2.3 配置环境变量</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi /etc/profile</span><br><span class="line"><span class="comment"># 添加如下内容</span></span><br><span class="line"><span class="comment">#HIVE_HOME</span></span><br><span class="line"><span class="built_in">export</span> HIVE_HOME=/opt/module/hive</span><br><span class="line"><span class="built_in">export</span> PATH=<span class="variable">$PATH</span>:<span class="variable">$HIVE_HOME</span>/bin</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>2.4 修改hive配置文件</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hive/conf</span><br><span class="line">cp hive-env.sh.template hive-env.sh</span><br><span class="line">vi hive-env.sh</span><br><span class="line"><span class="comment"># 添加如下内容</span></span><br><span class="line"><span class="comment"># Set HADOOP_HOME to point to a specific hadoop install directory</span></span><br><span class="line">HADOOP_HOME=/opt/module/hadoop-2.7.2</span><br><span class="line"><span class="comment"># Hive Configuration Directory can be controlled by:</span></span><br><span class="line"><span class="built_in">export</span> HIVE_CONF_DIR=/opt/module/hive/conf</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="2-5-启动并测试hive"><a href="#2-5-启动并测试hive" class="headerlink" title="2.5 启动并测试hive"></a>2.5 启动并测试hive</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 创建数据库</span></span><br><span class="line">create database <span class="built_in">test</span>;</span><br><span class="line"><span class="comment"># 创建数据表</span></span><br><span class="line">create table student(id int, name string);</span><br><span class="line"><span class="comment"># 插入数据</span></span><br><span class="line">insert into table student values(1001, <span class="string">&quot;zhangsan&quot;</span>);</span><br><span class="line"><span class="comment"># 查询数据</span></span><br><span class="line">select * from student;</span><br><span class="line"><span class="comment"># 删除数据表</span></span><br><span class="line">drop table student;</span><br><span class="line"><span class="comment"># 删除数据库</span></span><br><span class="line">drop database <span class="built_in">test</span>;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="2-6-hive的bug"><a href="#2-6-hive的bug" class="headerlink" title="2.6 hive的bug"></a>2.6 hive的bug</h2><p>hive默认存储元数据的数据库为derby,不支持并发访问,多开几个hive客户端会出现异常</p><h2 id="2-7-MySQL的安装"><a href="#2-7-MySQL的安装" class="headerlink" title="2.7 MySQL的安装"></a>2.7 
MySQL的安装</h2><p>hive默认存储元数据的数据库为derby,不支持并发访问,多开几个hive客户端会出现异常,因此需要安装MySQL数据库来替换</p><p>CentOS 7离线安装MySQL 5.6</p><h2 id="2-8-Hive配置MySQL"><a href="#2-8-Hive配置MySQL" class="headerlink" title="2.8 Hive配置MySQL"></a>2.8 Hive配置MySQL</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hive/conf</span><br><span class="line">vi hive-site.xml</span><br><span class="line"><span class="comment"># 添加如下内容</span></span><br><span class="line">&lt;?xml version=<span class="string">&quot;1.0&quot;</span>?&gt;</span><br><span class="line">&lt;?xml-stylesheet <span class="built_in">type</span>=<span class="string">&quot;text/xsl&quot;</span> href=<span class="string">&quot;configuration.xsl&quot;</span>?&gt;</span><br><span class="line">&lt;configuration&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;javax.jdo.option.ConnectionURL&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;jdbc:mysql://192.168.1.101:3306/metastore?createDatabaseIfNotExist=<span class="literal">true</span>&lt;/value&gt;</span><br><span class="line"> &lt;description&gt;JDBC connect string <span class="keyword">for</span> a JDBC metastore&lt;/description&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;javax.jdo.option.ConnectionDriverName&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;com.mysql.jdbc.Driver&lt;/value&gt;</span><br><span class="line"> &lt;description&gt;Driver class name <span class="keyword">for</span> a JDBC metastore&lt;/description&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;javax.jdo.option.ConnectionUserName&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;root&lt;/value&gt;</span><br><span class="line"> &lt;description&gt;username to use against metastore database&lt;/description&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;javax.jdo.option.ConnectionPassword&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;123456&lt;/value&gt;</span><br><span class="line"> &lt;description&gt;password to use against metastore database&lt;/description&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line">&lt;/configuration&gt;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="2-9-启动Hive"><a href="#2-9-启动Hive" class="headerlink" title="2.9 启动Hive"></a>2.9 启动Hive</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="2-10-Beeline启动Hive"><a href="#2-10-Beeline启动Hive" class="headerlink" title="2.10 Beeline启动Hive"></a>2.10 Beeline启动Hive</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hive/bin</span><br><span class="line">./hiveserver2</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>打开另一个终端</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hive/bin</span><br><span 
class="line">./beeline</span><br><span class="line">!connect jdbc:hive2://hadoop101:10000</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>只需输入启动hadoop的用户名,不需要密码</p><h1 id="三、Hive的使用"><a href="#三、Hive的使用" class="headerlink" title="三、Hive的使用"></a>三、Hive的使用</h1><h2 id="3-1-Hive的交互命令"><a href="#3-1-Hive的交互命令" class="headerlink" title="3.1 Hive的交互命令"></a>3.1 Hive的交互命令</h2><p>运行来自命令行的SQL</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hive</span><br><span class="line">bin/hive -e <span class="string">&quot;select * from test.student;&quot;</span></span><br><span class="line">bin/hive -e <span class="string">&quot;select * from test.student;&quot;</span>&gt;result.log</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>运行来自文件的SQL</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hive</span><br><span class="line">vi test.sql</span><br><span class="line"><span class="comment"># 添加如下内容</span></span><br><span class="line">select * from test.student;</span><br><span class="line"><span class="comment"># 执行下面命令</span></span><br><span class="line">bin/hive -f test.sql&gt;result.log</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>退出hive客户端</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">quit;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="3-2-Hive数据仓库位置配置"><a href="#3-2-Hive数据仓库位置配置" class="headerlink" title="3.2 Hive数据仓库位置配置"></a>3.2 Hive数据仓库位置配置</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hive/conf</span><br><span class="line">vi hive-site.xml</span><br><span class="line"><span class="comment"># 添加如下内容</span></span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;hive.metastore.warehouse.dir&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;/user/hive/warehouse&lt;/value&gt;</span><br><span class="line"> &lt;description&gt;location of default database <span class="keyword">for</span> the warehouse&lt;/description&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="3-3-查询后信息显示配置(可选配)"><a href="#3-3-查询后信息显示配置(可选配)" class="headerlink" title="3.3 查询后信息显示配置(可选配)"></a>3.3 查询后信息显示配置(可选配)</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hive/conf</span><br><span class="line">vi hive-site.xml</span><br><span class="line"><span class="comment"># 添加如下内容</span></span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;hive.cli.print.header&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;<span class="literal">true</span>&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;hive.cli.print.current.db&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;<span class="literal">true</span>&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span 
class="line"></span><br></pre></td></tr></table></figure><h2 id="3-4-Hive运行日志信息配置(必须配置)"><a href="#3-4-Hive运行日志信息配置(必须配置)" class="headerlink" title="3.4 Hive运行日志信息配置(必须配置)"></a>3.4 Hive运行日志信息配置(必须配置)</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hive/conf</span><br><span class="line">cp hive-log4j.properties.template hive-log4j.properties</span><br><span class="line">vi hive-log4j.properties</span><br><span class="line"><span class="comment"># 添加如下内容</span></span><br><span class="line">hive.log.dir=/opt/module/hive/logs</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="四、Hive的数据类型"><a href="#四、Hive的数据类型" class="headerlink" title="四、Hive的数据类型"></a>四、Hive的数据类型</h1><h2 id="4-1-Hive基本数据类型"><a href="#4-1-Hive基本数据类型" class="headerlink" title="4.1 Hive基本数据类型"></a>4.1 Hive基本数据类型</h2><table><thead><tr><th>hive数据类型</th><th>java数据类型</th><th>长度</th><th>示例</th></tr></thead><tbody><tr><td>tinyint</td><td>byte</td><td>1byte有符号整数</td><td>20</td></tr><tr><td>smalint</td><td>short</td><td>2byte有符号整数</td><td>20</td></tr><tr><td>int</td><td>int</td><td>4byte有符号整数</td><td>20</td></tr><tr><td>bigint</td><td>long</td><td>8byte有符号整数</td><td>20</td></tr><tr><td>boolean</td><td>boolean</td><td>布尔类型,true或者false</td><td>TRUE FALSE</td></tr><tr><td>float</td><td>float</td><td>单精度浮点数</td><td>3.14159</td></tr><tr><td>double</td><td>double</td><td>双精度浮点数</td><td>3.14159</td></tr><tr><td>string</td><td>string</td><td>字符系列,可以使用单引号或者双引号</td><td>‘now is’ “i am a”</td></tr><tr><td>timestamp</td><td></td><td>时间类型</td><td></td></tr><tr><td>binary</td><td></td><td>字节数组</td><td></td></tr></tbody></table><h2 id="4-2-Hive集合数据类型"><a href="#4-2-Hive集合数据类型" class="headerlink" title="4.2 Hive集合数据类型"></a>4.2 Hive集合数据类型</h2><table><thead><tr><th>数据类型</th><th>描述</th><th>语法示例</th></tr></thead><tbody><tr><td>struct</td><td>和c语言中的struct类似,都可以通过“点”符号访问元素内容。例如,如果某个列的数据类型是STRUCT{first STRING, last STRING},那么第1个元素可以通过字段.first来引用。</td><td>struct() 例如struct&lt;street:string, city:string&gt;</td></tr><tr><td>map</td><td>MAP是一组键-值对元组集合,使用数组表示法可以访问数据。例如,如果某个列的数据类型是MAP,其中键-&gt;值对是’first’-&gt;’John’和’last’-&gt;’Doe’,那么可以通过字段名[‘last’]获取最后一个元素</td><td>map() 例如map&lt;string, int&gt;</td></tr><tr><td>array</td><td>数组是一组具有相同类型和名称的变量的集合。这些变量称为数组的元素,每个数组元素都有一个编号,编号从零开始。例如,数组值为[‘John’, ‘Doe’],那么第2个元素可以通过数组名[1]进行引用。</td><td>Array() 例如array</td></tr></tbody></table><ul><li>案例 创建数据文件test.txt</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi test.txt</span><br><span class="line"><span class="comment"># 添加如下内容</span></span><br><span class="line">songsong,bingbing_lili,xiao song:18_xiaoxiao song:19,hui long guan_beijing</span><br><span class="line">yangyang,caicai_susu,xiao yang:18_xiaoxiao yang:19,chao yang_beijing</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>创建表结构文件test.sql</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi test.sql</span><br><span class="line"><span class="comment"># 添加如下内容</span></span><br><span class="line">create table test.test(</span><br><span class="line">name string,</span><br><span class="line">friends array&lt;string&gt;,</span><br><span class="line">children map&lt;string, int&gt;,</span><br><span class="line">address struct&lt;street:string, city:string&gt;</span><br><span class="line">)</span><br><span class="line">row format delimited fields 
terminated by <span class="string">&#x27;,&#x27;</span></span><br><span class="line">collection items terminated by <span class="string">&#x27;_&#x27;</span></span><br><span class="line">map keys terminated by <span class="string">&#x27;:&#x27;</span></span><br><span class="line">lines terminated by <span class="string">&#x27;\n&#x27;</span>;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>上传数据文件test.txt</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hdfs dfs -put test.txt /user/hive/warehouse/test.db/<span class="built_in">test</span></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>测试查询</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive</span><br><span class="line">use <span class="built_in">test</span>;</span><br><span class="line">select name,friends[1],children[<span class="string">&quot;xiao song&quot;</span>],address.city from <span class="built_in">test</span>;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="4-3-类型转化"><a href="#4-3-类型转化" class="headerlink" title="4.3 类型转化"></a>4.3 类型转化</h2><p>Hive的原子数据类型是可以进行隐式转换的,类似于Java的类型转换,例如某表达式使用INT类型,TINYINT会自动转换为INT类型,但是Hive不会进行反向转化,例如,某表达式使用TINYINT类型,INT不会自动转换为TINYINT类型,它会返回错误,除非使用CAST操作。</p><ul><li> 隐式类型转换规则如下</li></ul><ol><li> 任何整数类型都可以隐式地转换为一个范围更广的类型,如TINYINT可以转换成INT,INT可以转换成BIGINT。</li><li> 所有整数类型、FLOAT和STRING类型都可以隐式地转换成DOUBLE。</li><li> TINYINT、SMALLINT、INT都可以转换为FLOAT。</li><li> BOOLEAN类型不可以转换为任何其它的类型。</li></ol><ul><li>可以使用CAST操作显示进行数据类型转换 例如CAST(‘1’ AS INT)将把字符串’1’ 转换成整数1;如果强制类型转换失败,如执行CAST(‘X’ AS INT),表达式返回空值 NULL。</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">select <span class="string">&#x27;1&#x27;</span>+2, cast(<span class="string">&#x27;1&#x27;</span>as int) + 2;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="五、DDL数据库定义语言"><a href="#五、DDL数据库定义语言" class="headerlink" title="五、DDL数据库定义语言"></a>五、DDL数据库定义语言</h1><h2 id="5-1-创建数据库"><a href="#5-1-创建数据库" class="headerlink" title="5.1 创建数据库"></a>5.1 创建数据库</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">CREATE DATABASE [IF NOT EXISTS] database_name</span><br><span class="line">[COMMENT database_comment]</span><br><span class="line">[LOCATION hdfs_path]</span><br><span class="line">[WITH DBPROPERTIES (property_name=property_value, ...)];</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><ul><li> 实例:新建一个名为test1的数据库,存储在HDFS中的 /test 路径下</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">create database test1 comment <span class="string">&quot;test1 database&quot;</span> location <span class="string">&quot;/test&quot;</span> with dbproperties(<span class="string">&quot;zhangsan&quot;</span>=<span class="string">&quot;lisi&quot;</span>);</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-2-显示所有数据库"><a href="#5-2-显示所有数据库" class="headerlink" title="5.2 显示所有数据库"></a>5.2 显示所有数据库</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">show databases;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-3-过滤显示数据库"><a href="#5-3-过滤显示数据库" class="headerlink" 
title="5.3 过滤显示数据库"></a>5.3 过滤显示数据库</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">show databases like <span class="string">&#x27;test&#x27;</span>;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-4-显示指定数据库的信息"><a href="#5-4-显示指定数据库的信息" class="headerlink" title="5.4 显示指定数据库的信息"></a>5.4 显示指定数据库的信息</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">desc database test1;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-5-显示指定数据库的详细信息"><a href="#5-5-显示指定数据库的详细信息" class="headerlink" title="5.5 显示指定数据库的详细信息"></a>5.5 显示指定数据库的详细信息</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">desc database extended test1;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-6-切换数据库"><a href="#5-6-切换数据库" class="headerlink" title="5.6 切换数据库"></a>5.6 切换数据库</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">use test1;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-7-修改数据库"><a href="#5-7-修改数据库" class="headerlink" title="5.7 修改数据库"></a>5.7 修改数据库</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">alter database test1 <span class="built_in">set</span> dbproperties(<span class="string">&#x27;name&#x27;</span>=<span class="string">&#x27;zhangsan&#x27;</span>);</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-8-删除空的数据库"><a href="#5-8-删除空的数据库" class="headerlink" title="5.8 删除空的数据库"></a>5.8 删除空的数据库</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">drop database test1;</span><br><span class="line">drop database <span class="keyword">if</span> exists test1;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-9-删除非空数据库"><a href="#5-9-删除非空数据库" class="headerlink" title="5.9 删除非空数据库"></a>5.9 删除非空数据库</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">drop database test1 cascade;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-10-创建数据表"><a href="#5-10-创建数据表" class="headerlink" title="5.10 创建数据表"></a>5.10 创建数据表</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name</span><br><span class="line">[(col_name data_type [COMMENT col_comment], ...)]</span><br><span class="line">[COMMENT table_comment]</span><br><span class="line">[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)] <span class="comment">## 分区表在HDFS中体现为分为多个文件夹</span></span><br><span class="line">[CLUSTERED BY (col_name, col_name, ...) 
<span class="comment">## 分区表在HDFS中体现为一个文件夹下分为多个文件</span></span><br><span class="line">[SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]</span><br><span class="line">[ROW FORMAT row_format]</span><br><span class="line">[STORED AS file_format]</span><br><span class="line">[LOCATION hdfs_path]</span><br><span class="line">[TBLPROPERTIES (property_name=property_value, ...)]</span><br><span class="line">[AS select_statement]</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><ul><li> 实例:新建一个名为student1的表,存储在HDFS中的 /student 路径下</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">create table student1(id int comment <span class="string">&quot;Identity&quot;</span>, age int comment <span class="string">&quot;Age&quot;</span>) comment <span class="string">&quot;Student&quot;</span> row format delimited fields terminated by <span class="string">&#x27;\t&#x27;</span> location <span class="string">&#x27;/student&#x27;</span>;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-11-显示指定表的信息和创建表时的配置"><a href="#5-11-显示指定表的信息和创建表时的配置" class="headerlink" title="5.11 显示指定表的信息和创建表时的配置"></a>5.11 显示指定表的信息和创建表时的配置</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">desc student1;</span><br><span class="line">show create table student1;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-12-显示指定表的详细信息"><a href="#5-12-显示指定表的详细信息" class="headerlink" title="5.12 显示指定表的详细信息"></a>5.12 显示指定表的详细信息</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">desc formatted student1;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-13-外部表和管理表"><a href="#5-13-外部表和管理表" class="headerlink" title="5.13 外部表和管理表"></a>5.13 外部表和管理表</h2><ul><li> 管理表(managed_table)</li></ul><p>删除表后,存储在HDFS上的数据也会被删除</p><ul><li> 外部表(external_table)</li></ul><p>删除表后,存储在HDFS上的数据不会被删除</p><h2 id="5-14-将数据表修改为外部表"><a href="#5-14-将数据表修改为外部表" class="headerlink" title="5.14 将数据表修改为外部表"></a>5.14 将数据表修改为外部表</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">alter table student1 <span class="built_in">set</span> tblproperties(<span class="string">&#x27;EXTERNAL&#x27;</span>=<span class="string">&#x27;TRUE&#x27;</span>);</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-15-将数据表修改为管理表"><a href="#5-15-将数据表修改为管理表" class="headerlink" title="5.15 将数据表修改为管理表"></a>5.15 将数据表修改为管理表</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">alter table student2 <span class="built_in">set</span> tblproperties(<span class="string">&#x27;EXTERNAL&#x27;</span>=<span class="string">&#x27;FALSE&#x27;</span>);</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-16-创建分区表"><a href="#5-16-创建分区表" class="headerlink" title="5.16 创建分区表"></a>5.16 创建分区表</h2><p>谓词下退,先走过滤<br><strong>1. 
新建表</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">create table dept_partition(deptno int, dname string, loc string)</span><br><span class="line">partitioned by (month string)</span><br><span class="line">row format delimited fields terminated by <span class="string">&#x27;\t&#x27;</span>;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>2. 加载数据</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive (default)&gt; load data <span class="built_in">local</span> inpath <span class="string">&#x27;/opt/module/datas/dept.txt&#x27;</span> into table default.dept_partition partition(month=<span class="string">&#x27;201709&#x27;</span>);</span><br><span class="line">hive (default)&gt; load data <span class="built_in">local</span> inpath <span class="string">&#x27;/opt/module/datas/dept.txt&#x27;</span> into table default.dept_partition partition(month=<span class="string">&#x27;201708&#x27;</span>);</span><br><span class="line">hive (default)&gt; load data <span class="built_in">local</span> inpath <span class="string">&#x27;/opt/module/datas/dept.txt&#x27;</span> into table default.dept_partition partition(month=<span class="string">&#x27;201707&#x27;</span>);</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>3. 查询表数据</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">select * from dept_partition <span class="built_in">where</span> month=<span class="string">&#x27;201709&#x27;</span>;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>4. 
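查看分区目录(补充示例)</strong></p><p>补充说明:分区表在HDFS上体现为表目录下以“分区字段=值”命名的子目录(这里假设使用默认的 /user/hive/warehouse 仓库路径),可以在hive客户端中直接查看:</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive (default)&gt; dfs -ls /user/hive/warehouse/dept_partition;</span><br><span class="line"># 正常可以看到 month=201707、month=201708、month=201709 三个子目录</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>5. 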
联合查询表数据</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">select * from dept_partition <span class="built_in">where</span> month=<span class="string">&#x27;201709&#x27;</span> union</span><br><span class="line">select * from dept_partition <span class="built_in">where</span> month=<span class="string">&#x27;201708&#x27;</span> union</span><br><span class="line">select * from dept_partition <span class="built_in">where</span> month=<span class="string">&#x27;201707&#x27;</span>;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-17-新增分区"><a href="#5-17-新增分区" class="headerlink" title="5.17 新增分区"></a>5.17 新增分区</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hive (default)&gt; alter table dept_partition add partition(month=&#x27;201705&#x27;) partition(month=&#x27;201704&#x27;);</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-18-删除分区"><a href="#5-18-删除分区" class="headerlink" title="5.18 删除分区"></a>5.18 删除分区</h2><figure class="highlight apache"><table><tr><td class="code"><pre><span class="line"><span class="attribute">hive</span> (default)&gt; alter table dept_partition drop partition(month=&#x27;<span class="number">201705</span>&#x27;), partition(month=&#x27;<span class="number">201704</span>&#x27;);</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-19-查询分区表有多少分区"><a href="#5-19-查询分区表有多少分区" class="headerlink" title="5.19 查询分区表有多少分区"></a>5.19 查询分区表有多少分区</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">show partitions dept_partition;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-20-查询分区表的结构"><a href="#5-20-查询分区表的结构" class="headerlink" title="5.20 查询分区表的结构"></a>5.20 查询分区表的结构</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">desc formatted dept_partition;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-21-创建二级分区表"><a href="#5-21-创建二级分区表" class="headerlink" title="5.21 创建二级分区表"></a>5.21 创建二级分区表</h2><ol><li> 新建表</li></ol><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">create table dept_partition2(deptno int, dname string, loc string)</span><br><span class="line">partitioned by (month string, day string)</span><br><span class="line">row format delimited fields terminated by <span class="string">&#x27;\t&#x27;</span>;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><ol start="2"><li> 加载数据</li></ol><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">load data <span class="built_in">local</span> inpath <span class="string">&#x27;/opt/module/datas/dept.txt&#x27;</span> into table</span><br><span class="line"> default.dept_partition2 partition(month=<span class="string">&#x27;201709&#x27;</span>, day=<span class="string">&#x27;13&#x27;</span>);</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-22-修复分区"><a href="#5-22-修复分区" class="headerlink" title="5.22 修复分区"></a>5.22 修复分区</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">msck repair table dept_partition2;</span><br><span class="line"></span><br><span 
class="line"></span><br></pre></td></tr></table></figure><h2 id="5-23-重命名表"><a href="#5-23-重命名表" class="headerlink" title="5.23 重命名表"></a>5.23 重命名表</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">alter table dept_partition2 rename to dept_partition3;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-24-添加列"><a href="#5-24-添加列" class="headerlink" title="5.24 添加列"></a>5.24 添加列</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">alter table dept_partition add columns(deptdesc string);</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-25-更新列"><a href="#5-25-更新列" class="headerlink" title="5.25 更新列"></a>5.25 更新列</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">alter table dept_partition change column deptdesc desc int;</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="5-26-替换列"><a href="#5-26-替换列" class="headerlink" title="5.26 替换列"></a>5.26 替换列</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">alter table dept_partition replace columns(deptno string, dname </span><br><span class="line"> string, loc string);</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="六、DML数据库操作语言"><a href="#六、DML数据库操作语言" class="headerlink" title="六、DML数据库操作语言"></a>六、DML数据库操作语言</h1><h2 id="6-1-数据导入"><a href="#6-1-数据导入" class="headerlink" title="6.1 数据导入"></a>6.1 数据导入</h2><h3 id="6-1-1-向表中装载数据"><a href="#6-1-1-向表中装载数据" class="headerlink" title="6.1.1 向表中装载数据"></a>6.1.1 向表中装载数据</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">load data [<span class="built_in">local</span>] inpath <span class="string">&#x27;/opt/module/datas/student.txt&#x27;</span> [overwrite] into table student [partition (partcol1=val1,…)];</span><br><span class="line"><span class="comment"># local:表示从本地加载数据到hive表;否则从HDFS加载数据到hive表</span></span><br><span class="line"><span class="comment"># overwrite:表示覆盖表中已有数据,否则表示追加</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="6-1-2-通过查询语句向表中插入数据"><a href="#6-1-2-通过查询语句向表中插入数据" class="headerlink" title="6.1.2 通过查询语句向表中插入数据"></a>6.1.2 通过查询语句向表中插入数据</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 基本插入</span></span><br><span class="line">insert into table student values(1,<span class="string">&#x27;wangwu&#x27;</span>),(2,<span class="string">&#x27;zhaoliu&#x27;</span>);</span><br><span class="line"><span class="comment"># 查询插入</span></span><br><span class="line">insert overwrite table student</span><br><span class="line">select id, name from student <span class="built_in">where</span> id&gt;10;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="6-1-3-查询语句中创建表并加载数据(As-Select)"><a href="#6-1-3-查询语句中创建表并加载数据(As-Select)" class="headerlink" title="6.1.3 查询语句中创建表并加载数据(As Select)"></a>6.1.3 查询语句中创建表并加载数据(As Select)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">create table <span class="keyword">if</span> not exists student3 as select id, name from student;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="6-1-4-创建表时通过Location指定加载数据路径"><a href="#6-1-4-创建表时通过Location指定加载数据路径" class="headerlink" 
title="6.1.4 创建表时通过Location指定加载数据路径"></a>6.1.4 创建表时通过Location指定加载数据路径</h3><p>上传数据至HDFS</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive (default)&gt; dfs -mkdir /student;</span><br><span class="line">hive (default)&gt; dfs -put /opt/module/datas/student.txt /student;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>创建表,并指定HDFS上的位置</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">create external table <span class="keyword">if</span> not exists student5(id int, name string)</span><br><span class="line">row format delimited fields terminated by <span class="string">&#x27;\t&#x27;</span></span><br><span class="line">location <span class="string">&#x27;/student;</span></span><br><span class="line"><span class="string"></span></span><br></pre></td></tr></table></figure><p>查询数据</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive (default)&gt; select * from student5;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="6-1-5-Export导出到HDFS上"><a href="#6-1-5-Export导出到HDFS上" class="headerlink" title="6.1.5 Export导出到HDFS上"></a>6.1.5 Export导出到HDFS上</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 先用export导出后,再将数据导入</span></span><br><span class="line">import table student2 from <span class="string">&#x27;/user/hive/warehouse/export/student&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="6-2-数据导出"><a href="#6-2-数据导出" class="headerlink" title="6.2 数据导出"></a>6.2 数据导出</h2><h3 id="6-2-1-Insert导出"><a href="#6-2-1-Insert导出" class="headerlink" title="6.2.1 Insert导出"></a>6.2.1 Insert导出</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 导出到本地</span></span><br><span class="line">insert overwrite <span class="built_in">local</span> directory <span class="string">&#x27;/opt/module/datas/export/student1&#x27;</span></span><br><span class="line">row format delimited fields terminated by <span class="string">&#x27;\t&#x27;</span></span><br><span class="line">select * from student;</span><br><span class="line"><span class="comment"># 导出到HDFS</span></span><br><span class="line">insert overwrite directory <span class="string">&#x27;/user/lytdev/student2&#x27;</span></span><br><span class="line">row format delimited fields terminated by <span class="string">&#x27;\t&#x27;</span></span><br><span class="line">select * from student;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="6-2-2-Hadoop命令导出到本地"><a href="#6-2-2-Hadoop命令导出到本地" class="headerlink" title="6.2.2 Hadoop命令导出到本地"></a>6.2.2 Hadoop命令导出到本地</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive (default)&gt; dfs -get /user/hive/warehouse/student/month=201709/000000_0</span><br><span class="line">/opt/module/datas/<span class="built_in">export</span>/student3.txt;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="6-2-3-Hive-Shell-命令导出"><a href="#6-2-3-Hive-Shell-命令导出" class="headerlink" title="6.2.3 Hive Shell 命令导出"></a>6.2.3 Hive Shell 命令导出</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/hive -e <span class="string">&#x27;select * from default.student;&#x27;</span> &gt; /opt/module/datas/<span class="built_in">export</span>/student4.txt;</span><br><span 
class="line"></span><br></pre></td></tr></table></figure><h3 id="6-2-4-Export导出到HDFS上"><a href="#6-2-4-Export导出到HDFS上" class="headerlink" title="6.2.4 Export导出到HDFS上"></a>6.2.4 Export导出到HDFS上</h3><p>export和import主要用于两个Hadoop平台集群之间Hive表迁移</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 既能导出元数据,也能导出数据</span></span><br><span class="line"><span class="built_in">export</span> table default.student to <span class="string">&#x27;/user/hive/warehouse/export/student&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="6-3-清除数据(不清除元数据)"><a href="#6-3-清除数据(不清除元数据)" class="headerlink" title="6.3 清除数据(不清除元数据)"></a>6.3 清除数据(不清除元数据)</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># Truncate只能删除管理表,不能删除外部表中数据</span></span><br><span class="line">truncate table student;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="七、基本查询"><a href="#七、基本查询" class="headerlink" title="七、基本查询"></a>七、基本查询</h1><p><strong>(1)写SQL语句关键字的顺序</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span></span><br><span class="line"><span class="keyword">from</span></span><br><span class="line"><span class="keyword">join</span> <span class="keyword">on</span></span><br><span class="line"><span class="keyword">where</span></span><br><span class="line"><span class="keyword">group</span> <span class="keyword">by</span></span><br><span class="line"><span class="keyword">order</span> <span class="keyword">by</span></span><br><span class="line"><span class="keyword">having</span></span><br><span class="line">limit</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(1)执行SQL语句关键字的顺序</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">from</span></span><br><span class="line"><span class="keyword">join</span></span><br><span class="line"><span class="keyword">on</span></span><br><span class="line"><span class="keyword">where</span></span><br><span class="line"><span class="keyword">group</span> <span class="keyword">by</span>(开始使用<span class="keyword">select</span>中的别名,后面的语句中都可以使用)</span><br><span class="line">avg,sum....</span><br><span class="line"><span class="keyword">having</span></span><br><span class="line"><span class="keyword">select</span></span><br><span class="line"><span class="keyword">distinct</span></span><br><span class="line"><span class="keyword">order</span> <span class="keyword">by</span></span><br><span class="line">limit</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(1)谓词下推</strong></p><p>先走过滤,再走查询<br><strong>(1)SQL优化</strong></p><p>在join on条件中,可以使用子查询语句仅需优化</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span></span><br><span class="line">id,name</span><br><span class="line"><span class="keyword">from</span></span><br><span class="line">A <span class="keyword">join</span> B</span><br><span class="line"><span class="keyword">on</span> A.id<span class="operator">=</span>B.id <span class="keyword">and</span> A.id<span class="operator">&gt;</span><span class="number">100</span>;</span><br><span class="line">优化后<span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span 
class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span><span class="operator">=</span></span><br><span class="line"><span class="keyword">select</span></span><br><span class="line">id,name</span><br><span class="line"><span class="keyword">from</span></span><br><span class="line">(<span class="keyword">select</span> id,name <span class="keyword">from</span> A <span class="keyword">where</span> A.id<span class="operator">&gt;</span><span class="number">100</span>) t1</span><br><span class="line"><span class="keyword">join</span> B</span><br><span class="line"><span class="keyword">on</span> t1.id<span class="operator">=</span>B.id;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(2) 数据准备</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 创建部门表</span></span><br><span class="line">create table <span class="keyword">if</span> not exists dept(deptno int, dname string, loc int)</span><br><span class="line">row format delimited fields terminated by <span class="string">&#x27;\t&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 创建员工表</span></span><br><span class="line">create table <span class="keyword">if</span> not exists emp(empno int, ename string, job string, mgr int, hiredate string, sal double, comm double, deptno int)</span><br><span class="line">row format delimited fields terminated by <span class="string">&#x27;\t&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 向部门表导入数据</span></span><br><span class="line">load data <span class="built_in">local</span> inpath <span class="string">&#x27;/home/lytdev/dept.txt&#x27;</span> into table dept;</span><br><span class="line"></span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 向员工表导入数据</span></span><br><span class="line">load data <span class="built_in">local</span> inpath <span class="string">&#x27;/home/lytdev/emp.txt&#x27;</span> into table emp;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(3) 全表和特定列查询</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 全表查询</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp;</span><br><span class="line"># 特定列查询</span><br><span class="line"><span class="keyword">select</span> empno, ename <span 
class="keyword">from</span> emp;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(4) 列别名</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span> ename <span class="keyword">as</span> name, deptno dn <span class="keyword">from</span> emp;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(5)算术运算符</strong><br>运算符 描述<br>A + B A和B相加<br>A - B A和B相减<br>A * B A和B相乘<br>A / B A和B相除<br>A % B A对B取余<br>A &amp; B A和B按位取余<br>A|B A和B按位取或<br>A ^ B A和B按位取异或<br>~A A按位取反</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span> sal <span class="operator">+</span><span class="number">1</span> <span class="keyword">from</span> emp;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(6)常用函数</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 求总行数</span><br><span class="line"><span class="keyword">select</span> <span class="built_in">count</span>(<span class="operator">*</span>) cnt <span class="keyword">from</span> emp;</span><br><span class="line"># 求最大值</span><br><span class="line"><span class="keyword">select</span> <span class="built_in">max</span>(sal) max_sal <span class="keyword">from</span> emp;</span><br><span class="line"># 求最小值</span><br><span class="line"><span class="keyword">select</span> <span class="built_in">min</span>(sal) min_sal <span class="keyword">from</span> emp;</span><br><span class="line"># 求总和</span><br><span class="line"><span class="keyword">select</span> <span class="built_in">sum</span>(sal) sum_sal <span class="keyword">from</span> emp;</span><br><span class="line"># 求平均值</span><br><span class="line"><span class="keyword">select</span> <span class="built_in">avg</span>(sal) avg_sal <span class="keyword">from</span> emp;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(7) Limit语句</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp limit <span class="number">5</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(8)where语句</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># <span class="keyword">where</span>子句中不能使用字段别名</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp <span class="keyword">where</span> sal <span class="operator">&gt;</span><span class="number">1000</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(9)比较运算符</strong><br>操作符 支持的数据类型 描述<br>A = B 基本数据类型 如果A等于B则返回TRUE,反之返回FALSE<br>A &lt;=&gt; B 基本数据类型 如果A和B都为NULL,则返回TRUE,其他的和等号(=)操作符的结果一致,如果任一为NULL则结果为NULL<br>A&lt;&gt;B, A!=B 基本数据类型 A或者B为NULL则返回NULL;如果A不等于B,则返回TRUE,反之返回FALSE<br>A&lt;B 基本数据类型 A或者B为NULL,则返回NULL;如果A小于B,则返回TRUE,反之返回FALSE<br>A&lt;=B 基本数据类型 A或者B为NULL,则返回NULL;如果A小于等于B,则返回TRUE,反之返回FALSE<br>A&gt;B 基本数据类型 A或者B为NULL,则返回NULL;如果A大于B,则返回TRUE,反之返回FALSE<br>A&gt;=B 基本数据类型 A或者B为NULL,则返回NULL;如果A大于等于B,则返回TRUE,反之返回FALSE<br>A [NOT] BETWEEN B AND C 基本数据类型 如果A,B或者C任一为NULL,则结果为NULL。如果A的值大于等于B而且小于或等于C,则结果为TRUE,反之为FALSE。如果使用NOT关键字则可达到相反的效果。<br>A IS NULL 所有数据类型 如果A等于NULL,则返回TRUE,反之返回FALSE<br>A IS NOT NULL 所有数据类型 如果A不等于NULL,则返回TRUE,反之返回FALSE<br>IN(数值1, 数值2) 所有数据类型 使用 
IN运算显示列表中的值<br>A [NOT] LIKE B STRING 类型 B是一个SQL下的简单正则表达式,也叫通配符模式,如果A与其匹配的话,则返回TRUE;反之返回FALSE。B的表达式说明如下:‘x%’表示A必须以字母‘x’开头,‘%x’表示A必须以字母’x’结尾,而‘%x%’表示A包含有字母’x’,可以位于开头,结尾或者字符串中间。如果使用NOT关键字则可达到相反的效果。<br>A RLIKE B, A REGEXP B STRING 类型 B是基于java的正则表达式,如果A与其匹配,则返回TRUE;反之返回FALSE。匹配使用的是JDK中的正则表达式接口实现的,因为正则也依据其中的规则。例如,正则表达式必须和整个字符串A相匹配,而不是只需与其字符串匹配。</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 查询出薪水等于<span class="number">5000</span>的所有员工</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp <span class="keyword">where</span> sal <span class="operator">=</span> <span class="number">5000</span>;</span><br><span class="line"># 查询工资在<span class="number">500</span>到<span class="number">1000</span>的员工信息</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp <span class="keyword">where</span> sal <span class="keyword">between</span> <span class="number">500</span> <span class="keyword">and</span> <span class="number">1000</span>;</span><br><span class="line"># 查询comm为空的所有员工信息</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp <span class="keyword">where</span> comm <span class="keyword">is</span> <span class="keyword">null</span>;</span><br><span class="line"># 查询工资是<span class="number">1500</span>或<span class="number">5000</span>的员工信息</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp <span class="keyword">where</span> sal <span class="keyword">IN</span> (<span class="number">1500</span>, <span class="number">5000</span>);</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(10)like和rlike</strong></p><p>**rlike:**带有正则表达式的like语句<br>正则匹配 描述<br>\ 转义<br>^ 一行的开头<br>^R 匹配以R为开头的行<br>$ 匹配一行的结尾<br>R$ 匹配以R为结尾的行</p><ul><li>表示上一个子式匹配0次或多次,贪心匹配<br> Zo* Zo Zoo Zooo<br> . 
匹配一个任意的字符<br> .* 匹配任意字符串<br> [] 匹配某个范围内的字符<br> [a-z] 匹配一个a-z之间的字符<br> [a-z]* 匹配任意字母字符串</li></ul><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 查找以<span class="number">2</span>开头薪水的员工信息</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp <span class="keyword">where</span> sal <span class="keyword">LIKE</span> <span class="string">&#x27;2%&#x27;</span>;</span><br><span class="line"># 查找第二个数值为<span class="number">2</span>的薪水的员工信息</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp <span class="keyword">where</span> sal <span class="keyword">LIKE</span> <span class="string">&#x27;_2%&#x27;</span>;</span><br><span class="line"># 查找薪水中含有<span class="number">2</span>的员工信息</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp <span class="keyword">where</span> sal RLIKE <span class="string">&#x27;[2]&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(11)逻辑运算符</strong><br>操作符 描述<br>and 逻辑并<br>or 逻辑或<br>not 逻辑否</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 查询薪水大于<span class="number">1000</span>,部门是<span class="number">30</span></span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp <span class="keyword">where</span> sal<span class="operator">&gt;</span><span class="number">1000</span> <span class="keyword">and</span> deptno<span class="operator">=</span><span class="number">30</span>;</span><br><span class="line"># 查询薪水大于<span class="number">1000</span>,或者部门是<span class="number">30</span></span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp <span class="keyword">where</span> sal<span class="operator">&gt;</span><span class="number">1000</span> <span class="keyword">or</span> deptno<span class="operator">=</span><span class="number">30</span>;</span><br><span class="line"># 查询除了<span class="number">20</span>部门和<span class="number">30</span>部门以外的员工信息</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp <span class="keyword">where</span> deptno <span class="keyword">not</span> <span class="keyword">IN</span>(<span class="number">30</span>, <span class="number">20</span>);</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="八、分组"><a href="#八、分组" class="headerlink" title="八、分组"></a>八、分组</h1><h2 id="8-1-group-by语句"><a href="#8-1-group-by语句" class="headerlink" title="8.1 group by语句"></a>8.1 group by语句</h2><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 计算emp表每个部门的平均工资</span><br><span class="line"><span class="keyword">select</span> t.deptno, <span class="built_in">avg</span>(t.sal) avg_sal <span class="keyword">from</span> emp t <span class="keyword">group</span> <span class="keyword">by</span> t.deptno;</span><br><span class="line"># 计算emp每个部门中每个岗位的最高薪水</span><br><span class="line"><span class="keyword">select</span> t.deptno, t.job, <span class="built_in">max</span>(t.sal) max_sal <span class="keyword">from</span> emp t <span class="keyword">group</span> <span class="keyword">by</span></span><br><span class="line">t.deptno, 
t.job;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="8-2-having语句"><a href="#8-2-having语句" class="headerlink" title="8.2 having语句"></a>8.2 having语句</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">having和where不同点</span><br><span class="line"></span><br><span class="line">where后面不能写分组函数,而having后面可以使用分组函数 </span><br><span class="line">having只用于group by分组统计语句</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 求每个部门的平均工资</span><br><span class="line"><span class="keyword">select</span> deptno, <span class="built_in">avg</span>(sal) <span class="keyword">from</span> emp <span class="keyword">group</span> <span class="keyword">by</span> deptno;</span><br><span class="line"># 求每个部门的平均薪水大于<span class="number">2000</span>的部门</span><br><span class="line"><span class="keyword">select</span> deptno, <span class="built_in">avg</span>(sal) avg_sal <span class="keyword">from</span> emp <span class="keyword">group</span> <span class="keyword">by</span> deptno <span class="keyword">having</span></span><br><span class="line">avg_sal <span class="operator">&gt;</span> <span class="number">2000</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="九、join语句"><a href="#九、join语句" class="headerlink" title="九、join语句"></a>九、join语句</h1><h2 id="9-1-等值join"><a href="#9-1-等值join" class="headerlink" title="9.1 等值join"></a>9.1 等值join</h2><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 根据员工表和部门表中的部门编号相等,查询员工编号、员工名称和部门名称;</span><br><span class="line"><span class="keyword">select</span> e.empno, e.ename, d.deptno, d.dname <span class="keyword">from</span> emp e <span class="keyword">join</span> dept d <span class="keyword">on</span> e.deptno <span class="operator">=</span> d.deptno;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="9-2-表的别名"><a href="#9-2-表的别名" class="headerlink" title="9.2 表的别名"></a>9.2 表的别名</h2><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 合并员工表和部门表</span><br><span class="line"><span class="keyword">select</span> e.empno, e.ename, d.deptno <span class="keyword">from</span> emp e <span class="keyword">join</span> dept d <span class="keyword">on</span> e.deptno <span class="operator">=</span> d.deptno;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="9-3-内连接"><a href="#9-3-内连接" class="headerlink" title="9.3 内连接"></a>9.3 内连接</h2><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 只有进行连接的两个表中都存在与连接条件相匹配的数据才会被保留下来</span><br><span class="line"><span class="keyword">select</span> e.empno, e.ename, d.deptno <span class="keyword">from</span> emp e <span class="keyword">join</span> dept d <span class="keyword">on</span> e.deptno <span class="operator">=</span> d.deptno;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="9-4-左外连接"><a href="#9-4-左外连接" class="headerlink" title="9.4 左外连接"></a>9.4 左外连接</h2><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># <span class="keyword">JOIN</span>操作符左边表中符合<span class="keyword">WHERE</span>子句的所有记录将会被返回</span><br><span class="line"><span class="keyword">select</span> e.empno, e.ename, d.deptno <span class="keyword">from</span> emp e <span class="keyword">left</span> <span class="keyword">join</span> dept d 
<span class="keyword">on</span> e.deptno <span class="operator">=</span> d.deptno;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="9-5-右外连接"><a href="#9-5-右外连接" class="headerlink" title="9.5 右外连接"></a>9.5 右外连接</h2><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># <span class="keyword">JOIN</span>操作符右边表中符合<span class="keyword">WHERE</span>子句的所有记录将会被返回</span><br><span class="line"><span class="keyword">select</span> e.empno, e.ename, d.deptno <span class="keyword">from</span> emp e <span class="keyword">right</span> <span class="keyword">join</span> dept d <span class="keyword">on</span> e.deptno <span class="operator">=</span> d.deptno;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="9-6-满外连接"><a href="#9-6-满外连接" class="headerlink" title="9.6 满外连接"></a>9.6 满外连接</h2><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 将会返回所有表中符合<span class="keyword">WHERE</span>语句条件的所有记录。如果任一表的指定字段没有符合条件的值的话,那么就使用<span class="keyword">NULL</span>值替代</span><br><span class="line"><span class="keyword">select</span> e.empno, e.ename, d.deptno <span class="keyword">from</span> emp e <span class="keyword">full</span> <span class="keyword">join</span> dept d <span class="keyword">on</span> e.deptno <span class="operator">=</span> d.deptno;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="9-7-多表连接"><a href="#9-7-多表连接" class="headerlink" title="9.7 多表连接"></a>9.7 多表连接</h2><p>**创建位置表 **</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">create</span> <span class="keyword">table</span> if <span class="keyword">not</span> <span class="keyword">exists</span> location(loc <span class="type">int</span>, loc_name string)</span><br><span class="line"><span class="type">row</span> format delimited fields terminated <span class="keyword">by</span> <span class="string">&#x27;\t&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>导入数据</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line">load data <span class="keyword">local</span> inpath <span class="string">&#x27;/home/lytdev/location.txt&#x27;</span> <span class="keyword">into</span> <span class="keyword">table</span> location;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>多表连接查询</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">SELECT</span> e.ename, d.dname, l.loc_name</span><br><span class="line"><span class="keyword">FROM</span> emp e</span><br><span class="line"><span class="keyword">JOIN</span> dept d</span><br><span class="line"><span class="keyword">ON</span> d.deptno <span class="operator">=</span> e.deptno</span><br><span class="line"><span class="keyword">JOIN</span> location l</span><br><span class="line"><span class="keyword">ON</span> d.loc <span class="operator">=</span> l.loc;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="9-8-笛卡尔积"><a href="#9-8-笛卡尔积" class="headerlink" title="9.8 笛卡尔积"></a>9.8 笛卡尔积</h2><p>hive中严禁使用笛卡尔积</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">产生笛卡尔的条件</span><br><span class="line"></span><br><span class="line">省略连接条件 </span><br><span class="line">连接条件无效 </span><br><span class="line">所有表中的所有行互相连接</span><br><span class="line"></span><br><span 
class="line"></span><br></pre></td></tr></table></figure><h2 id="9-9-连接谓词中不支持or"><a href="#9-9-连接谓词中不支持or" class="headerlink" title="9.9 连接谓词中不支持or"></a>9.9 连接谓词中不支持or</h2><p>hive join目前不支持在on子句中使用谓词or</p><h1 id="十、排序"><a href="#十、排序" class="headerlink" title="十、排序"></a>十、排序</h1><h2 id="10-1-全局排序"><a href="#10-1-全局排序" class="headerlink" title="10.1 全局排序"></a>10.1 全局排序</h2><p>排序规则</p><p>asc: 升序<br>desc:降序</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 查询员工信息按工资升序排列</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp <span class="keyword">order</span> <span class="keyword">by</span> sal;</span><br><span class="line"># 查询员工信息按工资降序排列</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp <span class="keyword">order</span> <span class="keyword">by</span> sal <span class="keyword">desc</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="10-2-按照别名排序"><a href="#10-2-按照别名排序" class="headerlink" title="10.2 按照别名排序"></a>10.2 按照别名排序</h2><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 按照员工薪水的<span class="number">2</span>倍排序</span><br><span class="line"><span class="keyword">select</span> ename, sal<span class="operator">*</span><span class="number">2</span> twosal <span class="keyword">from</span> emp <span class="keyword">order</span> <span class="keyword">by</span> twosal;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="10-3-多个列排序"><a href="#10-3-多个列排序" class="headerlink" title="10.3 多个列排序"></a>10.3 多个列排序</h2><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 按照部门和工资升序排序</span><br><span class="line"><span class="keyword">select</span> ename, deptno, sal <span class="keyword">from</span> emp <span class="keyword">order</span> <span class="keyword">by</span> deptno, sal;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="10-4-每个MapReduce内部排序"><a href="#10-4-每个MapReduce内部排序" class="headerlink" title="10.4 每个MapReduce内部排序"></a>10.4 每个MapReduce内部排序</h2><p>Sort By:对于大规模的数据集order by的效率非常低。在很多情况下,并不需要全局排序,此时可以使用sort by,按照分区排序。</p><p>Sort by为每个reducer产生一个排序文件。每个Reducer内部进行排序,对全局结果集来说不是排序。</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 设置reduce个数</span><br><span class="line"><span class="keyword">set</span> mapreduce.job.reduces<span class="operator">=</span><span class="number">3</span>;</span><br><span class="line"># 查看设置reduce个数</span><br><span class="line"><span class="keyword">set</span> mapreduce.job.reduces;</span><br><span class="line"># 根据部门编号降序查看员工信息</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp sort <span class="keyword">by</span> deptno <span class="keyword">desc</span>;</span><br><span class="line"># 将查询结果导入到文件中(按照部门编号降序排序)</span><br><span class="line"><span class="keyword">insert</span> overwrite <span class="keyword">local</span> directory <span class="string">&#x27;/home/lytdev/datas/sortby-result&#x27;</span> <span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp sort <span class="keyword">by</span> deptno <span class="keyword">desc</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="10-5-分区排序"><a href="#10-5-分区排序" 
class="headerlink" title="10.5 分区排序"></a>10.5 分区排序</h2><p>Distribute By: 在有些情况下,我们需要控制某个特定行应该到哪个reducer,通常是为了进行后续的聚集操作。distribute by 子句可以做这件事。distribute by类似MR中partition(自定义分区),进行分区,结合sort by使用。</p><p>对于distribute by进行测试,一定要分配多reduce进行处理,否则无法看到distribute by的效果。</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 设置reduce个数</span><br><span class="line"><span class="keyword">set</span> mapreduce.job.reduces<span class="operator">=</span><span class="number">3</span>;</span><br><span class="line"># 先按照部门编号分区,再按照员工编号降序排序</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp distribute <span class="keyword">by</span> deptno sort <span class="keyword">by</span> empno <span class="keyword">desc</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>排序规则</p><p>distribute by的分区规则是根据分区字段的hash码与reduce的个数进行模除后,余数相同的分到一个区<br>Hive要求DISTRIBUTE BY语句要写在SORT BY语句之前</p><h2 id="10-6-Cluster-By"><a href="#10-6-Cluster-By" class="headerlink" title="10.6 Cluster By"></a>10.6 Cluster By</h2><p>当distribute by和sorts by字段相同时,可以使用cluster by方式</p><p>cluster by除了具有distribute by的功能外还兼具sort by的功能。但是排序只能是升序排序,不能指定排序规则为ASC或者DESC</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 按照部门编号分区排序</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp cluster <span class="keyword">by</span> deptno;</span><br><span class="line"># 与上面语句等价</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> emp distribute <span class="keyword">by</span> deptno sort <span class="keyword">by</span> deptno;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="十一、分桶及抽样查询"><a href="#十一、分桶及抽样查询" class="headerlink" title="十一、分桶及抽样查询"></a>十一、分桶及抽样查询</h1><h2 id="11-1-分桶表数据存储"><a href="#11-1-分桶表数据存储" class="headerlink" title="11.1 分桶表数据存储"></a>11.1 分桶表数据存储</h2><p><strong>创建分桶表</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">create</span> <span class="keyword">table</span> stu_buck(id <span class="type">int</span>, name string)</span><br><span class="line">clustered <span class="keyword">by</span>(id)</span><br><span class="line"><span class="keyword">into</span> <span class="number">4</span> buckets</span><br><span class="line"><span class="type">row</span> format delimited fields terminated <span class="keyword">by</span> <span class="string">&#x27;\t&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>创建临时表</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">create</span> <span class="keyword">table</span> stu(id <span class="type">int</span>, name string)</span><br><span class="line"><span class="type">row</span> format delimited fields terminated <span class="keyword">by</span> <span class="string">&#x27;\t&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>导入数据至临时表</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line">load data <span class="keyword">local</span> inpath <span class="string">&#x27;/home/lytdev/student.txt&#x27;</span> <span class="keyword">into</span> <span class="keyword">table</span> stu;</span><br><span 
class="line"></span><br></pre></td></tr></table></figure><p><strong>设置强制分桶</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">set</span> hive.enforce.bucketing<span class="operator">=</span><span class="literal">true</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>设置reduce个数</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 让hive自己去决定分桶个数</span></span><br><span class="line"><span class="built_in">set</span> mapreduce.job.reduces=-1;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>导入数据至分桶表</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">insert</span> <span class="keyword">into</span> stu_buck <span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> stu;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="11-2-分桶抽样查询"><a href="#11-2-分桶抽样查询" class="headerlink" title="11.2 分桶抽样查询"></a>11.2 分桶抽样查询</h2><p>tablesample((bucket x out of y)</p><p>y必须是table总bucket数的倍数或者因子。hive根据y的大小,决定抽样的比例。例如,table总共分了4份,当y=2时,抽取(4/2=)2个bucket的数据,当y=8时,抽取(4/8=)1/2个bucket的数据</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 按照id抽样查询,将数据分<span class="number">4</span>份,每一份取第<span class="number">1</span>个数据</span><br><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> stu_buck <span class="keyword">tablesample</span>(bucket <span class="number">1</span> <span class="keyword">out</span> <span class="keyword">of</span> <span class="number">4</span> <span class="keyword">on</span> id);</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="十二、其他常用查询函数"><a href="#十二、其他常用查询函数" class="headerlink" title="十二、其他常用查询函数"></a>十二、其他常用查询函数</h1><h2 id="12-1-空字段赋值"><a href="#12-1-空字段赋值" class="headerlink" title="12.1 空字段赋值"></a>12.1 空字段赋值</h2><blockquote><p>函数说明<br>NVL:给值为NULL的数据赋值,它的格式是NVL( value,default\_value)。它的功能是如果value为NULL,则NVL函数返回default\_value的值,否则返回value的值,如果两个参数都为NULL ,则返回NULL。</p></blockquote><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 如果员工的comm为<span class="keyword">NULL</span>,则用<span class="number">-1</span>代替</span><br><span class="line"><span class="keyword">select</span> comm,nvl(comm, <span class="number">-1</span>) <span class="keyword">from</span> emp;</span><br><span class="line"></span><br><span class="line"># 如果员工的comm为<span class="keyword">NULL</span>,则用领导id代替</span><br><span class="line"><span class="keyword">select</span> comm, nvl(comm,mgr) <span class="keyword">from</span> emp;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="12-2-时间类"><a href="#12-2-时间类" class="headerlink" title="12.2 时间类"></a>12.2 时间类</h2><p><strong>date_format格式化时间</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span> date_format(<span class="string">&#x27;2019-06-12&#x27;</span>, <span class="string">&#x27;yyyy-MM-dd&#x27;</span>);</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>date_add时间相加天数</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span> date_add(<span 
class="string">&#x27;2019-06-12&#x27;</span>, <span class="number">5</span>);</span><br><span class="line"><span class="keyword">select</span> date_add(<span class="string">&#x27;2019-06-12&#x27;</span>, <span class="number">-5</span>);</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>date_sub时间相减天数</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span> date_sub(<span class="string">&#x27;2019-06-12&#x27;</span>, <span class="number">5</span>);</span><br><span class="line"><span class="keyword">select</span> date_sub(<span class="string">&#x27;2019-06-12&#x27;</span>, <span class="number">-5</span>);</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>两个时间相减得天数</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span> datediff(<span class="string">&#x27;2019-06-12&#x27;</span>, <span class="string">&#x27;2019-06-10&#x27;</span>);</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>替换函数</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span> regexp_replace(<span class="string">&#x27;2019/06/12&#x27;</span>, <span class="string">&#x27;/&#x27;</span>, <span class="string">&#x27;-&#x27;</span>);</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="12-3-CASE-WHEN"><a href="#12-3-CASE-WHEN" class="headerlink" title="12.3 CASE WHEN"></a>12.3 CASE WHEN</h2><p><strong>数据准备</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">姓名 部门 性别</span><br><span class="line">悟空 A 男</span><br><span class="line">大海 A 男</span><br><span class="line">宋宋 B 男</span><br><span class="line">凤姐 A 女</span><br><span class="line">婷姐 B 女</span><br><span class="line">婷婷 B 女</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>创建表</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">create</span> <span class="keyword">table</span> emp_sex(name string, dept_id string, sex string)</span><br><span class="line"><span class="type">row</span> format delimited fields terminated <span class="keyword">by</span> &quot;\t&quot;;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>导入数据</strong></p><p>load data local inpath ‘/home/lytdev/emp_sex.txt’ into table emp_sex;</p><p><strong>查询语句</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># 求出不同部门男女各多少人。结果如下:</span><br><span class="line"><span class="keyword">select</span></span><br><span class="line">dept_id,</span><br><span class="line"><span class="built_in">sum</span>(<span class="keyword">case</span> sex <span class="keyword">when</span> <span class="string">&#x27;男&#x27;</span> <span class="keyword">then</span> <span class="number">1</span> <span class="keyword">else</span> <span class="number">0</span> <span class="keyword">end</span>) male_count,</span><br><span class="line"><span class="built_in">sum</span>(<span class="keyword">case</span> sex <span class="keyword">when</span> <span class="string">&#x27;女&#x27;</span> <span class="keyword">then</span> <span class="number">1</span> <span class="keyword">else</span> <span class="number">0</span> <span class="keyword">end</span>) female_count</span><br><span class="line"><span 
class="keyword">from</span></span><br><span class="line">emp_sex</span><br><span class="line"><span class="keyword">group</span> <span class="keyword">by</span></span><br><span class="line">dept_id;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="12-4-行转列"><a href="#12-4-行转列" class="headerlink" title="12.4 行转列"></a>12.4 行转列</h2><p><strong>数据准备</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">姓名 星座 血型</span><br><span class="line">孙悟空 白羊座 A</span><br><span class="line">大海 射手座 A</span><br><span class="line">宋宋 白羊座 B</span><br><span class="line">猪八戒 白羊座 A</span><br><span class="line">凤姐 射手座 A</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>**需求 **<br>把星座和血型一样的人归类到一起。结果如下:</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">射手座,A 大海|凤姐</span><br><span class="line">白羊座,A 孙悟空|猪八戒</span><br><span class="line">白羊座,B 宋宋|苍老师</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>创建本地constellation.txt</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi constellation.txt</span><br><span class="line">孙悟空 白羊座 A</span><br><span class="line">大海 射手座 A</span><br><span class="line">宋宋 白羊座 B</span><br><span class="line">猪八戒 白羊座 A</span><br><span class="line">凤姐 射手座 A</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>创建hive表</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">create table person_info(name string, constellation string, blood_type string)</span><br><span class="line">row format delimited fields terminated by <span class="string">&quot;\t&quot;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>导入数据</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">load data <span class="built_in">local</span> inpath <span class="string">&quot;/home/lytdev/constellation.txt&quot;</span> into table person_info;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>查询语句</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span></span><br><span class="line">t1.base,</span><br><span class="line">concat_ws(<span class="string">&#x27;|&#x27;</span>, collect_set(t1.name)) name</span><br><span class="line"><span class="keyword">from</span></span><br><span class="line">(<span class="keyword">select</span></span><br><span class="line">name,</span><br><span class="line">concat(constellation, &quot;,&quot;, blood_type) base</span><br><span class="line"><span class="keyword">from</span></span><br><span class="line">person_info) t1</span><br><span class="line"><span class="keyword">group</span> <span class="keyword">by</span></span><br><span class="line">t1.base;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="12-5-列转行"><a href="#12-5-列转行" class="headerlink" title="12.5 列转行"></a>12.5 列转行</h2><p><strong>数据准备</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">电源 分类</span><br><span class="line">《疑犯追踪》 悬疑,动作,科幻,剧情</span><br><span class="line">《Lie to me》 悬疑,警匪,动作,心理,剧情</span><br><span class="line">《战狼2》 战争,动作,灾难</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>**需求 **<br>将电影分类中的数组数据展开。结果如下:</p><figure class="highlight bash"><table><tr><td 
class="code"><pre><span class="line">《疑犯追踪》 悬疑</span><br><span class="line">《疑犯追踪》 动作</span><br><span class="line">《疑犯追踪》 科幻</span><br><span class="line">《疑犯追踪》 剧情</span><br><span class="line">《Lie to me》 悬疑</span><br><span class="line">《Lie to me》 警匪</span><br><span class="line">《Lie to me》 动作</span><br><span class="line">《Lie to me》 心理</span><br><span class="line">《Lie to me》 剧情</span><br><span class="line">《战狼2》 战争</span><br><span class="line">《战狼2》 动作</span><br><span class="line">《战狼2》 灾难</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>创建本地movie.txt</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi movie.txt</span><br><span class="line">《疑犯追踪》 悬疑,动作,科幻,剧情</span><br><span class="line">《Lie to me》 悬疑,警匪,动作,心理,剧情</span><br><span class="line">《战狼2》 战争,动作,灾难</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>创建hive表</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">create table movie_info(movie string, category array&lt;string&gt;)</span><br><span class="line">row format delimited fields terminated by <span class="string">&quot;\t&quot;</span></span><br><span class="line">collection items terminated by <span class="string">&quot;,&quot;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>导入数据</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line">load data <span class="keyword">local</span> inpath &quot;/home/lytdev/movie.txt&quot; <span class="keyword">into</span> <span class="keyword">table</span> movie_info;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>查询语句</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span></span><br><span class="line">movie,</span><br><span class="line">category_name</span><br><span class="line"><span class="keyword">from</span></span><br><span class="line">movie_info <span class="keyword">lateral</span> <span class="keyword">view</span> explode(category) table_tmp <span class="keyword">as</span> category_name;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="12-6-窗口函数"><a href="#12-6-窗口函数" class="headerlink" title="12.6 窗口函数"></a>12.6 窗口函数</h2><p><strong>数据准备</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line">姓名 购买日期 价格</span><br><span class="line">jack <span class="number">2017</span><span class="number">-01</span><span class="number">-01</span> <span class="number">10</span></span><br><span class="line">tony <span class="number">2017</span><span class="number">-01</span><span class="number">-02</span> <span class="number">15</span></span><br><span class="line">jack <span class="number">2017</span><span class="number">-02</span><span class="number">-03</span> <span class="number">23</span></span><br><span class="line">tony <span class="number">2017</span><span class="number">-01</span><span class="number">-04</span> <span class="number">29</span></span><br><span class="line">jack <span class="number">2017</span><span class="number">-01</span><span class="number">-05</span> <span class="number">46</span></span><br><span class="line">jack <span class="number">2017</span><span class="number">-04</span><span class="number">-06</span> <span class="number">42</span></span><br><span class="line">tony <span class="number">2017</span><span 
class="number">-01</span><span class="number">-07</span> <span class="number">50</span></span><br><span class="line">jack <span class="number">2017</span><span class="number">-01</span><span class="number">-08</span> <span class="number">55</span></span><br><span class="line">mart <span class="number">2017</span><span class="number">-04</span><span class="number">-08</span> <span class="number">62</span></span><br><span class="line">mart <span class="number">2017</span><span class="number">-04</span><span class="number">-09</span> <span class="number">68</span></span><br><span class="line">neil <span class="number">2017</span><span class="number">-05</span><span class="number">-10</span> <span class="number">12</span></span><br><span class="line">mart <span class="number">2017</span><span class="number">-04</span><span class="number">-11</span> <span class="number">75</span></span><br><span class="line">neil <span class="number">2017</span><span class="number">-06</span><span class="number">-12</span> <span class="number">80</span></span><br><span class="line">mart <span class="number">2017</span><span class="number">-04</span><span class="number">-13</span> <span class="number">94</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>创建本地business.txt</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi business.txt</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>创建hive表</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">create table business(name string, orderdate string,cost int)</span><br><span class="line">ROW FORMAT DELIMITED FIELDS TERMINATED BY <span class="string">&#x27;,&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>导入数据</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">load data <span class="built_in">local</span> inpath <span class="string">&quot;/home/lytdev/business.txt&quot;</span> into table business;</span><br><span class="line"></span><br></pre></td></tr></table></figure><blockquote><p>按需求查询数据</p></blockquote><p><strong>查询在2017年4月份购买过的顾客及总人数</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span> name,<span class="built_in">count</span>(<span class="operator">*</span>) <span class="keyword">over</span> ()</span><br><span class="line"><span class="keyword">from</span> business</span><br><span class="line"><span class="keyword">where</span> <span class="built_in">substring</span>(orderdate,<span class="number">1</span>,<span class="number">7</span>) <span class="operator">=</span> <span class="string">&#x27;2017-04&#x27;</span></span><br><span class="line"><span class="keyword">group</span> <span class="keyword">by</span> name;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>查询顾客的购买明细及月购买总额</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">select name,orderdate,cost,sum(cost) over(partition by name, month(orderdate)) from business;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>上述的场景, 将每个顾客的cost按照日期进行累加</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span> name,orderdate,cost,</span><br><span class="line"><span class="built_in">sum</span>(cost) <span 
class="keyword">over</span>() <span class="keyword">as</span> sample1,<span class="comment">--所有行相加</span></span><br><span class="line"><span class="built_in">sum</span>(cost) <span class="keyword">over</span>(<span class="keyword">partition</span> <span class="keyword">by</span> name) <span class="keyword">as</span> sample2,<span class="comment">--按name分组,组内数据相加</span></span><br><span class="line"><span class="built_in">sum</span>(cost) <span class="keyword">over</span>(<span class="keyword">partition</span> <span class="keyword">by</span> name <span class="keyword">order</span> <span class="keyword">by</span> orderdate) <span class="keyword">as</span> sample3,<span class="comment">--按name分组,组内数据累加</span></span><br><span class="line"><span class="built_in">sum</span>(cost) <span class="keyword">over</span>(<span class="keyword">partition</span> <span class="keyword">by</span> name <span class="keyword">order</span> <span class="keyword">by</span> orderdate <span class="keyword">rows</span> <span class="keyword">between</span> UNBOUNDED PRECEDING <span class="keyword">and</span> <span class="keyword">current</span> <span class="type">row</span> ) <span class="keyword">as</span> sample4 ,<span class="comment">--和sample3一样,由起点到当前行的聚合</span></span><br><span class="line"><span class="built_in">sum</span>(cost) <span class="keyword">over</span>(<span class="keyword">partition</span> <span class="keyword">by</span> name <span class="keyword">order</span> <span class="keyword">by</span> orderdate <span class="keyword">rows</span> <span class="keyword">between</span> <span class="number">1</span> PRECEDING <span class="keyword">and</span> <span class="keyword">current</span> <span class="type">row</span>) <span class="keyword">as</span> sample5, <span class="comment">--当前行和前面一行做聚合</span></span><br><span class="line"><span class="built_in">sum</span>(cost) <span class="keyword">over</span>(<span class="keyword">partition</span> <span class="keyword">by</span> name <span class="keyword">order</span> <span class="keyword">by</span> orderdate <span class="keyword">rows</span> <span class="keyword">between</span> <span class="number">1</span> PRECEDING <span class="keyword">AND</span> <span class="number">1</span> FOLLOWING ) <span class="keyword">as</span> sample6,<span class="comment">--当前行和前边一行及后面一行</span></span><br><span class="line"><span class="built_in">sum</span>(cost) <span class="keyword">over</span>(<span class="keyword">partition</span> <span class="keyword">by</span> name <span class="keyword">order</span> <span class="keyword">by</span> orderdate <span class="keyword">rows</span> <span class="keyword">between</span> <span class="keyword">current</span> <span class="type">row</span> <span class="keyword">and</span> UNBOUNDED FOLLOWING ) <span class="keyword">as</span> sample7 <span class="comment">--当前行及后面所有行</span></span><br><span class="line"><span class="keyword">from</span> business;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>查询每个顾客上次的购买时间</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span> name,orderdate,cost,</span><br><span class="line"><span class="built_in">lag</span>(orderdate,<span class="number">1</span>,<span class="string">&#x27;1900-01-01&#x27;</span>) <span class="keyword">over</span>(<span class="keyword">partition</span> <span class="keyword">by</span> name <span class="keyword">order</span> <span class="keyword">by</span> orderdate ) <span 
class="keyword">as</span> time1, <span class="built_in">lag</span>(orderdate,<span class="number">2</span>) <span class="keyword">over</span> (<span class="keyword">partition</span> <span class="keyword">by</span> name <span class="keyword">order</span> <span class="keyword">by</span> orderdate) <span class="keyword">as</span> time2</span><br><span class="line"><span class="keyword">from</span> business;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>查询前20%时间的订单信息</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> (</span><br><span class="line"><span class="keyword">select</span> name,orderdate,cost, <span class="built_in">ntile</span>(<span class="number">5</span>) <span class="keyword">over</span>(<span class="keyword">order</span> <span class="keyword">by</span> orderdate) sorted</span><br><span class="line"><span class="keyword">from</span> business</span><br><span class="line">) t</span><br><span class="line"><span class="keyword">where</span> sorted <span class="operator">=</span> <span class="number">1</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>over函数中的2种组合</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span></span><br><span class="line">name,</span><br><span class="line">orderdate,</span><br><span class="line">cost,</span><br><span class="line"><span class="built_in">sum</span>(cost) <span class="keyword">over</span>(distribute <span class="keyword">by</span> name sort <span class="keyword">by</span> orderdate)</span><br><span class="line"><span class="keyword">from</span></span><br><span class="line">business;</span><br><span class="line"></span><br></pre></td></tr></table></figure><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span></span><br><span class="line">name,</span><br><span class="line">orderdate,</span><br><span class="line">cost,</span><br><span class="line"><span class="built_in">sum</span>(cost) <span class="keyword">over</span>(<span class="keyword">partition</span> <span class="keyword">by</span> name <span class="keyword">order</span> <span class="keyword">by</span> orderdate)</span><br><span class="line"><span class="keyword">from</span></span><br><span class="line">business;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="12-7-Rank"><a href="#12-7-Rank" class="headerlink" title="12.7 Rank"></a>12.7 Rank</h2><p><strong>函数说明</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">RANK() 排序相同时会重复,总数不会变 1 1 3 4 </span><br><span class="line">DENSE_RANK() 排序相同时会重复,总数会减少 1 1 2 3 </span><br><span class="line">ROW_NUMBER() 会根据顺序计算 1 2 3 4</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>数据准备</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">姓名 科目 成绩</span><br><span class="line">孙悟空 语文 87</span><br><span class="line">孙悟空 数学 95</span><br><span class="line">孙悟空 英语 68</span><br><span class="line">大海 语文 94</span><br><span class="line">大海 数学 56</span><br><span class="line">大海 英语 84</span><br><span class="line">宋宋 语文 64</span><br><span class="line">宋宋 数学 86</span><br><span class="line">宋宋 英语 84</span><br><span class="line">婷婷 语文 
65</span><br><span class="line">婷婷 数学 85</span><br><span class="line">婷婷 英语 78</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>需求<br>计算每门学科成绩排名<br><strong>创建本地score.txt</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi score.txt</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>创建hive表</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">create</span> <span class="keyword">table</span> score(</span><br><span class="line">name string,</span><br><span class="line">subject string,</span><br><span class="line">score <span class="type">int</span>)</span><br><span class="line"><span class="type">row</span> format delimited fields terminated <span class="keyword">by</span> &quot;\t&quot;;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>导入数据</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line">load data <span class="keyword">local</span> inpath <span class="string">&#x27;/home/lytdev/score.txt&#x27;</span> <span class="keyword">into</span> <span class="keyword">table</span> score;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>按需求查询数据</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">select</span> name,</span><br><span class="line">subject,</span><br><span class="line">score,</span><br><span class="line"><span class="built_in">rank</span>() <span class="keyword">over</span>(<span class="keyword">partition</span> <span class="keyword">by</span> subject <span class="keyword">order</span> <span class="keyword">by</span> score <span class="keyword">desc</span>) rp,</span><br><span class="line"><span class="built_in">dense_rank</span>() <span class="keyword">over</span>(<span class="keyword">partition</span> <span class="keyword">by</span> subject <span class="keyword">order</span> <span class="keyword">by</span> score <span class="keyword">desc</span>) drp,</span><br><span class="line"><span class="built_in">row_number</span>() <span class="keyword">over</span>(<span class="keyword">partition</span> <span class="keyword">by</span> subject <span class="keyword">order</span> <span class="keyword">by</span> score <span class="keyword">desc</span>) rmp</span><br><span class="line"><span class="keyword">from</span> score;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="12-8-系统函数"><a href="#12-8-系统函数" class="headerlink" title="12.8 系统函数"></a>12.8 系统函数</h2><p><strong>查看系统提供的函数</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">show</span> functions;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>显示指定函数的用法</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">desc <span class="keyword">function</span> upper;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>详细介绍指定函数的具体用法</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">desc</span> <span class="keyword">function</span> extended split;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="十三、自定义函数UDF"><a href="#十三、自定义函数UDF" class="headerlink" title="十三、自定义函数UDF"></a>十三、自定义函数UDF</h1><h2 id="13-1-自定义函数类型"><a 
href="#13-1-自定义函数类型" class="headerlink" title="13.1 自定义函数类型"></a>13.1 自定义函数类型</h2><ul><li> UDF函数(user-defined function)</li></ul><p>一进一出的函数</p><ul><li> UDAF函数(user-defined aggregation function)</li></ul><p>多进一出的函数,例如count、max、min</p><ul><li> UDF函数(user-defined table-generating functions)</li></ul><p>一进多出的函数,例如later view explore()炸裂函数</p><h2 id="13-2-创建项目导入依赖"><a href="#13-2-创建项目导入依赖" class="headerlink" title="13.2 创建项目导入依赖"></a>13.2 创建项目导入依赖</h2><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="tag">&lt;<span class="name">dependencies</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">dependency</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">groupId</span>&gt;</span>org.apache.hive<span class="tag">&lt;/<span class="name">groupId</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">artifactId</span>&gt;</span>hive-exec<span class="tag">&lt;/<span class="name">artifactId</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">version</span>&gt;</span>1.2.1<span class="tag">&lt;/<span class="name">version</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">dependency</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">dependencies</span>&gt;</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="13-3-创建一个类继承与UDF"><a href="#13-3-创建一个类继承与UDF" class="headerlink" title="13.3 创建一个类继承与UDF"></a>13.3 创建一个类继承与UDF</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">package com.inspur.hive;</span><br><span class="line"></span><br><span class="line">import org.apache.hadoop.hive.ql.exec.UDF;</span><br><span class="line"></span><br><span class="line">public class Lower extends UDF &#123;</span><br><span class="line">public int evaluate(String line) &#123;</span><br><span class="line">if (line == null) &#123;</span><br><span class="line">return 0;</span><br><span class="line">&#125; else &#123;</span><br><span class="line">return line.length();</span><br><span class="line">&#125;</span><br><span class="line">&#125;</span><br><span class="line">public int evalute(Number line) &#123; </span><br><span class="line">if (line == null) &#123; </span><br><span class="line">return 0; </span><br><span class="line">&#125; else &#123; </span><br><span class="line">return line.toString().length(); </span><br><span class="line">&#125; </span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">public int evalute(Boolean line) &#123; </span><br><span class="line">if (line == null) &#123; </span><br><span class="line">return 0; </span><br><span class="line">&#125; else &#123; </span><br><span class="line">return line.toString().length(); </span><br><span class="line">&#125; </span><br><span class="line">&#125; </span><br><span class="line">&#125;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="13-4-打成jar包,并上传集群"><a href="#13-4-打成jar包,并上传集群" class="headerlink" title="13.4 打成jar包,并上传集群"></a>13.4 打成jar包,并上传集群</h2><p><img src="https://img-blog.csdnimg.cn/20200608195229741.png"></p><h2 id="13-5-临时上传jar包至hive,退出时失效"><a href="#13-5-临时上传jar包至hive,退出时失效" class="headerlink" title="13.5 临时上传jar包至hive,退出时失效"></a>13.5 临时上传jar包至hive,退出时失效</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">add jar 
/home/lytdev/1.jar;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="13-6-创建自定义函数"><a href="#13-6-创建自定义函数" class="headerlink" title="13.6 创建自定义函数"></a>13.6 创建自定义函数</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">create <span class="keyword">function</span> mylen as <span class="string">&quot;com.inspur.hive.Lower&quot;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="13-7-测试自定义函数"><a href="#13-7-测试自定义函数" class="headerlink" title="13.7 测试自定义函数"></a>13.7 测试自定义函数</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">select ename, mylen(ename) from emp;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="十四、压缩与存储"><a href="#十四、压缩与存储" class="headerlink" title="十四、压缩与存储"></a>十四、压缩与存储</h1><h2 id="14-1-开启Map输出阶段压缩"><a href="#14-1-开启Map输出阶段压缩" class="headerlink" title="14.1 开启Map输出阶段压缩"></a>14.1 开启Map输出阶段压缩</h2><p>开启hive中间传输数据压缩功能</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive (default)&gt;<span class="built_in">set</span> hive.exec.compress.intermediate=<span class="literal">true</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>开启mapreduce中map输出压缩功能</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line">hive (<span class="keyword">default</span>)<span class="operator">&gt;</span><span class="keyword">set</span> mapreduce.map.output.compress<span class="operator">=</span><span class="literal">true</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>设置mapreduce中map输出数据的压缩方式</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive (default)&gt;<span class="built_in">set</span> mapreduce.map.output.compress.codec = org.apache.hadoop.io.compress.SnappyCodec;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>执行查询语句</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive (default)&gt; select count(ename) name from emp;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="14-2-开启Reduce输出阶段压缩"><a href="#14-2-开启Reduce输出阶段压缩" class="headerlink" title="14.2 开启Reduce输出阶段压缩"></a>14.2 开启Reduce输出阶段压缩</h2><p><strong>开启hive最终输出数据压缩功能</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive (default)&gt;<span class="built_in">set</span> hive.exec.compress.output=<span class="literal">true</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>开启mapreduce最终输出数据压缩</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive (default)&gt;<span class="built_in">set</span> mapreduce.output.fileoutputformat.compress=<span class="literal">true</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>设置mapreduce最终数据输出压缩方式</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive (default)&gt; <span class="built_in">set</span> mapreduce.output.fileoutputformat.compress.codec =</span><br><span class="line">org.apache.hadoop.io.compress.SnappyCodec;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>设置mapreduce最终数据输出压缩为块压缩</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive (default)&gt; <span class="built_in">set</span> 
mapreduce.output.fileoutputformat.compress.type=BLOCK;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>测试一下输出结果是否是压缩文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hive (default)&gt; insert overwrite <span class="built_in">local</span> directory</span><br><span class="line"><span class="string">&#x27;/home/lytdev/distribute-result&#x27;</span> select * from emp distribute by deptno sort by empno desc;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="14-3-文件存储格式"><a href="#14-3-文件存储格式" class="headerlink" title="14.3 文件存储格式"></a>14.3 文件存储格式</h2><p>Hive支持的存储数据的格式主要有:TEXTFILE、SEQUENCEFILE、ORC、PARQUET</p><h2 id="14-4-列式存储和行式存储"><a href="#14-4-列式存储和行式存储" class="headerlink" title="14.4 列式存储和行式存储"></a>14.4 列式存储和行式存储</h2><ul><li> 行存储的特点</li></ul><p>查询满足条件的一整行数据的时候,列存储则需要去每个聚集的字段找到对应的每个列的值,行存储只需要找到其中一个值,其余的值都在相邻地方,所以此时行存储查询的速度更快。</p><ul><li> 列存储的特点</li></ul><p>因为每个字段的数据聚集存储,在查询只需要少数几个字段的时候,能大大减少读取的数据量;每个字段的数据类型一定是相同的,列式存储可以针对性地设计更好的压缩算法。</p><ul><li> TEXTFILE和SEQUENCEFILE的存储格式都是基于行存储的;</li><li> ORC和PARQUET是基于列式存储的;</li><li> ORC常用于MapReduce,PARQUET常用于Spark。</li></ul><h2 id="14-5-存储和压缩结合"><a href="#14-5-存储和压缩结合" class="headerlink" title="14.5 存储和压缩结合"></a>14.5 存储和压缩结合</h2><blockquote><p>创建一个非压缩的ORC存储方式</p></blockquote><p><strong>创建一个orc表</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">create</span> <span class="keyword">table</span> log_orc_none(</span><br><span class="line">track_time string,</span><br><span class="line">url string,</span><br><span class="line">session_id string,</span><br><span class="line">referer string,</span><br><span class="line">ip string,</span><br><span class="line">end_user_id string,</span><br><span class="line">city_id string</span><br><span class="line">)</span><br><span class="line"><span class="type">row</span> format delimited fields terminated <span class="keyword">by</span> <span class="string">&#x27;\t&#x27;</span></span><br><span class="line">stored <span class="keyword">as</span> orc tblproperties (&quot;orc.compress&quot;<span class="operator">=</span>&quot;NONE&quot;);</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>插入数据</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">insert</span> <span class="keyword">into</span> <span class="keyword">table</span> log_orc_none <span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> log_text;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>查看表中数据大小</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line">dfs <span class="operator">-</span>du <span class="operator">-</span>h <span class="operator">/</span><span class="keyword">user</span><span class="operator">/</span>hive<span class="operator">/</span>warehouse<span class="operator">/</span>log_orc_none<span class="operator">/</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><blockquote><p>创建一个SNAPPY压缩的ORC存储方式</p></blockquote><p><strong>创建一个orc表</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">create</span> <span class="keyword">table</span> log_orc_snappy(</span><br><span class="line">track_time string,</span><br><span class="line">url string,</span><br><span class="line">session_id 
string,</span><br><span class="line">referer string,</span><br><span class="line">ip string,</span><br><span class="line">end_user_id string,</span><br><span class="line">city_id string</span><br><span class="line">)</span><br><span class="line"><span class="type">row</span> format delimited fields terminated <span class="keyword">by</span> <span class="string">&#x27;\t&#x27;</span></span><br><span class="line">stored <span class="keyword">as</span> orc tblproperties (&quot;orc.compress&quot;<span class="operator">=</span>&quot;SNAPPY&quot;);</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>插入数据</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">insert</span> <span class="keyword">into</span> <span class="keyword">table</span> log_orc_snappy <span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> log_text;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>查看表中数据大小</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line">dfs <span class="operator">-</span>du <span class="operator">-</span>h <span class="operator">/</span><span class="keyword">user</span><span class="operator">/</span>hive<span class="operator">/</span>warehouse<span class="operator">/</span>log_orc_snappy<span class="operator">/</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>**存储方式和压缩总结 **</p><blockquote><p>在实际的项目开发当中,hive表的数据存储格式一般选择:orc或parquet。压缩方式一般选择snappy,lzo</p></blockquote><h1 id="十五、企业级调优"><a href="#十五、企业级调优" class="headerlink" title="十五、企业级调优"></a>十五、企业级调优</h1><h2 id="15-1-Fetch抓取"><a href="#15-1-Fetch抓取" class="headerlink" title="15.1 Fetch抓取"></a>15.1 Fetch抓取</h2><ul><li> <strong>Fetch抓取</strong></li></ul><p>Hive中对某些情况的查询可以不必使用MapReduce计算。例如:SELECT * FROM employees;在这种情况下,Hive可以简单地读取employee对应的存储目录下的文件,然后输出查询结果到控制台。</p><ul><li> <strong>Fetch参数配置</strong></li></ul><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">name</span>&gt;</span>hive.fetch.task.conversion<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">value</span>&gt;</span>more<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">description</span>&gt;</span></span><br><span class="line">Expects one of [none, minimal, more].</span><br><span class="line">Some select queries can be converted to single FETCH task minimizing latency.</span><br><span class="line">Currently the query should be single sourced not having any subquery and should not have any aggregations or distincts (which incurs RS), lateral views and joins.</span><br><span class="line">0. none : disable hive.fetch.task.conversion</span><br><span class="line">1. minimal : SELECT STAR, FILTER on partition columns, LIMIT only</span><br><span class="line">2. 
more : SELECT, FILTER, LIMIT only (support TABLESAMPLE and virtual columns)</span><br><span class="line"><span class="tag">&lt;/<span class="name">description</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>案例实操1</strong><br>把hive.fetch.task.conversion设置成none,然后执行查询语句,都会执行mapreduce程序</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">set</span> hive.fetch.task.conversion=none;</span><br><span class="line">select * from emp;</span><br><span class="line">select ename from emp;</span><br><span class="line">select ename from emp <span class="built_in">limit</span> 3;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>案例实操2</strong><br>把hive.fetch.task.conversion设置成more,然后执行查询语句,如下查询方式都不会执行mapreduce程序</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">set</span> hive.fetch.task.conversion=more;</span><br><span class="line">select * from emp;</span><br><span class="line">select ename from emp;</span><br><span class="line">select ename from emp <span class="built_in">limit</span> 3;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="15-2-本地模式"><a href="#15-2-本地模式" class="headerlink" title="15.2 本地模式"></a>15.2 本地模式</h2><p><strong>意义</strong><br>Hive可以通过本地模式在单台机器上处理所有的任务。对于小数据集,执行时间可以明显被缩短<br><strong>开启本地模式</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 开启本地mr</span></span><br><span class="line"><span class="built_in">set</span> hive.exec.mode.local.auto=<span class="literal">true</span>;</span><br><span class="line"><span class="comment"># 设置local mr的最大输入数据量,当输入数据量小于这个值时采用local mr的方式,默认为134217728,即128M</span></span><br><span class="line"><span class="built_in">set</span> hive.exec.mode.local.auto.inputbytes.max=50000000;</span><br><span class="line"><span class="comment"># 设置local mr的最大输入文件个数,当输入文件个数小于这个值时采用local mr的方式,默认为4</span></span><br><span class="line"><span class="built_in">set</span> hive.exec.mode.local.auto.input.files.max=10;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="15-3-小表、大表join"><a href="#15-3-小表、大表join" class="headerlink" title="15.3 小表、大表join"></a>15.3 小表、大表join</h2><p>新版的hive已经对小表JOIN大表和大表JOIN小表进行了优化。小表放在左边和右边已经没有明显区别。</p><h3 id="15-3-1-需求"><a href="#15-3-1-需求" class="headerlink" title="15.3.1 需求"></a>15.3.1 需求</h3><p>测试大表join小表和小表join大表的效率</p><h3 id="15-3-2-建大表、小表和join大表的语句"><a href="#15-3-2-建大表、小表和join大表的语句" class="headerlink" title="15.3.2 建大表、小表和join大表的语句"></a>15.3.2 建大表、小表和join大表的语句</h3><p><strong>(1)创建大表</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">create</span> <span class="keyword">table</span> bigtable(id <span class="type">bigint</span>, <span class="type">time</span> <span class="type">bigint</span>, uid string, keyword string, url_rank <span class="type">int</span>, click_num <span class="type">int</span>, click_url string) <span class="type">row</span> format delimited fields terminated <span class="keyword">by</span> <span class="string">&#x27;\t&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(2)创建小表</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span 
class="keyword">create</span> <span class="keyword">table</span> smalltable(id <span class="type">bigint</span>, <span class="type">time</span> <span class="type">bigint</span>, uid string, keyword string, url_rank <span class="type">int</span>, click_num <span class="type">int</span>, click_url string) <span class="type">row</span> format delimited fields terminated <span class="keyword">by</span> <span class="string">&#x27;\t&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(3)创建join后的表</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">create</span> <span class="keyword">table</span> jointable(id <span class="type">bigint</span>, <span class="type">time</span> <span class="type">bigint</span>, uid string, keyword string, url_rank <span class="type">int</span>, click_num <span class="type">int</span>, click_url string) <span class="type">row</span> format delimited fields terminated <span class="keyword">by</span> <span class="string">&#x27;\t&#x27;</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(4)导入数据</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line">load data <span class="keyword">local</span> inpath <span class="string">&#x27;/home/lytdev/bigtable&#x27;</span> <span class="keyword">into</span> <span class="keyword">table</span> bigtable;</span><br><span class="line">load data <span class="keyword">local</span> inpath <span class="string">&#x27;/home/lytdev/smalltable&#x27;</span> <span class="keyword">into</span> <span class="keyword">table</span> smalltable;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(5)关闭mapjoin功能(默认是打开的)</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">set</span> hive.auto.convert.join = <span class="literal">false</span>;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(6)执行小表JOIN大表语句</strong></p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">insert</span> overwrite <span class="keyword">table</span> jointable</span><br><span class="line"><span class="keyword">select</span> b.id, b.time, b.uid, b.keyword, b.url_rank, b.click_num, b.click_url</span><br><span class="line"><span class="keyword">from</span> smalltable s</span><br><span class="line"><span class="keyword">join</span> bigtable b</span><br><span class="line"><span class="keyword">on</span> b.id <span class="operator">=</span> s.id;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(7)执行结果</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">MapReduce Total cumulative CPU time: 31 seconds 100 msec</span><br><span class="line">No rows affected (52.897 seconds)</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(8)执行大表JOIN小表语句</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">insert overwrite table jointable</span><br><span class="line">select b.id, b.time, b.uid, b.keyword, b.url_rank, b.click_num, b.click_url</span><br><span class="line">from bigtable b</span><br><span class="line">join smalltable s</span><br><span class="line">on s.id = b.id;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(9)执行结果</strong></p><figure 
class="highlight bash"><table><tr><td class="code"><pre><span class="line">MapReduce Total cumulative CPU time: 29 seconds 790 msec</span><br><span class="line">No rows affected (50.443 seconds)</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>(10)注意</strong><br>大表放在左边 left join 小表,可以走mapjoin进行优化;<br>如果使用 join,也就是inner join 大表小表的左右顺序无所谓,都会进行优化</p><h2 id="15-4-大表和大表join"><a href="#15-4-大表和大表join" class="headerlink" title="15.4 大表和大表join"></a>15.4 大表和大表join</h2><h3 id="15-4-1-空key过滤"><a href="#15-4-1-空key过滤" class="headerlink" title="15.4.1 空key过滤"></a>15.4.1 空key过滤</h3><p>进行空值过滤时,放在子查询中</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">insert</span> overwrite <span class="keyword">table</span> jointable <span class="keyword">select</span> n.<span class="operator">*</span> <span class="keyword">from</span> (<span class="keyword">select</span> <span class="operator">*</span> <span class="keyword">from</span> nullidtable <span class="keyword">where</span> id <span class="keyword">is</span> <span class="keyword">not</span> <span class="keyword">null</span> ) n <span class="keyword">left</span> <span class="keyword">join</span> ori o <span class="keyword">on</span> n.id <span class="operator">=</span> o.id;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="15-4-2-空key替换"><a href="#15-4-2-空key替换" class="headerlink" title="15.4.2 空key替换"></a>15.4.2 空key替换</h3><p>有时虽然某个key为空对应的数据很多,但是相应的数据不是异常数据,必须要包含在join的结果中,此时我们可以表a中key为空的字段赋一个随机的值,使得数据随机均匀地分不到不同的reducer上,避免数据倾斜,数据倾斜会导致MapTask或者ReduceTask执行不了,从而导致整个MR任务执行不了。</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"><span class="keyword">insert</span> overwrite <span class="keyword">table</span> jointable</span><br><span class="line"><span class="keyword">select</span> n.<span class="operator">*</span> <span class="keyword">from</span> nullidtable n <span class="keyword">full</span> <span class="keyword">join</span> ori o <span class="keyword">on</span></span><br><span class="line"><span class="keyword">case</span> <span class="keyword">when</span> n.id <span class="keyword">is</span> <span class="keyword">null</span> <span class="keyword">then</span> concat(<span class="string">&#x27;hive&#x27;</span>, rand()) <span class="keyword">else</span> n.id <span class="keyword">end</span> <span class="operator">=</span> o.id;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="15-5-开启mapjoin参数配置"><a href="#15-5-开启mapjoin参数配置" class="headerlink" title="15.5 开启mapjoin参数配置"></a>15.5 开启mapjoin参数配置</h2><ul><li> 设置自动选择Mapjoin</li></ul><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">set hive.auto.convert.join = true; #默认为true</span><br><span class="line"></span><br></pre></td></tr></table></figure><ul><li> 大表小表的阈值设置(默认25M一下认为是小表)</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">set</span> hive.mapjoin.smalltable.filesize=25000000</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>当服务器内容256GB时,可以增大该参数配置</p><h2 id="15-6-group-by"><a href="#15-6-group-by" class="headerlink" title="15.6 group by"></a>15.6 group by</h2><p>默认情况下,Map阶段同一Key数据分发给一个reduce,当一个key数据过大时就倾斜了。并不是所有的聚合操作都需要在Reduce端完成,很多聚合操作都可以先在Map端进行部分聚合,最后在Reduce端得出最终结果。</p><ul><li> 是否在Map端进行聚合,默认为True</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span 
class="built_in">set</span> hive.map.aggr = <span class="literal">true</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><ul><li> 在Map端进行聚合操作的条目数目</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">set</span> hive.groupby.mapaggr.checkinterval = 100000</span><br><span class="line"></span><br></pre></td></tr></table></figure><ul><li> 有数据倾斜的时候进行负载均衡(默认是false)</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">set</span> hive.groupby.skewindata = <span class="literal">true</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="15-7-Count-Distinct-去重统计"><a href="#15-7-Count-Distinct-去重统计" class="headerlink" title="15.7 Count(Distinct) 去重统计"></a>15.7 Count(Distinct) 去重统计</h2><p>数据量小的时候无所谓,数据量大的情况下,由于COUNT DISTINCT的全聚合操作,即使设定了reduce task个数,set mapred.reduce.tasks=100;hive也只会启动一个reducer。这就造成一个Reduce处理的数据量太大,导致整个Job很难完成,一般COUNT DISTINCT使用先GROUP BY再COUNT的方式替换:</p><figure class="highlight sql"><table><tr><td class="code"><pre><span class="line"># <span class="built_in">count</span>(<span class="keyword">distinct</span>)方式</span><br><span class="line"><span class="keyword">select</span> <span class="built_in">count</span>(<span class="keyword">distinct</span> id) <span class="keyword">from</span> bigtable;</span><br><span class="line"># <span class="keyword">group</span> <span class="keyword">by</span>方式,必须先设置reduce数量,否则也是默认一个reduce</span><br><span class="line"><span class="keyword">set</span> mapreduce.job.reduces <span class="operator">=</span> <span class="number">5</span>;</span><br><span class="line"><span class="keyword">select</span> <span class="built_in">count</span>(id) <span class="keyword">from</span> (<span class="keyword">select</span> id <span class="keyword">from</span> bigtable <span class="keyword">group</span> <span class="keyword">by</span> id) a;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="15-8-MR优化"><a href="#15-8-MR优化" class="headerlink" title="15.8 MR优化"></a>15.8 MR优化</h2><ul><li> 合理设置Map数</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">set</span> hive.map.aggr = <span class="literal">true</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><ul><li> 开启小文件合并</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># CombineHiveInputFormat具有对小文件进行合并的功能</span></span><br><span class="line"><span class="built_in">set</span> hive.input.format= org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;</span><br><span class="line"><span class="comment"># 在map-only任务结束时合并小文件,默认true</span></span><br><span class="line"><span class="built_in">set</span> hive.merge.mapfiles = <span class="literal">true</span>;</span><br><span class="line"><span class="comment"># 在map-reduce任务结束时合并小文件,默认false</span></span><br><span class="line"><span class="built_in">set</span> hive.merge.mapredfiles = <span class="literal">true</span>;</span><br><span class="line"><span class="comment"># 合并文件的大小,默认256M</span></span><br><span class="line"><span class="built_in">set</span> hive.merge.size.per.task = 268435456;</span><br><span class="line"><span class="comment"># 当输出文件的平均大小小于该值时,启动一个独立的map-reduce任务进行文件merge</span></span><br><span class="line"><span class="built_in">set</span> hive.merge.smallfiles.avgsize = 16777216;</span><br><span 
class="line"></span><br></pre></td></tr></table></figure><ul><li> 合理设置Reduce数</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 调整reduce个数方法一</span></span><br><span class="line"><span class="comment"># 每个Reduce处理的数据量默认是256MB</span></span><br><span class="line">hive.exec.reducers.bytes.per.reducer=256000000</span><br><span class="line"><span class="comment"># 每个任务最大的reduce数,默认为1009</span></span><br><span class="line">hive.exec.reducers.max=1009</span><br><span class="line"><span class="comment"># 计算reducer数的公式</span></span><br><span class="line">N=min(参数2,总输入数据量/参数1)</span><br><span class="line"><span class="comment"># 调整reduce个数方法二</span></span><br><span class="line"><span class="comment"># 在hadoop的mapred-default.xml文件中修改,设置每个job的Reduce个数</span></span><br><span class="line"><span class="built_in">set</span> mapreduce.job.reduces = 15;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="15-9-开启并行执行"><a href="#15-9-开启并行执行" class="headerlink" title="15.9 开启并行执行"></a>15.9 开启并行执行</h2><ul><li> 设置任务并行度</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># 打开任务并行执行</span></span><br><span class="line"><span class="built_in">set</span> hive.exec.parallel=<span class="literal">true</span>;</span><br><span class="line"><span class="comment"># 同一个sql允许最大并行度,默认为8。</span></span><br><span class="line"><span class="built_in">set</span> hive.exec.parallel.thread.number=16;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="15-10-JVM重用"><a href="#15-10-JVM重用" class="headerlink" title="15.10 JVM重用"></a>15.10 JVM重用</h2><p>JVM重用是Hadoop调优参数的内容,其对Hive的性能具有非常大的影响,特别是对于很难避免小文件的场景或task特别多的场景,这类场景大多数执行时间都很短</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;property&gt;</span><br><span class="line">&lt;name&gt;mapreduce.job.jvm.numtasks&lt;/name&gt;</span><br><span class="line">&lt;value&gt;10&lt;/value&gt;</span><br><span class="line">&lt;description&gt;How many tasks to run per jvm. If set to -1, there is</span><br><span class="line">no limit.</span><br><span class="line">&lt;/description&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="15-11-推测执行"><a href="#15-11-推测执行" class="headerlink" title="15.11 推测执行"></a>15.11 推测执行</h2><p>在分布式集群环境下,因为程序Bug(包括Hadoop本身的bug),负载不均衡或者资源分布不均等原因,会造成同一个作业的多个任务之间运行速度不一致,有些任务的运行速度可能明显慢于其他任务(比如一个作业的某个任务进度只有50%,而其他所有任务已经运行完毕),则这些任务会拖慢作业的整体执行进度。为了避免这种情况发生,Hadoop采用了推测执行(Speculative Execution)机制,它根据一定的法则推测出“拖后腿”的任务,并为这样的任务启动一个备份任务,让该任务与原始任务同时处理同一份数据,并最终选用最先成功运行完成任务的计算结果作为最终结果。</p><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">name</span>&gt;</span>mapreduce.map.speculative<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">value</span>&gt;</span>true<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> Hive </tag>
</tags>
</entry>
<entry>
<title>深入浅出学习MapReduce</title>
<link href="/2021/11/12/56ac102284d9.html"/>
<url>/2021/11/12/56ac102284d9.html</url>
<content type="html"><![CDATA[<blockquote><p>本文是基于CentOS 7.3系统环境,进行MapReduce的学习和使用本文是基于CentOS 7.3系统环境,进行MapReduce的学习和使用</p></blockquote><h1 id="1-MapReduce简介"><a href="#1-MapReduce简介" class="headerlink" title="1. MapReduce简介"></a>1. MapReduce简介</h1><h2 id="1-1-MapReduce定义"><a href="#1-1-MapReduce定义" class="headerlink" title="1.1 MapReduce定义"></a>1.1 MapReduce定义</h2><p>MapReduce是一个分布式运算程序的编程框架,是基于Hadoop的数据分析计算的核心框架</p><h2 id="1-2-MapReduce处理过程"><a href="#1-2-MapReduce处理过程" class="headerlink" title="1.2 MapReduce处理过程"></a>1.2 MapReduce处理过程</h2><p>主要分为两个阶段:Map和Reduce</p><ul><li>Map负责把一个任务分解成多个任务</li><li>Reduce负责把分解后多任务处理的结果进行汇总</li></ul><h2 id="1-3-MapReduce的优点"><a href="#1-3-MapReduce的优点" class="headerlink" title="1.3 MapReduce的优点"></a>1.3 MapReduce的优点</h2><ol><li><strong>MapReduce易于编程</strong><br>只需要实现一些简单接口,就可以完成一个分布式程序,这个分布式程序可以分布到大量廉价的PC机器上运行。也就是说你写一个分布式程序,就跟写一个简单的串行程序是一模一样的。 </li><li><strong>良好的扩展性(hadoop的特点)</strong><br>当你的计算资源不能满足的时候,你可以通过简单的增加机器(nodemanager)来扩展它的计算能力 </li><li><strong>高容错性</strong><br>MapReduce设计的初衷就是使程序能够部署在廉价的PC机器上,这就要求它具有很高的容错性,比如其中一台机器挂了,它可以把上面的计算任务转移到另外一个节点上运行,不至于整个任务运行失败。 </li><li><strong>适合PB级以上海量数据的离线处理</strong><br>可以实现上千台服务器集群并发工作,提供数据处理能力</li></ol><h2 id="1-4-MapReduce的缺点"><a href="#1-4-MapReduce的缺点" class="headerlink" title="1.4 MapReduce的缺点"></a>1.4 MapReduce的缺点</h2><ol><li><strong>不擅长实时计算</strong><br>MapReduce无法像MySQL一样,在毫秒或者秒级内返回结果 </li><li><strong>不擅长流式计算</strong><br>流式计算的输入数据是动态的,而MapReduce的输入数据集是静态的,不能动态变化。这是因为MapReduce自身的设计特点决定了数据源必须是静态的 </li><li><strong>不擅长DAG有向图计算</strong><br>多个应用程序之间存在依赖关系,后一个应用程序的输入为前一个程序的输出。在这种情况下,每个MapReduce作业的输出结果都会写入到磁盘,会造成大量的磁盘IO,导致性能非常低下</li></ol><h2 id="1-5-MapReduce核心编程思想"><a href="#1-5-MapReduce核心编程思想" class="headerlink" title="1.5 MapReduce核心编程思想"></a>1.5 MapReduce核心编程思想</h2><p><img src="https://img-blog.csdnimg.cn/20200523112141354.png"></p><p>分布式的运算程序往往需要分成至少2个阶段。<br>第一个阶段的MapTask并发实例,完全并行运行,互不相干。<br>第二个阶段的ReduceTask并发实例互不相干,但是他们的数据依赖于上一个阶段的所有MapTask并发实例的输出。<br>MapReduce编程模型只能包含一个Map阶段和一个Reduce阶段,如果用户的业务逻辑非常复杂,那就只能多个MapReduce程序,串行运行。</p><h2 id="1-6-MapReduce进程"><a href="#1-6-MapReduce进程" class="headerlink" title="1.6 MapReduce进程"></a>1.6 MapReduce进程</h2><ol><li><strong>MrAppMaster</strong><br>负责整个程序的过程调度及状态协调 </li><li><strong>MapTask</strong><br>负责Map阶段的整个数据处理流程 </li><li><strong>ReduceTask</strong><br>负责Reduce阶段的整个数据处理流程</li></ol><h2 id="1-7-数据切片与MapTask并行度机制"><a href="#1-7-数据切片与MapTask并行度机制" class="headerlink" title="1.7 数据切片与MapTask并行度机制"></a>1.7 数据切片与MapTask并行度机制</h2><p><img src="https://img-blog.csdnimg.cn/20200526152737847.png"></p><h2 id="1-8-FileInputFormat切片机制"><a href="#1-8-FileInputFormat切片机制" class="headerlink" title="1.8 FileInputFormat切片机制"></a>1.8 FileInputFormat切片机制</h2><ol><li><strong>切片机制</strong></li></ol><p>简单地安装文件的内容长度进行切片<br>切片大小,默认等于Block大小<br>切片时不考虑数据集整体,而是逐个针对每一个文件单独切片</p><ol start="3"><li><strong>案例分析</strong></li></ol><p>输入数据有两个文件: </p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">file1.txt 320M </span><br><span class="line">file2.txt 10M </span><br></pre></td></tr></table></figure><p>经过FileInputFormat的切片机制运算后,形成的切片信息如下: </p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">file1.txt.split1 ---- 0~128M </span><br><span class="line">file1.txt.split2 ---- 128M~256M </span><br><span class="line">file1.txt.split1 ---- 256M~320M </span><br><span class="line">file2.txt.split1 ---- 0~10M</span><br></pre></td></tr></table></figure><h2 id="1-9-CombineTextInputFormat切片机制"><a href="#1-9-CombineTextInputFormat切片机制" 
class="headerlink" title="1.9 CombineTextInputFormat切片机制"></a><strong>1.9 CombineTextInputFormat切片机制</strong></h2><p>框架默认的TextInputFormat切片机制是对任务按文件规划切片,不管文件多小,都会是一个单独的切片,都会交给一个MapTask,这样如果有大量小文件,就会产生大量的MapTask,而创建MapTask的开销比较大,处理效率极其低下。</p><ol><li><strong>应用场景</strong><br>CombineTextInputFormat用于小文件过多的场景,它可以将多个小文件从逻辑上规划到一个切片中,这样,多个小文件就可以交给一个MapTask处理。 </li><li><strong>虚拟存储切片最大值设置</strong><br>CombineTextInputFormat.setMaxInputSplitSize(job, 4194304);// 4m </li><li><strong>切片机制</strong><br>生成切片过程包括:虚拟存储过程和切片过程两部分。</li></ol><p><strong>虚拟存储过程:</strong><br>将输入目录下所有文件大小,依次和设置的setMaxInputSplitSize值比较,如果不大于设置的最大值,逻辑上划分一个块。如果输入文件大于设置的最大值且大于两倍,那么以最大值切割一块;当剩余数据大小超过设置的最大值且不大于最大值2倍,此时将文件均分成2个虚拟存储块(防止出现太小切片)。<br><strong>切片过程:</strong> </p><ul><li>(a)判断虚拟存储的文件大小是否大于setMaxInputSplitSize值,大于等于则单独形成一个切片。 </li><li>(b)如果不大于则跟下一个虚拟存储文件进行合并,共同形成一个切片。</li></ul><p><strong>4. 案例分析</strong><br><img src="https://img-blog.csdnimg.cn/2020052617342823.png"></p><p>例如setMaxInputSplitSize值为4M,输入文件大小为8.02M,则先逻辑上分成一个4M。剩余的大小为4.02M,如果按照4M逻辑划分,就会出现0.02M的小的虚拟存储文件,所以将剩余的4.02M文件切分成(2.01M和2.01M)两个文件。<br>有4个小文件大小分别为1.7M、5.1M、3.4M以及6.8M这四个小文件,则虚拟存储之后形成6个文件块,大小分别为:<br><code>1.7M</code>,<code>(2.55M、2.55M)</code>,<code>3.4M</code>以及<code>(3.4M、3.4M) </code><br>最终会形成3个切片,大小分别为:<br><code>(1.7+2.55)M</code>,<code>(2.55+3.4)M</code>,<code>(3.4+3.4)M</code></p><h2 id="1-10-FileInputFormat实现类"><a href="#1-10-FileInputFormat实现类" class="headerlink" title="1.10 FileInputFormat实现类"></a>1.10 FileInputFormat实现类</h2><table><thead><tr><th>InputFormat</th><th>切片规则(getSplits)</th><th>把切片分解成KV(createRecordReader)</th></tr></thead><tbody><tr><td>FileInputFormat</td><td>按文件-&gt;块大小</td><td>没有实现</td></tr><tr><td>TextInputFormat</td><td>继承FileInputFormat</td><td>LineRecordReader&lt;偏移量,行数据&gt;</td></tr><tr><td>CombineTextInputFormat</td><td>重写了getSplit,小块合并切</td><td>CombineFileRecordReader(和LineRecordReader处理一样,只不过跨文件了)&lt;偏移量,行数据&gt;</td></tr><tr><td>KeyValueTextInputFormat</td><td>继承FileInputFormat</td><td>KeyValueLineRecordReader&lt;分隔符前,分隔符后&gt;</td></tr><tr><td>NLineInputFormat</td><td>重写了getSplit,按行切</td><td>LineRecordReader&lt;偏移量,行数据&gt;</td></tr><tr><td>自定义</td><td>继承FileInputFormat</td><td>自定义RecordReader</td></tr></tbody></table><h2 id="1-11-Shuffle机制"><a href="#1-11-Shuffle机制" class="headerlink" title="1.11 Shuffle机制"></a>1.11 Shuffle机制</h2><p><img src="https://img-blog.csdnimg.cn/2020052717205493.png"></p><p><img src="https://img-blog.csdnimg.cn/20200527172102742.png"> </p><h2 id="1-12-分区"><a href="#1-12-分区" class="headerlink" title="1.12 分区"></a>1.12 分区</h2><p>分区数量等于ReduceTask的进程数</p><p><strong>1. 分区机制</strong></p><ol><li>如果ReduceTask的数量&gt;getPartition的结果数,则会多产生几个空的输出文件part-r-000xxx;</li><li>如果1&lt;ReduceTask的数量&lt;getPartition的结果数,则有一部分分区数据无处安放,会报Exception;</li><li>如果ReduceTask的数量=1,则不管MapTask端输出多少个分区文件,最终结果都交给这一个ReduceTask,最终也就只会产生一个结果文件part-r-00000;</li><li>分区号必须从零开始,逐一累加。</li></ol><p><strong>2. 案例分析</strong><br>例如:假设自定义分区数为5,</p><ol><li>job.setNumReduceTasks(6);大于5,程序会正常运行,则会多产生几个空的输出文件</li><li>job.setNumReduceTasks(2);会报Exception;</li><li>job.setNumReduceTasks(1),会正常运行,最终也就只会产生一个输出文件;</li></ol><h2 id="1-13-排序"><a href="#1-13-排序" class="headerlink" title="1.13 排序"></a>1.13 排序</h2><p><strong>1. 排序概述</strong></p><ol><li>排序是MapReduce框架中最重要的操作之一</li><li>MapTask和ReduceTask均会对数据按照key进行排序。该操作数据Hadoop的默认行为。任何应用程序中的数据均会被排序,而不是逻辑上是否需要。</li><li>默认排序是按照字典顺序排序,且实现该排序的方法是快速排序。</li></ol><p><strong>2. 
排序分类</strong></p><ol><li>部分排序:MapReduce根据输入记录的键对数据集排序,保障输出的每个文件内部有序。</li><li>全排序:最终输出结果只有一个文件,且文件内部有序。</li><li>辅助排序(GroupingComparator):在Reduce端对key进行分组</li><li>二次排序:在自定义排序过程中,如果compareTo中的判断条件为两个即为二次排序</li></ol><h2 id="1-14-Combiner合并"><a href="#1-14-Combiner合并" class="headerlink" title="1.14 Combiner合并"></a>1.14 Combiner合并</h2><ol><li>Combiner是MR程序中Mapper和Reducer之外的一种组件</li><li>Combiner组件的父类就是Reducer</li><li>Combiner和Reducer的区别在于运行的位置不同(Combiner是在每一个MapTask所在的节点运行;Reducer是接收全局所有Mapper的输出结果)</li><li>Combiner的意义就是对每一个MapTask的输出进行局部汇总,以减少网络传输量</li><li>Combiner能够应用的前提是不能影响最终的业务逻辑,而且Combiner的输出kv应用跟Reducer的输入kv类型要对应起来</li></ol><p><img src="https://img-blog.csdnimg.cn/2020052720084062.png"></p><h2 id="1-15-MapTask工作机制"><a href="#1-15-MapTask工作机制" class="headerlink" title="1.15 MapTask工作机制"></a>1.15 MapTask工作机制</h2><p><img src="https://img-blog.csdnimg.cn/20200527233708643.png"></p><ol><li><p>Read阶段:MapTask通过用户编写的RecordReader,从输入InputSplit中解析出一个个key/value。</p></li><li><p>Map阶段:该节点主要是将解析出的key/value交给用户编写map()函数处理,并产生一系列新的key/value。</p></li><li><p>Collect收集阶段:在用户编写map()函数中,当数据处理完成后,一般会调用OutputCollector.collect()输出结果。在该函数内部,它会将生成的key/value分区(调用Partitioner),并写入一个环形内存缓冲区中。</p></li><li><p>Spill阶段:即“溢写”,当环形缓冲区满后,MapReduce会将数据写到本地磁盘上,生成一个临时文件。需要注意的是,将数据写入本地磁盘之前,先要对数据进行一次本地排序,并在必要时对数据进行合并、压缩等操作。</p><p> <strong>溢写阶段详情</strong><br> <strong>步骤1:</strong>利用快速排序算法对缓存区内的数据进行排序,排序方式是,先按照分区编号Partition进行排序,然后按照key进行排序。这样,经过排序后,数据以分区为单位聚集在一起,且同一分区内所有数据按照key有序。<br> <strong>步骤2:</strong>按照分区编号由小到大依次将每个分区中的数据写入任务工作目录下的临时文件output/spillN.out(N表示当前溢写次数)中。如果用户设置了Combiner,则写入文件之前,对每个分区中的数据进行一次聚集操作。<br> <strong>步骤3:</strong>将分区数据的元信息写到内存索引数据结构SpillRecord中,其中每个分区的元信息包括在临时文件中的偏移量、压缩前数据大小和压缩后数据大小。如果当前内存索引大小超过1MB,则将内存索引写到文件output/spillN.out.index中。</p></li><li><p>Combine阶段:当所有数据处理完成后,MapTask对所有临时文件进行一次合并,以确保最终只会生成一个数据文件。</p></li></ol><p>当所有数据处理完后,MapTask会将所有临时文件合并成一个大文件,并保存到文件output/file.out中,同时生成相应的索引文件output/file.out.index。</p><p>在进行文件合并过程中,MapTask以分区为单位进行合并。对于某个分区,它将采用多轮递归合并的方式。每轮合并io.sort.factor(默认10)个文件,并将产生的文件重新加入待合并列表中,对文件排序后,重复以上过程,直到最终得到一个大文件。</p><p>让每个MapTask最终只生成一个数据文件,可避免同时打开大量文件和同时读取大量小文件产生的随机读取带来的开销。</p><h2 id="1-16-ReduceTask工作机制"><a href="#1-16-ReduceTask工作机制" class="headerlink" title="1.16 ReduceTask工作机制"></a>1.16 ReduceTask工作机制</h2><p><img src="https://img-blog.csdnimg.cn/20200528083022658.png"></p><ol><li>Copy阶段:ReduceTask从各个MapTask上远程拷贝一片数据,并针对某一片数据,如果其大小超过一定阈值,则写到磁盘上,否则直接放到内存中。</li><li>Merge阶段:在远程拷贝数据的同时,ReduceTask启动了两个后台线程对内存和磁盘上的文件进行合并,以防止内存使用过多或磁盘上文件过多。</li><li>Sort阶段:按照MapReduce语义,用户编写reduce()函数输入数据是按key进行聚集的一组数据。为了将key相同的数据聚在一起,Hadoop采用了基于排序的策略。由于各个MapTask已经实现对自己的处理结果进行了局部排序,因此,ReduceTask只需对所有数据进行一次归并排序即可。</li><li>Reduce阶段:reduce()函数将计算结果写到HDFS上。</li></ol><h2 id="1-17-ReduceTask并行度"><a href="#1-17-ReduceTask并行度" class="headerlink" title="1.17 ReduceTask并行度"></a>1.17 ReduceTask并行度</h2><p>ReduceTask的并行度影响整个job的执行并发度和执行效率,但与MapTask的并发数有切片数决定不同,ReduceTask数量的决定是可以直接手动设置的</p><ol><li>ReduceTask=0,表示没有Reduce阶段,输出文件个数和Map个数一致,在实际开发中,如果可以不用Reduce,可以将值设置为0,因为在整个MR阶段,比较耗时的shuffle,省掉了Reduce,就相当于省掉了shuffle;</li><li>ReduceTask默认值就是1,所以输出文件个数为1;</li><li>如果数据分布不均匀,就有可能在Reduce阶段产生数据倾斜;</li><li>ReduceTask数量并不是任意设置,还要考虑业务逻辑需求,有些情况下,需要计算全局汇总结果,就只能有1个ReduceTask;</li><li>具体多少个ReduceTask,需要根据集群性能而定;</li><li>如果分区数不是1,但是ReduceTask为1,是否执行分区过程。答案是:不执行分区过程,因为在MapTask的源码中,执行分区的前提就是先判断ReduceNum个数是否大于1,不大于1肯定不执行。</li></ol><h2 id="1-18-Reduce-Join工作原理"><a href="#1-18-Reduce-Join工作原理" class="headerlink" title="1.18 Reduce Join工作原理"></a>1.18 Reduce 
Join工作原理</h2><p><strong>Map端的主要工作:</strong>为来自不同表或文件的key/value对,打标签以区别不同来源的记录。然后用连接字段作为key,其余部分和新加的标志作为value,最后进行输出。<br><strong>Reduce端的主要工作:</strong>在Reduce端以连接字段作为key的分组已经完成,我们只需要在每一个分组当中将那些来源于不同文件的记录(在Map阶段已经打标志)分开,最后进行合并就OK了。<br><strong>缺点:</strong>这种方式,合并的操作是在Reduce阶段完成,Reduce端的处理压力太大,Map节点的运算负载则很低,资源利用率不高,且在Reduce阶段极易产生数据倾斜<br><strong>解决方案:</strong>Map端实现数据合并</p><h2 id="1-19-Map-Join工作原理"><a href="#1-19-Map-Join工作原理" class="headerlink" title="1.19 Map Join工作原理"></a>1.19 Map Join工作原理</h2><p><strong>适用场景:</strong>Map Join适用于一张表十分大,一张表十分小的场景<br><strong>优点:</strong>在Map端缓存多张表,提前处理业务逻辑,这样增加Map端业务,减少Reduce端数据的压力,尽可能的减少数据倾斜</p><h2 id="1-20-计数器应用"><a href="#1-20-计数器应用" class="headerlink" title="1.20 计数器应用"></a>1.20 计数器应用</h2><p><strong>定义:</strong>Hadoop为每个作业维护若干内置计数器,以描述多项指标。例如,某些计数器记录已处理的字节数和记录数,使用户可监控已处理的输入数据量和已产生的输出数据量</p><p><strong>计数器API:</strong></p><p>采用枚举的方式统计计数<br>采用计数器组、计数器名称的方式统计</p><h2 id="1-21-数据压缩"><a href="#1-21-数据压缩" class="headerlink" title="1.21 数据压缩"></a>1.21 数据压缩</h2><p><strong>定义:</strong>压缩技术能够有效减少底层存储系统(HDFS)读写字节数。压缩提高了网络带宽和磁盘空间的效率。在运行MR程序时,I/O操作、网络数据传输、shuffle和Merge要花大量的时间,尤其是数据规模很大和工作负载密集的情况下。因此,使用数据压缩显得非常重要。</p><p><strong>优点:</strong>鉴于磁盘I/O和网络带宽是Hadoop的宝贵资源,数据压缩对于节省资源、最小化磁盘I/O和网络传输非常有帮助。可以在任意MapReduce阶段启用压缩。</p><p><strong>缺点:</strong>不过,尽管压缩与解压操作的CPU开销不高,其性能的提升和资源的节省并非没有代价</p><p><strong>压缩策略:</strong>压缩是提高Hadoop运行效率的一种优化策略。<br>通过对mapper、reducer运行过程的数据进行压缩,以减少磁盘I/O,提高MR程序运行速度。</p><p><strong>压缩原则:</strong></p><ul><li>运算密集型的job,少用压缩</li><li>I/O密集型的job,多用压缩</li></ul><h2 id="1-22-MR支持的压缩编码"><a href="#1-22-MR支持的压缩编码" class="headerlink" title="1.22 MR支持的压缩编码"></a>1.22 MR支持的压缩编码</h2><table><thead><tr><th>压缩格式</th><th>hadoop自带</th><th>算法</th><th>文件扩展名</th><th>是否可切分</th><th>压缩后,原来的程序是否需要修改</th></tr></thead><tbody><tr><td>DEFLATE</td><td>是,直接使用</td><td>DEFLATE</td><td>.deflate</td><td>否</td><td>和文本处理一样,不需要修改</td></tr><tr><td>gzip</td><td>是,直接使用</td><td>DEFLATE</td><td>.gz</td><td>否</td><td>和文本处理一样,不需要修改</td></tr><tr><td>bzip2</td><td>是,直接使用</td><td>bzip2</td><td>.bz2</td><td>是</td><td>和文本处理一样,不需要修改</td></tr><tr><td>LZO</td><td>否,需要安装</td><td>LZO</td><td>.lzo</td><td>是</td><td>需要建索引,还需要指定输入格式</td></tr><tr><td>Snappy</td><td>否,需要安装</td><td>Snappy</td><td>.snappy</td><td>否</td><td>和文本处理一样,不需要修改</td></tr></tbody></table><table><thead><tr><th>压缩算法</th><th>原始文件大小</th><th>压缩文件大小</th><th>压缩速度</th><th>解压速度</th></tr></thead><tbody><tr><td>gzip</td><td>8.3G</td><td>1.8G</td><td>17.5MB/s</td><td>58MB/s</td></tr><tr><td>bzip2</td><td>8.3G</td><td>1.1G</td><td>2.4MB/s</td><td>9.5MB/s</td></tr><tr><td>LZO</td><td>8.3G</td><td>2.9G</td><td>49.3MB/s</td><td>74.6MB/s</td></tr><tr><td>snappy</td><td>8.3G</td><td>*</td><td>250MB/s</td><td>500MB/s</td></tr></tbody></table><h2 id="1-23-压缩方式选择"><a href="#1-23-压缩方式选择" class="headerlink" title="1.23 压缩方式选择"></a>1.23 压缩方式选择</h2><ul><li><strong>Gzip压缩</strong></li></ul><p><strong>优点:</strong>压缩率比较高,而且压缩/解压速度也比较快;Hadoop本身支持,在应用中处理gzip格式的文件就和直接处理文本一样;大部分Linux系统都自带gzip命令,使用方便<br><strong>缺点:</strong>不支持split<br><strong>应用场景:</strong>当每个文件压缩之后在130M以内的(1个块大小内),都可以考虑gzip压缩格式 </p><ul><li><strong>Bzip2压缩</strong></li></ul><p><strong>优点:</strong>支持split,具有很高压缩率,比Gzip压缩率高;Hadoop本身支持,使用方便<br><strong>缺点:</strong>压缩/解压速度比较慢<br><strong>应用场景:</strong>适合对速度要求不高,但需要较高的压缩率的时候;或者输出之后的数据比较大,处理之后的数据需要压缩存档减少磁盘空间并且以后数据用得比较少的情况;或者对单个很大的文本文件想压缩减少存储空间,同时又需要支持split,而且兼容之前的应用程序的情况 
</p><ul><li><strong>Lzo压缩</strong></li></ul><p><strong>优点:</strong>压缩/解压速度也比较快,合理的压缩率;支持split,是Hadoop中最流行的压缩格式;可以在Linux系统下安装lzop命令,使用方便<br><strong>缺点:</strong>压缩率比Gzip要低一些;Hadoop本身不支持,需要安装;在应用中对Lzo格式的文件需要做一些特殊处理(为了支持split要建索引,还需要指定InputFormat为Lzo格式)<br><strong>应用场景:</strong>一个很大的文本文件,压缩之后还大于200M以上的可以考虑,而且单个文件越大,Lzo优点越明显 </p><ul><li><strong>snappy压缩</strong></li></ul><p><strong>优点:</strong>高速压缩/解压速度,合理的压缩率;<br><strong>缺点:</strong>不支持split;压缩率比Gzip要低;Hadoop本身不支持,需要安装<br><strong>应用场景:</strong>当MapReduce作业的map输出的数据比较大的时候,作为map到reduce的中间数据的压缩格式;或者作为一个MapReduce作业的输出和另一个MapReduce作业的输入。</p><h1 id="2-MapReduce编程规范"><a href="#2-MapReduce编程规范" class="headerlink" title="2. MapReduce编程规范"></a>2. MapReduce编程规范</h1><h2 id="2-1-编写Mapper类"><a href="#2-1-编写Mapper类" class="headerlink" title="2.1 编写Mapper类"></a>2.1 编写Mapper类</h2><ol><li>继承org.apache.hadoop.mapreduce.Mapper类</li><li>设置mapper类的输入类型&lt;LongWritable, Text&gt;</li><li>设置mapper类的输出类型&lt;Text, IntWritable&gt;</li><li>将输入类型中的Text转换成String类型,并按照指定分隔符进行分割</li><li>通过context.write()方法进行输出</li></ol><figure class="highlight java"><table><tr><td class="code"><pre><span class="line"><span class="keyword">package</span> com.lytdev.dw.mapr;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.io.IntWritable;</span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.io.LongWritable;</span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.io.Text;</span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.mapreduce.Mapper;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> java.io.IOException;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="class"><span class="keyword">class</span> <span class="title">WordCountMapper</span> <span class="keyword">extends</span> <span class="title">Mapper</span>&lt;<span class="title">LongWritable</span>, <span class="title">Text</span>, <span class="title">Text</span>, <span class="title">IntWritable</span>&gt; </span>&#123;</span><br><span class="line"><span class="keyword">private</span> Text k = <span class="keyword">new</span> Text();</span><br><span class="line"><span class="keyword">private</span> IntWritable v = <span class="keyword">new</span> IntWritable(<span class="number">1</span>);</span><br><span class="line"></span><br><span class="line"> <span class="meta">@Override</span></span><br><span class="line"> <span class="function"><span class="keyword">protected</span> <span class="keyword">void</span> <span class="title">map</span><span class="params">(LongWritable key, Text value, Context context)</span> <span class="keyword">throws</span> IOException, InterruptedException </span>&#123;</span><br><span class="line"> String line = value.toString();</span><br><span class="line"> String[] words = line.split(<span class="string">&quot; &quot;</span>);</span><br><span class="line"> <span class="keyword">for</span> (String word : words) &#123;</span><br><span class="line"> k.set(word);</span><br><span class="line"> context.write(k, v);</span><br><span class="line"> &#125;</span><br><span class="line"> &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h2 id="2-2-编写Reducer类"><a href="#2-2-编写Reducer类" class="headerlink" title="2.2 编写Reducer类"></a>2.2 编写Reducer类</h2><ol><li>继承org.apache.hadoop.mapreduce.Reducer类</li><li>设置reducer类的输入类型&lt;Text, 
IntWritable&gt;</li><li>设置reducer类的输出类型&lt;Text, IntWritable&gt;</li><li>对values值进行汇总求和</li><li>通过context.write()方法进行输出</li></ol><figure class="highlight java"><table><tr><td class="code"><pre><span class="line"><span class="keyword">package</span> com.lytdev.dw.mapr.demo;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.io.IntWritable;</span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.io.Text;</span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.mapreduce.Reducer;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> java.io.IOException;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="class"><span class="keyword">class</span> <span class="title">WordcountReducer</span> <span class="keyword">extends</span> <span class="title">Reducer</span>&lt;<span class="title">Text</span>, <span class="title">IntWritable</span>, <span class="title">Text</span>, <span class="title">IntWritable</span>&gt; </span>&#123;</span><br><span class="line"><span class="keyword">private</span> <span class="keyword">int</span> sum;</span><br><span class="line"><span class="keyword">private</span> IntWritable v = <span class="keyword">new</span> IntWritable();</span><br><span class="line"></span><br><span class="line"> <span class="meta">@Override</span></span><br><span class="line"> <span class="function"><span class="keyword">protected</span> <span class="keyword">void</span> <span class="title">reduce</span><span class="params">(Text key, Iterable&lt;IntWritable&gt; values, Context context)</span> <span class="keyword">throws</span> IOException, InterruptedException </span>&#123;</span><br><span class="line"> sum = <span class="number">0</span>;</span><br><span class="line"> <span class="keyword">for</span> (IntWritable value : values) &#123;</span><br><span class="line"> sum += value.get();</span><br><span class="line"> &#125;</span><br><span class="line"> v.set(sum);</span><br><span class="line"> context.write(key, v);</span><br><span class="line"> &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h2 id="2-3-编写Driver类"><a href="#2-3-编写Driver类" class="headerlink" title="2.3 编写Driver类"></a>2.3 编写Driver类</h2><ol><li>创建一个org.apache.hadoop.conf.Configuration类对象</li><li>通过Job.getInstance(conf)获得一个job对象</li><li>设置job的3个类,driver、mapper、reducer</li><li>设置job的2个输出类型,map输出和总体输出</li><li>设置一个输入输出路径</li><li>调用job.waitForCompletion(true)进行提交任务</li></ol><figure class="highlight java"><table><tr><td class="code"><pre><span class="line"><span class="keyword">package</span> com.lytdev.dw.mapr.demo;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.conf.Configuration;</span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.fs.Path;</span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.io.IntWritable;</span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.io.Text;</span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.mapreduce.Job;</span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.mapreduce.lib.input.FileInputFormat;</span><br><span class="line"><span class="keyword">import</span> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;</span><br><span 
class="line"></span><br><span class="line"><span class="keyword">import</span> java.io.IOException;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="class"><span class="keyword">class</span> <span class="title">WordcountDriver</span> </span>&#123;</span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">static</span> <span class="keyword">void</span> <span class="title">main</span><span class="params">(String[] args)</span> <span class="keyword">throws</span> IOException, InterruptedException, ClassNotFoundException </span>&#123;</span><br><span class="line"> Configuration conf = <span class="keyword">new</span> Configuration();</span><br><span class="line"> Job job = Job.getInstance(conf);</span><br><span class="line"></span><br><span class="line"> job.setJarByClass(WordcountDriver.class);</span><br><span class="line"> job.setMapperClass(WordcountMapper.class);</span><br><span class="line"> job.setReducerClass(WordcountReducer.class);</span><br><span class="line"></span><br><span class="line"> job.setMapOutputKeyClass(Text.class);</span><br><span class="line"> job.setMapOutputValueClass(IntWritable.class);</span><br><span class="line"> job.setOutputKeyClass(Text.class);</span><br><span class="line"> job.setMapOutputValueClass(IntWritable.class);</span><br><span class="line"></span><br><span class="line"> FileInputFormat.setInputPaths(job, <span class="keyword">new</span> Path(<span class="string">&quot;/input&quot;</span>));</span><br><span class="line"> FileOutputFormat.setOutputPath(job, <span class="keyword">new</span> Path(<span class="string">&quot;/output&quot;</span>));</span><br><span class="line"> <span class="keyword">boolean</span> result = job.waitForCompletion(<span class="keyword">true</span>);</span><br><span class="line"> System.out.println(result);</span><br><span class="line"> &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> MapReduce </tag>
</tags>
</entry>
<entry>
<title>一文彻底搞懂HBase</title>
<link href="/2021/11/12/2d8833eb4290.html"/>
<url>/2021/11/12/2d8833eb4290.html</url>
<content type="html"><![CDATA[<blockquote><p>本文是基于CentOS 7.9系统环境,进行HBase的学习和使用</p></blockquote><h1 id="一、HBase的简介"><a href="#一、HBase的简介" class="headerlink" title="一、HBase的简介"></a>一、HBase的简介</h1><h2 id="1-1-HBase基本概念"><a href="#1-1-HBase基本概念" class="headerlink" title="1.1 HBase基本概念"></a>1.1 HBase基本概念</h2><p>HBase是一种分布式、可扩展、支持海量数据存储的NoSQL数据库,可以解决HDFS随机写的问题</p><h2 id="1-2-HBase数据模型"><a href="#1-2-HBase数据模型" class="headerlink" title="1.2 HBase数据模型"></a>1.2 HBase数据模型</h2><p>逻辑上,HBase的数据模型同关系型数据库很类似,数据存储在一张表中,有行有列。但从HBase的底层物理存储结构(K-V)来看,HBase更像是一个multi-dimensional map</p><ul><li><p>Name Space</p><p> 命名空间,类似于关系型数据库的DatabBase概念,每个命名空间下有多个表。HBase有两个自带的命名空间,分别是hbase和default,hbase中存放的是HBase内置的表,default表是用户默认使用的命名空间。</p></li><li><p>Region</p><p> 类似于关系型数据库的表概念。不同的是,HBase定义表时只需要声明列族即可,不需要声明具体的列。这意味着,往HBase写入数据时,字段可以动态、按需指定。因此,和关系型数据库相比,HBase能够轻松应对字段变更的场景。</p></li><li><p>Row</p><p> HBase表中的每行数据都由一个RowKey和多个Column(列)组成,数据是按照RowKey的字典顺序存储的,并且查询数据时只能根据RowKey进行检索,所以RowKey的设计十分重要。</p></li><li><p>Column</p><p> HBase中的每个列都由Column Family(列族)和Column Qualifier(列限定符)进行限定,例如info:name,info:age。建表时,只需指明列族<br> ,而列限定符无需预先定义。</p></li><li><p>Time Stamp</p><p> 用于标识数据的不同版本(version),每条数据写入时,如果不指定时间戳,系统会自动为其加上该字段,其值为写入HBase的时间。</p></li><li><p>Cell</p><p> 由{rowkey, column Family:column Qualifier, time Stamp} 唯一确定的单元。cell中的数据是没有类型的,全部是字节码形式存储</p></li></ul><h2 id="1-3-HBase逻辑结构"><a href="#1-3-HBase逻辑结构" class="headerlink" title="1.3 HBase逻辑结构"></a>1.3 HBase逻辑结构</h2><h2 id="这种逻辑结构,采用避免全表扫描的分区策略"><a href="#这种逻辑结构,采用避免全表扫描的分区策略" class="headerlink" title="这种逻辑结构,采用避免全表扫描的分区策略"></a>这种逻辑结构,采用避免全表扫描的分区策略</h2><p><img src="https://img-blog.csdnimg.cn/20200715092910375.png"><br>1.4 HBase物理存储结构</p><p>HBase依赖时间戳进行数据的管理,因此时间戳对于HBase十分重要,一定要确保不同系统之间的时间一致,否则在操作HBase时会出现各种异常错误<br><img src="https://img-blog.csdnimg.cn/202007151021314.png"></p><h2 id="1-5-HBase基础架构"><a href="#1-5-HBase基础架构" class="headerlink" title="1.5 HBase基础架构"></a>1.5 HBase基础架构</h2><ul><li><p>Region Server</p><p> Region Server为 Region的管理者,其实现类为HRegionServer,主要作用如下:<br> 对于数据的操作:get, put, delete<br> 对于Region的操作:splitRegion、compactRegion</p></li><li><p>Master</p><p> Master是所有Region Server的管理者,其实现类为HMaster,主要作用如下:<br> 对于表的操作:create,delete,alter,select<br> 对于RegionServer的操作:分配regions到每个RegionServer,监控每个RegionServer的状态,负载均衡和故障转移</p></li><li><p>Zookeeper</p><p> HBase通过Zookeeper来做Master的高可用、RegionServer的监控、元数据的入口以及集群配置的维护等工作</p></li><li><p>HDFS</p><p> HDFS为HBase提供最终的底层数据存储服务,同时为HBase提供高可用的支持</p></li></ul><h1 id="二、HBase的伪分布式安装"><a href="#二、HBase的伪分布式安装" class="headerlink" title="二、HBase的伪分布式安装"></a>二、HBase的伪分布式安装</h1><h2 id="2-1-HBase依赖环境"><a href="#2-1-HBase依赖环境" class="headerlink" title="2.1 HBase依赖环境"></a>2.1 HBase依赖环境</h2><p><strong>(1) Hadoop启动</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">sbin/start-dfs.sh</span><br><span class="line">sbin/start-yarn.sh</span><br><span class="line">sbin/mr-jobhistory-daemon.sh start historyserver</span><br></pre></td></tr></table></figure><h2 id="2-2-HBase安装"><a href="#2-2-HBase安装" class="headerlink" title="2.2 HBase安装"></a>2.2 HBase安装</h2><p><strong>(1) HBase解压</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">tar -xzvf hbase-1.3.1-bin.tar.gz -C /opt/module</span><br></pre></td></tr></table></figure><p><strong>(2) 配置regionservers</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi /opt/module/hbase/conf/regionservers</span><br><span class="line"><span class="comment"># 添加内容如下</span></span><br><span 
class="line">192.168.0.109</span><br></pre></td></tr></table></figure><p><strong>(3) 配置hbase-env.sh</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi /opt/module/hbase/conf/hbase-env.sh</span><br><span class="line"><span class="comment"># 修改内容如下</span></span><br><span class="line"><span class="built_in">export</span> JAVA_HOME=/opt/module/jdk1.8.0_201</span><br><span class="line"></span><br><span class="line"><span class="comment"># 注释内容</span></span><br><span class="line"><span class="comment"># export HBASE_MASTER_OPTS=&quot;$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m&quot;</span></span><br><span class="line"><span class="comment"># export HBASE_REGIONSERVER_OPTS=&quot;$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m&quot;</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># 修改内容如下</span></span><br><span class="line"><span class="built_in">export</span> HBASE_MANAGES_ZK=<span class="literal">true</span></span><br></pre></td></tr></table></figure><p><strong>(4) 配置hbase-site.xml</strong></p><p>首先创建一个zkData目录</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">mkdir -p /opt/module/hbase/data/zkData</span><br></pre></td></tr></table></figure><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="tag">&lt;<span class="name">configuration</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hbase.rootdir<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>hdfs://192.168.0.109:9000/HBase<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hbase.cluster.distributed<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>true<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="comment">&lt;!-- 0.98后的新变动,之前版本没有.port,默认端口为60000 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hbase.master.port<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>16000<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span> </span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hbase.zookeeper.property.dataDir<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span 
class="name">value</span>&gt;</span>/opt/module/hbase/data/zkData<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">configuration</span>&gt;</span></span><br></pre></td></tr></table></figure><p><strong>(5) 配置hadoop的软连接到hbase</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">ln -s /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml /opt/module/hbase/conf/core-site.xml</span><br><span class="line">ln -s /opt/module/hadoop-2.7.2/etc/hadoop/hdfs-site.xml /opt/module/hbase/conf/hdfs-site.xml</span><br></pre></td></tr></table></figure><h2 id="2-3-HBase启动"><a href="#2-3-HBase启动" class="headerlink" title="2.3 HBase启动"></a>2.3 HBase启动</h2><p><strong>(1) HBase启动zookeeper</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/hbase-daemon.sh start zookeeper</span><br></pre></td></tr></table></figure><p><strong>(2) HBase启动master</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/hbase-daemon.sh start master</span><br></pre></td></tr></table></figure><p><strong>(3) HBase启动regionserver</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/hbase-daemon.sh start regionserver</span><br></pre></td></tr></table></figure><p><strong>(4) HBase停止regionserver</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/hbase-daemon.sh stop regionserver</span><br></pre></td></tr></table></figure><p><strong>(5) HBase停止master</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/hbase-daemon.sh stop master</span><br></pre></td></tr></table></figure><p><strong>(6) HBase停止zookeeper</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/hbase-daemon.sh stop zookeeper</span><br></pre></td></tr></table></figure><p><strong>(7) HBase集群启动</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/start-hbase.sh</span><br></pre></td></tr></table></figure><p>(8) HBase集群停止</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/stop-hbase.sh</span><br></pre></td></tr></table></figure><h1 id="三、HBase的完全分布式安装"><a href="#三、HBase的完全分布式安装" class="headerlink" title="三、HBase的完全分布式安装"></a>三、HBase的完全分布式安装</h1><h2 id="3-1-HBase依赖环境"><a href="#3-1-HBase依赖环境" class="headerlink" title="3.1 HBase依赖环境"></a>3.1 HBase依赖环境</h2><p><strong>(1) zookeeper启动</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment">#启动所有zookeeper集群节点</span></span><br><span class="line"><span class="built_in">cd</span> 
/opt/module/zookeeper-3.4.10</span><br><span class="line">bin/zkServer.sh start</span><br></pre></td></tr></table></figure><p><strong>(2) Hadoop启动</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">sbin/start-dfs.sh</span><br><span class="line">sbin/start-yarn.sh</span><br></pre></td></tr></table></figure><h2 id="3-2-HBase安装"><a href="#3-2-HBase安装" class="headerlink" title="3.2 HBase安装"></a>3.2 HBase安装</h2><p><strong>(1) HBase解压</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">tar -xzvf hbase-1.3.1-bin.tar.gz -C /opt/module</span><br></pre></td></tr></table></figure><p><strong>(2) 配置regionservers</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi /opt/module/hbase/conf/regionservers</span><br><span class="line"><span class="comment"># 添加内容如下</span></span><br><span class="line">hadoop101</span><br><span class="line">hadoop102</span><br><span class="line">hadoop103</span><br></pre></td></tr></table></figure><p><strong>(3) 配置hbase-env.sh</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi /opt/module/hbase/conf/hbase-env.sh</span><br><span class="line"><span class="comment"># 修改内容如下</span></span><br><span class="line"><span class="built_in">export</span> JAVA_HOME=/opt/module/jdk1.8.0_201</span><br><span class="line"></span><br><span class="line"><span class="comment"># 注释内容</span></span><br><span class="line"><span class="comment"># export HBASE_MASTER_OPTS=&quot;$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m&quot;</span></span><br><span class="line"><span class="comment"># export HBASE_REGIONSERVER_OPTS=&quot;$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m&quot;</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># 修改内容如下</span></span><br><span class="line"><span class="built_in">export</span> HBASE_MANAGES_ZK=<span class="literal">false</span></span><br></pre></td></tr></table></figure><p><strong>(4) 配置hbase-site.xml</strong></p><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="tag">&lt;<span class="name">configuration</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hbase.rootdir<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>hdfs://hadoop101:9000/HBase<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hbase.cluster.distributed<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>true<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="comment">&lt;!-- 0.98后的新变动,之前版本没有.port,默认端口为60000 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span 
class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hbase.master.port<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>16000<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span> </span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hbase.zookeeper.quorum<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>hadoop101,hadoop102,hadoop103<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span> </span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hbase.zookeeper.property.dataDir<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>/opt/module/zookeeper-3.4.10/zkData<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">configuration</span>&gt;</span></span><br></pre></td></tr></table></figure><p><strong>(5) 配置hadoop的软连接到hbase</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">ln -s /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml /opt/module/hbase/conf/core-site.xml</span><br><span class="line">ln -s /opt/module/hadoop-2.7.2/etc/hadoop/hdfs-site.xml /opt/module/hbase/conf/hdfs-site.xml</span><br></pre></td></tr></table></figure><p><strong>(6) 配置集群时间同步</strong></p><p>安装ntp</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">yum -y install net-tools</span><br><span class="line">yum -y install ntp</span><br></pre></td></tr></table></figure><p>修改配置文件/etc/ntp.conf</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi /etc/ntp.conf</span><br><span class="line"><span class="comment"># 限制 192.168.1.0~192.168.255.255之间的服务器可以访问</span></span><br><span class="line">restrict 192.168.1.0 mask 255.255.0.0 nomodify notrap</span><br><span class="line"></span><br><span class="line"><span class="comment"># 注释以下内容,因为集群在局域网中,所以不属于互联网其他时间</span></span><br><span class="line"><span class="comment">#server 0.centos.pool.ntp.org iburst</span></span><br><span class="line"><span class="comment">#server 1.centos.pool.ntp.org iburst</span></span><br><span class="line"><span class="comment">#server 2.centos.pool.ntp.org iburst</span></span><br><span class="line"><span class="comment">#server 3.centos.pool.ntp.org iburst</span></span><br><span class="line"></span><br><span class="line"><span class="comment">#添加以下信息,当失去互联网连接后,使用本机时间做同步</span></span><br><span class="line">server 127.127.1.0</span><br><span class="line">fudge 127.127.1.0 stratum 10</span><br></pre></td></tr></table></figure><p>修改配置文件/etc/sysconfig/ntpd</p><figure class="highlight 
bash"><table><tr><td class="code"><pre><span class="line">vi /etc/sysconfig/ntpd</span><br><span class="line"><span class="comment">#添加以下信息,硬件时间与系统时间同步</span></span><br><span class="line">SYNC_HWCLOCK=yes</span><br><span class="line"></span><br><span class="line">hadoop101节点上启动ntp</span><br><span class="line"></span><br><span class="line">systemctl stop chronyd</span><br><span class="line">systemctl <span class="built_in">disable</span> chronyd</span><br><span class="line">systemctl start ntpd</span><br><span class="line">systemctl <span class="built_in">enable</span> ntpd</span><br></pre></td></tr></table></figure><p>其他节点上启动ntp</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">crontab -e</span><br><span class="line"><span class="comment"># 编写定时任务如下:</span></span><br><span class="line">*/10 * * * * /usr/sbin/ntpdate hadoop101</span><br></pre></td></tr></table></figure><h2 id="3-3-HBase启动"><a href="#3-3-HBase启动" class="headerlink" title="3.3 HBase启动"></a>3.3 HBase启动</h2><p><strong>(1) HBase启动master</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/hbase-daemon.sh start master</span><br></pre></td></tr></table></figure><p><strong>(2) HBase启动regionserver</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/hbase-daemon.sh start regionserver</span><br></pre></td></tr></table></figure><p><strong>(3) HBase停止regionserver</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/hbase-daemon.sh stop regionserver</span><br></pre></td></tr></table></figure><p><strong>(4) HBase停止master</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/hbase-daemon.sh stop master</span><br></pre></td></tr></table></figure><p><strong>(5) HBase集群启动</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/start-hbase.sh</span><br></pre></td></tr></table></figure><p><strong>(6) HBase集群停止</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/hbase/</span><br><span class="line">bin/stop-hbase.sh</span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> HBase </tag>
</tags>
</entry>
<entry>
<title>一文彻底搞懂Kafka</title>
<link href="/2021/10/12/245dd8355f34.html"/>
<url>/2021/10/12/245dd8355f34.html</url>
<content type="html"><![CDATA[<blockquote><p>本文是基于CentOS 7.9系统环境,进行Kafka的学习和使用</p></blockquote><h1 id="一、Kafka的简介"><a href="#一、Kafka的简介" class="headerlink" title="一、Kafka的简介"></a>一、Kafka的简介</h1><h2 id="1-1-Kafka基本概念"><a href="#1-1-Kafka基本概念" class="headerlink" title="1.1 Kafka基本概念"></a>1.1 Kafka基本概念</h2><p><strong>(1) 什么是Kafka</strong></p><p>Kafka是一个分布式的基于发布/订阅模式的消息队列,主要应用于大数据实时处理领域<br><strong>(2) 消息队列</strong></p><ol><li>点对点模式的消息队列</li></ol><p>对一个消息而言,只会有一个消费者可以消费,消费者主动拉取消息,消息收到后,会将消息删除</p><ol start="2"><li>基于发布/订阅模式的消息队列</li></ol><p>发布到topic的消息会被所有订阅者消费,消费者消费完消息后不会删除</p><ol start="3"><li>消息队列主动推送</li></ol><p>适用于消费者处理能力高,生产者生产消息较少的场景,整个系统的消息处理速度由消息队列的推送速度决定,多个消费者之间处理消息的能力不同,如果消息队列按照一定的速度推送消息,可能会造成某些消费者处理消息处理不过来,某些消费者处理速度很快一直处于空闲</p><ol start="4"><li>消费者主动拉取</li></ol><p>适用于生产者生产消息的速度高于消费者,整个系统的消息处理速度由消费者的拉取速度决定,如果消息队列中消息较少,消费者会一直维护着一个长轮询去访问消息队列是否有消息</p><p><strong>(3) Kafka的作用</strong></p><ul><li><strong>解耦和异步</strong></li></ul><p>将强依赖的两个数据上下游系统,通过消息队列进行解耦,可以实现上下游的异步通信</p><ul><li><strong>削峰</strong></li></ul><p>缓解数据上下游两个系统的流速差</p><h2 id="1-2-Kafka基础架构"><a href="#1-2-Kafka基础架构" class="headerlink" title="1.2 Kafka基础架构"></a>1.2 Kafka基础架构</h2><ol><li>为了方便横向扩展,并提高吞吐量,一个topic分为多个partition</li><li>配合分区的设计,提出消费者组的概念,组内每个消费者并行消费</li><li>为提高可用性,为每个partition增加若干副本,类似于NameNode HA</li></ol><p><img src="https://img-blog.csdnimg.cn/20200620092054385.png"></p><ul><li><strong>Producer</strong></li></ul><p>消息生产者,连接broker-list,就是向kafka broker发消息的客户端</p><ul><li><strong>Consumer</strong></li></ul><p>消息消费者,连接bootstrp-server,向kafka broker取消息的客户端</p><ul><li><strong>Consumer Group (CG)</strong></li></ul><p>消费者组,由多个consumer组成。消费者组内每个消费者负责消费不同分区的数据,一个分区只能由一个消费者消费;消费者组之间互不影响。所有的消费者都属于某个消费者组,即消费者组是逻辑上的一个订阅者</p><ul><li><strong>Cluster</strong></li></ul><p>kafka集群,连接zookeeper,包含多个broker</p><ul><li><strong>Broker</strong></li></ul><p>一台kafka服务器就是一个broker。一个集群由多个broker组成。一个broker可以容纳多个topic</p><ul><li><strong>Topic</strong></li></ul><p>可以理解为一个队列,生产者和消费者面向的都是一个topic</p><ul><li><strong>分区为了并发,备份为了容灾</strong></li><li><strong>Partition</strong></li></ul><p>为了实现扩展性,一个非常大的topic可以分布到多个broker(即服务器)上,一个topic可以分为多个partition,每个partition是一个有序的队列</p><ul><li><strong>Replica</strong></li></ul><p>副本,为保证集群中的某个节点发生故障时, 该节点上的partition数据不丢失,且kafka仍然能够继续工作,kafka提供了副本机制,一个topic的每个分区都有若干个副本,一个leader和若干个follower</p><ul><li><strong>leader</strong></li></ul><p>每个分区多个副本的“主”,生产者发送数据的对象,以及消费者消费数据的对象都是leader</p><ul><li><strong>follower</strong></li></ul><p>每个分区多个副本中的“从”,实时与leader保持数据的同步。leader发生故障时,某个follower会成为新的follower</p><ul><li><strong>zookeeper</strong></li></ul><p>存储topic信息、分区信息、副本信息、ISR信息、消费者的偏移量(0.9版本之前,之后存在kafka集群中d __consumer_offset主题中,为了提高效率,减少与zookeeper的通信时间)</p><h2 id="1-3-Kafka工作流程"><a href="#1-3-Kafka工作流程" class="headerlink" title="1.3 Kafka工作流程"></a>1.3 Kafka工作流程</h2><ol><li>kafka中消息是以topic进行分类的,生产者生成消息,消费者消费消息,都是面向topic的</li><li>topic是逻辑上的概念,而partition是物理上的概念,每个partition对应于一组log文件,该log文件中存储的就是producer生产的数据。</li><li>Producer生产的数据会被不断追加到该log文件末端,且每条数据都有自己的offset。</li><li>消费者组中的每个消费者,都会实时记录自己消费到了哪个offset,以便出错恢复时,从上次的位置继续消费。</li></ol><h2 id="1-4-Kafka文件存储机制"><a href="#1-4-Kafka文件存储机制" class="headerlink" title="1.4 Kafka文件存储机制"></a>1.4 Kafka文件存储机制</h2><p><img 
src="https://img-blog.csdnimg.cn/20200623152607642.png"></p><ol><li>由于生产者生产的消息会不断追加到log文件末尾,为防止log文件过大导致数据定位效率低下,Kafka采取了分片和索引机制,将每个partition分为多个segment</li><li>每个segment对应两个文件——“.index”文件和“.log”文件。这些文件位于一个文件夹下,该文件夹的命名规则为:topic名称+分区序号。例如,first这个topic有三个分区,则其对应的文件夹为first-0,first-1,first-2</li><li>index和log文件以当前segment的第一条消息的offset命名</li><li>index文件存储大量的索引信息,log文件存储大量的数据,索引文件中的元数据指向对应数据文件中message的物理偏移地址</li></ol><p><img src="https://img-blog.csdnimg.cn/20201225091515558.png"></p><h2 id="1-5-Kafka生产者"><a href="#1-5-Kafka生产者" class="headerlink" title="1.5 Kafka生产者"></a>1.5 Kafka生产者</h2><h3 id="1-5-1-分区策略"><a href="#1-5-1-分区策略" class="headerlink" title="1.5.1 分区策略"></a>1.5.1 分区策略</h3><p><strong>(1) 分区的原因</strong></p><ul><li>方便在集群中扩展</li></ul><p>每个Partition可以通过调整以适应它所在的机器,而一个topic又可以有多个Partition组成,因此整个集群就可以适应任意大小的数据了</p><ul><li>可以提高并发</li></ul><p>因为可以以Partition为单位读写了</p><p><strong>(2) 分区的原则</strong></p><p>我们需要将producer发送的数据封装成一个ProducerRecord对象。</p><ul><li>指明 partition 的情况下,直接将指明的值直接作为 partiton 值</li><li>没有指明 partition 值但有 key 的情况下,将 key 的 hash 值与 topic 的 partition 数进行取余得到 partition 值</li><li>既没有 partition 值又没有 key 值的情况下,第一次调用时随机生成一个整数(后面每次调用在这个整数上自增),将这个值与 topic 可用的 partition 总数取余得到 partition 值,也就是常说的 round-robin 算法</li></ul><h3 id="1-5-2-数据可靠性保证"><a href="#1-5-2-数据可靠性保证" class="headerlink" title="1.5.2 数据可靠性保证"></a>1.5.2 数据可靠性保证</h3><p>为保证producer发送的数据,能可靠的发送到指定的topic,topic的每个partition收到producer发送的数据后,都需要向producer发送ack(acknowledgement确认收到),如果producer收到ack,就会进行下一轮的发送,否则重新发送数据。</p><p><img src="https://img-blog.csdnimg.cn/20200623163726511.png"><br><strong>(1) 副本数据同步策略</strong><br>方策略 | 优点 | 缺点<br>|—|—|—|<br>半数以上完成同步,就发送ack | 延迟低 | 选举新的leader时,容忍n台节点的故障,需要2n+1个副本<br>全部完成同步,就发送ack | 选举新的leader时,容忍n台节点的故障,需要n+1个副本 | 延迟高</p><p>Kafka选择了第二种方案,原因如下:</p><ul><li>同样为了容忍n台节点的故障,第一种方案需要2n+1个副本,而第二种方案只需要n+1个副本,而Kafka的每个分区都有大量的数据,第一种方案会造成大量数据的冗余。</li><li>虽然第二种方案的网络延迟会比较高,但网络延迟对Kafka的影响较小。</li></ul><p><strong>(2) ISR</strong></p><p>采用第二种方案之后,设想以下情景:leader收到数据,所有follower都开始同步数据,但有一个follower,因为某种故障,迟迟不能与leader进行同步,那leader就要一直等下去,直到它完成同步,才能发送ack。这个问题怎么解决呢?</p><p>Leader维护了一个动态的in-sync replica set (ISR),意为和leader保持同步的follower集合。当ISR中的follower完成数据的同步之后,leader就会给producer发送ack。如果follower长时间未向leader同步数据,则该follower将被踢出ISR,该时间阈值由replica.lag.time.max.ms参数设定。Leader发生故障之后,就会从ISR中选举新的leader。<br><strong>(3) ack应答机制</strong></p><p>对于某些不太重要的数据,对数据的可靠性要求不是很高,能够容忍数据的少量丢失,所以没必要等ISR中的follower全部接收成功。<br>所以Kafka为用户提供了三种可靠性级别,用户根据对可靠性和延迟的要求进行权衡,选择以下的配置。</p><p>acks参数配置</p><ul><li><strong>0</strong></li></ul><p>producer不等待broker的ack,这一操作提供了一个最低的延迟,broker一接收到还没有写入磁盘就已经返回,当broker故障时有可能丢失数据;</p><ul><li><strong>1</strong></li></ul><p>producer等待broker的ack,partition的leader落盘成功后返回ack,如果在follower同步成功之前leader故障,那么将会丢失数据;</p><ul><li><strong>-1(all)</strong></li></ul><p>producer等待broker的ack,partition的leader和follower全部落盘成功后才返回ack。但是如果在follower同步完成后,broker发送ack之前,leader发生故障,那么会造成数据重复。</p><p><strong>(4)消费数据一致性保障</strong></p><ul><li><strong>LEO</strong></li></ul><p>指的是每个副本最大的offset</p><ul><li><strong>HW</strong></li></ul><p>指的是消费者能见到的最大的offset,ISR队列中最小的LEO<br><img src="https://img-blog.csdnimg.cn/20201225095954613.png"></p><p><strong>(5)故障处理细节</strong></p><ul><li><strong>follower故障</strong></li></ul><p>follower发生故障后会被临时踢出ISR,待该follower恢复后,follower会读取本地磁盘记录的上次的HW,并将log文件高于HW的部分截取掉,从HW开始向leader进行同步。等该follower的LEO大于等于该Partition的HW,即follower追上leader之后,就可以重新加入ISR了。</p><ul><li><strong>leader故障</strong></li></ul><p>leader发生故障之后,会从ISR中选出一个新的leader,之后,为保证多个副本之间的数据一致性,其余的follower会先将各自的log文件高于HW的部分截掉,然后从新的leader同步数据。<br> 
注意:这只能保证副本之间的数据一致性,并不能保证数据不丢失或者不重复。</p><h3 id="1-5-3-Exactly-Once语义"><a href="#1-5-3-Exactly-Once语义" class="headerlink" title="1.5.3 Exactly Once语义"></a>1.5.3 Exactly Once语义</h3><p>将服务器的ACK级别设置为-1,可以保证Producer到Server之间不会丢失数据,即At Least Once语义。相对的,将服务器ACK级别设置为0,可以保证生产者每条消息只会被发送一次,即At Most Once语义。</p><p>At Least Once可以保证数据不丢失,但是不能保证数据不重复;相对的,At most Once可以保证数据不重复,但是不能保证数据不丢失。但是,对于一些非常重要的信息,比如说交易数据,下游数据消费者要求数据既不重复也不丢失,即Exactly Once语义。在0.11版本以前的Kafka,对此是无能为力的,只能保证数据不丢失,再在下游消费者对数据做全局去重。对于多个下游应用的情况,每个都需要单独做全局去重,这就对性能造成了很大影响。</p><p>0.11版本的Kafka,引入了一项重大特性:幂等性。所谓的幂等性就是指Producer不论向Server发送多少次重复数据,Server端都只会持久化一条。 幂等性结合At Least Once语义,就构成了Kafka的Exactly Once语义。即:</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">At Least Once + 幂等性 = Exactly Once</span><br></pre></td></tr></table></figure><p>要启用幂等性,只需要将Producer的参数中enable.idempotence设置为true即可。Kafka的幂等性实现其实就是将原来下游需要做的去重放在了数据上游。开启幂等性的Producer在初始化的时候会被分配一个PID,发往同一Partition的消息会附带Sequence Number。而Broker端会对&lt;PID, Partition, SeqNumber&gt;做缓存,当具有相同主键的消息提交时,Broker只会持久化一条。</p><p>但是PID重启就会变化,同时不同的Partition也具有不同主键,所以幂等性无法保证跨分区跨会话的Exactly Once。</p><h2 id="1-6-Kafka消费者"><a href="#1-6-Kafka消费者" class="headerlink" title="1.6 Kafka消费者"></a>1.6 Kafka消费者</h2><h3 id="1-6-1-消费方式"><a href="#1-6-1-消费方式" class="headerlink" title="1.6.1 消费方式"></a>1.6.1 消费方式</h3><p>consumer采用pull(拉)模式从broker中读取数据。</p><p>push(推)模式很难适应消费速率不同的消费者,因为消息发送速率是由broker决定的。它的目标是尽可能以最快速度传递消息,但是这样很容易造成consumer来不及处理消息,典型的表现就是拒绝服务以及网络拥塞。而pull模式则可以根据consumer的消费能力以适当的速率消费消息。</p><p>pull模式不足之处是,如果kafka没有数据,消费者可能会陷入循环中,一直返回空数据。针对这一点,Kafka的消费者在消费数据时会传入一个时长参数timeout,如果当前没有数据可供消费,consumer会等待一段时间之后再返回,这段时长即为timeout。</p><h3 id="1-6-2-分区分配策略"><a href="#1-6-2-分区分配策略" class="headerlink" title="1.6.2 分区分配策略"></a>1.6.2 分区分配策略</h3><p>一个consumer group中有多个consumer,一个 topic有多个partition,所以必然会涉及到partition的分配问题,即确定那个partition由哪个consumer来消费。</p><p>Kafka有两种分配策略,一是roundrobin,一是range。</p><h3 id="1-6-3-offset的维护"><a href="#1-6-3-offset的维护" class="headerlink" title="1.6.3 offset的维护"></a>1.6.3 offset的维护</h3><p>由于consumer在消费过程中可能会出现断电宕机等故障,consumer恢复后,需要从故障前的位置的继续消费,所以consumer需要实时记录自己消费到了哪个offset,以便故障恢复后继续消费。</p><p>Kafka 0.9版本之前,consumer默认将offset保存在Zookeeper中,从0.9版本开始,consumer默认将offset保存在Kafka一个内置的topic中,该topic为__consumer_offsets。</p><h1 id="二、Kafka的安装和使用"><a href="#二、Kafka的安装和使用" class="headerlink" title="二、Kafka的安装和使用"></a>二、Kafka的安装和使用</h1><p>2.0 Kafka群起脚本</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="meta">#! 
/bin/bash</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">case</span> <span class="variable">$1</span> <span class="keyword">in</span></span><br><span class="line"><span class="string">&quot;start&quot;</span>)&#123;</span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> hadoop101 hadoop102 hadoop103</span><br><span class="line"> <span class="keyword">do</span></span><br><span class="line"> <span class="built_in">echo</span> <span class="string">&quot;==========<span class="variable">$i</span>==========&quot;</span></span><br><span class="line"> ssh <span class="variable">$i</span> <span class="string">&quot;source /etc/profile; /opt/module/kafka/bin/kafka-server-start.sh -daemon /opt/module/kafka/config/server.properties&quot;</span></span><br><span class="line"> <span class="keyword">done</span></span><br><span class="line">&#125;;;</span><br><span class="line"><span class="string">&quot;stop&quot;</span>)&#123;</span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> hadoop101 hadoop102 hadoop103</span><br><span class="line"> <span class="keyword">do</span></span><br><span class="line"> <span class="built_in">echo</span> <span class="string">&quot;==========<span class="variable">$i</span>==========&quot;</span></span><br><span class="line"> ssh <span class="variable">$i</span> <span class="string">&quot;source /etc/profile; /opt/module/kafka/bin/kafka-server-stop.sh&quot;</span></span><br><span class="line"> <span class="keyword">done</span></span><br><span class="line">&#125;;;</span><br><span class="line">*)&#123;</span><br><span class="line"> <span class="built_in">echo</span> <span class="string">&quot;Error: Could not find the option <span class="variable">$1</span> &quot;</span></span><br><span class="line">&#125;;;</span><br><span class="line"><span class="keyword">esac</span></span><br></pre></td></tr></table></figure><h2 id="2-1-Kafka安装"><a href="#2-1-Kafka安装" class="headerlink" title="2.1 Kafka安装"></a>2.1 Kafka安装</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">tar -xzvf kafka_2.11-0.11.0.0.tgz -C /opt/module</span><br><span class="line">cd /opt/module</span><br><span class="line">mv kafka_2.11-0.11.0.0/ kafka</span><br></pre></td></tr></table></figure><h2 id="2-2-修改配置文件"><a href="#2-2-修改配置文件" class="headerlink" title="2.2 修改配置文件"></a>2.2 修改配置文件</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi /opt/module/kafka/config/server.properties</span><br><span class="line"><span class="comment"># 修改如下内容</span></span><br><span class="line"><span class="comment"># The id of the broker. 
This must be set to a unique integer for each broker.</span></span><br><span class="line">broker.id=1</span><br><span class="line"></span><br><span class="line"><span class="comment"># Switch to enable topic deletion or not, default value is false</span></span><br><span class="line">delete.topic.enable=<span class="literal">true</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># A comma seperated list of directories under which to store log files</span></span><br><span class="line">log.dirs=/opt/module/kafka/data1,/opt/module/kafka/data2</span><br><span class="line"></span><br><span class="line"><span class="comment"># root directory for all kafka znodes.</span></span><br><span class="line"><span class="comment"># 设置kafka信息在zookeeper中的存储节点路径</span></span><br><span class="line">zookeeper.connect=hadoop101:2181,hadoop102:2181,hadoop103:2181/kafka</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="2-3-配置环境变量"><a href="#2-3-配置环境变量" class="headerlink" title="2.3 配置环境变量"></a>2.3 配置环境变量</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /etc/profile</span><br><span class="line"><span class="comment"># 添加如下内容</span></span><br><span class="line"><span class="comment">#KAFKA_HOME</span></span><br><span class="line"><span class="built_in">export</span> KAFKA_HOME=/opt/module/kafka</span><br><span class="line"><span class="built_in">export</span> PATH=<span class="variable">$PATH</span>:<span class="variable">$KAFKA_HOME</span>/bin</span><br></pre></td></tr></table></figure><p>保存退出,并重新打开一个终端,配置文件则会生效</p><h2 id="2-4-启动kafka"><a href="#2-4-启动kafka" class="headerlink" title="2.4 启动kafka"></a>2.4 启动kafka</h2><p>首先保证已经启动了zookeeper集群</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/kafka</span><br><span class="line">bin/kafka-server-start.sh -daemon config/server.properties</span><br></pre></td></tr></table></figure><h2 id="2-5-停止kafka"><a href="#2-5-停止kafka" class="headerlink" title="2.5 停止kafka"></a>2.5 停止kafka</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/kafka</span><br><span class="line">bin/kafka-server-stop.sh</span><br></pre></td></tr></table></figure><h2 id="2-5-kafka启动日志文件"><a href="#2-5-kafka启动日志文件" class="headerlink" title="2.5 kafka启动日志文件"></a>2.5 kafka启动日志文件</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/kafka/config</span><br><span class="line">vi server.properties</span><br></pre></td></tr></table></figure><h2 id="2-6-查看当前服务器中的所有topic"><a href="#2-6-查看当前服务器中的所有topic" class="headerlink" title="2.6 查看当前服务器中的所有topic"></a>2.6 查看当前服务器中的所有topic</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/kafka-topics.sh --zookeeper hadoop101:2181 --list</span><br></pre></td></tr></table></figure><h2 id="2-6-创建topic"><a href="#2-6-创建topic" class="headerlink" title="2.6 创建topic"></a>2.6 创建topic</h2><p>分区数可以大于集群节点数,但是副本数不可以大过集群副本数</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/kafka-topics.sh --zookeeper hadoop101:2181 --create --replication-factor 3 --partitions 1 --topic first</span><br></pre></td></tr></table></figure><h2 id="2-7-查看topic详情"><a href="#2-7-查看topic详情" class="headerlink" title="2.7 查看topic详情"></a>2.7 查看topic详情</h2><figure 
class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/kafka-topics.sh --zookeeper hadoop101:2181 --describe --topic first</span><br></pre></td></tr></table></figure><h2 id="2-8-修改topic分区数量"><a href="#2-8-修改topic分区数量" class="headerlink" title="2.8 修改topic分区数量"></a>2.8 修改topic分区数量</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/kafka-topics.sh --zookeeper hadoop101:2181 --alter --topic first --partitions 6</span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="2-9-删除topic"><a href="#2-9-删除topic" class="headerlink" title="2.9 删除topic"></a>2.9 删除topic</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/kafka-topics.sh --zookeeper hadoop101:2181 --delete --topic first</span><br></pre></td></tr></table></figure><h2 id="2-10-发送消息"><a href="#2-10-发送消息" class="headerlink" title="2.10 发送消息"></a>2.10 发送消息</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/kafka-console-producer.sh --broker-list hadoop101:9092 --topic first</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">hello world</span><br><span class="line">lytdev</span><br></pre></td></tr></table></figure><h2 id="2-11-从头消费消息"><a href="#2-11-从头消费消息" class="headerlink" title="2.11 从头消费消息"></a>2.11 从头消费消息</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/kafka-console-consumer.sh --bootstrap-server hadoop101:9092 --from-beginning --topic first</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># kafka 0.9版本之前借助于zookeeper存储offset信息,所以要使用</span></span><br><span class="line"><span class="comment"># bin/kafka-console-consumer.sh --zookeeper hadoop101:2181 --from-beginning --topic first</span></span><br></pre></td></tr></table></figure><h2 id="2-12-实时消费消息"><a href="#2-12-实时消费消息" class="headerlink" title="2.12 实时消费消息"></a>2.12 实时消费消息</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/kafka-console-consumer.sh --bootstrap-server hadoop101:9092 --topic first</span><br></pre></td></tr></table></figure><h2 id="2-13-从上次故障恢复消费消息"><a href="#2-13-从上次故障恢复消费消息" class="headerlink" title="2.13 从上次故障恢复消费消息"></a>2.13 从上次故障恢复消费消息</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/kafka-console-consumer.sh --bootstrap-server hadoop101:9092 --topic first --consumer-property group.id=consoleGroup</span><br></pre></td></tr></table></figure><h1 id="三、Kafka的API"><a href="#三、Kafka的API" class="headerlink" title="三、Kafka的API"></a>三、Kafka的API</h1><h2 id="3-1-Producer-API"><a href="#3-1-Producer-API" class="headerlink" title="3.1 Producer API"></a>3.1 Producer API</h2><h3 id="3-1-1-消息发送流程"><a href="#3-1-1-消息发送流程" class="headerlink" title="3.1.1 消息发送流程"></a>3.1.1 消息发送流程</h3><p>Kafka的Producer发送消息采用的是异步发送的方式。在消息发送的过程中,涉及到了两个线程——main线程和Sender线程,以及一个线程共享变量——RecordAccumulator。main线程将消息发送给RecordAccumulator,Sender线程不断从RecordAccumulator中拉取消息发送到Kafka broker。</p><p><strong>相关参数</strong></p><ul><li><strong>batch.size</strong></li></ul><p>只有数据积累到batch.size之后,sender才会发送数据</p><ul><li><strong>linger.time</strong></li></ul><p>如果数据迟迟未达到batch.size,sender等待linger.time之后就会发送数据</p><h3 id="3-1-2-异步发送API"><a href="#3-1-2-异步发送API" class="headerlink" title="3.1.2 异步发送API"></a>3.1.2 异步发送API</h3><p><strong>(1)不带回调函数的发送消息</strong></p><figure 
class="highlight java"><table><tr><td class="code"><pre><span class="line"><span class="keyword">package</span> com.lytdevproducer;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.clients.producer.KafkaProducer;</span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.clients.producer.Producer;</span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.clients.producer.ProducerRecord;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> java.util.Properties;</span><br><span class="line"><span class="keyword">import</span> java.util.concurrent.Future;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="class"><span class="keyword">class</span> <span class="title">ProducerWithOutCallback</span> </span>&#123;</span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">static</span> <span class="keyword">void</span> <span class="title">main</span><span class="params">(String[] args)</span> </span>&#123;</span><br><span class="line"> Properties props = <span class="keyword">new</span> Properties();</span><br><span class="line"> props.put(<span class="string">&quot;bootstrap.servers&quot;</span>, <span class="string">&quot;hadoop101:9092&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;acks&quot;</span>, <span class="string">&quot;all&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;retries&quot;</span>, <span class="number">1</span>);</span><br><span class="line"> props.put(<span class="string">&quot;batch.size&quot;</span>, <span class="number">16384</span>);</span><br><span class="line"> props.put(<span class="string">&quot;linger.ms&quot;</span>, <span class="number">1</span>);</span><br><span class="line"> props.put(<span class="string">&quot;buffer.memory&quot;</span>, <span class="number">33554432</span>);</span><br><span class="line"> props.put(<span class="string">&quot;key.serializer&quot;</span>, <span class="string">&quot;org.apache.kafka.common.serialization.StringSerializer&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;value.serializer&quot;</span>, <span class="string">&quot;org.apache.kafka.common.serialization.StringSerializer&quot;</span>);</span><br><span class="line"></span><br><span class="line"> Producer producer = <span class="keyword">new</span> KafkaProducer&lt;String, String&gt;(props);</span><br><span class="line"> <span class="keyword">for</span> (<span class="keyword">int</span> i = <span class="number">0</span>; i &lt; <span class="number">10</span>; i++) &#123;</span><br><span class="line"> Future future = producer.send(<span class="keyword">new</span> ProducerRecord&lt;String, String&gt;(<span class="string">&quot;first&quot;</span>, <span class="string">&quot;message&quot;</span> + i));</span><br><span class="line"> &#125;</span><br><span class="line"> &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p><strong>(2)带回调函数的发送消息</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">package com.lytdevproducer;</span><br><span class="line"></span><br><span class="line">import org.apache.kafka.clients.producer.*;</span><br><span class="line"></span><br><span class="line">import java.util.Properties;</span><br><span class="line">import 
java.util.concurrent.Future;</span><br><span class="line"></span><br><span class="line">public class ProducerWithCallback &#123;</span><br><span class="line"> public static void main(String[] args) &#123;</span><br><span class="line"> Properties props = new Properties();</span><br><span class="line"> props.put(<span class="string">&quot;bootstrap.servers&quot;</span>, <span class="string">&quot;192.168.10.101:9092&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;acks&quot;</span>, <span class="string">&quot;all&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;retries&quot;</span>, 1);</span><br><span class="line"> props.put(<span class="string">&quot;batch.size&quot;</span>, 16384);</span><br><span class="line"> props.put(<span class="string">&quot;linger.ms&quot;</span>, 1);</span><br><span class="line"> props.put(<span class="string">&quot;buffer.memory&quot;</span>, 33554432);</span><br><span class="line"> props.put(<span class="string">&quot;key.serializer&quot;</span>, <span class="string">&quot;org.apache.kafka.common.serialization.StringSerializer&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;value.serializer&quot;</span>, <span class="string">&quot;org.apache.kafka.common.serialization.StringSerializer&quot;</span>);</span><br><span class="line"></span><br><span class="line"> Producer&lt;String, String&gt; producer = new KafkaProducer&lt;String, String&gt;(props);</span><br><span class="line"> <span class="keyword">for</span> (int i = 0; i &lt; 10; i++) &#123;</span><br><span class="line"> producer.send(new ProducerRecord&lt;String, String&gt;(<span class="string">&quot;first&quot;</span>, <span class="string">&quot;call&quot;</span> + i), new <span class="function"><span class="title">Callback</span></span>() &#123;</span><br><span class="line"> public void onCompletion(RecordMetadata recordMetadata, Exception e) &#123;</span><br><span class="line"> System.out.println(recordMetadata.toString() + <span class="string">&quot;发送成功,在第&quot;</span> + recordMetadata.partition() + <span class="string">&quot;偏移量&quot;</span> + recordMetadata.offset());</span><br><span class="line"> &#125;</span><br><span class="line"> &#125;);</span><br><span class="line"> &#125;</span><br><span class="line"> &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h3 id="3-1-3-同步发送API"><a href="#3-1-3-同步发送API" class="headerlink" title="3.1.3 同步发送API"></a>3.1.3 同步发送API</h3><p>kafka消息顺序发送:可以通过发往某一个指定分区+同步发送机制</p><figure class="highlight java"><table><tr><td class="code"><pre><span class="line"><span class="keyword">package</span> com.chaoyue.dw.kafka.producer;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.clients.producer.*;</span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.common.serialization.StringSerializer;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> java.util.Properties;</span><br><span class="line"><span class="keyword">import</span> java.util.concurrent.ExecutionException;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="class"><span class="keyword">class</span> <span class="title">MyProducer</span> </span>&#123;</span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">static</span> 
<span class="keyword">void</span> <span class="title">main</span><span class="params">(String[] args)</span> </span>&#123;</span><br><span class="line"> Properties props = <span class="keyword">new</span> Properties();</span><br><span class="line"> <span class="comment">//kafka集群,broker-list</span></span><br><span class="line"> props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, <span class="string">&quot;192.168.232.202:9092&quot;</span>);</span><br><span class="line"> props.put(ProducerConfig.ACKS_CONFIG, <span class="string">&quot;all&quot;</span>);</span><br><span class="line"> <span class="comment">//重试次数</span></span><br><span class="line"> props.put(ProducerConfig.RETRIES_CONFIG, <span class="number">3</span>);</span><br><span class="line"> <span class="comment">//批次大小</span></span><br><span class="line"> props.put(ProducerConfig.BATCH_SIZE_CONFIG, <span class="number">16384</span>);</span><br><span class="line"> <span class="comment">//等待时间</span></span><br><span class="line"> props.put(ProducerConfig.LINGER_MS_CONFIG, <span class="number">1</span>);</span><br><span class="line"> <span class="comment">//RecordAccumulator缓冲区大小</span></span><br><span class="line"> props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, <span class="number">33554432</span>);</span><br><span class="line"> props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());</span><br><span class="line"> props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());</span><br><span class="line"> KafkaProducer&lt;String, String&gt; kafkaProducer = <span class="keyword">new</span> KafkaProducer&lt;&gt;(props);</span><br><span class="line"></span><br><span class="line"> <span class="keyword">for</span> (<span class="keyword">int</span> i = <span class="number">10</span>; i &lt; <span class="number">20</span>; i++) &#123;</span><br><span class="line"> <span class="keyword">try</span> &#123;</span><br><span class="line"> kafkaProducer.send(<span class="keyword">new</span> ProducerRecord&lt;&gt;(<span class="string">&quot;three&quot;</span>,<span class="string">&quot;chaoyue--new-&quot;</span> + i)).get();</span><br><span class="line"> &#125; <span class="keyword">catch</span> (InterruptedException e) &#123;</span><br><span class="line"> e.printStackTrace();</span><br><span class="line"> &#125; <span class="keyword">catch</span> (ExecutionException e) &#123;</span><br><span class="line"> e.printStackTrace();</span><br><span class="line"> &#125;</span><br><span class="line"> &#125;</span><br><span class="line"> kafkaProducer.close();</span><br><span class="line"> &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h2 id="3-2-Consumer-API"><a href="#3-2-Consumer-API" class="headerlink" title="3.2 Consumer API"></a>3.2 Consumer API</h2><h3 id="3-2-1-自动提交offset"><a href="#3-2-1-自动提交offset" class="headerlink" title="3.2.1 自动提交offset"></a>3.2.1 自动提交offset</h3><figure class="highlight java"><table><tr><td class="code"><pre><span class="line"><span class="keyword">package</span> com.lytdevconsumer;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.clients.consumer.Consumer;</span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.clients.consumer.ConsumerRecord;</span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.clients.consumer.ConsumerRecords;</span><br><span class="line"><span class="keyword">import</span> 
org.apache.kafka.clients.consumer.KafkaConsumer;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> java.util.Collections;</span><br><span class="line"><span class="keyword">import</span> java.util.Properties;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="class"><span class="keyword">class</span> <span class="title">AutoCommitConsumer</span> </span>&#123;</span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">static</span> <span class="keyword">void</span> <span class="title">main</span><span class="params">(String[] args)</span> </span>&#123;</span><br><span class="line"> Properties props = <span class="keyword">new</span> Properties();</span><br><span class="line"> props.put(<span class="string">&quot;bootstrap.servers&quot;</span>, <span class="string">&quot;192.168.10.101:9092&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;group.id&quot;</span>, <span class="string">&quot;test&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;enable.auto.commit&quot;</span>, <span class="string">&quot;true&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;auto.commit.interval.ms&quot;</span>, <span class="string">&quot;1000&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;key.deserializer&quot;</span>, <span class="string">&quot;org.apache.kafka.common.serialization.StringDeserializer&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;value.deserializer&quot;</span>, <span class="string">&quot;org.apache.kafka.common.serialization.StringDeserializer&quot;</span>);</span><br><span class="line"></span><br><span class="line"> Consumer&lt;String, String&gt; consumer = <span class="keyword">new</span> KafkaConsumer&lt;String, String&gt;(props);</span><br><span class="line"> consumer.subscribe(Collections.singleton(<span class="string">&quot;first&quot;</span>));</span><br><span class="line"> <span class="keyword">while</span> (<span class="keyword">true</span>)&#123;</span><br><span class="line"> ConsumerRecords&lt;String, String&gt; records = consumer.poll(<span class="number">1000</span>);</span><br><span class="line"> <span class="keyword">for</span> (ConsumerRecord&lt;String, String&gt; record : records)&#123;</span><br><span class="line"> System.out.println(record.value());</span><br><span class="line"> &#125;</span><br><span class="line"> &#125;</span><br><span class="line"></span><br><span class="line"> &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h3 id="3-2-2-手动同步提交offset"><a href="#3-2-2-手动同步提交offset" class="headerlink" title="3.2.2 手动同步提交offset"></a>3.2.2 手动同步提交offset</h3><figure class="highlight java"><table><tr><td class="code"><pre><span class="line"><span class="keyword">package</span> com.lytdevconsumer;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.clients.consumer.Consumer;</span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.clients.consumer.ConsumerRecord;</span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.clients.consumer.ConsumerRecords;</span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.clients.consumer.KafkaConsumer;</span><br><span class="line"></span><br><span 
class="line"><span class="keyword">import</span> java.util.Collections;</span><br><span class="line"><span class="keyword">import</span> java.util.Properties;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="class"><span class="keyword">class</span> <span class="title">SyncCommitConsumer</span> </span>&#123;</span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">static</span> <span class="keyword">void</span> <span class="title">main</span><span class="params">(String[] args)</span> </span>&#123;</span><br><span class="line"> Properties props = <span class="keyword">new</span> Properties();</span><br><span class="line"> props.put(<span class="string">&quot;bootstrap.servers&quot;</span>, <span class="string">&quot;192.168.10.101:9092&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;group.id&quot;</span>, <span class="string">&quot;test&quot;</span>);</span><br><span class="line"></span><br><span class="line"> props.put(<span class="string">&quot;key.deserializer&quot;</span>, <span class="string">&quot;org.apache.kafka.common.serialization.StringDeserializer&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;value.deserializer&quot;</span>, <span class="string">&quot;org.apache.kafka.common.serialization.StringDeserializer&quot;</span>);</span><br><span class="line"></span><br><span class="line"> Consumer&lt;String, String&gt; consumer = <span class="keyword">new</span> KafkaConsumer&lt;String, String&gt;(props);</span><br><span class="line"> consumer.subscribe(Collections.singleton(<span class="string">&quot;first&quot;</span>));</span><br><span class="line"> <span class="keyword">while</span> (<span class="keyword">true</span>)&#123;</span><br><span class="line"> ConsumerRecords&lt;String, String&gt; records = consumer.poll(<span class="number">1000</span>);</span><br><span class="line"> <span class="keyword">for</span> (ConsumerRecord&lt;String, String&gt; record : records)&#123;</span><br><span class="line"> System.out.println(record.value());</span><br><span class="line"> &#125;</span><br><span class="line"> consumer.commitSync();</span><br><span class="line"> &#125;</span><br><span class="line"></span><br><span class="line"> &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h3 id="3-2-3-手动异步提交offset"><a href="#3-2-3-手动异步提交offset" class="headerlink" title="3.2.3 手动异步提交offset"></a>3.2.3 手动异步提交offset</h3><figure class="highlight java"><table><tr><td class="code"><pre><span class="line"><span class="keyword">package</span> com.lytdevconsumer;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.clients.consumer.*;</span><br><span class="line"><span class="keyword">import</span> org.apache.kafka.common.TopicPartition;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> java.util.Collections;</span><br><span class="line"><span class="keyword">import</span> java.util.Map;</span><br><span class="line"><span class="keyword">import</span> java.util.Properties;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="class"><span class="keyword">class</span> <span class="title">AyncCommitConsumer</span> </span>&#123;</span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span 
class="keyword">static</span> <span class="keyword">void</span> <span class="title">main</span><span class="params">(String[] args)</span> </span>&#123;</span><br><span class="line"> Properties props = <span class="keyword">new</span> Properties();</span><br><span class="line"> props.put(<span class="string">&quot;bootstrap.servers&quot;</span>, <span class="string">&quot;192.168.10.101:9092&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;group.id&quot;</span>, <span class="string">&quot;test&quot;</span>);</span><br><span class="line"></span><br><span class="line"> props.put(<span class="string">&quot;key.deserializer&quot;</span>, <span class="string">&quot;org.apache.kafka.common.serialization.StringDeserializer&quot;</span>);</span><br><span class="line"> props.put(<span class="string">&quot;value.deserializer&quot;</span>, <span class="string">&quot;org.apache.kafka.common.serialization.StringDeserializer&quot;</span>);</span><br><span class="line"></span><br><span class="line"> Consumer&lt;String, String&gt; consumer = <span class="keyword">new</span> KafkaConsumer&lt;String, String&gt;(props);</span><br><span class="line"> consumer.subscribe(Collections.singleton(<span class="string">&quot;first&quot;</span>));</span><br><span class="line"> <span class="keyword">while</span> (<span class="keyword">true</span>)&#123;</span><br><span class="line"> ConsumerRecords&lt;String, String&gt; records = consumer.poll(<span class="number">1000</span>);</span><br><span class="line"> <span class="keyword">for</span> (ConsumerRecord&lt;String, String&gt; record : records)&#123;</span><br><span class="line"> System.out.println(record.value());</span><br><span class="line"> &#125;</span><br><span class="line"> consumer.commitAsync(<span class="keyword">new</span> OffsetCommitCallback() &#123;</span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">void</span> <span class="title">onComplete</span><span class="params">(Map&lt;TopicPartition, OffsetAndMetadata&gt; map, Exception e)</span> </span>&#123;</span><br><span class="line"> System.out.println(<span class="string">&quot;提交完成&quot;</span>);</span><br><span class="line"> &#125;</span><br><span class="line"> &#125;);</span><br><span class="line"> &#125;</span><br><span class="line"> &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><h1 id="四、Flume对接kafka"><a href="#四、Flume对接kafka" class="headerlink" title="四、Flume对接kafka"></a>四、Flume对接kafka</h1><h2 id="4-1-配置flume-flume-kafka-conf"><a href="#4-1-配置flume-flume-kafka-conf" class="headerlink" title="4.1 配置flume(flume-kafka.conf)"></a>4.1 配置flume(flume-kafka.conf)</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># define</span></span><br><span class="line">a1.sources = r1</span><br><span class="line">a1.sinks = k1</span><br><span class="line">a1.channels = c1</span><br><span class="line"></span><br><span class="line"><span class="comment"># source</span></span><br><span class="line">a1.sources.r1.type = <span class="built_in">exec</span></span><br><span class="line">a1.sources.r1.command = tail -F -c +0 /opt/module/flume/data/flume.log</span><br><span class="line">a1.sources.r1.shell = /bin/bash -c</span><br><span class="line"></span><br><span class="line"><span class="comment"># sink</span></span><br><span class="line">a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink</span><br><span 
class="line">a1.sinks.k1.kafka.bootstrap.servers = hadoop101:9092,hadoop102:9092,hadoop103:9092</span><br><span class="line">a1.sinks.k1.kafka.topic = first</span><br><span class="line">a1.sinks.k1.kafka.flumeBatchSize = 20</span><br><span class="line">a1.sinks.k1.kafka.producer.acks = 1</span><br><span class="line">a1.sinks.k1.kafka.producer.linger.ms = 1</span><br><span class="line"></span><br><span class="line"><span class="comment"># channel</span></span><br><span class="line">a1.channels.c1.type = memory</span><br><span class="line">a1.channels.c1.capacity = 1000</span><br><span class="line">a1.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># bind</span></span><br><span class="line">a1.sources.r1.channels = c1</span><br><span class="line">a1.sinks.k1.channel = c1</span><br></pre></td></tr></table></figure><h2 id="4-2-启动flume"><a href="#4-2-启动flume" class="headerlink" title="4.2 启动flume"></a>4.2 启动flume</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent -c conf/ -n a1 -f <span class="built_in">jobs</span>/flume-kafka.conf</span><br></pre></td></tr></table></figure><h2 id="4-3-启动kafka消费者"><a href="#4-3-启动kafka消费者" class="headerlink" title="4.3 启动kafka消费者"></a>4.3 启动kafka消费者</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/kafka-console-consumer.sh --bootstrap-server hadoop101:9092 --from-beginning --topic first</span><br></pre></td></tr></table></figure><h2 id="4-4-向flume-log里追加数据"><a href="#4-4-向flume-log里追加数据" class="headerlink" title="4.4 向flume.log里追加数据"></a>4.4 向flume.log里追加数据</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/flume/data</span><br><span class="line">cat &gt;&gt; flume.log</span><br><span class="line">ddddddddd</span><br></pre></td></tr></table></figure><h1 id="五、Kafka监控"><a href="#五、Kafka监控" class="headerlink" title="五、Kafka监控"></a>五、Kafka监控</h1><h2 id="5-1-修改kafka配置文件"><a href="#5-1-修改kafka配置文件" class="headerlink" title="5.1 修改kafka配置文件"></a>5.1 修改kafka配置文件</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi kafka-server-start.sh</span><br><span class="line"><span class="comment">#修改 KAFKA_HEAP_OPTS</span></span><br><span class="line"><span class="built_in">export</span> KAFKA_HEAP_OPTS=<span class="string">&quot;-server -Xms1g -Xmx1g -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70&quot;</span></span><br><span class="line"><span class="built_in">export</span> JMX_PORT=<span class="string">&quot;9999&quot;</span></span><br></pre></td></tr></table></figure><h2 id="5-2-分发配置文件"><a href="#5-2-分发配置文件" class="headerlink" title="5.2 分发配置文件"></a>5.2 分发配置文件</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">xsync kafka-server-start.sh</span><br></pre></td></tr></table></figure><h2 id="5-3-Eagle安装"><a href="#5-3-Eagle安装" class="headerlink" title="5.3 Eagle安装"></a>5.3 Eagle安装</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">tar -xzvf kafka-eagle-bin-1.3.7.tar.gz -C /opt/module</span><br><span class="line"><span class="built_in">cd</span> /opt/module</span><br><span class="line">mv kafka-eagle-bin-1.3.7 eagle</span><br></pre></td></tr></table></figure><h2 id="5-4-配置环境变量"><a href="#5-4-配置环境变量" class="headerlink" title="5.4 
配置环境变量"></a>5.4 配置环境变量</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /etc/profile</span><br><span class="line"><span class="comment"># 添加如下内容</span></span><br><span class="line"><span class="comment">#KE_HOME</span></span><br><span class="line"><span class="built_in">export</span> KE_HOME=/opt/module/eagle</span><br><span class="line"><span class="built_in">export</span> PATH=<span class="variable">$PATH</span>:<span class="variable">$KE_HOME</span>/bin</span><br><span class="line"></span><br><span class="line"><span class="built_in">source</span> /etc/profile</span><br></pre></td></tr></table></figure><h2 id="5-5-分发配置文件"><a href="#5-5-分发配置文件" class="headerlink" title="5.5 分发配置文件"></a>5.5 分发配置文件</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">xsync /etc/profile</span><br><span class="line"><span class="built_in">source</span> /etc/profile</span><br></pre></td></tr></table></figure><h2 id="5-6-修改ke-sh的执行权限"><a href="#5-6-修改ke-sh的执行权限" class="headerlink" title="5.6 修改ke.sh的执行权限"></a>5.6 修改ke.sh的执行权限</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/eagle/bin</span><br><span class="line">chmod a+x ke.sh</span><br></pre></td></tr></table></figure><h2 id="5-7-修改system-config-properties"><a href="#5-7-修改system-config-properties" class="headerlink" title="5.7 修改system-config.properties"></a>5.7 修改system-config.properties</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/eagle/conf</span><br><span class="line">vi system-config.properties</span><br><span class="line"><span class="comment"># 修改如下内容</span></span><br><span class="line"><span class="comment"># multi zookeeper&amp;kafka cluster list</span></span><br><span class="line"><span class="comment">######################################</span></span><br><span class="line">kafka.eagle.zk.cluster.alias=cluster1</span><br><span class="line">cluster1.zk.list=hadoop101:2181,hadoop102:2181,hadoop103:2181</span><br><span class="line"></span><br><span class="line"><span class="comment"># kafka offset storage</span></span><br><span class="line"><span class="comment">######################################</span></span><br><span class="line">cluster1.kafka.eagle.offset.storage=kafka</span><br><span class="line"></span><br><span class="line"><span class="comment"># enable kafka metrics</span></span><br><span class="line"><span class="comment">######################################</span></span><br><span class="line">kafka.eagle.metrics.charts=<span class="literal">true</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># kafka jdbc driver address</span></span><br><span class="line"><span class="comment">######################################</span></span><br><span class="line">kafka.eagle.driver=com.mysql.jdbc.Driver</span><br><span class="line">kafka.eagle.url=jdbc:mysql://hadoop101:3306/ke?useUnicode=<span class="literal">true</span>&amp;characterEncoding=UTF-8&amp;zeroDateTimeBehavior=convertToNull</span><br><span class="line">kafka.eagle.username=root</span><br><span class="line">kafka.eagle.password=123456</span><br></pre></td></tr></table></figure><h2 id="5-8-重启kafka"><a href="#5-8-重启kafka" class="headerlink" title="5.8 重启kafka"></a>5.8 重启kafka</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span 
class="comment"># kafka.sh 为自定义集群启动/停止脚本</span></span><br><span class="line">kafka.sh stop</span><br><span class="line">kafka.sh start</span><br></pre></td></tr></table></figure><h2 id="5-9-启动Eagle"><a href="#5-9-启动Eagle" class="headerlink" title="5.9 启动Eagle"></a>5.9 启动Eagle</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/eagle/bin</span><br><span class="line">./ke.sh start</span><br></pre></td></tr></table></figure><h2 id="5-10-访问Eagle的页面"><a href="#5-10-访问Eagle的页面" class="headerlink" title="5.10 访问Eagle的页面"></a>5.10 访问Eagle的页面</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">http://192.168.20.101:8048/ke</span><br><span class="line">admin</span><br><span class="line">123456</span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> 中间件 </category>
</categories>
<tags>
<tag> 中间件 </tag>
<tag> Kafka </tag>
</tags>
</entry>
<entry>
<title>一文彻底搞懂Flume</title>
<link href="/2021/09/12/a99eab3d4a13.html"/>
<url>/2021/09/12/a99eab3d4a13.html</url>
<content type="html"><![CDATA[<p>本文是基于CentOS 7.9系统环境,进行Flume的学习和使用</p><h1 id="一、Flume的简介"><a href="#一、Flume的简介" class="headerlink" title="一、Flume的简介"></a>一、Flume的简介</h1><h2 id="1-1-Flume基本概念"><a href="#1-1-Flume基本概念" class="headerlink" title="1.1 Flume基本概念"></a>1.1 Flume基本概念</h2><p><strong>(1) 什么是Flume</strong></p><p>Flume是Cloudera提供的一个高可用的,高可靠的,分布式的海量日志采集、聚合和传输的系统。<br><strong>(2) Flume的目的</strong></p><p>Flume最主要的作业就是,实时读取服务器本地磁盘的数据,将数据写入HDFS</p><h2 id="1-2-Flume基本组件"><a href="#1-2-Flume基本组件" class="headerlink" title="1.2 Flume基本组件"></a>1.2 Flume基本组件</h2><p><img src="https://oscimg.oschina.net/oscnet/20200617165336828.png"><br><strong>(1) Flume工作流程</strong></p><p>Source采集数据并包装成Event,并将Event缓存在Channel中,Sink不断地从Channel获取Event,并解决成数据,最终将数据写入存储或索引系统<br><strong>(2) Agent</strong></p><p>Agent是一个JVM进程,它以事件的形式将数据从源头送至目的。<br>Agent主要有3个部分组成,Source、Channel、Sink<br><strong>(3) Source</strong></p><p>Source是负责接收数据到Flume Agent的组件,采集数据并包装成Event。Source组件可以处理各种类型、各种格式的日志数据,包括avro、thrift、exec、jms、spooling directory、netcat、sequence generator、syslog、http、legacy<br><strong>(4) Sink</strong></p><p>Sink不断地轮询Channel中的事件且批量地移除它们,并将这些事件批量写入到存储或索引系统、或者被发送到另一个Flume Agent。<br>Sink组件目的地包括hdfs、logger、avro、thrift、ipc、file、HBase、solr、自定义<br><strong>(5) Channel</strong></p><p>Channel是位于Source和Sink之间的缓冲区。因此,Channel允许Source和Sink运作在不同的速率上。Channel是线程安全的,可以同时处理几个Source的写入操作和几个Sink的读取操作</p><p>Flume自带两种Channel:Memory Channel和File Channel</p><p>Memory Channel是内存中的队列。Memory Channel在不需要关心数据丢失的情景下适用。如果需要关心数据丢失,那么Memory Channel就不应该使用,因为程序死亡、机器宕机或者重启都会导致数据丢失</p><p>File Channel将所有事件写到磁盘。因此在程序关闭或机器宕机的情况下不会丢失数据</p><p><strong>(6) Event</strong></p><p>传输单元,Flume数据传输的基本单元,以Event的形式将数据从源头送至目的地。Event由Header和Body两部分组成,Header用来存放该event的一些属性,为K-V结构,Body用来存放该条数据,形式为字节数组</p><p><img src="https://oscimg.oschina.net/oscnet/20200618084704461.png"></p><h1 id="二、Flume的安装和入门案例"><a href="#二、Flume的安装和入门案例" class="headerlink" title="二、Flume的安装和入门案例"></a>二、Flume的安装和入门案例</h1><h2 id="2-1-Flume安装"><a href="#2-1-Flume安装" class="headerlink" title="2.1 Flume安装"></a>2.1 Flume安装</h2><p><strong>(1) Flume压缩包解压</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">tar -xzvf apache-flume-1.7.0-bin.tar.gz -C /opt/module/</span><br></pre></td></tr></table></figure><p>(2) 修改Flume名称</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">cd /opt/module/</span><br><span class="line">mv apache-flume-1.7.0-bin flume</span><br></pre></td></tr></table></figure><p>(3) 修改Flume配置文件</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">cd /opt/module/flume/conf</span><br><span class="line">mv flume-env.sh.template flume-env.sh</span><br><span class="line">vi flume-env.sh</span><br><span class="line"># 修改内容如下</span><br><span class="line">export JAVA_HOME=/opt/module/jdk1.8.0_271</span><br></pre></td></tr></table></figure><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">cd /opt/module/flume/conf</span><br><span class="line">vi log4j.properties</span><br><span class="line"># 修改内容如下</span><br><span class="line">flume.log.dir=/opt/module/flume/logs</span><br></pre></td></tr></table></figure><h2 id="2-2-Flume案例-监听数据端口"><a href="#2-2-Flume案例-监听数据端口" class="headerlink" title="2.2 Flume案例-监听数据端口"></a>2.2 Flume案例-监听数据端口</h2><p><img src="https://oscimg.oschina.net/oscnet/20200618113347933.png"><br><strong>(1) 安装nc</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">yum install -y 
nc</span><br></pre></td></tr></table></figure><p><strong>(2) 安装net-tools</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">yum install -y net-tools</span><br></pre></td></tr></table></figure><p><strong>(3) 检测端口是否被占用</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">netstat -nltp | grep 44444</span><br></pre></td></tr></table></figure><p><strong>(4) 启动flume-agent</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">cd /opt/module/flume</span><br><span class="line">bin/flume-ng agent --name a1 --conf conf/ --conf-file job/flume-netcat-logger.conf -Dflume.root.logger=INFO,console</span><br></pre></td></tr></table></figure><p><strong>(5) 开启另一个终端,发送消息</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">nc localhost 44444</span><br><span class="line">aaa</span><br></pre></td></tr></table></figure><h2 id="2-3-Flume案例-实时监控单个追加文件"><a href="#2-3-Flume案例-实时监控单个追加文件" class="headerlink" title="2.3 Flume案例-实时监控单个追加文件"></a>2.3 Flume案例-实时监控单个追加文件</h2><p><img src="https://img-blog.csdnimg.cn/20200618164021860.png"><br><strong>(1) 拷贝jar包至/opt/module/flume/lib</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">commons-configuration-1.6.jar</span><br><span class="line">hadoop-auth-2.7.2.jar</span><br><span class="line">hadoop-common-2.7.2.jar</span><br><span class="line">hadoop-hdfs-2.7.2.jar</span><br><span class="line">commons-io-2.4.jar</span><br><span class="line">htrace-core-3.1.0-incubating.jar</span><br></pre></td></tr></table></figure><p><strong>(2) 创建flume-file-hdfs.conf文件</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">vi flume-file-hdfs.conf</span><br><span class="line"># Name the components on this agent</span><br><span class="line">a2.sources = r2</span><br><span class="line">a2.sinks = k2</span><br><span class="line">a2.channels = c2</span><br><span class="line"></span><br><span class="line"># Describe/configure the source</span><br><span class="line">a2.sources.r2.type = exec</span><br><span class="line">a2.sources.r2.command = tail -F /opt/module/hive/logs/hive.log</span><br><span class="line">a2.sources.r2.shell = /bin/bash -c</span><br><span class="line"></span><br><span class="line"># Describe the sink</span><br><span class="line">a2.sinks.k2.type = hdfs</span><br><span class="line">a2.sinks.k2.hdfs.path = hdfs://hadoop102:9000/flume/%Y%m%d/%H</span><br><span class="line">#上传文件的前缀</span><br><span class="line">a2.sinks.k2.hdfs.filePrefix = logs-</span><br><span class="line">#是否按照时间滚动文件夹</span><br><span class="line">a2.sinks.k2.hdfs.round = true</span><br><span class="line">#多少时间单位创建一个新的文件夹</span><br><span class="line">a2.sinks.k2.hdfs.roundValue = 1</span><br><span class="line">#重新定义时间单位</span><br><span class="line">a2.sinks.k2.hdfs.roundUnit = hour</span><br><span class="line">#是否使用本地时间戳</span><br><span class="line">a2.sinks.k2.hdfs.useLocalTimeStamp = true</span><br><span class="line">#积攒多少个Event才flush到HDFS一次</span><br><span class="line">a2.sinks.k2.hdfs.batchSize = 1000</span><br><span class="line">#设置文件类型,可支持压缩</span><br><span class="line">a2.sinks.k2.hdfs.fileType = DataStream</span><br><span class="line">#多久生成一个新的文件</span><br><span class="line">a2.sinks.k2.hdfs.rollInterval = 60</span><br><span class="line">#设置每个文件的滚动大小</span><br><span class="line">a2.sinks.k2.hdfs.rollSize = 
134217700</span><br><span class="line">#文件的滚动与Event数量无关</span><br><span class="line">a2.sinks.k2.hdfs.rollCount = 0</span><br><span class="line"></span><br><span class="line"># Use a channel which buffers events in memory</span><br><span class="line">a2.channels.c2.type = memory</span><br><span class="line">a2.channels.c2.capacity = 1000</span><br><span class="line">a2.channels.c2.transactionCapacity = 100</span><br><span class="line"> </span><br><span class="line"># Bind the source and sink to the channel</span><br><span class="line">a2.sources.r2.channels = c2</span><br><span class="line">a2.sinks.k2.channel = c2</span><br></pre></td></tr></table></figure><p><strong>(3) 启动flume-agent</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent -n a2 -c conf/ -f job/flume-file-hdfs.conf</span><br></pre></td></tr></table></figure><p><strong>(4) 开启另一个终端,执行hive命令</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hive</span><br></pre></td></tr></table></figure><p>2.4 Flume案例-实时监控目录下多个新文件</p><p><img src="https://oscimg.oschina.net/oscnet/20200618164021860.png"><br><strong>(1) 创建flume-dir-hdfs.conf文件</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">vim flume-dir-hdfs.conf</span><br><span class="line"># 添加如下内容</span><br><span class="line">a3.sources = r3</span><br><span class="line">a3.sinks = k3</span><br><span class="line">a3.channels = c3</span><br><span class="line"></span><br><span class="line"># Describe/configure the source</span><br><span class="line">a3.sources.r3.type = spooldir</span><br><span class="line">a3.sources.r3.spoolDir = /opt/module/flume/upload</span><br><span class="line">a3.sources.r3.fileSuffix = .COMPLETED</span><br><span class="line">a3.sources.r3.fileHeader = true</span><br><span class="line">#忽略所有以.tmp结尾的文件,不上传</span><br><span class="line">a3.sources.r3.ignorePattern = ([^ ]*\.tmp)</span><br><span class="line"></span><br><span class="line"># Describe the sink</span><br><span class="line">a3.sinks.k3.type = hdfs</span><br><span class="line">a3.sinks.k3.hdfs.path = hdfs://hadoop102:9000/flume/upload/%Y%m%d/%H</span><br><span class="line">#上传文件的前缀</span><br><span class="line">a3.sinks.k3.hdfs.filePrefix = upload-</span><br><span class="line">#是否按照时间滚动文件夹</span><br><span class="line">a3.sinks.k3.hdfs.round = true</span><br><span class="line">#多少时间单位创建一个新的文件夹</span><br><span class="line">a3.sinks.k3.hdfs.roundValue = 1</span><br><span class="line">#重新定义时间单位</span><br><span class="line">a3.sinks.k3.hdfs.roundUnit = hour</span><br><span class="line">#是否使用本地时间戳</span><br><span class="line">a3.sinks.k3.hdfs.useLocalTimeStamp = true</span><br><span class="line">#积攒多少个Event才flush到HDFS一次</span><br><span class="line">a3.sinks.k3.hdfs.batchSize = 100</span><br><span class="line">#设置文件类型,可支持压缩</span><br><span class="line">a3.sinks.k3.hdfs.fileType = DataStream</span><br><span class="line">#多久生成一个新的文件</span><br><span class="line">a3.sinks.k3.hdfs.rollInterval = 60</span><br><span class="line">#设置每个文件的滚动大小大概是128M</span><br><span class="line">a3.sinks.k3.hdfs.rollSize = 134217700</span><br><span class="line">#文件的滚动与Event数量无关</span><br><span class="line">a3.sinks.k3.hdfs.rollCount = 0</span><br><span class="line"></span><br><span class="line"># Use a channel which buffers events in memory</span><br><span class="line">a3.channels.c3.type = memory</span><br><span class="line">a3.channels.c3.capacity = 1000</span><br><span 
class="line">a3.channels.c3.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"># Bind the source and sink to the channel</span><br><span class="line">a3.sources.r3.channels = c3</span><br><span class="line">a3.sinks.k3.channel = c3</span><br></pre></td></tr></table></figure><p><strong>(2) 启动flume-agent</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent -n a3 -c conf/ -f job/flume-dir-hdfs.conf</span><br></pre></td></tr></table></figure><p><strong>(3) 开启另一个终端</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">cd /opt/module/flume/</span><br><span class="line">mkdir upload</span><br><span class="line">cp NOTICE upload/</span><br></pre></td></tr></table></figure><h2 id="2-5-Flume案例-实时监控目录下的多个追加文件"><a href="#2-5-Flume案例-实时监控目录下的多个追加文件" class="headerlink" title="2.5 Flume案例-实时监控目录下的多个追加文件"></a>2.5 Flume案例-实时监控目录下的多个追加文件</h2><p>Exec source适用于监控一个实时追加的文件,不能实现断电续传;Spooldir Source适合用于同步新文件,但不适合对实时追加日志的文件进行监听并同步;而Taildir Source适合用于监听多个实时追加的文件,并且能够实现断点续传。</p><p><img src="https://oscimg.oschina.net/oscnet/20200618164251157.png"><br><strong>(1) 创建flume-dir-hdfs.conf文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi flume-taildir-hdfs.conf</span><br><span class="line"><span class="comment"># 添加内容</span></span><br><span class="line">a3.sources = r3</span><br><span class="line">a3.sinks = k3</span><br><span class="line">a3.channels = c3</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a3.sources.r3.type = TAILDIR</span><br><span class="line">a3.sources.r3.positionFile = /opt/module/flume/tail_dir.json</span><br><span class="line">a3.sources.r3.filegroups = f1 f2</span><br><span class="line">a3.sources.r3.filegroups.f1 = /opt/module/flume/files/.*file.*</span><br><span class="line">a3.sources.r3.filegroups.f2 = /opt/module/flume/files/.*<span class="built_in">log</span>.*</span><br><span class="line"><span class="comment"># 不能写成如下的形式,后面的会覆盖前面的</span></span><br><span class="line"><span class="comment"># a3.sources.r3.filegroups.f1 = /opt/module/flume/files/file1.txt</span></span><br><span class="line"><span class="comment"># a3.sources.r3.filegroups.f1 = /opt/module/flume/files/file2.txt</span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a3.sinks.k3.type = hdfs</span><br><span class="line">a3.sinks.k3.hdfs.path = hdfs://hadoop102:9000/flume/upload2/%Y%m%d/%H</span><br><span class="line"><span class="comment">#上传文件的前缀</span></span><br><span class="line">a3.sinks.k3.hdfs.filePrefix = upload-</span><br><span class="line"><span class="comment">#是否按照时间滚动文件夹</span></span><br><span class="line">a3.sinks.k3.hdfs.round = <span class="literal">true</span></span><br><span class="line"><span class="comment">#多少时间单位创建一个新的文件夹</span></span><br><span class="line">a3.sinks.k3.hdfs.roundValue = 1</span><br><span class="line"><span class="comment">#重新定义时间单位</span></span><br><span class="line">a3.sinks.k3.hdfs.roundUnit = hour</span><br><span class="line"><span class="comment">#是否使用本地时间戳</span></span><br><span class="line">a3.sinks.k3.hdfs.useLocalTimeStamp = <span class="literal">true</span></span><br><span class="line"><span class="comment">#积攒多少个Event才flush到HDFS一次</span></span><br><span 
class="line">a3.sinks.k3.hdfs.batchSize = 100</span><br><span class="line"><span class="comment">#设置文件类型,可支持压缩</span></span><br><span class="line">a3.sinks.k3.hdfs.fileType = DataStream</span><br><span class="line"><span class="comment">#多久生成一个新的文件</span></span><br><span class="line">a3.sinks.k3.hdfs.rollInterval = 60</span><br><span class="line"><span class="comment">#设置每个文件的滚动大小大概是128M</span></span><br><span class="line">a3.sinks.k3.hdfs.rollSize = 134217700</span><br><span class="line"><span class="comment">#文件的滚动与Event数量无关</span></span><br><span class="line">a3.sinks.k3.hdfs.rollCount = 0</span><br><span class="line"></span><br><span class="line"><span class="comment"># Use a channel which buffers events in memory</span></span><br><span class="line">a3.channels.c3.type = memory</span><br><span class="line">a3.channels.c3.capacity = 1000</span><br><span class="line">a3.channels.c3.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a3.sources.r3.channels = c3</span><br><span class="line">a3.sinks.k3.channel = c3</span><br></pre></td></tr></table></figure><p><strong>(2) 创建目录和文件</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">cd /opt/module/flume</span><br><span class="line">mkdir files</span><br><span class="line">cp CHANGELOG files/CHANGELOG.log</span><br><span class="line">cp LICENSE files/LICENSE.log</span><br></pre></td></tr></table></figure><p><strong>(3) 启动flume-agent</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent -n a3 -c conf/ -f job/flume-taildir-hdfs.conf</span><br></pre></td></tr></table></figure><p>(4) 开启另一个终端</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">cd /opt/module/flume/files</span><br><span class="line">vi CHANGELOG.log</span><br><span class="line"># 添加如下内容</span><br><span class="line">xxxxx</span><br><span class="line">sssss</span><br><span class="line">wwwww</span><br></pre></td></tr></table></figure><h1 id="三、Flume的进阶"><a href="#三、Flume的进阶" class="headerlink" title="三、Flume的进阶"></a>三、Flume的进阶</h1><h2 id="3-1-Flume事务"><a href="#3-1-Flume事务" class="headerlink" title="3.1 Flume事务"></a>3.1 Flume事务</h2><p><img src="https://oscimg.oschina.net/oscnet/20200618193742769.png"><br><strong>(1) Put事务流程</strong></p><p>doPut:将批数据先写入临时缓存区putList<br> doCommit:检查channel内存队列是否足够合并<br> doRollback:channel内存队列空间不足,回滚数据</p><p><strong>(2) Take事务流程</strong></p><p>doTake:将数据取到临时缓存区takeList,并将数据发送到HDFS<br> doCommit:如果数据全部发送成功,则清除临时缓冲区takeList<br> doRollback:数据发送过程中如果出现异常,rollback将临时缓冲区takeList中的数据归还给channel内存队列</p><h2 id="3-2-Flume-Agent内部原理"><a href="#3-2-Flume-Agent内部原理" class="headerlink" title="3.2 Flume Agent内部原理"></a>3.2 Flume Agent内部原理</h2><p><img src="https://oscimg.oschina.net/oscnet/20200618193807677.png"><br><strong>(1) ChannelSelector</strong></p><p>ChannelSelector的作用就是选出Event将要被发往哪个Channel,其共有两种类型</p><p>Replicating(复制)<br> ReplicatingSelector会将同一个Event发往所有的Channel,<br> 和Multiplexing(多路复用)<br> Multiplexing会根据相应的原则,将不同的Event发往不同的Channel</p><p><strong>(2) SinkProcessor</strong></p><p>SinkProcessor共有三种类型</p><p>DefaultSinkProcessor<br> 对应单个sink,发送至单个sink<br> LoadBalancingSinkProcessor<br> LoadBalancingSinkProcessor对应的是Sink Group,LoadBalancingSinkProcessor可以实现负载均衡的功能<br> FailoverSinkProcessor<br> FailoverSinkProcessor对应的是Sink Group,<br> FailoverSinkProcessor可以错误恢复的功能</p><h1 id="四、Flume的拓扑结构"><a 
href="#四、Flume的拓扑结构" class="headerlink" title="四、Flume的拓扑结构"></a>四、Flume的拓扑结构</h1><p>4.1 简单串联</p><p><img src="https://oscimg.oschina.net/oscnet/20200618212349829.png"><br>这种模式是将多个flume顺序连接起来了,从最初的source开始到最终sink传送的目的存储系统。</p><p><strong>优点</strong><br> 多个flume并联,可以增加event缓存数量<br><strong>缺点</strong><br> 此模式不建议桥接过多的flume数量, flume数量过多不仅会影响传输速率,而且一旦传输过程中某个节点flume宕机,会影响整个传输系统。</p><h2 id="4-2-复制和多路复用"><a href="#4-2-复制和多路复用" class="headerlink" title="4.2 复制和多路复用"></a>4.2 复制和多路复用</h2><p><img src="https://oscimg.oschina.net/oscnet/20200618214258691.png"><br>Flume支持将事件流向一个或者多个目的地。这种模式可以将相同数据复制到多个channel中,或者将不同数据分发到不同的channel中,sink可以选择传送到不同的目的地。</p><h2 id="4-3-负载均衡和故障转移"><a href="#4-3-负载均衡和故障转移" class="headerlink" title="4.3 负载均衡和故障转移"></a>4.3 负载均衡和故障转移</h2><p><img src="https://oscimg.oschina.net/oscnet/20200618220201601.png"><br>Flume支持使用将多个sink逻辑上分到一个sink组,sink组配合不同的SinkProcessor可以实现负载均衡和错误恢复的功能。</p><h2 id="4-4-聚合"><a href="#4-4-聚合" class="headerlink" title="4.4 聚合"></a>4.4 聚合</h2><p><img src="https://oscimg.oschina.net/oscnet/20200618220241209.png"><br>这种模式是我们最常见的,也非常实用,日常web应用通常分布在上百个服务器,大者甚至上千个、上万个服务器。产生的日志,处理起来也非常麻烦。用flume的这种组合方式能很好的解决这一问题,每台服务器部署一个flume采集日志,传送到一个集中收集日志的flume,再由此flume上传到hdfs、hive、hbase等,进行日志分析。</p><h1 id="五、Flume的企业开发实例"><a href="#五、Flume的企业开发实例" class="headerlink" title="五、Flume的企业开发实例"></a>五、Flume的企业开发实例</h1><h2 id="5-1-复制和多路复用"><a href="#5-1-复制和多路复用" class="headerlink" title="5.1 复制和多路复用"></a>5.1 复制和多路复用</h2><p><img src="https://oscimg.oschina.net/oscnet/20200619100122518.png"><br><strong>(1) 创建flume-file-avro.conf文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi flume-file-avro.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a1.sources = r1</span><br><span class="line">a1.sinks = k1 k2</span><br><span class="line">a1.channels = c1 c2</span><br><span class="line"><span class="comment"># 将数据流复制给所有channel</span></span><br><span class="line">a1.sources.r1.selector.type = replicating</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a1.sources.r1.type = <span class="built_in">exec</span></span><br><span class="line">a1.sources.r1.command = tail -F /opt/module/hive/logs/hive.log</span><br><span class="line">a1.sources.r1.shell = /bin/bash -c</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line"><span class="comment"># sink端的avro是一个数据发送者</span></span><br><span class="line">a1.sinks.k1.type = avro</span><br><span class="line">a1.sinks.k1.hostname = hadoop1021</span><br><span class="line">a1.sinks.k1.port = 4141</span><br><span class="line"></span><br><span class="line">a1.sinks.k2.type = avro</span><br><span class="line">a1.sinks.k2.hostname = hadoop101</span><br><span class="line">a1.sinks.k2.port = 4142</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the channel</span></span><br><span class="line">a1.channels.c1.type = memory</span><br><span class="line">a1.channels.c1.capacity = 1000</span><br><span class="line">a1.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line">a1.channels.c2.type = memory</span><br><span class="line">a1.channels.c2.capacity = 1000</span><br><span class="line">a1.channels.c2.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span 
class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a1.sources.r1.channels = c1 c2</span><br><span class="line">a1.sinks.k1.channel = c1</span><br><span class="line">a1.sinks.k2.channel = c2</span><br></pre></td></tr></table></figure><p><strong>(2) 创建flume-avro-hdfs.conf文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi flume-avro-hdfs.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a2.sources = r1</span><br><span class="line">a2.sinks = k1</span><br><span class="line">a2.channels = c1</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line"><span class="comment"># source端的avro是一个数据接收服务</span></span><br><span class="line">a2.sources.r1.type = avro</span><br><span class="line">a2.sources.r1.bind = hadoop101</span><br><span class="line">a2.sources.r1.port = 4141</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a2.sinks.k1.type = hdfs</span><br><span class="line">a2.sinks.k1.hdfs.path = hdfs://hadoop102:9000/flume2/%Y%m%d/%H</span><br><span class="line"><span class="comment">#上传文件的前缀</span></span><br><span class="line">a2.sinks.k1.hdfs.filePrefix = flume2-</span><br><span class="line"><span class="comment">#是否按照时间滚动文件夹</span></span><br><span class="line">a2.sinks.k1.hdfs.round = <span class="literal">true</span></span><br><span class="line"><span class="comment">#多少时间单位创建一个新的文件夹</span></span><br><span class="line">a2.sinks.k1.hdfs.roundValue = 1</span><br><span class="line"><span class="comment">#重新定义时间单位</span></span><br><span class="line">a2.sinks.k1.hdfs.roundUnit = hour</span><br><span class="line"><span class="comment">#是否使用本地时间戳</span></span><br><span class="line">a2.sinks.k1.hdfs.useLocalTimeStamp = <span class="literal">true</span></span><br><span class="line"><span class="comment">#积攒多少个Event才flush到HDFS一次</span></span><br><span class="line">a2.sinks.k1.hdfs.batchSize = 100</span><br><span class="line"><span class="comment">#设置文件类型,可支持压缩</span></span><br><span class="line">a2.sinks.k1.hdfs.fileType = DataStream</span><br><span class="line"><span class="comment">#多久生成一个新的文件</span></span><br><span class="line">a2.sinks.k1.hdfs.rollInterval = 600</span><br><span class="line"><span class="comment">#设置每个文件的滚动大小大概是128M</span></span><br><span class="line">a2.sinks.k1.hdfs.rollSize = 134217700</span><br><span class="line"><span class="comment">#文件的滚动与Event数量无关</span></span><br><span class="line">a2.sinks.k1.hdfs.rollCount = 0</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the channel</span></span><br><span class="line">a2.channels.c1.type = memory</span><br><span class="line">a2.channels.c1.capacity = 1000</span><br><span class="line">a2.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a2.sources.r1.channels = c1</span><br><span class="line">a2.sinks.k1.channel = c1</span><br></pre></td></tr></table></figure><p><strong>(3) 创建flume-avro-dir.conf文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi flume-avro-dir.conf</span><br><span class="line"><span class="comment"># Name the components on this 
agent</span></span><br><span class="line">a3.sources = r1</span><br><span class="line">a3.sinks = k1</span><br><span class="line">a3.channels = c2</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a3.sources.r1.type = avro</span><br><span class="line">a3.sources.r1.bind = hadoop101</span><br><span class="line">a3.sources.r1.port = 4142</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a3.sinks.k1.type = file_roll</span><br><span class="line">a3.sinks.k1.sink.directory = /opt/module/flume/data/flume3</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the channel</span></span><br><span class="line">a3.channels.c2.type = memory</span><br><span class="line">a3.channels.c2.capacity = 1000</span><br><span class="line">a3.channels.c2.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a3.sources.r1.channels = c2</span><br><span class="line">a3.sinks.k1.channel = c2</span><br></pre></td></tr></table></figure><p><strong>(4) 执行配置文件</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent -n a3 -c conf/ -f job/group1/flume-avro-dir.conf</span><br><span class="line">bin/flume-ng agent -n a2 -c conf/ -f job/group1/flume-avro-hdfs.conf</span><br><span class="line">bin/flume-ng agent -n a1 -c conf/ -f job/group1/flume-file-avro.conf</span><br></pre></td></tr></table></figure><p><strong>(5) 启动Hadoop和Hive</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">sbin/start-dfs.sh</span><br><span class="line">sbin/start-yarn.sh</span><br><span class="line">bin/hive</span><br></pre></td></tr></table></figure><h2 id="5-2-故障转移"><a href="#5-2-故障转移" class="headerlink" title="5.2 故障转移"></a>5.2 故障转移</h2><p><img src="https://oscimg.oschina.net/oscnet/20200619105749441.png"><br><strong>(1) 创建a1.conf文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi a1.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a1.sources = r1</span><br><span class="line">a1.channels = c1</span><br><span class="line">a1.sinks = k1 k2</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a1.sources.r1.type = netcat</span><br><span class="line">a1.sources.r1.bind = localhost</span><br><span class="line">a1.sources.r1.port = 44444</span><br><span class="line"></span><br><span class="line">a1.sinkgroups = g1</span><br><span class="line">a1.sinkgroups.g1.sinks = k1 k2</span><br><span class="line">a1.sinkgroups.g1.processor.type = failover</span><br><span class="line">a1.sinkgroups.g1.processor.priority.k1 = 5</span><br><span class="line">a1.sinkgroups.g1.processor.priority.k2 = 10</span><br><span class="line">a1.sinkgroups.g1.processor.maxpenalty = 10000</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a1.sinks.k1.type = avro</span><br><span class="line">a1.sinks.k1.hostname = hadoop101</span><br><span class="line">a1.sinks.k1.port = 4141</span><br><span class="line"></span><br><span 
class="line">a1.sinks.k2.type = avro</span><br><span class="line">a1.sinks.k2.hostname = hadoop101</span><br><span class="line">a1.sinks.k2.port = 4142</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the channel</span></span><br><span class="line">a1.channels.c1.type = memory</span><br><span class="line">a1.channels.c1.capacity = 1000</span><br><span class="line">a1.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a1.sources.r1.channels = c1</span><br><span class="line">a1.sinks.k1.channel = c1</span><br><span class="line">a1.sinks.k2.channel = c1</span><br></pre></td></tr></table></figure><p><strong>(2) 创建a2.conf文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi a2.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a2.sources = r1</span><br><span class="line">a2.sinks = k1</span><br><span class="line">a2.channels = c1</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a2.sources.r1.type = avro</span><br><span class="line">a2.sources.r1.bind = hadoop101</span><br><span class="line">a2.sources.r1.port = 4141</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a2.sinks.k1.type = logger</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the channel</span></span><br><span class="line">a2.channels.c1.type = memory</span><br><span class="line">a2.channels.c1.capacity = 1000</span><br><span class="line">a2.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a2.sources.r1.channels = c1</span><br><span class="line">a2.sinks.k1.channel = c1</span><br></pre></td></tr></table></figure><p><strong>(3) 创建a3.conf文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi a3.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a3.sources = r1</span><br><span class="line">a3.sinks = k1</span><br><span class="line">a3.channels = c2</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a3.sources.r1.type = avro</span><br><span class="line">a3.sources.r1.bind = hadoop101</span><br><span class="line">a3.sources.r1.port = 4142</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a3.sinks.k1.type = logger</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the channel</span></span><br><span class="line">a3.channels.c2.type = memory</span><br><span class="line">a3.channels.c2.capacity = 1000</span><br><span class="line">a3.channels.c2.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a3.sources.r1.channels = c2</span><br><span class="line">a3.sinks.k1.channel = 
c2</span><br></pre></td></tr></table></figure><p><strong>(4) 执行配置文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent -n a3 -c conf/ -f job/group2/a3.conf -Dflume.root.logger=INFO,console</span><br><span class="line">bin/flume-ng agent -n a2 -c conf/ -f job/group2/a2.conf -Dflume.root.logger=INFO,console</span><br><span class="line">bin/flume-ng agent -n a1 -c conf/ -f job/group2/a1.conf</span><br></pre></td></tr></table></figure><p><strong>(5) 开启另一个终端,发送消息</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">nc localhost 4444</span><br><span class="line">aaa</span><br></pre></td></tr></table></figure><p><strong>(6) 杀死a3后,通过故障转移,a2能正常工作</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">kill</span> -9 a3-pid</span><br></pre></td></tr></table></figure><h2 id="5-3-负载均衡"><a href="#5-3-负载均衡" class="headerlink" title="5.3 负载均衡"></a>5.3 负载均衡</h2><p><img src="https://oscimg.oschina.net/oscnet/20200619111514941.png"><br><strong>(1) 创建a1.conf文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi a1.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a1.sources = r1</span><br><span class="line">a1.channels = c1</span><br><span class="line">a1.sinks = k1 k2</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a1.sources.r1.type = netcat</span><br><span class="line">a1.sources.r1.bind = localhost</span><br><span class="line">a1.sources.r1.port = 44444</span><br><span class="line"></span><br><span class="line">a1.sinkgroups = g1</span><br><span class="line">a1.sinkgroups.g1.sinks = k1 k2</span><br><span class="line">a1.sinkgroups.g1.processor.type = load_balance</span><br><span class="line">a1.sinkgroups.g1.processor.backoff = <span class="literal">true</span></span><br><span class="line">a1.sinkgroups.g1.processor.selector = random</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a1.sinks.k1.type = avro</span><br><span class="line">a1.sinks.k1.hostname = hadoop101</span><br><span class="line">a1.sinks.k1.port = 4141</span><br><span class="line"></span><br><span class="line">a1.sinks.k2.type = avro</span><br><span class="line">a1.sinks.k2.hostname = hadoop101</span><br><span class="line">a1.sinks.k2.port = 4142</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the channel</span></span><br><span class="line">a1.channels.c1.type = memory</span><br><span class="line">a1.channels.c1.capacity = 1000</span><br><span class="line">a1.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a1.sources.r1.channels = c1</span><br><span class="line">a1.sinks.k1.channel = c1</span><br><span class="line">a1.sinks.k2.channel = c1</span><br></pre></td></tr></table></figure><p><strong>(2) 创建a2.conf文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi a2.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a2.sources = r1</span><br><span 
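<p>The load_balance processor hands each batch of events to one sink in the group: <code>round_robin</code> (the default selector) cycles through the sinks, the <code>random</code> selector used above picks one at random, and <code>backoff = true</code> temporarily skips sinks that recently failed. The class below is only a toy illustration of that selection idea and is not Flume internal code.</p>
<figure class="highlight java"><table><tr><td class="code"><pre><span class="line">import java.util.Arrays;</span><br><span class="line">import java.util.List;</span><br><span class="line">import java.util.Random;</span><br><span class="line"></span><br><span class="line">// Toy illustration of random vs. round-robin sink selection.</span><br><span class="line">public class SelectorDemo &#123;</span><br><span class="line">    public static void main(String[] args) &#123;</span><br><span class="line">        List&lt;String&gt; sinks = Arrays.asList(&quot;k1 (hadoop101:4141)&quot;, &quot;k2 (hadoop101:4142)&quot;);</span><br><span class="line">        Random random = new Random();</span><br><span class="line">        int cursor = 0; // round-robin position</span><br><span class="line">        for (int batch = 1; batch &lt;= 6; batch++) &#123;</span><br><span class="line">            System.out.println(&quot;batch &quot; + batch</span><br><span class="line">                    + &quot;  random: &quot; + sinks.get(random.nextInt(sinks.size()))</span><br><span class="line">                    + &quot;  round_robin: &quot; + sinks.get(cursor++ % sinks.size()));</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure>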
class="line">a2.sinks = k1</span><br><span class="line">a2.channels = c1</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a2.sources.r1.type = avro</span><br><span class="line">a2.sources.r1.bind = hadoop101</span><br><span class="line">a2.sources.r1.port = 4141</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a2.sinks.k1.type = logger</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the channel</span></span><br><span class="line">a2.channels.c1.type = memory</span><br><span class="line">a2.channels.c1.capacity = 1000</span><br><span class="line">a2.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a2.sources.r1.channels = c1</span><br><span class="line">a2.sinks.k1.channel = c1</span><br></pre></td></tr></table></figure><p>(3) 创建a3.conf文件</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi a3.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a3.sources = r1</span><br><span class="line">a3.sinks = k1</span><br><span class="line">a3.channels = c2</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a3.sources.r1.type = avro</span><br><span class="line">a3.sources.r1.bind = hadoop101</span><br><span class="line">a3.sources.r1.port = 4142</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a3.sinks.k1.type = logger</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the channel</span></span><br><span class="line">a3.channels.c2.type = memory</span><br><span class="line">a3.channels.c2.capacity = 1000</span><br><span class="line">a3.channels.c2.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a3.sources.r1.channels = c2</span><br><span class="line">a3.sinks.k1.channel = c2</span><br></pre></td></tr></table></figure><p><strong>(4) 执行配置文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent -n a3 -c conf/ -f job/group2/a3.conf -Dflume.root.logger=INFO,console</span><br><span class="line">bin/flume-ng agent -n a2 -c conf/ -f job/group2/a2.conf -Dflume.root.logger=INFO,console</span><br><span class="line">bin/flume-ng agent -n a1 -c conf/ -f job/group2/a1.conf</span><br></pre></td></tr></table></figure><p><strong>(5) 开启另一个终端,不断发送消息</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">nc localhost 4444</span><br><span class="line">aaa</span><br></pre></td></tr></table></figure><h2 id="5-4-聚合"><a href="#5-4-聚合" class="headerlink" title="5.4 聚合"></a>5.4 聚合</h2><p><img src="https://oscimg.oschina.net/oscnet/20200619131332422.png"><br><strong>(1) 创建a1.conf文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi a1.conf</span><br><span class="line"><span class="comment"># Name the components on this 
agent</span></span><br><span class="line">a1.sources = r1</span><br><span class="line">a1.sinks = k1</span><br><span class="line">a1.channels = c1</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a1.sources.r1.type = <span class="built_in">exec</span></span><br><span class="line">a1.sources.r1.command = tail -F /opt/module/flume/group.log</span><br><span class="line">a1.sources.r1.shell = /bin/bash -c</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a1.sinks.k1.type = avro</span><br><span class="line">a1.sinks.k1.hostname = hadoop103</span><br><span class="line">a1.sinks.k1.port = 4141</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the channel</span></span><br><span class="line">a1.channels.c1.type = memory</span><br><span class="line">a1.channels.c1.capacity = 1000</span><br><span class="line">a1.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a1.sources.r1.channels = c1</span><br><span class="line">a1.sinks.k1.channel = c1</span><br></pre></td></tr></table></figure><p><strong>(2) 创建a2.conf文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi a2.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a2.sources = r1</span><br><span class="line">a2.sinks = k1</span><br><span class="line">a2.channels = c1</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a2.sources.r1.type = netcat</span><br><span class="line">a2.sources.r1.bind = hadoop102</span><br><span class="line">a2.sources.r1.port = 44444</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a2.sinks.k1.type = avro</span><br><span class="line">a2.sinks.k1.hostname = hadoop103</span><br><span class="line">a2.sinks.k1.port = 4141</span><br><span class="line"></span><br><span class="line"><span class="comment"># Use a channel which buffers events in memory</span></span><br><span class="line">a2.channels.c1.type = memory</span><br><span class="line">a2.channels.c1.capacity = 1000</span><br><span class="line">a2.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a2.sources.r1.channels = c1</span><br><span class="line">a2.sinks.k1.channel = c1</span><br></pre></td></tr></table></figure><p><strong>(3) 创建a3.conf文件</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">vi a3.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a3.sources = r1</span><br><span class="line">a3.sinks = k1</span><br><span class="line">a3.channels = c1</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a3.sources.r1.type = avro</span><br><span class="line">a3.sources.r1.bind = hadoop103</span><br><span class="line">a3.sources.r1.port = 
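<p>Besides the two upstream agents, any JVM process can push events straight into the consolidating avro source (a3, listening on <code>hadoop103:4141</code> in this section) through the Flume RPC client. The sketch below is an addition for testing purposes and assumes the <code>flume-ng-sdk</code> jar is on the classpath.</p>
<figure class="highlight java"><table><tr><td class="code"><pre><span class="line">import java.nio.charset.StandardCharsets;</span><br><span class="line">import org.apache.flume.Event;</span><br><span class="line">import org.apache.flume.EventDeliveryException;</span><br><span class="line">import org.apache.flume.api.RpcClient;</span><br><span class="line">import org.apache.flume.api.RpcClientFactory;</span><br><span class="line">import org.apache.flume.event.EventBuilder;</span><br><span class="line"></span><br><span class="line">// Sends a few events directly to the consolidating avro source (a3 on hadoop103:4141).</span><br><span class="line">public class AvroRpcSender &#123;</span><br><span class="line">    public static void main(String[] args) throws EventDeliveryException &#123;</span><br><span class="line">        RpcClient client = RpcClientFactory.getDefaultInstance(&quot;hadoop103&quot;, 4141);</span><br><span class="line">        try &#123;</span><br><span class="line">            for (int i = 1; i &lt;= 3; i++) &#123;</span><br><span class="line">                Event event = EventBuilder.withBody(&quot;rpc-test-&quot; + i, StandardCharsets.UTF_8);</span><br><span class="line">                client.append(event); // lands in the memory channel of a3, then the logger sink</span><br><span class="line">            &#125;</span><br><span class="line">        &#125; finally &#123;</span><br><span class="line">            client.close();</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure>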
4141</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a3.sinks.k1.type = logger</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the channel</span></span><br><span class="line">a3.channels.c1.type = memory</span><br><span class="line">a3.channels.c1.capacity = 1000</span><br><span class="line">a3.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a3.sources.r1.channels = c1</span><br><span class="line">a3.sinks.k1.channel = c1</span><br></pre></td></tr></table></figure><p><strong>(4) 执行配置文件</strong><br><strong>hadoop103</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group4/a3.conf -Dflume.root.logger=INFO,console</span><br></pre></td></tr></table></figure><p><strong>hadoop102</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group4/a2.conf</span><br></pre></td></tr></table></figure><p><strong>hadoop101</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group4/a1.conf</span><br></pre></td></tr></table></figure><p><strong>(5) 开启另一个终端,不断发送消息</strong></p><p><strong>hadoop101</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">nc hadoop102 44444</span><br><span class="line">aaa</span><br></pre></td></tr></table></figure><p><strong>(6) 向group.log文件中,添加内容</strong></p><p><strong>hadoop101</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/flume</span><br><span class="line"><span class="built_in">echo</span> 222 &gt;&gt; group.log</span><br></pre></td></tr></table></figure><h2 id="5-5-自定义Interceptor案例"><a href="#5-5-自定义Interceptor案例" class="headerlink" title="5.5 自定义Interceptor案例"></a>5.5 自定义Interceptor案例</h2><p>根据日志不同的类型(type),将日志进行分流,分入到不同的sink<br><strong>(1) 实现一个Interceptor接口</strong></p><figure class="highlight java"><table><tr><td class="code"><pre><span class="line"><span class="keyword">package</span> com.lytdevflume.interceptor;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> org.apache.flume.Context;</span><br><span class="line"><span class="keyword">import</span> org.apache.flume.Event;</span><br><span class="line"><span class="keyword">import</span> org.apache.flume.interceptor.Interceptor;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> java.util.List;</span><br><span class="line"><span class="keyword">import</span> java.util.Map;</span><br><span class="line"></span><br><span class="line"><span class="keyword">public</span> <span class="class"><span class="keyword">class</span> <span class="title">MyInterceptor</span> <span class="keyword">implements</span> <span class="title">Interceptor</span> </span>&#123;</span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">void</span> <span class="title">initialize</span><span class="params">()</span> 
</span>&#123;</span><br><span class="line"></span><br><span class="line"> &#125;</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">public</span> Event <span class="title">intercept</span><span class="params">(Event event)</span> </span>&#123;</span><br><span class="line"> Map&lt;String, String&gt; headers = event.getHeaders();</span><br><span class="line"> <span class="keyword">byte</span>[] body = event.getBody();</span><br><span class="line"> <span class="keyword">if</span> (body[<span class="number">0</span>] &lt;= <span class="string">&#x27;9&#x27;</span> &amp;&amp; body[<span class="number">0</span>] &gt;= <span class="string">&#x27;0&#x27;</span>) &#123;</span><br><span class="line"> headers.put(<span class="string">&quot;type&quot;</span>, <span class="string">&quot;number&quot;</span>);</span><br><span class="line"> &#125; <span class="keyword">else</span> &#123;</span><br><span class="line"> headers.put(<span class="string">&quot;type&quot;</span>, <span class="string">&quot;not_number&quot;</span>);</span><br><span class="line"> &#125;</span><br><span class="line"> <span class="keyword">return</span> event;</span><br><span class="line"> &#125;</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">public</span> List&lt;Event&gt; <span class="title">intercept</span><span class="params">(List&lt;Event&gt; events)</span> </span>&#123;</span><br><span class="line"> <span class="keyword">for</span> (Event event : events) &#123;</span><br><span class="line"> intercept(event);</span><br><span class="line"> &#125;</span><br><span class="line"> <span class="keyword">return</span> events;</span><br><span class="line"> &#125;</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">void</span> <span class="title">close</span><span class="params">()</span> </span>&#123;</span><br><span class="line"></span><br><span class="line"> &#125;</span><br><span class="line"></span><br><span class="line"> <span class="keyword">public</span> <span class="keyword">static</span> <span class="class"><span class="keyword">class</span> <span class="title">MyBuilder</span> <span class="keyword">implements</span> <span class="title">Interceptor</span>.<span class="title">Builder</span></span>&#123;</span><br><span class="line"> <span class="function"><span class="keyword">public</span> Interceptor <span class="title">build</span><span class="params">()</span> </span>&#123;</span><br><span class="line"> <span class="keyword">return</span> <span class="keyword">new</span> MyInterceptor();</span><br><span class="line"> &#125;</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">void</span> <span class="title">configure</span><span class="params">(Context context)</span> </span>&#123;</span><br><span class="line"></span><br><span class="line"> &#125;</span><br><span class="line"> &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p><strong>(2) hadoop101创建配置文件a1.conf</strong></p><p><strong>hadoop101</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/flume/job/interceptor</span><br><span class="line">vi a1.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span 
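<p>Before packaging the interceptor into a jar and wiring it into the agent below, its header logic can be sanity-checked with a small local test. The test class is hypothetical (not part of the original article) and only needs <code>flume-ng-core</code> plus the <code>MyInterceptor</code> class on the classpath.</p>
<figure class="highlight java"><table><tr><td class="code"><pre><span class="line">import java.nio.charset.StandardCharsets;</span><br><span class="line">import java.util.HashMap;</span><br><span class="line">import org.apache.flume.Event;</span><br><span class="line">import org.apache.flume.event.SimpleEvent;</span><br><span class="line">import com.lytdevflume.interceptor.MyInterceptor;</span><br><span class="line"></span><br><span class="line">// Verifies the &quot;type&quot; header that MyInterceptor assigns to number / non-number bodies.</span><br><span class="line">public class MyInterceptorTest &#123;</span><br><span class="line">    public static void main(String[] args) &#123;</span><br><span class="line">        MyInterceptor interceptor = new MyInterceptor();</span><br><span class="line">        for (String body : new String[]&#123;&quot;1abc&quot;, &quot;abc1&quot;&#125;) &#123;</span><br><span class="line">            Event event = new SimpleEvent();</span><br><span class="line">            event.setHeaders(new HashMap&lt;String, String&gt;());</span><br><span class="line">            event.setBody(body.getBytes(StandardCharsets.UTF_8));</span><br><span class="line">            interceptor.intercept(event);</span><br><span class="line">            System.out.println(body + &quot; -&gt; type=&quot; + event.getHeaders().get(&quot;type&quot;));</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure>
<p>The expected output is <code>1abc -&gt; type=number</code> and <code>abc1 -&gt; type=not_number</code>, which matches the multiplexing mapping below (number to c2/hadoop103, not_number to c1/hadoop102).</p>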
class="line">a1.sources = r1</span><br><span class="line">a1.sinks = k1 k2</span><br><span class="line">a1.channels = c1 c2</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a1.sources.r1.type = netcat</span><br><span class="line">a1.sources.r1.bind = localhost</span><br><span class="line">a1.sources.r1.port = 44444</span><br><span class="line">a1.sources.r1.interceptors = i1</span><br><span class="line">a1.sources.r1.interceptors.i1.type = com.lytdevflume.interceptor.MyInterceptor<span class="variable">$MyBuilder</span></span><br><span class="line">a1.sources.r1.selector.type = multiplexing</span><br><span class="line">a1.sources.r1.selector.header = <span class="built_in">type</span></span><br><span class="line">a1.sources.r1.selector.mapping.not_number = c1</span><br><span class="line">a1.sources.r1.selector.mapping.number = c2</span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a1.sinks.k1.type = avro</span><br><span class="line">a1.sinks.k1.hostname = hadoop102</span><br><span class="line">a1.sinks.k1.port = 4141</span><br><span class="line"></span><br><span class="line">a1.sinks.k2.type=avro</span><br><span class="line">a1.sinks.k2.hostname = hadoop103</span><br><span class="line">a1.sinks.k2.port = 4242</span><br><span class="line"></span><br><span class="line"><span class="comment"># Use a channel which buffers events in memory</span></span><br><span class="line">a1.channels.c1.type = memory</span><br><span class="line">a1.channels.c1.capacity = 1000</span><br><span class="line">a1.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Use a channel which buffers events in memory</span></span><br><span class="line">a1.channels.c2.type = memory</span><br><span class="line">a1.channels.c2.capacity = 1000</span><br><span class="line">a1.channels.c2.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a1.sources.r1.channels = c1 c2</span><br><span class="line">a1.sinks.k1.channel = c1</span><br><span class="line">a1.sinks.k2.channel = c2</span><br></pre></td></tr></table></figure><p><strong>(3) hadoop102创建配置文件a1.conf</strong></p><p><strong>hadoop102</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/flume/job/interceptor</span><br><span class="line">vi a1.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a1.sources = r1</span><br><span class="line">a1.sinks = k1</span><br><span class="line">a1.channels = c1</span><br><span class="line"></span><br><span class="line">a1.sources.r1.type = avro</span><br><span class="line">a1.sources.r1.bind = hadoop102</span><br><span class="line">a1.sources.r1.port = 4141</span><br><span class="line"></span><br><span class="line">a1.sinks.k1.type = logger</span><br><span class="line"></span><br><span class="line">a1.channels.c1.type = memory</span><br><span class="line">a1.channels.c1.capacity = 1000</span><br><span class="line">a1.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line">a1.sinks.k1.channel = c1</span><br><span class="line">a1.sources.r1.channels = 
c1</span><br></pre></td></tr></table></figure><p><strong>(4) hadoop103创建配置文件a1.conf</strong></p><p><strong>hadoop103</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/flume/job/interceptor</span><br><span class="line">vi a1.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a1.sources = r1</span><br><span class="line">a1.sinks = k1</span><br><span class="line">a1.channels = c1</span><br><span class="line"></span><br><span class="line">a1.sources.r1.type = avro</span><br><span class="line">a1.sources.r1.bind = hadoop103</span><br><span class="line">a1.sources.r1.port = 4242</span><br><span class="line"></span><br><span class="line">a1.sinks.k1.type = logger</span><br><span class="line"></span><br><span class="line">a1.channels.c1.type = memory</span><br><span class="line">a1.channels.c1.capacity = 1000</span><br><span class="line">a1.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line">a1.sinks.k1.channel = c1</span><br><span class="line">a1.sources.r1.channels = c1</span><br></pre></td></tr></table></figure><p><strong>(5) 分别启动flume进程</strong></p><p>** hadoop103**</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent -n a1 -c conf/ -f job/interceptor/a1.conf -Dflume.root.logger=INFO,console</span><br></pre></td></tr></table></figure><p><strong>hadoop102</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent -n a1 -c conf/ -f job/interceptor/a1.conf -Dflume.root.logger=INFO,console</span><br></pre></td></tr></table></figure><p><strong>hadoop101</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent -n a1 -c conf/ -f job/interceptor/a1.conf -Dflume.root.logger=INFO,console</span><br></pre></td></tr></table></figure><p><strong>(6) 开启另一个终端,不断发送消息</strong></p><p><strong>hadoop101</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">nc hadoop102 44444</span><br><span class="line">aaa</span><br><span class="line">111</span><br><span class="line">1ss</span><br><span class="line">s11</span><br></pre></td></tr></table></figure><h2 id="5-6-自定义Source案例"><a href="#5-6-自定义Source案例" class="headerlink" title="5.6 自定义Source案例"></a>5.6 自定义Source案例</h2><p><strong>(1) 实现一个Source类</strong></p><figure class="highlight java"><table><tr><td class="code"><pre><span class="line"><span class="keyword">package</span> com.lytdevflume.source;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> org.apache.flume.Context;</span><br><span class="line"><span class="keyword">import</span> org.apache.flume.Event;</span><br><span class="line"><span class="keyword">import</span> org.apache.flume.EventDeliveryException;</span><br><span class="line"><span class="keyword">import</span> org.apache.flume.PollableSource;</span><br><span class="line"><span class="keyword">import</span> org.apache.flume.conf.Configurable;</span><br><span class="line"><span class="keyword">import</span> org.apache.flume.event.SimpleEvent;</span><br><span class="line"><span class="keyword">import</span> org.apache.flume.source.AbstractSource;</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> java.util.HashMap;</span><br><span class="line"></span><br><span 
class="line"><span class="keyword">public</span> <span class="class"><span class="keyword">class</span> <span class="title">MySource</span> <span class="keyword">extends</span> <span class="title">AbstractSource</span> <span class="keyword">implements</span> <span class="title">Configurable</span>, <span class="title">PollableSource</span> </span>&#123;</span><br><span class="line"></span><br><span class="line"> <span class="keyword">private</span> String prefix;</span><br><span class="line"> <span class="keyword">private</span> <span class="keyword">long</span> interval;</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">public</span> Status <span class="title">process</span><span class="params">()</span> <span class="keyword">throws</span> EventDeliveryException </span>&#123;</span><br><span class="line"> Status status = <span class="keyword">null</span>;</span><br><span class="line"> <span class="keyword">try</span> &#123;</span><br><span class="line"> <span class="keyword">for</span> (<span class="keyword">int</span> i = <span class="number">1</span>; i &lt;= <span class="number">5</span>; i++) &#123;</span><br><span class="line"> Event e = <span class="keyword">new</span> SimpleEvent();</span><br><span class="line"> e.setHeaders(<span class="keyword">new</span> HashMap&lt;String, String&gt;());</span><br><span class="line"> e.setBody((prefix + i).getBytes());</span><br><span class="line"> getChannelProcessor().processEvent(e);</span><br><span class="line"> Thread.sleep(interval);</span><br><span class="line"> &#125;</span><br><span class="line"> status = Status.READY;</span><br><span class="line"> &#125; <span class="keyword">catch</span> (InterruptedException e) &#123;</span><br><span class="line"> status = Status.BACKOFF;</span><br><span class="line"> &#125;</span><br><span class="line"></span><br><span class="line"> <span class="keyword">return</span> status;</span><br><span class="line"> &#125;</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">long</span> <span class="title">getBackOffSleepIncrement</span><span class="params">()</span> </span>&#123;</span><br><span class="line"> <span class="keyword">return</span> <span class="number">2000</span>;</span><br><span class="line"> &#125;</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">long</span> <span class="title">getMaxBackOffSleepInterval</span><span class="params">()</span> </span>&#123;</span><br><span class="line"> <span class="keyword">return</span> <span class="number">20000</span>;</span><br><span class="line"> &#125;</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">void</span> <span class="title">configure</span><span class="params">(Context context)</span> </span>&#123;</span><br><span class="line"> prefix = context.getString(<span class="string">&quot;source.prefix&quot;</span>,<span class="string">&quot;Log&quot;</span>);</span><br><span class="line"> interval = context.getLong(<span class="string">&quot;source.interval&quot;</span>,<span class="number">1000L</span>);</span><br><span class="line"> &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p><strong>(2) hadoop101创建配置文件a1.conf</strong></p><p><strong>hadoop101</strong></p><figure class="highlight 
bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cd</span> /opt/module/flume/job/<span class="built_in">source</span></span><br><span class="line">vi a1.conf</span><br><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a1.sources = r1</span><br><span class="line">a1.sinks = k1</span><br><span class="line">a1.channels = c1</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a1.sources.r1.type = com.lytdevflume.source.MySource</span><br><span class="line">a1.sources.r1.source.prefix= Log</span><br><span class="line">a1.sources.r1.source.interval= 1000</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a1.sinks.k1.type = logger</span><br><span class="line"></span><br><span class="line"><span class="comment"># Use a channel which buffers events in memory</span></span><br><span class="line">a1.channels.c1.type = memory</span><br><span class="line">a1.channels.c1.capacity = 1000</span><br><span class="line">a1.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a1.sources.r1.channels = c1</span><br><span class="line">a1.sinks.k1.channel = c1</span><br></pre></td></tr></table></figure><p><strong>(3) 启动flume进程</strong></p><p><strong>hadoop101</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent -n a1 -c conf/ -f job/<span class="built_in">source</span>/a1.conf -Dflume.root.logger=INFO,console</span><br></pre></td></tr></table></figure><h2 id="5-7-自定义文件Source案例二"><a href="#5-7-自定义文件Source案例二" class="headerlink" title="5.7 自定义文件Source案例二"></a>5.7 自定义文件Source案例二</h2><p><strong>(1) 实现一个Source类,接收MQTT服务器</strong></p><p>引入org.eclipse.paho.client.mqttv3.* 类包<br> 自定义一个Source类<br> 自定义一个SimpleMqttClient类<br> 该类用于配置MQTT服务器的链接(IP地址、端口号、用户名、密码、主题)、订阅主题消息、订阅消息的回调<br> 接收到回调函数的消息,进行event的封装,然后发送</p><p><strong>(2) 实现一个Source类,接收传感器串口数据</strong></p><p>自定义一个Source类<br> 自定义一个串口类<br> 该类用于配置串口类的链接(串口号、波特率、校验位、数据位、停止位)、数据的回调(数据都是以十六进制进行传递)<br> 接收到回调函数的消息,进行event的封装,然后发送</p><h2 id="5-8-自定义Sink案例"><a href="#5-8-自定义Sink案例" class="headerlink" title="5.8 自定义Sink案例"></a>5.8 自定义Sink案例</h2><p><strong>(1) 实现一个Sink类</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">package com.lytdevflume.sink;</span><br><span class="line"></span><br><span class="line">import org.apache.flume.*;</span><br><span class="line">import org.apache.flume.conf.Configurable;</span><br><span class="line">import org.apache.flume.sink.AbstractSink;</span><br><span class="line"></span><br><span class="line">public class MySink extends AbstractSink implements Configurable &#123;</span><br><span class="line"> private long interval;</span><br><span class="line"> private String prefix;</span><br><span class="line"> private String suffix;</span><br><span class="line"></span><br><span class="line"> public Status process() throws EventDeliveryException &#123;</span><br><span class="line"> Status status = null;</span><br><span class="line"> Channel channel = this.getChannel();</span><br><span class="line"> Transaction transaction = channel.getTransaction();</span><br><span class="line"> transaction.begin();</span><br><span class="line"> try 
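<p>For the MQTT-based source outlined in 5.7 (1), the description can be turned into code roughly as follows. This is only a sketch under stated assumptions: the property names <code>mqtt.brokerUrl</code> and <code>mqtt.topic</code> are invented for the example, the Eclipse Paho client (<code>org.eclipse.paho.client.mqttv3</code>) must be on the agent classpath, and the username/password handling mentioned in the description is omitted for brevity.</p>
<figure class="highlight java"><table><tr><td class="code"><pre><span class="line">package com.lytdevflume.source;</span><br><span class="line"></span><br><span class="line">import org.apache.flume.Context;</span><br><span class="line">import org.apache.flume.EventDrivenSource;</span><br><span class="line">import org.apache.flume.conf.Configurable;</span><br><span class="line">import org.apache.flume.event.EventBuilder;</span><br><span class="line">import org.apache.flume.source.AbstractSource;</span><br><span class="line">import org.eclipse.paho.client.mqttv3.*;</span><br><span class="line"></span><br><span class="line">// Sketch of an MQTT source: connect, subscribe, and wrap every message into a Flume event.</span><br><span class="line">public class MqttSource extends AbstractSource implements EventDrivenSource, Configurable &#123;</span><br><span class="line"></span><br><span class="line">    private String brokerUrl; // e.g. tcp://192.168.1.10:1883</span><br><span class="line">    private String topic;</span><br><span class="line">    private MqttClient client;</span><br><span class="line"></span><br><span class="line">    public void configure(Context context) &#123;</span><br><span class="line">        brokerUrl = context.getString(&quot;mqtt.brokerUrl&quot;, &quot;tcp://localhost:1883&quot;);</span><br><span class="line">        topic = context.getString(&quot;mqtt.topic&quot;, &quot;sensor/#&quot;);</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    @Override</span><br><span class="line">    public synchronized void start() &#123;</span><br><span class="line">        try &#123;</span><br><span class="line">            client = new MqttClient(brokerUrl, &quot;flume-mqtt-source&quot;);</span><br><span class="line">            client.setCallback(new MqttCallback() &#123;</span><br><span class="line">                public void connectionLost(Throwable cause) &#123; &#125;</span><br><span class="line">                public void deliveryComplete(IMqttDeliveryToken token) &#123; &#125;</span><br><span class="line">                public void messageArrived(String t, MqttMessage message) &#123;</span><br><span class="line">                    // hand the payload to the channel processor as one event</span><br><span class="line">                    getChannelProcessor().processEvent(EventBuilder.withBody(message.getPayload()));</span><br><span class="line">                &#125;</span><br><span class="line">            &#125;);</span><br><span class="line">            client.connect();</span><br><span class="line">            client.subscribe(topic);</span><br><span class="line">        &#125; catch (MqttException e) &#123;</span><br><span class="line">            throw new RuntimeException(&quot;Unable to connect to MQTT broker &quot; + brokerUrl, e);</span><br><span class="line">        &#125;</span><br><span class="line">        super.start();</span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    @Override</span><br><span class="line">    public synchronized void stop() &#123;</span><br><span class="line">        try &#123; client.disconnect(); &#125; catch (MqttException ignored) &#123; &#125;</span><br><span class="line">        super.stop();</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure>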
&#123;</span><br><span class="line"> Event event = null;</span><br><span class="line"> <span class="keyword">while</span> ((event = channel.take()) == null) &#123;</span><br><span class="line"> Thread.sleep(interval);</span><br><span class="line"> &#125;</span><br><span class="line"> byte[] body = event.getBody();</span><br><span class="line"> String line = new String(body, <span class="string">&quot;UTF-8&quot;</span>);</span><br><span class="line"> System.out.println(prefix + line + suffix);</span><br><span class="line"> status = Status.READY;</span><br><span class="line"> transaction.commit();</span><br><span class="line"> &#125; catch (Exception e) &#123;</span><br><span class="line"> transaction.rollback();</span><br><span class="line"> status = Status.BACKOFF;</span><br><span class="line"> &#125; finally &#123;</span><br><span class="line"> transaction.close();</span><br><span class="line"> &#125;</span><br><span class="line"></span><br><span class="line"> <span class="built_in">return</span> status;</span><br><span class="line"> &#125;</span><br><span class="line"></span><br><span class="line"> public void configure(Context context) &#123;</span><br><span class="line"> prefix = context.getString(<span class="string">&quot;source.prefix&quot;</span>, <span class="string">&quot;start:&quot;</span>);</span><br><span class="line"> suffix = context.getString(<span class="string">&quot;source.suffix&quot;</span>, <span class="string">&quot;:end&quot;</span>);</span><br><span class="line"> interval = context.getLong(<span class="string">&quot;source.interval&quot;</span>, 1000L);</span><br><span class="line"> &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure><p><strong>(2) hadoop101创建配置文件a1.conf</strong></p><p><strong>hadoop101</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment"># Name the components on this agent</span></span><br><span class="line">a1.sources = r1</span><br><span class="line">a1.sinks = k1</span><br><span class="line">a1.channels = c1</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe/configure the source</span></span><br><span class="line">a1.sources.r1.type = netcat</span><br><span class="line">a1.sources.r1.bind = localhost</span><br><span class="line">a1.sources.r1.port = 44444</span><br><span class="line"></span><br><span class="line"><span class="comment"># Describe the sink</span></span><br><span class="line">a1.sinks.k1.type = com.lytdevflume.sink.MySink</span><br><span class="line">a1.sinks.k1.source.prefix = lytdev:</span><br><span class="line">a1.sinks.k1.source.suffix = :lytdev</span><br><span class="line">a1.sinks.k1.source.interval = 1000</span><br><span class="line"></span><br><span class="line"><span class="comment"># Use a channel which buffers events in memory</span></span><br><span class="line">a1.channels.c1.type = memory</span><br><span class="line">a1.channels.c1.capacity = 1000</span><br><span class="line">a1.channels.c1.transactionCapacity = 100</span><br><span class="line"></span><br><span class="line"><span class="comment"># Bind the source and sink to the channel</span></span><br><span class="line">a1.sources.r1.channels = c1</span><br><span class="line">a1.sinks.k1.channel = c1</span><br></pre></td></tr></table></figure><p><strong>(3) 启动flume进程</strong></p><p><strong>hadoop101</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent -n a1 -c conf/ 
-f job/sink/a1.conf -Dflume.root.logger=INFO,console</span><br></pre></td></tr></table></figure><h1 id="六、Flume数据流监控"><a href="#六、Flume数据流监控" class="headerlink" title="六、Flume数据流监控"></a>六、Flume数据流监控</h1><h2 id="6-1-Flume自带监控工具"><a href="#6-1-Flume自带监控工具" class="headerlink" title="6.1 Flume自带监控工具"></a>6.1 Flume自带监控工具</h2><p>执行如下的命令,并访问 <a href="http://ip:1234/">http://IP:1234</a></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/flume-ng agent -n a1 -c conf/ -f job/sink/a1.conf -Dflume.monitoring.type=http -Dflume.monitoring.port=1234</span><br></pre></td></tr></table></figure><h2 id="6-2-Ganglia"><a href="#6-2-Ganglia" class="headerlink" title="6.2 Ganglia"></a>6.2 Ganglia</h2><p>Ganglia由gmond、gmetad和gweb三部分组成</p><p><strong>gmond(Ganglia Monitoring Daemon)</strong><br> gmond是一种轻量级服务,安装在每台需要收集指标数据的节点主机上。使用gmond,你可以很容易收集很多系统指标数据,如CPU、内存、磁盘、网络和活跃进程的数据等</p><p><strong>gmetad(Ganglia Meta Daemon)</strong><br> gmetad整合所有信息,并将其以RRD格式存储至磁盘的服务</p><p><strong>gweb(Ganglia Web)</strong><br> Ganglia可视化工具,gweb是一种利用浏览器显示gmetad所存储数据的PHP前端。在Web界面中以图表方式展现集群的运行状态下收集的多种不同指标数据</p><h3 id="6-3-Ganglia监控视图"><a href="#6-3-Ganglia监控视图" class="headerlink" title="6.3 Ganglia监控视图"></a>6.3 Ganglia监控视图</h3><table><thead><tr><th>监控视图名称</th><th>描述</th></tr></thead><tbody><tr><td>EventPutAttemptCount</td><td>source尝试写入channel的事件总数量</td></tr><tr><td>EventPutSuccessCount</td><td>成功写入channel且提交的事件总数量</td></tr><tr><td>EventTakeAttemptCount</td><td>sink尝试从channel拉取事件的总数量</td></tr><tr><td>EventTakeSuccessCount</td><td>sink成功读取的事件的总数量</td></tr><tr><td>StartTime</td><td>channel启动的时间(毫秒)</td></tr><tr><td>StopTime</td><td>channel停止的时间(毫秒)</td></tr><tr><td>ChannelSize</td><td>目前channel中事件的总数量</td></tr><tr><td>ChannelFillPercentage</td><td>channel占用百分比</td></tr><tr><td>ChannelCapacity</td><td>channel的容量</td></tr></tbody></table><p>EventPutSuccessCount = EventTakeSuccessCount + ChannelSize<br>否则,丢数据。</p>]]></content>
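<p>The counters in the table above are exactly what the built-in HTTP monitoring from 6.1 publishes as JSON, so they can also be scraped programmatically. Below is a minimal polling sketch; it assumes the agent was started on hadoop101 with <code>-Dflume.monitoring.type=http -Dflume.monitoring.port=1234</code>, in which case the JSON is served under <code>/metrics</code>.</p>
<figure class="highlight java"><table><tr><td class="code"><pre><span class="line">import java.io.BufferedReader;</span><br><span class="line">import java.io.InputStreamReader;</span><br><span class="line">import java.net.HttpURLConnection;</span><br><span class="line">import java.net.URL;</span><br><span class="line">import java.nio.charset.StandardCharsets;</span><br><span class="line"></span><br><span class="line">// Pulls the JSON counters exposed by -Dflume.monitoring.type=http (see 6.1).</span><br><span class="line">public class FlumeMetricsPoller &#123;</span><br><span class="line">    public static void main(String[] args) throws Exception &#123;</span><br><span class="line">        URL url = new URL(&quot;http://hadoop101:1234/metrics&quot;);</span><br><span class="line">        HttpURLConnection conn = (HttpURLConnection) url.openConnection();</span><br><span class="line">        conn.setConnectTimeout(3000);</span><br><span class="line">        conn.setReadTimeout(3000);</span><br><span class="line">        try (BufferedReader in = new BufferedReader(</span><br><span class="line">                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) &#123;</span><br><span class="line">            String line;</span><br><span class="line">            while ((line = in.readLine()) != null) &#123;</span><br><span class="line">                System.out.println(line); // e.g. &#123;&quot;CHANNEL.c1&quot;:&#123;&quot;EventPutSuccessCount&quot;:...&#125;&#125;</span><br><span class="line">            &#125;</span><br><span class="line">        &#125; finally &#123;</span><br><span class="line">            conn.disconnect();</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure>
<p>A healthy channel keeps EventPutSuccessCount close to EventTakeSuccessCount + ChannelSize; a gap that keeps growing is the data-loss situation the last line above warns about.</p>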
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> Flume </tag>
</tags>
</entry>
<entry>
<title>一文彻底搞懂Zookeeper</title>
<link href="/2021/07/12/1e41bdf3e092.html"/>
<url>/2021/07/12/1e41bdf3e092.html</url>
<content type="html"><![CDATA[<blockquote><p>本文是基于CentOS 7.9系统环境,进行Zookeeper的学习和使用</p></blockquote><h1 id="1-Zookeeper简介"><a href="#1-Zookeeper简介" class="headerlink" title="1. Zookeeper简介"></a>1. Zookeeper简介</h1><h2 id="1-1-什么是Zookeeper"><a href="#1-1-什么是Zookeeper" class="headerlink" title="1.1 什么是Zookeeper"></a>1.1 什么是Zookeeper</h2><p>Zookeeper是一个开源的分布式的,为分布式应用提供协调服务的Apache项目。本质上,就是文件系统+通知机制</p><h2 id="1-2-Zookeeper工作机制"><a href="#1-2-Zookeeper工作机制" class="headerlink" title="1.2 Zookeeper工作机制"></a>1.2 Zookeeper工作机制</h2><p>Zookeeper从设计模式角度来理解:是一个基于观察者模式设计的分布式服务管理框架,它负责存储和管理大家都关心的数据,然后接受观察者的注册,一旦这些数据的状态发生变化,Zookeeper就负责通知已经在Zookeeper上注册的那些观察者做出相应的反应</p><p><img src="https://img-blog.csdnimg.cn/20200518143448459.png"></p><h2 id="1-3-Zookeeper特点"><a href="#1-3-Zookeeper特点" class="headerlink" title="1.3 Zookeeper特点"></a>1.3 Zookeeper特点</h2><p><img src="https://img-blog.csdnimg.cn/20200927144855252.png"></p><ol><li>Zookeeper:一各领导者(Leader),多个跟随者(Follower)组成的集群</li><li>集群中只要有半数以上节点存活,Zookeeper集群就能正常服务,所以zookeeper适合安装奇数台服务器</li><li>全局数据一致:每个server保存一份相同的数据副本,client无论连接到哪个server,数据都是一致的</li><li>更新请求顺序进行,来自同一个client的更新请求按其发送顺序依次执行</li><li>数据更新原子性,一次数据更新要么成功,要么失败</li><li>实时性,在一定时间范围内,client能读到最新数据</li></ol><h2 id="1-4-Zookeeper数据结构"><a href="#1-4-Zookeeper数据结构" class="headerlink" title="1.4 Zookeeper数据结构"></a>1.4 Zookeeper数据结构</h2><p>与Unix文件系统类似,整体上是一棵树,每一个节点称为znode,每一个znode默认存储1MB数据,每个znode都可以通过其路径唯一标识</p><h2 id="1-5-Zookeeper应用场景"><a href="#1-5-Zookeeper应用场景" class="headerlink" title="1.5 Zookeeper应用场景"></a>1.5 Zookeeper应用场景</h2><p>提供的服务包括:统一命名服务、统一配置管理、统一集群管理、服务器节点动态上下线、软负载均衡等</p><h3 id="1-5-1-统一命名服务"><a href="#1-5-1-统一命名服务" class="headerlink" title="1.5.1 统一命名服务"></a>1.5.1 统一命名服务</h3><p>在分布式环境下,经常需要对应用/服务进行统一命令,便于识别</p><h3 id="1-5-2-统一配置管理"><a href="#1-5-2-统一配置管理" class="headerlink" title="1.5.2 统一配置管理"></a>1.5.2 统一配置管理</h3><ul><li>分布式环境下,配置文件同步非常常见</li></ul><p>(1)一般要求一个集群中,所有节点的配置信息是一致的,比如kafka集群;<br>(2)对配置文件修改后,希望能够快速同步到各个节点上。</p><ul><li>配置管理可交由zookeeper实现</li></ul><p>(1)可将配置信息写入Zookeeper上的一个Znode;<br>(2)各个客户端服务器监听这个Znode;<br>(3)一旦Znode中的数据被修改,Zookeeper将通知各个客户端服务器。</p><h3 id="1-5-3-统一集群管理"><a href="#1-5-3-统一集群管理" class="headerlink" title="1.5.3 统一集群管理"></a>1.5.3 统一集群管理</h3><p>分布式环境下,实时掌握每个节点的状态是必要的<br>(1)可根据节点实时状态做出一些调整。<br>Zookeeper可以实现实时监控节点状态变化<br>(1)可将节点信息写入Zookeeper上的一个Znode;<br>(2)监听这个Znode可获取它的实时状态变化。</p><h3 id="1-5-4-服务器节点动态上下线"><a href="#1-5-4-服务器节点动态上下线" class="headerlink" title="1.5.4 服务器节点动态上下线"></a>1.5.4 服务器节点动态上下线</h3><p>客户端能实时洞察到服务器上下线的变化</p><h3 id="1-5-5-软负载均衡"><a href="#1-5-5-软负载均衡" class="headerlink" title="1.5.5 软负载均衡"></a>1.5.5 软负载均衡</h3><p>在Zookeeper中记录每台服务器的访问数,让访问数最少的服务器去处理最新的客户端请求</p><h2 id="1-6-Zookeeper选举机制"><a href="#1-6-Zookeeper选举机制" class="headerlink" title="1.6 Zookeeper选举机制"></a>1.6 Zookeeper选举机制</h2><ul><li>半数机制:集群中半数以上机器存活,集群可用。所以Zookeeper适合安装奇数台服务器。</li><li>Zookeeper虽然在配置文件中并没有指定Master和Slave。但是,Zookeeper工作时,是有一个节点为Leader,其他则为Follower,Leader是通过内部的选举机制临时产生的。</li><li>以一个简单的例子来说明整个选举的过程。</li></ul><p>假设有五台服务器组成的Zookeeper集群,它们的id从1-5,同时它们都是最新启动的,也就是没有历史数据,在存放数据量这一点上,都是一样的。假设这些服务器依序启动,来看看会发生什么。<br>(1)服务器1启动,发起一次选举;<br>所有的服务器节点都会选自己作为leader, 
此时服务器1投自己1票,但是只有它一台服务器启动了,它发出去的报文没有任何响应,所以选举无法完成;<br>服务器1的状态一直是LOOKING状态。<br>(2)服务器2启动,再发起一次选举;<br>服务器1和服务器2分别投自己1票,服务器之间进行通信,互相交换自己的选举结果,由于两者都没有历史选举数据,所以id值较大的服务器2胜出,服务器1更改投票为服务器2,但是由于没有达到超过半数以上的服务器都同意选举它(这个例子中的半数以上是3),所以选举无法完成;<br>所以服务器1、2还是继续保持LOOKING状态。<br>(3)服务器3启动,根据前面的理论分析,服务器3成为服务器1、2、3中的老大,而与上面不同的是,此时有三台服务器选举了它,所以它成为了这次选举的Leader,;<br>服务器1、2还是更改状态为FOLLOWING状态,服务器3更改状态为LEADING。<br>(4)服务器4启动,发起一次选举;<br>此时,服务器1,2,3已经不是LOOKING状态,不会更改投票信息,服务器3会有3票,服务器4会投自己1票<br>根据少数服从多数,服务器4会更改自己的投票结果为服务器3;<br>服务器4更改状态为FOLLOWING状态<br>(5)服务器5启动,同4一样当小弟。</p><h2 id="1-7-Zookeeper节点类型"><a href="#1-7-Zookeeper节点类型" class="headerlink" title="1.7 Zookeeper节点类型"></a>1.7 Zookeeper节点类型</h2><ul><li>持久</li></ul><p>客户端和服务端断开连接后,创建的节点不删除</p><ul><li>短暂</li></ul><p>客户端和服务端断开连接后,创建的节自己删除</p><ul><li>顺序编号</li></ul><p>创建znode时设置顺序标识,znode名称后会附加一个值,顺序号是一个单调递增的计数器,由父节点维护;在分布式系统中,顺序号可以被用于为所有的事件进行全局排序,这样客户端可以通过顺序号推断事件的顺序</p><ol><li>持久化目录节点:客户端与zookeeper断开连接后,该节点依旧存在</li><li>持久化顺序编号目录节点:客户端与zookeeper断开连接后,该节点依旧存在,只是zookeeper在创建该节点时进行顺序编号</li><li>临时目录节点:客户端与zookeeper断开连接后,该节点被删除</li><li>临时顺序编号目录节点:客户端与zookeeper断开连接后,该节点被删除,只是zookeeper在创建该节点时进行顺序编号</li></ol><h2 id="1-8-Zookeeper监听器原理"><a href="#1-8-Zookeeper监听器原理" class="headerlink" title="1.8 Zookeeper监听器原理"></a>1.8 Zookeeper监听器原理</h2><h3 id="1-8-1-常见的监听类型"><a href="#1-8-1-常见的监听类型" class="headerlink" title="1.8.1 常见的监听类型"></a>1.8.1 常见的监听类型</h3><p>监听节点数据的变化</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">get path [watch]</span><br></pre></td></tr></table></figure><p>监听子节点增减的变化</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">ls path [watch]</span><br></pre></td></tr></table></figure><h3 id="1-8-2-监听器原理详解"><a href="#1-8-2-监听器原理详解" class="headerlink" title="1.8.2 监听器原理详解"></a>1.8.2 监听器原理详解</h3><p><img src="https://img-blog.csdnimg.cn/20200519104636710.png"></p><ol><li>首先要有一个main()线程</li><li>在main()线程中创建zookeeper客户端,这时就会创建两个线程,一个负责网络连接通信(connect),一个负责监听(listener)</li><li>通过connect线程将注册的监听事件发送给zookeeper</li><li>在zookeeper的注册监听器列表中将注册的监听事件添加到列表中</li><li>zookeeper监听到有数据或路径变化,就会将这个消息发送给listener线程</li><li>listener线程内部调用了process()方法</li></ol><h2 id="1-9-Zookeeper写数据原理"><a href="#1-9-Zookeeper写数据原理" class="headerlink" title="1.9 Zookeeper写数据原理"></a>1.9 Zookeeper写数据原理</h2><p><img src="https://img-blog.csdnimg.cn/20200519111416515.png"> </p><h1 id="2-Zookeeper单节点standalone安装"><a href="#2-Zookeeper单节点standalone安装" class="headerlink" title="2. Zookeeper单节点standalone安装"></a>2. 
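<p>The listener flow described in 1.8 (main thread, connect thread, listener thread, one-shot watches) maps directly onto the Java client API. The sketch below is an illustrative addition: it registers a child watch, prints each notification, and re-registers inside <code>process()</code> because a watch only fires once; the connect string <code>hadoop101:2181</code> is an assumption based on the installation that follows.</p>
<figure class="highlight java"><table><tr><td class="code"><pre><span class="line">import java.util.concurrent.CountDownLatch;</span><br><span class="line">import org.apache.zookeeper.WatchedEvent;</span><br><span class="line">import org.apache.zookeeper.Watcher;</span><br><span class="line">import org.apache.zookeeper.ZooKeeper;</span><br><span class="line"></span><br><span class="line">// Demonstrates the watch / notify flow from 1.8: register, get notified, re-register.</span><br><span class="line">public class ZkWatchDemo &#123;</span><br><span class="line">    private static ZooKeeper zk;</span><br><span class="line"></span><br><span class="line">    public static void main(String[] args) throws Exception &#123;</span><br><span class="line">        final CountDownLatch connected = new CountDownLatch(1);</span><br><span class="line">        zk = new ZooKeeper(&quot;hadoop101:2181&quot;, 4000, new Watcher() &#123;</span><br><span class="line">            public void process(WatchedEvent event) &#123;</span><br><span class="line">                if (event.getType() == Watcher.Event.EventType.None) &#123;</span><br><span class="line">                    connected.countDown(); // session event, e.g. SyncConnected</span><br><span class="line">                    return;</span><br><span class="line">                &#125;</span><br><span class="line">                System.out.println(&quot;notified: &quot; + event.getType() + &quot; on &quot; + event.getPath());</span><br><span class="line">                try &#123;</span><br><span class="line">                    zk.getChildren(&quot;/&quot;, true); // watches fire once, so register again</span><br><span class="line">                &#125; catch (Exception ignored) &#123; &#125;</span><br><span class="line">            &#125;</span><br><span class="line">        &#125;);</span><br><span class="line">        connected.await();</span><br><span class="line">        zk.getChildren(&quot;/&quot;, true); // initial registration of the child watch</span><br><span class="line">        Thread.sleep(Long.MAX_VALUE); // keep the listener thread alive to observe notifications</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure>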
Zookeeper单节点standalone安装</h1><h2 id="2-1-集群规划"><a href="#2-1-集群规划" class="headerlink" title="2.1 集群规划"></a>2.1 集群规划</h2><p>hostname| CPU | 内存| 组件| IP地址<br>|—|—|—|—|—<br>hadoop101| 2 |4G| Zookeeper| 192.168.1.100</p><h2 id="2-2-解压安装"><a href="#2-2-解压安装" class="headerlink" title="2.2 解压安装"></a>2.2 解压安装</h2><p>节点下分别执行</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">tar -zxvf zookeeper-3.4.10.tar.gz -C /opt/module/</span><br><span class="line">mv zookeeper-3.4.10 zookeeper</span><br></pre></td></tr></table></figure><h2 id="2-3-配置zoo-cfg文件"><a href="#2-3-配置zoo-cfg文件" class="headerlink" title="2.3 配置zoo.cfg文件"></a>2.3 配置zoo.cfg文件</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">mv zoo_sample.cfg zoo.cfg</span><br><span class="line">vi zoo.cfg</span><br><span class="line"><span class="comment"># 修改如下信息</span></span><br><span class="line">dataDir=/opt/module/zookeeper/zk-Data</span><br></pre></td></tr></table></figure><h2 id="2-4-启动集群"><a href="#2-4-启动集群" class="headerlink" title="2.4 启动集群"></a>2.4 启动集群</h2><p>节点下分别执行</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/zkServer.sh start</span><br></pre></td></tr></table></figure><p>成功启动输出信息</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytdev@es201 bin]$ ./zkServer.sh start</span><br><span class="line">ZooKeeper JMX enabled by default</span><br><span class="line">Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg</span><br><span class="line">Starting zookeeper ... STARTED</span><br></pre></td></tr></table></figure><h2 id="2-5-查看集群状态"><a href="#2-5-查看集群状态" class="headerlink" title="2.5 查看集群状态"></a>2.5 查看集群状态</h2><p>节点下分别执行</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/zkServer.sh status</span><br></pre></td></tr></table></figure><p>成功启动输出信息</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytdev@es201 bin]$ ./zkServer.sh status</span><br><span class="line">ZooKeeper JMX enabled by default</span><br><span class="line">Using config: /opt/module/zookeeper/bin/../conf/zoo.cfg</span><br><span class="line">Mode: standalone</span><br></pre></td></tr></table></figure><h1 id="3-Zookeeper分布式安装"><a href="#3-Zookeeper分布式安装" class="headerlink" title="3. Zookeeper分布式安装"></a>3. 
Zookeeper分布式安装</h1><h2 id="3-1-集群规划"><a href="#3-1-集群规划" class="headerlink" title="3.1 集群规划"></a>3.1 集群规划</h2><p>hostname|CPU|内存|组件IP地址<br>|—|—|—|—|<br>hadoop101|2|4G|Zookeeper|192.168.1.100<br>hadoop104|2|4G|Zookeeper|192.168.1.101<br>hadoop105|2|4G|Zookeeper|192.168.1.102</p><h2 id="3-2-解压安装"><a href="#3-2-解压安装" class="headerlink" title="3.2 解压安装"></a>3.2 解压安装</h2><p>三个节点下分别执行</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">tar -zxvf zookeeper-3.4.10.tar.gz -C /opt/module/</span><br></pre></td></tr></table></figure><h2 id="3-3-配置服务器编号"><a href="#3-3-配置服务器编号" class="headerlink" title="3.3 配置服务器编号"></a>3.3 配置服务器编号</h2><p><strong>hadoop101 节点下</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">mkdir -p zkData</span><br><span class="line"><span class="built_in">cd</span> zkData</span><br><span class="line">vi myid</span><br><span class="line"><span class="comment"># 添加如下信息</span></span><br><span class="line">1</span><br></pre></td></tr></table></figure><p><strong>hadoop104 节点下</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">mkdir -p zkData</span><br><span class="line"><span class="built_in">cd</span> zkData</span><br><span class="line">vi myid</span><br><span class="line"><span class="comment"># 添加如下信息</span></span><br><span class="line">4</span><br></pre></td></tr></table></figure><p><strong>hadoop105 节点下</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">mkdir -p zkData</span><br><span class="line"><span class="built_in">cd</span> zkData</span><br><span class="line">vi myid</span><br><span class="line"><span class="comment"># 添加如下信息</span></span><br><span class="line">5</span><br></pre></td></tr></table></figure><h2 id="3-4-配置zoo-cfg文件"><a href="#3-4-配置zoo-cfg文件" class="headerlink" title="3.4 配置zoo.cfg文件"></a>3.4 配置zoo.cfg文件</h2><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">mv zoo_sample.cfg zoo.cfg</span><br><span class="line">vi zoo.cfg</span><br><span class="line"><span class="comment"># 修改如下信息</span></span><br><span class="line">dataDir=/opt/module/zookeeper-3.4.10/zkData</span><br><span class="line"><span class="comment"># 添加如下信息</span></span><br><span class="line">server.1=hadoop101:2888:3888</span><br><span class="line">server.4=hadoop104:2888:3888</span><br><span class="line">server.5=hadoop105:2888:3888</span><br></pre></td></tr></table></figure><h2 id="3-5-启动集群"><a href="#3-5-启动集群" class="headerlink" title="3.5 启动集群"></a>3.5 启动集群</h2><p>三个节点下分别执行</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/zkServer.sh start</span><br></pre></td></tr></table></figure><h2 id="3-6-查看集群状态"><a href="#3-6-查看集群状态" class="headerlink" title="3.6 查看集群状态"></a>3.6 查看集群状态</h2><p>三个节点下分别执行</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/zkServer.sh status</span><br></pre></td></tr></table></figure><h1 id="4-Zookeeper的使用"><a href="#4-Zookeeper的使用" class="headerlink" title="4. Zookeeper的使用"></a>4. 
Zookeeper的使用</h1><h2 id="4-1-客户端命令行操作"><a href="#4-1-客户端命令行操作" class="headerlink" title="4.1 客户端命令行操作"></a>4.1 客户端命令行操作</h2><p>基本语法|功能描述<br>|—|—<br>help|显示所有操作命令<br>ls path [watch]|使用ls命令来查看当前znode中所包含的内容<br>ls2 path [watch]|查看当前节点数据并能看到更新次数等数据<br>create|普通创建<br>create -s|含有序列<br>create -e|临时创建(重启或者超时消失)<br>get path [watch]|获得节点的值<br>set|设置节点的具体值<br>stat|查看节点状态<br>delete|删除节点<br>deleteall|递归删除节点</p><p>** 启动客户端**</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">bin/zkCli.sh</span><br></pre></td></tr></table></figure><p><strong>查看当前znode所包含的内容</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">ls /</span><br></pre></td></tr></table></figure><p><strong>查看当前znode节点详细数据</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">ls2 /</span><br></pre></td></tr></table></figure><p><strong>创建znode节点</strong></p><p>create /sanguo “lytdev”</p><p><strong>获取znode节点的值</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">get /sanguo</span><br><span class="line">get /sanguo/shuguo</span><br></pre></td></tr></table></figure><p><strong>创建短暂的znode节点</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">create -e /sanguo/wuguo <span class="string">&quot;sunquan&quot;</span></span><br></pre></td></tr></table></figure><p><strong>退出客户端</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">quit</span><br></pre></td></tr></table></figure><p><strong>创建带顺序号的znode节点</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">create -s /sanguo/weiguo <span class="string">&quot;caocao&quot;</span></span><br></pre></td></tr></table></figure><p><strong>修改znode节点的值</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">set</span> /sanguo <span class="string">&quot;diaochan&quot;</span></span><br></pre></td></tr></table></figure><p><strong>监听znode节点的值的变化(只能监听一次)</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">get /sanguo watch</span><br></pre></td></tr></table></figure><p><strong>监听znode节点的子节点变化(只能监听一次)</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">ls /sanguo watch</span><br></pre></td></tr></table></figure><p><strong>删除znode节点</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">delete /sanguo/wuguo</span><br></pre></td></tr></table></figure><p><strong>递归删除znode节点</strong></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">rmr /sanguo</span><br></pre></td></tr></table></figure><h1 id="5-Zookeeper的面试重点"><a href="#5-Zookeeper的面试重点" class="headerlink" title="5. Zookeeper的面试重点"></a>5. Zookeeper的面试重点</h1><h2 id="5-1-如何保持数据全局一致性"><a href="#5-1-如何保持数据全局一致性" class="headerlink" title="5.1 如何保持数据全局一致性"></a>5.1 如何保持数据全局一致性</h2><p>zookeeper通过Zab协议保持数据全局一致性</p>]]></content>
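<p>The zkCli session above has a direct Java equivalent through the ZooKeeper client API, which is how applications usually perform the same create / get / set / delete operations. A minimal sketch, assuming a server reachable at <code>hadoop101:2181</code>, the client jar on the classpath, and that <code>/sanguo</code> does not exist yet:</p>
<figure class="highlight java"><table><tr><td class="code"><pre><span class="line">import org.apache.zookeeper.CreateMode;</span><br><span class="line">import org.apache.zookeeper.WatchedEvent;</span><br><span class="line">import org.apache.zookeeper.Watcher;</span><br><span class="line">import org.apache.zookeeper.ZooDefs;</span><br><span class="line">import org.apache.zookeeper.ZooKeeper;</span><br><span class="line"></span><br><span class="line">// Java equivalent of the zkCli commands shown above (create / get / set / delete).</span><br><span class="line">public class ZkCrudDemo &#123;</span><br><span class="line">    public static void main(String[] args) throws Exception &#123;</span><br><span class="line">        ZooKeeper zk = new ZooKeeper(&quot;hadoop101:2181&quot;, 4000, new Watcher() &#123;</span><br><span class="line">            public void process(WatchedEvent event) &#123; &#125;</span><br><span class="line">        &#125;);</span><br><span class="line">        Thread.sleep(1000); // crude wait for the session; real code should wait for SyncConnected</span><br><span class="line"></span><br><span class="line">        zk.create(&quot;/sanguo&quot;, &quot;lytdev&quot;.getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);</span><br><span class="line">        System.out.println(new String(zk.getData(&quot;/sanguo&quot;, false, null))); // get /sanguo</span><br><span class="line">        zk.setData(&quot;/sanguo&quot;, &quot;diaochan&quot;.getBytes(), -1); // set /sanguo &quot;diaochan&quot;</span><br><span class="line">        zk.create(&quot;/sanguo/wuguo&quot;, &quot;sunquan&quot;.getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL); // create -e</span><br><span class="line">        zk.delete(&quot;/sanguo/wuguo&quot;, -1); // delete /sanguo/wuguo</span><br><span class="line">        zk.close();</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure>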
<categories>
<category> 中间件 </category>
</categories>
<tags>
<tag> 中间件 </tag>
<tag> Zookeeper </tag>
</tags>
</entry>
<entry>
<title>Hadoop3.X分布式运行环境搭建手记</title>
<link href="/2020/11/07/fd53f751c2eb.html"/>
<url>/2020/11/07/fd53f751c2eb.html</url>
<content type="html"><![CDATA[<h1 id="1-准备虚拟机(关闭防火墙、静态ip、主机名称)"><a href="#1-准备虚拟机(关闭防火墙、静态ip、主机名称)" class="headerlink" title="1.准备虚拟机(关闭防火墙、静态ip、主机名称)"></a>1.准备虚拟机(关闭防火墙、静态ip、主机名称)</h1><h2 id="1-1首先使用VMware安装CentOS7-4虚拟机"><a href="#1-1首先使用VMware安装CentOS7-4虚拟机" class="headerlink" title="1.1首先使用VMware安装CentOS7.4虚拟机"></a>1.1首先使用VMware安装CentOS7.4虚拟机</h2><p>安装好虚拟机后设置好静态IP(<code>192.168.126.120</code>,设置DNS),<code>hostname</code>,执行<code>yum update</code>更新软件</p><p><img src="https://oscimg.oschina.net/oscnet/ff3261acae5d7d4926e17fa3ca3846dff55.jpg"></p><p><img src="https://oscimg.oschina.net/oscnet/e9fd7a37357136f3130d32fc08bd452846a.jpg"></p><p>这里由于进行了规划,所以提前设置了<code>hosts</code>,防止<code>ping hostname</code>的时候不同</p><p><img src="https://oscimg.oschina.net/oscnet/9c98ad2611f176c69bdef23c3bb74ab9498.jpg"></p><h2 id="1-2安装软件"><a href="#1-2安装软件" class="headerlink" title="1.2安装软件"></a>1.2安装软件</h2><p>安装<code>vim</code>,在<code>/usr/lib/jvm</code>目录安装好<code>jdk1.8.0_212</code>,并配置<code>/etc/profile</code>文件</p><p><img src="https://oscimg.oschina.net/oscnet/d68c2578976bb04775a02c15f733bdd35f6.jpg"></p><h2 id="1-3关闭防火墙"><a href="#1-3关闭防火墙" class="headerlink" title="1.3关闭防火墙"></a>1.3关闭防火墙</h2><h2 id="1-4创建hadoop用户"><a href="#1-4创建hadoop用户" class="headerlink" title="1.4创建hadoop用户"></a>1.4创建hadoop用户</h2><p>创建<code>hadoop</code>用户并配置<code>hadoop</code>用户具有<code>root</code>权限</p><p>在根目录”/“创建/data_disk/hadoop/tmp用于存储hadoop的数据</p><figure class="highlight makefile"><table><tr><td class="code"><pre><span class="line">mkdir -P /data/hadoop/tmp</span><br><span class="line">mkdir -P /data_disk/hadoop/data</span><br><span class="line">mkdir -P /data_disk/hadoop/logs</span><br><span class="line"></span><br></pre></td></tr></table></figure><h1 id="2-克隆虚拟机"><a href="#2-克隆虚拟机" class="headerlink" title="2.克隆虚拟机"></a>2.克隆虚拟机</h1><h2 id="2-1克隆两台虚拟机"><a href="#2-1克隆两台虚拟机" class="headerlink" title="2.1克隆两台虚拟机"></a>2.1克隆两台虚拟机</h2><h2 id="2-2更改克隆虚拟机的静态IP、hostname"><a href="#2-2更改克隆虚拟机的静态IP、hostname" class="headerlink" title="2.2更改克隆虚拟机的静态IP、hostname"></a>2.2更改克隆虚拟机的静态IP、hostname</h2><p>虚拟机2的IP:192.168.126.122,hostname:node102</p><p>虚拟机3的IP:192.168.126.123,hostname:node103</p><h1 id="3-编写集群分发脚本xsync"><a href="#3-编写集群分发脚本xsync" class="headerlink" title="3.编写集群分发脚本xsync"></a>3.编写集群分发脚本xsync</h1><p>在/home/hadoop目录下创建bin目录,并在bin目录下xsync创建文件</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">#!/bin/bash</span><br><span class="line">#1 获取输入参数个数,如果没有参数,直接退出</span><br><span class="line">pcount=$#</span><br><span class="line">if((pcount==0)); then</span><br><span class="line">echo no args;</span><br><span class="line">exit;</span><br><span class="line">fi</span><br><span class="line"></span><br><span class="line">#2 获取文件名称</span><br><span class="line">p1=$1</span><br><span class="line">fname=`basename $p1`</span><br><span class="line">echo fname=$fname</span><br><span class="line"></span><br><span class="line">#3 获取上级目录到绝对路径</span><br><span class="line">pdir=`cd -P $(dirname $p1); pwd`</span><br><span class="line">echo pdir=$pdir</span><br><span class="line"></span><br><span class="line">#4 获取当前用户名称</span><br><span class="line">user=`whoami`</span><br><span class="line"></span><br><span class="line">#5 循环</span><br><span class="line">for((host=101; host&lt;104; host++)); do</span><br><span class="line"> echo ------------------- hadoop-node$host --------------</span><br><span class="line"> rsync -rvl $pdir/$fname $user@node$host:$pdir</span><br><span class="line">done</span><br><span 
class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>修改脚本<code>xsync</code>具有执行权限<code>chmod 777 xsync</code></p><p>注意:如果将<code>xsync</code>放到<code>/home/hadoop/bin</code>目录下仍然不能实现全局使用,可以将<code>xsync</code>移动到<code>/usr/local/bin</code>目录下。</p><h1 id="4-集群配置"><a href="#4-集群配置" class="headerlink" title="4.集群配置"></a>4.集群配置</h1><h2 id="1-集群规划"><a href="#1-集群规划" class="headerlink" title="1.集群规划"></a>1.集群规划</h2><p><img src="https://oscimg.oschina.net/oscnet/c646ab2c9196c3eacc49b81b10d47f7a191.jpg"></p><h2 id="2-安装hadoop"><a href="#2-安装hadoop" class="headerlink" title="2.安装hadoop"></a>2.安装hadoop</h2><p>在/opt/module下创建hadoop目录,使用ftp同步下载的hadoop3.1.4并解压,并且将<code>/opt/module/hadoop3.1.4</code>所属用户和所属组更改为<code>hadoop</code></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">chown hadoop:hadoop -R /opt/module/hadoop3.1.4</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>并配置<code>/etc/profile</code>文件,并且执行<code>source /etc/profile</code>立即生效</p><p><img src="https://oscimg.oschina.net/oscnet/ce418a94b46d0f3388b0e0e0287c936de3f.jpg"></p><h2 id="3-修改hadoop配置文件"><a href="#3-修改hadoop配置文件" class="headerlink" title="3.修改hadoop配置文件"></a>3.修改hadoop配置文件</h2><p>hadoop的配置文件都在/opt/module/hadoop3.1.4/etc/hadoop下面</p><p>(1)默认配置文件:</p><p><img src="https://oscimg.oschina.net/oscnet/up-5d1f35c43c4a078188d8f7e25901c4f35a2.png"></p><p>(2)自定义配置文件:</p><p><strong>core-site.xml<strong><strong>、hdfs-site.xml</strong></strong>、yarn-site.xml****、mapred-site.xml</strong>四个配置文件存放在$HADOOP_HOME/etc/hadoop这个路径上,用户可以根据项目需求重新进行修改配置。</p><h3 id="3-1核心配置文件core-site-xml"><a href="#3-1核心配置文件core-site-xml" class="headerlink" title="3.1核心配置文件core-site.xml"></a>3.1核心配置文件core-site.xml</h3><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 hadoop]$ vim core-site.xml</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>在该文件中编写如下配置</p><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="comment">&lt;!-- 指定NameNode的地址 --&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>fs.defaultFS<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>hdfs://node101:8020<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"><span class="comment">&lt;!-- 指定hadoop数据的存储目录 --&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hadoop.tmp.dir<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>/data_disk/hadoop/tmp<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"><span class="comment">&lt;!-- 配置HDFS网页登录使用的静态用户为lytfly --&gt;</span></span><br><span class="line"><span class="tag">&lt;<span 
class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hadoop.http.staticuser.user<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>lytfly<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="3-2HDFS配置文件"><a href="#3-2HDFS配置文件" class="headerlink" title="3.2HDFS配置文件"></a>3.2HDFS配置文件</h3><p>配置hadoop-env.sh使用JDK1.8,不要使用JDK11,要不然启动Yarn和Hive等会因为JDK不兼容出现莫名其妙的错误。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 hadoop]$ vim hadoop-env.sh</span><br><span class="line">export JAVA_HOME=/opt/module/jdk1.8.0.271</span><br></pre></td></tr></table></figure><p>因为jdk11默认没有activation.jar,所以需要手动下载jar放到${HADOOP_HOME}/share/hadoop/yarn/lib。</p><p>配置hdfs-site.xml</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 hadoop]$ vim hdfs-site.xml</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>在该文件中编写如下配置,注意端口号!</p><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="meta">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;</span></span><br><span class="line"><span class="meta">&lt;?xml-stylesheet type=&quot;text/xsl&quot; href=&quot;configuration.xsl&quot;?&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">configuration</span>&gt;</span></span><br><span class="line"> <span class="comment">&lt;!-- nn web端访问地址--&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.namenode.http-address<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>node101:9870<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="comment">&lt;!-- 2nn web端访问地址--&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.namenode.secondary.http-address<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>node103:9868<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">configuration</span>&gt;</span></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="3-3YARN配置文件"><a href="#3-3YARN配置文件" class="headerlink" title="3.3YARN配置文件"></a>3.3YARN配置文件</h3><p>配置yarn-site.xml</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 hadoop]$ vim 
yarn-site.xml</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>在该文件中增加如下配置</p><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="meta">&lt;?xml version=&quot;1.0&quot;?&gt;</span></span><br><span class="line"><span class="meta">&lt;?xml-stylesheet type=&quot;text/xsl&quot; href=&quot;configuration.xsl&quot;?&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">configuration</span>&gt;</span></span><br><span class="line"> <span class="comment">&lt;!-- 指定MR走shuffle --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>yarn.nodemanager.aux-services<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>mapreduce_shuffle<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> </span><br><span class="line"> <span class="comment">&lt;!-- 指定ResourceManager的地址--&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>yarn.resourcemanager.hostname<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>node102<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> </span><br><span class="line"> <span class="comment">&lt;!-- 环境变量的继承 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>yarn.nodemanager.env-whitelist<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">configuration</span>&gt;</span></span><br></pre></td></tr></table></figure><h3 id="3-4MapReduce配置文件"><a href="#3-4MapReduce配置文件" class="headerlink" title="3.4MapReduce配置文件"></a>3.4MapReduce配置文件</h3><p>配置mapred-site.xml</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 hadoop]$ cp mapred-site.xml.template mapred-site.xml</span><br><span class="line"></span><br><span class="line">[hadoop@node101 hadoop]$ vim mapred-site.xml</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>在该文件中增加如下配置</p><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="meta">&lt;?xml version=&quot;1.0&quot;?&gt;</span></span><br><span class="line"><span class="meta">&lt;?xml-stylesheet type=&quot;text/xsl&quot; 
href=&quot;configuration.xsl&quot;?&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">configuration</span>&gt;</span></span><br><span class="line"> <span class="comment">&lt;!-- 指定MapReduce程序运行在Yarn上 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>mapreduce.framework.name<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>yarn<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="comment">&lt;!-- 历史服务器web端地址 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>mapreduce.jobhistory.webapp.address<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>node101:19888<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="comment">&lt;!-- 历史服务器端地址 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>mapreduce.jobhistory.address<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>node101:10020<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">configuration</span>&gt;</span></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>至此,hadoop集群所需要修改的配置文件都修改完成了</p><h2 id="4-在集群上分发配置好的hadoop配置文件"><a href="#4-在集群上分发配置好的hadoop配置文件" class="headerlink" title="4.在集群上分发配置好的hadoop配置文件"></a>4.在集群上分发配置好的hadoop配置文件</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 hadoop]$ xsync /opt/module/hadoop3.1.4/</span><br></pre></td></tr></table></figure><h1 id="5-集群单点启动"><a href="#5-集群单点启动" class="headerlink" title="5.集群单点启动"></a>5.集群单点启动</h1><p>这样启动方式还是比较麻烦的,在下一节会使用群起集群的功能</p><p>如果集群是第一次启动,需要<strong>格式化****NameNode</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 tmp]$ hdfs namenode -format</span><br></pre></td></tr></table></figure><p>在node101上启动NameNode</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 hadoop3.1.4]$ hadoop-daemon.sh start namenode</span><br><span class="line">[hadoop@node101 hadoop3.1.4]$ jps</span><br><span class="line">3461 NameNode</span><br></pre></td></tr></table></figure><p>在node101、node102以及node103上分别启动DataNode</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 tmp]$ hadoop-daemon.sh start datanode</span><br><span 
class="line">[hadoop@node101 tmp]$ jps</span><br><span class="line">3461 NameNode</span><br><span class="line">3608 Jps</span><br><span class="line">3561 DataNode</span><br><span class="line">[hadoop@node102 tmp]$ hadoop-daemon.sh start datanode</span><br><span class="line">[hadoop@node102 tmp]$ jps</span><br><span class="line">3190 DataNode</span><br><span class="line">3279 Jps</span><br><span class="line">[hadoop@node103 tmp]$ hadoop-daemon.sh start datanode</span><br><span class="line">[hadoop@node103 tmp]$ jps</span><br><span class="line">3237 Jps</span><br><span class="line">3163 DataNode</span><br></pre></td></tr></table></figure><p>启动报错:如果启动DataNode时报错:Initialization failed for Block pool (Datanode Uuid unassigned)</p><p>是因为namenode和datanode的clusterID不一致导致datanode无法启动。删除data、tmp、namenode 数据后,重新格式化即可。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node103 hadoop]$rm -rf tmp</span><br><span class="line">[hadoop@node103 hadoop]$mkdir tmp </span><br></pre></td></tr></table></figure><h1 id="6-配置集群群起功能"><a href="#6-配置集群群起功能" class="headerlink" title="6.配置集群群起功能"></a>6.配置集群群起功能</h1><h2 id="6-1SSH无密登录配置"><a href="#6-1SSH无密登录配置" class="headerlink" title="6.1SSH无密登录配置"></a>6.1SSH无密登录配置</h2><p><img src="https://oscimg.oschina.net/oscnet/33d4e7e98c3328527b925017ac8fc22b6b9.jpg"></p><p>在node101生成公钥是密钥</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 hadoop]$ ssh-keygen -t rsa</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>然后敲(三个回车),就会生成两个文件id_rsa(私钥)、id_rsa.pub(公钥)</p><p>将公钥拷贝到要免密登录的目标机器上</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 .ssh]$ ssh-copy-id node101</span><br><span class="line">[hadoop@node101 .ssh]$ ssh-copy-id node102</span><br><span class="line">[hadoop@node101 .ssh]$ ssh-copy-id node103</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>同样的操作,需要在node102、node103机器上执行ssh免密设置。</p><p>.ssh文件夹下(~/.ssh)的文件功能解释</p><p><img src="https://oscimg.oschina.net/oscnet/bdbc1b95e271336e45b577e158413ced340.jpg"></p><h2 id="6-2群起集群配置"><a href="#6-2群起集群配置" class="headerlink" title="6.2群起集群配置"></a>6.2群起集群配置</h2><p>配置works,hadoop2.x版本是slaves,注意区分。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 hadoop]$ vim /opt/module/hadoop3.1.4/etc/hadoop/works</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>在该文件中增加如下内容:</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">node101</span><br><span class="line">node102</span><br><span class="line">node103</span><br></pre></td></tr></table></figure><p>注意:该文件中添加的内容结尾不允许有空格,文件中不允许有空行。</p><p>同步所有节点配置文件</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 hadoop]$ xsync works</span><br></pre></td></tr></table></figure><h2 id="6-3启动集群"><a href="#6-3启动集群" class="headerlink" title="6.3启动集群"></a>6.3启动集群</h2><p>如果集群是第一次启动,需要格式化NameNode(注意格式化之前,一定要先停止上次启动的所有namenode和datanode进程,然后再删除data和log数据)</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 hadoop3.1.4]$ bin/hdfs namenode -format</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>启动HDFS</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 hadoop3.1.4]$ sbin/start-dfs.sh</span><br><span 
class="line">[hadoop@node101 hadoop3.1.4]$ jps</span><br><span class="line">4166 NameNode</span><br><span class="line">4482 Jps</span><br><span class="line">4263 DataNode</span><br><span class="line"></span><br><span class="line">[hadoop@node102 hadoop3.1.4]$ jps</span><br><span class="line">3218 DataNode</span><br><span class="line">3288 Jps</span><br><span class="line"></span><br><span class="line">[hadoop@node103 hadoop3.1.4]$ jps</span><br><span class="line">3221 DataNode</span><br><span class="line">3283 SecondaryNameNode</span><br><span class="line">3364 Jps</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>启动YARN</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node102 hadoop3.1.4]$ sbin/start-yarn.sh</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>注意:NameNode和ResourceManger如果不是同一台机器,不能在NameNode上启动 YARN,应该在ResouceManager所在的机器上启动YARN。</p><p>Web端查看NameNode<br>(a)浏览器中输入:<a href="http://node101:9870/">http://node101:9870/</a></p><p><img src="https://oscimg.oschina.net/oscnet/up-8b77094f85c0e5cd3513bab709bba109016.png"></p><h1 id="7-集群测试"><a href="#7-集群测试" class="headerlink" title="7.集群测试"></a>7.集群测试</h1><h2 id="7-1上传文件到集群"><a href="#7-1上传文件到集群" class="headerlink" title="7.1上传文件到集群"></a>7.1上传文件到集群</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[hadoop@node101 data]$ hdfs dfs -mkdir -p /user/lyt/input</span><br><span class="line">[hadoop@node101 data]$ hdfs dfs -put wcinput/wc.input /user/lyt/input</span><br></pre></td></tr></table></figure><h1 id="8-集群启动-停止方式总结"><a href="#8-集群启动-停止方式总结" class="headerlink" title="8.集群启动/停止方式总结"></a>8.集群启动/停止方式总结</h1><h2 id="8-1各个服务组件逐一启动-停止"><a href="#8-1各个服务组件逐一启动-停止" class="headerlink" title="8.1各个服务组件逐一启动/停止"></a>8.1各个服务组件逐一启动/停止</h2><p>(1)分别启动/停止HDFS组件<br> <a href="http://hadoop-daemon.sh/">hadoop-daemon.sh</a> start / stop namenode / datanode / secondarynamenode<br> (2)启动/停止YARN<br> <a href="http://yarn-daemon.sh/">yarn-daemon.sh</a> start / stop resourcemanager / nodemanager</p><h2 id="8-2各个模块分开启动-停止(配置ssh是前提)常用"><a href="#8-2各个模块分开启动-停止(配置ssh是前提)常用" class="headerlink" title="8.2各个模块分开启动/停止(配置ssh是前提)常用"></a>8.2各个模块分开启动/停止(配置ssh是前提)常用</h2><p>(1)整体启动/停止HDFS<br> <a href="http://start-dfs.sh/">start-dfs.sh</a> / <a href="http://stop-dfs.sh/">stop-dfs.sh</a><br> (2)整体启动/停止YARN<br> <a href="http://start-yarn.sh/">start-yarn.sh</a> / <a href="http://stop-yarn.sh/">stop-yarn.sh</a></p><h2 id="8-3配置集群群起脚本"><a href="#8-3配置集群群起脚本" class="headerlink" title="8.3配置集群群起脚本"></a>8.3配置集群群起脚本</h2><p>如果不想单个启动的话,可以编写个shell脚本批量处理,参见<a href="https://my.oschina.net/liuyuantao/blog/5268027">《Hadoop常用脚本》</a></p><h1 id="9-集群时间同步"><a href="#9-集群时间同步" class="headerlink" title="9.集群时间同步"></a>9.集群时间同步</h1><h2 id="9-1时间服务器配置(必须root用户)"><a href="#9-1时间服务器配置(必须root用户)" class="headerlink" title="9.1时间服务器配置(必须root用户)"></a>9.1时间服务器配置(必须root用户)</h2><p>(1)检查ntp是否安装</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[root@node101 hadoop]# rpm -qa|grep ntp</span><br><span class="line">ntp-4.2.6p5-10.el6.centos.x86_64</span><br><span class="line">fontpackages-filesystem-1.41-1.1.el6.noarch</span><br><span class="line">ntpdate-4.2.6p5-10.el6.centos.x86_64</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>如果没有安装,执行yum安装即可</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span 
class="line">[root@node101 hadoop]#yum -y install ntp</span><br><span class="line">[root@node101 hadoop]#systemctl enable ntpd</span><br><span class="line">[root@node101 hadoop]#systemctl start ntpd</span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>(2)修改ntp配置文件</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[root@node101 hadoop]# vim /etc/ntp.conf</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>修改内容如下<br>a)修改1</p><p>取消注释,大约在17行(授权192.168.126.0-192.168.126.255网段上的所有机器可以从这台机器上查询和同步时间)</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">#restrict 192.168.126.0 mask 255.255.255.0 nomodify notrap为</span><br><span class="line">restrict 192.168.126.0 mask 255.255.255.0 nomodify notrap</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>b)修改2</p><p>注释掉配置,大约在21-24行(集群在局域网中,不使用其他互联网上的时间)</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">server 0.centos.pool.ntp.org iburst</span><br><span class="line">server 1.centos.pool.ntp.org iburst</span><br><span class="line">server 2.centos.pool.ntp.org iburst</span><br><span class="line">server 3.centos.pool.ntp.org iburst为</span><br><span class="line">#server 0.centos.pool.ntp.org iburst</span><br><span class="line">#server 1.centos.pool.ntp.org iburst</span><br><span class="line">#server 2.centos.pool.ntp.org iburst</span><br><span class="line">#server 3.centos.pool.ntp.org iburst</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>c)添加3</p><p>文件末尾添加(当该节点丢失网络连接,依然可以采用本地时间作为时间服务器为集群中的其他节点提供时间同步)</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">server 127.127.1.0</span><br><span class="line">fudge 127.127.1.0 stratum 10</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>(3)修改/etc/sysconfig/ntpd 文件</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[root@node101 hadoop]# vim /etc/sysconfig/ntpd</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>增加内容如下(让硬件时间与系统时间一起同步)</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">SYNC_HWCLOCK=yes</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>(4)重新启动ntpd服务</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[root@node101 hadoop]# service ntpd status</span><br><span class="line">ntpd 已停</span><br><span class="line">[root@node101 hadoop]# service ntpd start</span><br><span class="line">正在启动 ntpd: [确定]</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>(5)设置ntpd服务开机启动</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[root@node101 hadoop]# chkconfig ntpd on</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h2 id="9-2其他机器配置(必须root用户)"><a href="#9-2其他机器配置(必须root用户)" 
class="headerlink" title="9.2其他机器配置(必须root用户)"></a>9.2其他机器配置(必须root用户)</h2><p>(1)在其他机器配置10分钟与时间服务器同步一次</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[root@node102 hadoop]# crontab -e</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>编写定时任务如下:</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">*/10 * * * * /usr/sbin/ntpdate node101</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>(2)修改任意机器时间</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[root@node102 hadoop]# date -s &quot;2019-9-17 11:11:11&quot;</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>(3)十分钟后查看机器是否与时间服务器同步</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[root@node102 hadoop]# date</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>说明:测试的时候可以将10分钟调整为1分钟,节省时间。</p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
</tags>
</entry>
<entry>
<title>Hadoop分布式环境高可用配置</title>
<link href="/2020/11/07/a77fdbeecf9e.html"/>
<url>/2020/11/07/a77fdbeecf9e.html</url>
<content type="html"><![CDATA[<blockquote><p>前面文章介绍过<a href="https://my.oschina.net/liuyuantao/blog/3107146">Hadoop分布式的配置</a>,但是涉及到高可用,这次使用zookeeper配置Hadoop高可用。</p></blockquote><h1 id="1-环境准备"><a href="#1-环境准备" class="headerlink" title="1.环境准备"></a>1.环境准备</h1><p>1)修改IP<br>2)修改主机名及主机名和IP地址的映射<br>3)关闭防火墙<br>4)ssh免密登录<br>5)创建hadoop用户和用户组<br>6)安装更新安装源、JDK、配置环境变量等</p><h1 id="2-服务器规划"><a href="#2-服务器规划" class="headerlink" title="2.服务器规划"></a>2.服务器规划</h1><table><thead><tr><th>Node1</th><th align="center">Node2</th><th align="right">Node3</th></tr></thead><tbody><tr><td>NameNode</td><td align="center">NameNode</td><td align="right"></td></tr><tr><td>JournalNode</td><td align="center">JournalNode</td><td align="right">JournalNode</td></tr><tr><td>DataNode</td><td align="center">DataNode</td><td align="right">DataNode</td></tr><tr><td>ZK</td><td align="center">ZK</td><td align="right">ZK</td></tr><tr><td>ResourceManager</td><td align="center"></td><td align="right">ResourceManager</td></tr><tr><td>NodeManager</td><td align="center">NodeManager</td><td align="right">NodeManager</td></tr></tbody></table><h1 id="3-配置Zookeeper集群"><a href="#3-配置Zookeeper集群" class="headerlink" title="3.配置Zookeeper集群"></a>3.配置Zookeeper集群</h1><p>参考我的之前的文章<a href="https://my.oschina.net/liuyuantao/blog/3130332">Zookeeper安装和配置说明</a></p><h1 id="4-安装Hadoop"><a href="#4-安装Hadoop" class="headerlink" title="4.安装Hadoop"></a>4.安装Hadoop</h1><p>1)官方下载地址:<a href="http://hadoop.apache.org/">http://hadoop.apache.org/</a><br>2)解压hadoop2.7.2至/usr/local/hadoop2.7<br>3)修改hadoop2.7的所属组和所属者为hadoop</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">chown -R hadoop:hadoop /usr/local/hadoop2.7</span><br></pre></td></tr></table></figure><p>4)配置HADOOP_HOME</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">vim /etc/profile</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="comment">#HADOOP_HOME</span></span><br><span class="line"><span class="built_in">export</span> HADOOP_HOME=/usr/<span class="built_in">local</span>/hadoop2.7</span><br><span class="line"><span class="built_in">export</span> HADOOP_CONF_DIR=<span class="variable">$&#123;HADOOP_HOME&#125;</span>/etc/hadoop</span><br><span class="line"><span class="built_in">export</span> HADOOP_COMMON_LIB_NATIVE_DIR=<span class="variable">$&#123;HADOOP_HOME&#125;</span>/lib/native</span><br><span class="line"><span class="built_in">export</span> PATH=<span class="variable">$HADOOP_HOME</span>/bin:<span class="variable">$HADOOP_HOME</span>/sbin:<span class="variable">$PATH</span></span><br></pre></td></tr></table></figure><h1 id="5-配置Hadoop集群高可用"><a href="#5-配置Hadoop集群高可用" class="headerlink" title="5.配置Hadoop集群高可用"></a>5.配置Hadoop集群高可用</h1><h2 id="5-1配置HDFS集群"><a href="#5-1配置HDFS集群" class="headerlink" title="5.1配置HDFS集群"></a>5.1配置HDFS集群</h2><p>参考文章:<a href="https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html</a></p><p><strong>hadoop-env.sh</strong></p><p>如果是3.x的话可能不一样</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">export JAVA_HOME=/usr/local/jdk1.8.0_221</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>hadoop-site.xml</strong></p><figure class="highlight xml"><table><tr><td class="code"><pre><span 
class="line"><span class="meta">&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;</span></span><br><span class="line"><span class="meta">&lt;?xml-stylesheet type=&quot;text/xsl&quot; href=&quot;configuration.xsl&quot;?&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">configuration</span>&gt;</span></span><br><span class="line"> <span class="comment">&lt;!-- 完全分布式集群名称 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.nameservices<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">value</span>&gt;</span>hadoopCluster<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="comment">&lt;!-- 集群中NameNode节点都有哪些 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.ha.namenodes.hadoopCluster<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>nn1,nn2<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="comment">&lt;!-- nn1的RPC通信地址 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.namenode.rpc-address.hadoopCluster.nn1<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>node1:9000<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">&lt;!-- nn2的RPC通信地址 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.namenode.rpc-address.hadoopCluster.nn2<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>node2:9000<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">&lt;!-- nn1的http通信地址 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.namenode.http-address.hadoopCluster.nn1<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>node1:50070<span class="tag">&lt;/<span 
class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">&lt;!-- nn2的http通信地址 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.namenode.http-address.hadoopCluster.nn2<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>node2:50070<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">&lt;!-- 指定NameNode元数据在JournalNode上的存放位置 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.namenode.shared.edits.dir<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>qjournal://node1:8485;node2:8485;node3:8485/hadoopCluster<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">&lt;!-- 配置隔离机制,即同一时刻只能有一台服务器对外响应 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.ha.fencing.methods<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>sshfence<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">&lt;!-- 使用隔离机制时需要ssh无秘钥登录--&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.ha.fencing.ssh.private-key-files<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>/home/hadoop/.ssh/id_rsa<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">&lt;!-- 声明journalnode服务器存储目录--&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.journalnode.edits.dir<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>/data_disk/hadoop/jn<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span 
class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">&lt;!-- 关闭权限检查--&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.permissions.enable<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>false<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">&lt;!-- 访问代理类:client,hadoopCluster,active配置失败自动切换实现方式--&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.client.failover.proxy.provider.hadoopCluster<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.namenode.name.dir<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>file:///data_disk/hadoop/name<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">description</span>&gt;</span>为了保证元数据的安全一般配置多个不同目录<span class="tag">&lt;/<span class="name">description</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.datanode.data.dir<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>file:///data_disk/hadoop/data<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">description</span>&gt;</span>datanode 的数据存储目录<span class="tag">&lt;/<span class="name">description</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.replication<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>3<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span 
class="tag">&lt;<span class="name">description</span>&gt;</span>HDFS的数据块的副本存储个数,默认是3<span class="tag">&lt;/<span class="name">description</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"><span class="tag">&lt;/<span class="name">configuration</span>&gt;</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>core-site.xml</strong></p><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="tag">&lt;<span class="name">configuration</span>&gt;</span></span><br><span class="line"> <span class="comment">&lt;!-- 指定HDFS中NameNode的地址 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>fs.defaultFS<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>hdfs://hadoopCluster<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">&lt;!-- 指定Hadoop运行时产生文件的存储目录 --&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hadoop.tmp.dir<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>file:///data_disk/hadoop/tmp<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">configuration</span>&gt;</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><p><strong>启动hadoop集群</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">(1)在各个JournalNode节点上,输入以下命令启动journalnode服务</span><br><span class="line">sbin/hadoop-daemon.sh start journalnode</span><br><span class="line">(2)在[nn1]上,对其进行格式化,并启动</span><br><span class="line">bin/hdfs namenode -format</span><br><span class="line">sbin/hadoop-daemon.sh start namenode</span><br><span class="line">(3)在[nn2]上,同步nn1的元数据信息</span><br><span class="line">bin/hdfs namenode -bootstrapStandby</span><br><span class="line">(4)启动[nn2]</span><br><span class="line">sbin/hadoop-daemon.sh start namenode</span><br><span class="line">(5)在[nn1]上,启动所有datanode</span><br><span class="line">sbin/hadoop-daemons.sh start datanode</span><br><span class="line">(6)将[nn1]切换为Active</span><br><span class="line">bin/hdfs haadmin -transitionToActive nn1</span><br><span class="line">(7)查看是否Active</span><br><span class="line">bin/hdfs haadmin -getServiceState nn1</span><br></pre></td></tr></table></figure><p>打开浏览器查看namenode的状态</p><p><img src="https://oscimg.oschina.net/oscnet/8310dbed7d375aa31fd2cd1111458b5e854.png"></p><p><img src="https://oscimg.oschina.net/oscnet/1d1dfc072f5a2e83525b85096444c8d35b6.png"></p><h2 id="5-2配置HDFS自动故障转移"><a href="#5-2配置HDFS自动故障转移" class="headerlink" title="5.2配置HDFS自动故障转移"></a>5.2配置HDFS自动故障转移</h2><p>在hdfs-site.xml中增加</p><figure class="highlight 
xml"><table><tr><td class="code"><pre><span class="line"><span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.ha.automatic-failover.enabled<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">value</span>&gt;</span>true<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>在core-site.xml文件中增加</p><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">name</span>&gt;</span>ha.zookeeper.quorum<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">value</span>&gt;</span>node1:2181,node2:2181,node3:2181<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="5-2-1启动"><a href="#5-2-1启动" class="headerlink" title="5.2.1启动"></a>5.2.1启动</h3><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">(1)关闭所有HDFS服务:</span><br><span class="line">sbin/stop-dfs.sh</span><br><span class="line">(2)启动Zookeeper集群:</span><br><span class="line">bin/zkServer.sh start</span><br><span class="line">(3)初始化HA在Zookeeper中状态:</span><br><span class="line">bin/hdfs zkfc -formatZK</span><br><span class="line">(4)启动HDFS服务:</span><br><span class="line">sbin/start-dfs.sh</span><br><span class="line">(5)在各个NameNode节点上启动DFSZK Failover Controller,先在哪台机器启动,哪个机器的NameNode就是Active NameNode</span><br><span class="line">sbin/hadoop-daemon.sh start zkfc</span><br></pre></td></tr></table></figure><h3 id="5-2-2验证"><a href="#5-2-2验证" class="headerlink" title="5.2.2验证"></a>5.2.2验证</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">(1)将Active NameNode进程<span class="built_in">kill</span></span><br><span class="line"><span class="built_in">kill</span> -9 namenode的进程id</span><br><span class="line">(2)将Active NameNode机器断开网络</span><br><span class="line">service network stop</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>如果kill nn1后nn2没有变成active,可能有以下原因</p><p>(1)ssh免密登录没配置好</p><p>(2)未找到fuster程序,导致无法进行fence,参考<a href="https://blog.csdn.net/w892824196/article/details/90317951">博文</a></p><h2 id="5-3YARN-HA配置"><a href="#5-3YARN-HA配置" class="headerlink" title="5.3YARN-HA配置"></a>5.3YARN-HA配置</h2><p>参考文章:<a href="https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html</a></p><p>yarn-site.xml</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;?xml version=&quot;1.0&quot;?&gt;</span><br><span class="line">&lt;configuration&gt;</span><br><span class="line"></span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.nodemanager.aux-services&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;mapreduce_shuffle&lt;/value&gt;</span><br><span class="line"> 
&lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;!--启用resourcemanager ha--&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.resourcemanager.ha.enabled&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;true&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;!--声明两台resourcemanager的地址--&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.resourcemanager.cluster-id&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;cluster-yarn1&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.resourcemanager.ha.rm-ids&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;rm1,rm2&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.resourcemanager.hostname.rm1&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;node1&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.resourcemanager.hostname.rm2&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;node3&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;!--指定zookeeper集群的地址--&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.resourcemanager.zk-address&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;node1:2181,node2:2181,node3:2181&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;!--启用自动恢复--&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.resourcemanager.recovery.enabled&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;true&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;!--指定resourcemanager的状态信息存储在zookeeper集群--&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.resourcemanager.store.class&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line">&lt;/configuration&gt;</span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="5-3-1启动HDFS"><a href="#5-3-1启动HDFS" class="headerlink" title="5.3.1启动HDFS"></a>5.3.1启动HDFS</h3><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">(1)在各个JournalNode节点上,输入以下命令启动journalnode服务:</span><br><span class="line">sbin/hadoop-daemon.sh start journalnode</span><br><span class="line">(2)在[nn1]上,对其进行格式化,并启动:</span><br><span class="line">bin/hdfs namenode -format</span><br><span class="line">sbin/hadoop-daemon.sh start namenode</span><br><span class="line">(3)在[nn2]上,同步nn1的元数据信息:</span><br><span class="line">bin/hdfs namenode -bootstrapStandby</span><br><span class="line">(4)启动[nn2]:</span><br><span class="line">sbin/hadoop-daemon.sh start namenode</span><br><span class="line">(5)启动所有DataNode</span><br><span 
class="line">sbin/hadoop-daemons.sh start datanode</span><br><span class="line">(6)将[nn1]切换为Active</span><br><span class="line">bin/hdfs haadmin -transitionToActive nn1</span><br></pre></td></tr></table></figure><h3 id="5-3-2启动YARN"><a href="#5-3-2启动YARN" class="headerlink" title="5.3.2启动YARN"></a>5.3.2启动YARN</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">(1)在node1中执行:</span><br><span class="line">sbin/start-yarn.sh</span><br><span class="line">(2)在node3中执行:</span><br><span class="line">sbin/yarn-daemon.sh start resourcemanager</span><br><span class="line">(3)查看服务状态</span><br><span class="line">bin/yarn rmadmin -getServiceState rm1</span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
</tags>
</entry>
<entry>
<title>Hadoop架构原理简介</title>
<link href="/2020/11/07/2481fd16d2ba.html"/>
<url>/2020/11/07/2481fd16d2ba.html</url>
<content type="html"><![CDATA[<h1 id="一、概念"><a href="#一、概念" class="headerlink" title="一、概念"></a><strong>一、概念</strong></h1><p>Hadoop诞生于2006年,是一款支持数据密集型分布式应用并以Apache 2.0许可协议发布的开源软件框架。它支持在商品硬件构建的大型集群上运行的应用程序。Hadoop是根据Google公司发表的MapReduce和Google档案系统的论文自行实作而成。</p><p>Hadoop与Google一样,都是小孩命名的,是一个虚构的名字,没有特别的含义。从计算机专业的角度看,Hadoop是一个分布式系统基础架构,由Apache基金会开发。Hadoop的主要目标是对分布式环境下的“大数据”以一种可靠、高效、可伸缩的方式处理。</p><p>Hadoop框架透明地为应用提供可靠性和数据移动。它实现了名为MapReduce的编程范式:应用程序被分割成许多小部分,而每个部分都能在集群中的任意节点上执行或重新执行。</p><p>Hadoop还提供了分布式文件系统,用以存储所有计算节点的数据,这为整个集群带来了非常高的带宽。MapReduce和分布式文件系统的设计,使得整个框架能够自动处理节点故障。它使应用程序与成千上万的独立计算的电脑和PB级的数据。</p><p><img src="https://oscimg.oschina.net/oscnet/723619a14974403bb87e1981ebf7eeb8.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><h1 id="二、组成"><a href="#二、组成" class="headerlink" title="二、组成"></a><strong>二、组成</strong></h1><p><strong>1.Hadoop的核心组件</strong></p><p><img src="https://oscimg.oschina.net/oscnet/be7b302fb1b74ac2bb2b64ce6ccd0f8a.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p>分析:Hadoop的核心组件分为:HDFS(分布式文件系统)、MapRuduce(分布式运算编程框架)、YARN(运算资源调度系统)</p><p><strong>2.HDFS的文件系统</strong></p><p><img src="https://oscimg.oschina.net/oscnet/7f049e6a2c854bf6acad68073193e01f.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p><strong>HDFS</strong></p><p>1.定义</p><p>整个Hadoop的体系结构主要是通过HDFS(Hadoop分布式文件系统)来实现对分布式存储的底层支持,并通过MR来实现对分布式并行任务处理的程序支持。</p><p>HDFS是Hadoop体系中数据存储管理的基础。它是一个高度容错的系统,能检测和应对硬件故障,用于在低成本的通用硬件上运行。HDFS简化了文件的一致性模型,通过流式数据访问,提供高吞吐量应用程序数据访问功能,适合带有大型数据集的应用程序。</p><p>2.组成</p><p>HDFS采用主从(Master/Slave)结构模型,一个HDFS集群是由一个NameNode和若干个DataNode组成的。NameNode作为主服务器,管理文件系统命名空间和客户端对文件的访问操作。DataNode管理存储的数据。HDFS支持文件形式的数据。</p><p>从内部来看,文件被分成若干个数据块,这若干个数据块存放在一组DataNode上。NameNode执行文件系统的命名空间,如打开、关闭、重命名文件或目录等,也负责数据块到具体DataNode的映射。DataNode负责处理文件系统客户端的文件读写,并在NameNode的统一调度下进行数据库的创建、删除和复制工作。NameNode是所有HDFS元数据的管理者,用户数据永远不会经过NameNode。</p><p><img src="https://oscimg.oschina.net/oscnet/9255a610825146fd8351ee116af93f73.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p>分析:NameNode是管理者,DataNode是文件存储者、Client是需要获取分布式文件系统的应用程序。</p><p><strong>MapReduce</strong></p><p>1.定义</p><p>Hadoop MapReduce是google MapReduce 克隆版。</p><p>MapReduce是一种计算模型,用以进行大数据量的计算。其中Map对数据集上的独立元素进行指定的操作,生成键-值对形式中间结果。Reduce则对中间结果中相同“键”的所有“值”进行规约,以得到最终结果。MapReduce这样的功能划分,非常适合在大量计算机组成的分布式并行环境里进行数据处理。</p><p>2.组成</p><p><img src="https://oscimg.oschina.net/oscnet/7a134f3d35604a79867e6b85aa1561f1.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p>分析:</p><p>(1)JobTracker</p><p>JobTracker叫作业跟踪器,运行到主节点(Namenode)上的一个很重要的进程,是MapReduce体系的调度器。用于处理作业(用户提交的代码)的后台程序,决定有哪些文件参与作业的处理,然后把作业切割成为一个个的小task,并把它们分配到所需要的数据所在的子节点。</p><p>Hadoop的原则就是就近运行,数据和程序要在同一个物理节点里,数据在哪里,程序就跑去哪里运行。这个工作是JobTracker做的,监控task,还会重启失败的task(于不同的节点),每个集群只有唯一一个JobTracker,类似单点的NameNode,位于Master节点</p><p>(2)TaskTracker</p><p>TaskTracker叫任务跟踪器,MapReduce体系的最后一个后台进程,位于每个slave节点上,与datanode结合(代码与数据一起的原则),管理各自节点上的task(由jobtracker分配),</p><p>每个节点只有一个tasktracker,但一个tasktracker可以启动多个JVM,运行Map Task和Reduce Task;并与JobTracker交互,汇报任务状态,</p><p>Map Task:解析每条数据记录,传递给用户编写的map(),并执行,将输出结果写入本地磁盘(如果为map-only作业,直接写入HDFS)。</p><p>Reducer Task:从Map Task的执行结果中,远程读取输入数据,对数据进行排序,将数据按照分组传递给用户编写的reduce函数执行。</p><p><strong>Hive</strong></p><p>1.定义</p><p>Hive是基于Hadoop的一个数据仓库工具,可以将结构化的数据文件映射为一张数据库表,并提供完整的sql查询功能,可以将sql语句转换为MapReduce任务进行运行。</p><p>Hive是建立在 Hadoop 上的数据仓库基础构架。它提供了一系列的工具,可以用来进行数据提取转化加载(ETL),这是一种可以存储、查询和分析存储在 Hadoop 中的大规模数据的机制。</p><p>Hive 定义了简单的类 SQL 查询语言,称为 HQL,它允许熟悉 SQL 的用户查询数据。同时,这个语言也允许熟悉 MapReduce 开发者的开发自定义的 mapper 和 reducer 来处理内建的 mapper 和 reducer 无法完成的复杂的分析工作。</p><p>2.组成</p><p><img 
src="https://oscimg.oschina.net/oscnet/226331e13a734a3a95c33d205400fec3.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p>分析:Hive架构包括:CLI(Command Line Interface)、JDBC/ODBC、Thrift Server、WEB GUI、Metastore和Driver(Complier、Optimizer和Executor),这些组件分为两大类:服务端组件和客户端组件</p><p>3.客户端与服务端组件</p><p>(1)客户端组件:</p><p>CLI:Command Line Interface,命令行接口。</p><p>Thrift客户端:上面的架构图里没有写上Thrift客户端,但是Hive架构的许多客户端接口是建立在Thrift客户端之上,包括JDBC和ODBC接口。</p><p>WEBGUI:Hive客户端提供了一种通过网页的方式访问Hive所提供的服务。这个接口对应Hive的HWI组件(Hive Web Interface),使用前要启动HWI服务。</p><p>(2)服务端组件:</p><p>Driver组件:该组件包括Complier、Optimizer和Executor,它的作用是将HiveQL(类SQL)语句进行解析、编译优化,生成执行计划,然后调用底层的MapReduce计算框架</p><p>Metastore组件:元数据服务组件,这个组件存储Hive的元数据,Hive的元数据存储在关系数据库里,Hive支持的关系数据库有Derby和Mysql。元数据对于Hive十分重要,因此Hive支持把Metastore服务独立出来,安装到远程的服务器集群里,从而解耦Hive服务和Metastore服务,保证Hive运行的健壮性;</p><p>Thrift服务:Thrift是Facebook开发的一个软件框架,它用来进行可扩展且跨语言的服务的开发,Hive集成了该服务,能让不同的编程语言调用Hive的接口。</p><p>4.Hive与传统数据库的异同</p><p>(1)查询语言</p><p>由于 SQL 被广泛的应用在数据仓库中,因此专门针对Hive的特性设计了类SQL的查询语言HQL。熟悉SQL开发的开发者可以很方便的使用Hive进行开发。</p><p>(2)数据存储位置</p><p>Hive是建立在Hadoop之上的,所有Hive的数据都是存储在HDFS中的。而数据库则可以将数据保存在块设备或者本地文件系统中。</p><p>(3)数据格式</p><p>Hive中没有定义专门的数据格式,数据格式可以由用户指定,用户定义数据格式需要指定三个属性:列分隔符(通常为空格、”\t”、”\\x001″)、行分隔符(”\n”)以及读取文件数据的方法(Hive中默认有三个文件格式TextFile,SequenceFile以及RCFile)。</p><p>(4)数据更新</p><p>由于Hive是针对数据仓库应用设计的,而数据仓库的内容是读多写少的。因此,Hive中不支持</p><p>对数据的改写和添加,所有的数据都是在加载的时候中确定好的。而数据库中的数据通常是需要经常进行修改的,因此可以使用INSERT INTO … VALUES添加数据,使用UPDATE … SET修改数据。</p><p>(5)索引</p><p>Hive在加载数据的过程中不会对数据进行任何处理,甚至不会对数据进行扫描,因此也没有对数据中的某些Key建立索引。Hive要访问数据中满足条件的特定值时,需要暴力扫描整个数据,因此访问延迟较高。由于MapReduce的引入, Hive可以并行访问数据,因此即使没有索引,对于大数据量的访问,Hive仍然可以体现出优势。数据库中,通常会针对一个或者几个列建立索引,因此对于少量的特定条件的数据的访问,数据库可以有很高的效率,较低的延迟。由于数据的访问延迟较高,决定了Hive不适合在线数据查询。</p><p>(6)执行</p><p>Hive中大多数查询的执行是通过Hadoop提供的MapReduce来实现的(类似select * from tbl的查询不需要MapReduce)。而数据库通常有自己的执行引擎。</p><p>(7)执行延迟</p><p>Hive在查询数据的时候,由于没有索引,需要扫描整个表,因此延迟较高。另外一个导致Hive执行延迟高的因素是MapReduce框架。由于MapReduce本身具有较高的延迟,因此在利用MapReduce执行Hive查询时,也会有较高的延迟。相对的,数据库的执行延迟较低。当然,这个低是有条件的,即数据规模较小,当数据规模大到超过数据库的处理能力的时候,Hive的并行计算显然能体现出优势。</p><p>(8)可扩展性</p><p>由于Hive是建立在Hadoop之上的,因此Hive的可扩展性是和Hadoop的可扩展性是一致的(世界上最大的Hadoop集群在Yahoo!,2009年的规模在4000台节点左右)。而数据库由于ACID语义的严格限制,扩展行非常有限。目前最先进的并行数据库Oracle在理论上的扩展能力也只有100台左右。</p><p>(9)数据规模</p><p>由于Hive建立在集群上并可以利用MapReduce进行并行计算,因此可以支持很大规模的数据;对应的,数据库可以支持的数据规模较小。</p><p><strong>Hbase</strong></p><p>1.定义</p><p>HBase – Hadoop Database,是一个高可靠性、高性能、面向列、可伸缩的分布式存储系统,利用HBase技术可在廉价PC Server上搭建起大规模结构化存储集群。</p><p>HBase是Google Bigtable的开源实现,类似Google Bigtable利用GFS作为其文件存储系统,HBase利用Hadoop HDFS作为其文件存储系统;</p><p>Google运行MapReduce来处理Bigtable中的海量数据,HBase同样利用Hadoop MapReduce来处理HBase中的海量数据;</p><p>Google Bigtable利用 Chubby作为协同服务,HBase利用Zookeeper作为协同服务。</p><p>2.组成</p><p><img src="http://p1.pstatp.com/large/pgc-image/ba9bef1377ce41baa6f2a4a668a73d3c" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p><img src="https://oscimg.oschina.net/oscnet/249273704f7d40c59fbfa155a187459c.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p>分析:从上图可以看出:Hbase主要由Client、Zookeeper、HMaster和HRegionServer组成,由Hstore作存储系统。</p><ul><li> Client</li></ul><p>HBase Client使用HBase的RPC机制与HMaster和HRegionServer进行通信,对于管理类操作,Client与 HMaster进行RPC;对于数据读写类操作,Client与HRegionServer进行RPC</p><ul><li> Zookeeper</li></ul><p>Zookeeper Quorum 中除了存储了 -ROOT- 表的地址和 HMaster 的地址,HRegionServer 也会把自己以 Ephemeral 方式注册到 Zookeeper 中,使得 HMaster 可以随时感知到各个HRegionServer 的健康状态。</p><ul><li> HMaster</li></ul><p>HMaster 没有单点问题,HBase 中可以启动多个 HMaster ,通过 Zookeeper 的 Master Election 机制保证总有一个 Master 运行,HMaster 在功能上主要负责 Table和Region的管理工作:</p><ul><li> 管理用户对 Table 的增、删、改、查操作</li><li> 管理 HRegionServer 的负载均衡,调整 
Region 分布</li><li> 在 Region Split 后,负责新 Region 的分配</li><li> 在 HRegionServer 停机后,负责失效 HRegionServer 上的 Regions 迁移</li></ul><p>HStore存储是HBase存储的核心了,其中由两部分组成,一部分是MemStore,一部分是StoreFiles。</p><p>MemStore是Sorted Memory Buffer,用户写入的数据首先会放入MemStore,当MemStore满了以后会Flush成一个StoreFile(底层实现是HFile), 当StoreFile文件数量增长到一定阈值,会触发Compact合并操作,将多个 StoreFiles 合并成一个 StoreFile,合并过程中会进行版本合并和数据删除。</p><p>因此可以看出HBase其实只有增加数据,所有的更新和删除操作都是在后续的 compact 过程中进行的,这使得用户的写操作只要进入内存中就可以立即返回,保证了 HBase I/O 的高性能。</p><p>当StoreFiles Compact后,会逐步形成越来越大的StoreFile,当单个 StoreFile 大小超过一定阈值后,会触发Split操作,同时把当前 Region Split成2个Region,父 Region会下线,新Split出的2个孩子Region会被HMaster分配到相应的HRegionServer 上,使得原先1个Region的压力得以分流到2个Region上。</p><h1 id="三、Hadoop的应用实例"><a href="#三、Hadoop的应用实例" class="headerlink" title="三、Hadoop的应用实例"></a><strong>三、Hadoop的应用实例</strong></h1><p><strong>1.回顾Hadoop的整体架构</strong></p><p><img src="https://oscimg.oschina.net/oscnet/823fb4ac566d49a3ac6ae4d72bd229f8.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p><strong>2.Hadoop的应用——流量查询系统</strong></p><p>(1)流量查询系统总体框架</p><p><img src="https://oscimg.oschina.net/oscnet/007b6e6f21b24a09a8e1adc420ddb58a.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p>(2)流量查询系统总体流程</p><p><img src="https://oscimg.oschina.net/oscnet/3e99c621-72a9-407a-8bc7-740e218a7471.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p>(3)流量查询系统数据预处理功能框架</p><p><img src="https://oscimg.oschina.net/oscnet/1d5e1a68-ab36-4100-80f6-fad86f05dffa.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p>(4)流量查询系统数据预处理流程</p><p><img src="https://oscimg.oschina.net/oscnet/c72eee8c174f4cee90c5edc8e5ab39e7.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p>(5)流量查询NoSQL数据库功能框架</p><p><img src="https://oscimg.oschina.net/oscnet/480fe6bf79ca418983acb4c042ba4f9e.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p>(6)流量查询服务功能框架</p><p><img src="https://oscimg.oschina.net/oscnet/baf61004-fc33-4309-aa4d-017005f8c001.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p><p>(7)实时流计算数据处理流程图</p><p><img src="https://oscimg.oschina.net/oscnet/64126a8b-6769-4c96-827b-601898e6f67a.jpg" alt="10分钟零基础就可搞懂的Hadoop架构原理,阿里架构师详解"></p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
</tags>
</entry>
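补充一个最小示例(仅作示意):假设集群上已安装 Hive CLI,且存在一张示例表 t_user(表名为假设),可以直观体会上文"HQL 由 Driver 编译成 MapReduce 执行、类似 select * 的查询不需要 MapReduce"这一点。

```bash
# 仅作示意:t_user 为假设的表名
# 简单的 select * 查询直接读取 HDFS 上的文件,不会启动 MapReduce 作业
hive -e "SELECT * FROM t_user LIMIT 10;"

# 带聚合的查询会被 Driver 编译成 MapReduce 作业提交到集群执行,
# 可以在 YARN 的 Web 页面上看到对应的 Application
hive -e "SELECT count(*) FROM t_user;"
```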
<entry>
<title>Hadoop集群常用脚本</title>
<link href="/2020/11/07/2bd2be228806.html"/>
<url>/2020/11/07/2bd2be228806.html</url>
<content type="html"><![CDATA[<h1 id="hadoop群起脚本"><a href="#hadoop群起脚本" class="headerlink" title="hadoop群起脚本"></a>hadoop群起脚本</h1><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">#!/bin/bash</span><br><span class="line"></span><br><span class="line">if [ $# -lt 1 ]</span><br><span class="line">then</span><br><span class="line"> echo &quot;No Args Input...&quot;</span><br><span class="line"> exit ;</span><br><span class="line">fi</span><br><span class="line"></span><br><span class="line">case $1 in</span><br><span class="line">&quot;start&quot;)</span><br><span class="line"> echo &quot; =================== 启动 hadoop集群 ===================&quot;</span><br><span class="line"></span><br><span class="line"> echo &quot; --------------- 启动 hdfs ---------------&quot;</span><br><span class="line"> ssh node101 &quot;/opt/module/hadoop3.1.4/sbin/start-dfs.sh&quot;</span><br><span class="line"> echo &quot; --------------- 启动 yarn ---------------&quot;</span><br><span class="line"> ssh node102 &quot;/opt/module/hadoop3.1.4/sbin/start-yarn.sh&quot;</span><br><span class="line"> echo &quot; --------------- 启动 historyserver ---------------&quot;</span><br><span class="line"> ssh node101 &quot;/opt/module/hadoop3.1.4/bin/mapred --daemon start historyserver&quot;</span><br><span class="line">;;</span><br><span class="line">&quot;stop&quot;)</span><br><span class="line"> echo &quot; =================== 关闭 hadoop集群 ===================&quot;</span><br><span class="line"></span><br><span class="line"> echo &quot; --------------- 关闭 historyserver ---------------&quot;</span><br><span class="line"> ssh node101 &quot;/opt/module/hadoop3.1.4/bin/mapred --daemon stop historyserver&quot;</span><br><span class="line"> echo &quot; --------------- 关闭 yarn ---------------&quot;</span><br><span class="line"> ssh node102 &quot;/opt/module/hadoop3.1.4/sbin/stop-yarn.sh&quot;</span><br><span class="line"> echo &quot; --------------- 关闭 hdfs ---------------&quot;</span><br><span class="line"> ssh node101 &quot;/opt/module/hadoop3.1.4/sbin/stop-dfs.sh&quot;</span><br><span class="line">;;</span><br><span class="line">*)</span><br><span class="line"> echo &quot;Input Args Error...&quot;</span><br><span class="line">;;</span><br><span class="line">esac</span><br></pre></td></tr></table></figure><h1 id="查看所有机器运行的情况"><a href="#查看所有机器运行的情况" class="headerlink" title="查看所有机器运行的情况"></a>查看所有机器运行的情况</h1><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">#!/bin/bash</span><br><span class="line"></span><br><span class="line">for host in node101 node102 node103</span><br><span class="line">do</span><br><span class="line"> echo =============== $host ===============</span><br><span class="line"> ssh $host jps </span><br><span class="line">done</span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> 脚本 </tag>
</tags>
</entry>
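下面是这两个脚本的一种使用方式(假设分别保存为 ~/bin/myhadoop.sh 和 ~/bin/jpsall,文件名与路径均为假设,且 ~/bin 已加入 PATH):

```bash
# 赋予可执行权限(脚本名为假设)
chmod u+x ~/bin/myhadoop.sh ~/bin/jpsall

# 一键启动 HDFS、YARN 和历史服务器
myhadoop.sh start

# 查看三台机器上的 Java 进程,确认 NameNode/DataNode/ResourceManager 等是否正常
jpsall

# 一键停止集群
myhadoop.sh stop
```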
<entry>
<title>MapTask工作机制</title>
<link href="/2020/11/07/af7eb1570222.html"/>
<url>/2020/11/07/af7eb1570222.html</url>
<content type="html"><![CDATA[<p><img src="https://oscimg.oschina.net/oscnet/up-34c625af0d544d0ceb3d92511a1e0c918bc.png"></p><p><strong>(1)Read阶段:</strong>MapTask通过InputFormat获得的RecordReader,从输入InputSplit中解析出一个个key/value。</p><p><strong>(2)Map****阶段:</strong>该节点主要是将解析出的key/value交给用户编写map()函数处理,并产生一系列新的key/value。</p><p><strong>(3)Collect****收集阶段:</strong>在用户编写map()函数中,当数据处理完成后,一般会调用OutputCollector.collect()输出结果。在该函数内部,它会将生成的key/value分区(调用Partitioner),并写入一个环形内存缓冲区中。</p><p><strong>(4)Spill****阶段:</strong>即“溢写”,当环形缓冲区满后,MapReduce会将数据写到本地磁盘上,生成一个临时文件。需要注意的是,将数据写入本地磁盘之前,先要对数据进行一次本地排序,并在必要时对数据进行合并、压缩等操作。</p><p><strong>溢写阶段详情:</strong></p><pre><code> 步骤1:利用快速排序算法对缓存区内的数据进行排序,排序方式是,先按照分区编号Partition进行排序,然后按照key进行排序。这样,经过排序后,数据以分区为单位聚集在一起,且同一分区内所有数据按照key有序。 步骤2:按照分区编号由小到大依次将每个分区中的数据写入任务工作目录下的临时文件output/spillN.out(N表示当前溢写次数)中。如果用户设置了Combiner,则写入文件之前,对每个分区中的数据进行一次聚集操作。 步骤3:将分区数据的元信息写到内存索引数据结构SpillRecord中,其中每个分区的元信息包括在临时文件中的偏移量、压缩前数据大小和压缩后数据大小。如果当前内存索引大小超过1MB,则将内存索引写到文件output/spillN.out.index中。</code></pre><p><strong>(5)Merge****阶段:</strong>当所有数据处理完成后,MapTask对所有临时文件进行一次合并,以确保最终只会生成一个数据文件。</p><p>当所有数据处理完后,MapTask会将所有临时文件合并成一个大文件,并保存到文件output/file.out中,同时生成相应的索引文件output/file.out.index。</p><p> 在进行文件合并过程中,MapTask以分区为单位进行合并。对于某个分区,它将采用多轮递归合并的方式。每轮合并mapreduce.task.io.sort.factor(默认10)个文件,并将产生的文件重新加入待合并列表中,对文件排序后,重复以上过程,直到最终得到一个大文件。</p><p> 让每个MapTask最终只生成一个数据文件,可避免同时打开大量文件和同时读取大量小文件产生的随机读取带来的开销。</p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> Yarn </tag>
</tags>
</entry>
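围绕溢写和归并,常用的调优入口就是上文提到的环形缓冲区大小、溢写阈值和一次归并的文件数。下面用官方 examples 包做一个调参示意(参数取值仅为示例,需结合 MapTask 内存实际调整):

```bash
# 仅作示意:调大环形缓冲区并提高一次归并的溢写文件数,减少溢写和归并次数
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.4.jar wordcount \
  -D mapreduce.task.io.sort.mb=200 \
  -D mapreduce.map.sort.spill.percent=0.90 \
  -D mapreduce.task.io.sort.factor=20 \
  /input /output_sort_tuned
```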
<entry>
<title>ReduceTask工作机制</title>
<link href="/2020/11/07/ecfe110a7276.html"/>
<url>/2020/11/07/ecfe110a7276.html</url>
<content type="html"><![CDATA[<p><img src="https://oscimg.oschina.net/oscnet/up-f1c3a0b81ab5e3b0a7b61478606bcc7be44.png"></p><p><strong>(1)Copy阶段:</strong>ReduceTask从各个MapTask上远程拷贝一片数据,并针对某一片数据,如果其大小超过一定阈值,则写到磁盘上,否则直接放到内存中。</p><p><strong>(2)Sort****阶段:</strong>在远程拷贝数据的同时,ReduceTask启动了两个后台线程对内存和磁盘上的文件进行合并,以防止内存使用过多或磁盘上文件过多。按照MapReduce语义,用户编写reduce()函数输入数据是按key进行聚集的一组数据。为了将key相同的数据聚在一起,Hadoop采用了基于排序的策略。由于各个MapTask已经实现对自己的处理结果进行了局部排序,因此,ReduceTask只需对所有数据进行一次归并排序即可。</p><p><strong>(3)Reduce阶段:</strong>reduce()函数将计算结果写到HDFS上。</p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> Yarn </tag>
</tags>
</entry>
<entry>
<title>Yarn工作机制</title>
<link href="/2020/11/07/947729c8efa7.html"/>
<url>/2020/11/07/947729c8efa7.html</url>
<content type="html"><![CDATA[<p><img src="https://oscimg.oschina.net/oscnet/up-70c2f401ba4415958505e78cca120abc427.png"></p><p>(1)MR程序提交到客户端所在的节点。<br>(2)YarnRunner向ResourceManager申请一个Application。<br>(3)RM将该应用程序的资源路径返回给YarnRunner。<br>(4)该程序将运行所需资源提交到HDFS上。<br>(5)程序资源提交完毕后,申请运行mrAppMaster。<br>(6)RM将用户的请求初始化成一个Task。<br>(7)其中一个NodeManager领取到Task任务。<br>(8)该NodeManager创建容器Container,并产生MRAppmaster。<br>(9)Container从HDFS上拷贝资源到本地。<br>(10)MRAppmaster向RM 申请运行MapTask资源。<br>(11)RM将运行MapTask任务分配给另外两个NodeManager,另两个NodeManager分别领取任务并创建容器。<br>(12)MR向两个接收到任务的NodeManager发送程序启动脚本,这两个NodeManager分别启动MapTask,MapTask对数据分区排序。<br>(13)MrAppMaster等待所有MapTask运行完毕后,向RM申请容器,运行ReduceTask。<br>(14)ReduceTask向MapTask获取相应分区的数据。<br>(15)程序运行完毕后,MR会向RM申请注销自己。</p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Yarn </tag>
</tags>
</entry>
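作业提交后,可以用下面的命令观察上述流程中 Application、NodeManager 和日志的状态(applicationId 需替换为实际值,查看聚合日志需已开启日志聚合功能):

```bash
# 查看集群上正在运行的应用,能看到 MRAppMaster 对应的 Application
yarn application -list

# 查看各 NodeManager 节点及其资源使用情况
yarn node -list -all

# 作业结束后查看聚合日志(application_xxx 为占位符)
yarn logs -applicationId application_xxx
```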
<entry>
<title>作业提交过程之HDFS</title>
<link href="/2020/11/07/8bc5bf1e47da.html"/>
<url>/2020/11/07/8bc5bf1e47da.html</url>
<content type="html"><![CDATA[<p><img src="https://oscimg.oschina.net/oscnet/up-5e59cb5a26ed324a28f500d70fe4e642254.png"></p><h1 id="作业提交全过程详解"><a href="#作业提交全过程详解" class="headerlink" title="作业提交全过程详解"></a>作业提交全过程详解</h1><h2 id="(1)作业提交"><a href="#(1)作业提交" class="headerlink" title="(1)作业提交"></a>(1)作业提交</h2><p>第1步:Client调用job.waitForCompletion方法,向整个集群提交MapReduce作业。<br>第2步:Client向RM申请一个作业id。<br>第3步:RM给Client返回该job资源的提交路径和作业id。<br>第4步:Client提交jar包、切片信息和配置文件到指定的资源提交路径。<br>第5步:Client提交完资源后,向RM申请运行MrAppMaster。</p><h2 id="(2)作业初始化"><a href="#(2)作业初始化" class="headerlink" title="(2)作业初始化"></a>(2)作业初始化</h2><p>第6步:当RM收到Client的请求后,将该job添加到容量调度器中。<br>第7步:某一个空闲的NM领取到该Job。<br>第8步:该NM创建Container,并产生MRAppmaster。<br>第9步:下载Client提交的资源到本地。</p><h2 id="(3)任务分配"><a href="#(3)任务分配" class="headerlink" title="(3)任务分配"></a>(3)任务分配</h2><p>第10步:MrAppMaster向RM申请运行多个MapTask任务资源。<br>第11步:RM将运行MapTask任务分配给另外两个NodeManager,另两个NodeManager分别领取任务并创建容器。</p><h2 id="(4)任务运行"><a href="#(4)任务运行" class="headerlink" title="(4)任务运行"></a>(4)任务运行</h2><p>第12步:MR向两个接收到任务的NodeManager发送程序启动脚本,这两个NodeManager分别启动MapTask,MapTask对数据分区排序。<br>第13步:MrAppMaster等待所有MapTask运行完毕后,向RM申请容器,运行ReduceTask。<br>第14步:ReduceTask向MapTask获取相应分区的数据。<br>第15步:程序运行完毕后,MR会向RM申请注销自己。</p><h2 id="(5)进度和状态更新"><a href="#(5)进度和状态更新" class="headerlink" title="(5)进度和状态更新"></a>(5)进度和状态更新</h2><p>YARN中的任务将其进度和状态(包括counter)返回给应用管理器, 客户端每秒(通过mapreduce.client.progressmonitor.pollinterval设置)向应用管理器请求进度更新, 展示给用户。</p><h2 id="(6)作业完成"><a href="#(6)作业完成" class="headerlink" title="(6)作业完成"></a>(6)作业完成</h2><p>除了向应用管理器请求作业进度外, 客户端每5秒都会通过调用waitForCompletion()来检查作业是否完成。时间间隔可以通过mapreduce.client.completion.pollinterval来设置。作业完成之后, 应用管理器和Container会清理工作状态。作业的信息会被作业历史服务器存储以备之后用户核查。</p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> HDFS </tag>
<tag> Yarn </tag>
</tags>
</entry>
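第4步中提交的 jar 包、切片信息和配置文件,默认放在 HDFS 的 staging 目录下,作业运行期间可以直接查看(以下路径为默认值,若修改过 yarn.app.mapreduce.am.staging-dir 则以实际配置为准):

```bash
# 作业运行期间查看提交到 HDFS 的资源(job.jar、job.split、job.xml 等)
# /tmp/hadoop-yarn/staging 为默认 staging 根目录,$USER 为提交作业的用户
hdfs dfs -ls -R /tmp/hadoop-yarn/staging/$USER/.staging
```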
<entry>
<title>MapReduce开发总结</title>
<link href="/2020/11/07/5add4ff74271.html"/>
<url>/2020/11/07/5add4ff74271.html</url>
<content type="html"><![CDATA[<h1 id="1-)输入数据接口:InputFormat"><a href="#1-)输入数据接口:InputFormat" class="headerlink" title="1****)输入数据接口:InputFormat"></a><strong>1****)输入数据接口:InputFormat</strong></h1><p>(1)默认使用的实现类是:TextInputFormat</p><p>(2)TextInputFormat的功能逻辑是:一次读一行文本,然后将该行的起始偏移量作为key,行内容作为value返回。</p><p>(3)CombineTextInputFormat可以把多个小文件合并成一个切片处理,提高处理效率。</p><h1 id="2-)逻辑处理接口:Mapper"><a href="#2-)逻辑处理接口:Mapper" class="headerlink" title="2****)逻辑处理接口:Mapper"></a><strong>2****)逻辑处理接口:Mapper</strong></h1><p>用户根据业务需求实现其中三个方法:map() setup() cleanup ()</p><h1 id="3)Partitioner分区"><a href="#3)Partitioner分区" class="headerlink" title="3)Partitioner分区"></a><strong>3<strong><strong>)Partitioner</strong></strong>分区</strong></h1><p>(1)有默认实现 HashPartitioner,逻辑是根据key的哈希值和numReduces来返回一个分区号;key.hashCode()&amp;Integer.MAXVALUE % numReduces</p><p>(2)如果业务上有特别的需求,可以自定义分区。</p><h1 id="4)Comparable排序"><a href="#4)Comparable排序" class="headerlink" title="4)Comparable排序"></a><strong>4<strong><strong>)Comparable</strong></strong>排序</strong></h1><p>(1)当我们用自定义的对象作为key来输出时,就必须要实现WritableComparable接口,重写其中的compareTo()方法。</p><p>(2)部分排序:对最终输出的每一个文件进行内部排序。</p><p>(3)全排序:对所有数据进行排序,通常只有一个Reduce。</p><p>(4)二次排序:排序的条件有两个。</p><h1 id="5)Combiner合并"><a href="#5)Combiner合并" class="headerlink" title="5)Combiner合并"></a><strong>5<strong><strong>)Combiner</strong></strong>合并</strong></h1><p>Combiner合并可以提高程序执行效率,减少IO传输。但是使用时必须不能影响原有的业务处理结果。</p><h1 id="6-)逻辑处理接口:Reducer"><a href="#6-)逻辑处理接口:Reducer" class="headerlink" title="6****)逻辑处理接口:Reducer"></a><strong>6****)逻辑处理接口:Reducer</strong></h1><p>用户根据业务需求实现其中三个方法:reduce() setup() cleanup ()</p><h1 id="7-)输出数据接口:OutputFormat"><a href="#7-)输出数据接口:OutputFormat" class="headerlink" title="7****)输出数据接口:OutputFormat"></a><strong>7****)输出数据接口:OutputFormat</strong></h1><p>(1)默认实现类是TextOutputFormat,功能逻辑是:将每一个KV对,向目标文本文件输出一行。</p><p>(2)用户还可以自定义OutputFormat。</p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> Yarn </tag>
</tags>
</entry>
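以分区和 ReduceTask 个数为例,可以用 examples 包直接验证"ReduceTask 个数决定输出文件个数、每个输出文件内部按 key 有序(部分排序)"(输出路径为示例):

```bash
# 仅作示意:指定 2 个 ReduceTask,默认 HashPartitioner 按 key 的哈希值分成 2 个分区
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.4.jar wordcount \
  -D mapreduce.job.reduces=2 \
  /input /output_two_reducers

# 输出目录下会出现 part-r-00000 和 part-r-00001 两个文件,各自内部按 key 有序
hadoop fs -ls /output_two_reducers
```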
<entry>
<title>Hadoop数据压缩使用介绍</title>
<link href="/2020/11/07/bc127bac6366.html"/>
<url>/2020/11/07/bc127bac6366.html</url>
<content type="html"><![CDATA[<h1 id="一、压缩原则"><a href="#一、压缩原则" class="headerlink" title="一、压缩原则"></a><strong>一、压缩原则</strong></h1><p>(1)运算密集型的Job,少用压缩</p><p>(2)IO密集型的Job,多用压缩</p><h1 id="二、压缩算法比较"><a href="#二、压缩算法比较" class="headerlink" title="二、压缩算法比较"></a>二、压缩算法比较</h1><p><img src="https://oscimg.oschina.net/oscnet/up-86ab6bc8ecce66e39bcd1750701b1be88d7.png"></p><p>三、压缩位置选择</p><p><img src="https://oscimg.oschina.net/oscnet/up-f1ae1539eaceaa70955891089aa6900f66a.png"></p><p>四、压缩参数配置</p><p>1)为了支持多种压缩/解压缩算法,Hadoop引入了编码/解码器</p><p><img src="https://oscimg.oschina.net/oscnet/up-ea10476ea79c42bea1731820bbf3fed587e.png"></p><p>2)要在Hadoop中启用压缩,可以配置如下参数</p><p><img src="https://oscimg.oschina.net/oscnet/up-2f3183ef038d9eea6a525f9653c3695b501.png"></p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
</tags>
</entry>
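启用压缩前,可以先确认本机 Hadoop 对各压缩编解码器的原生支持情况;下面再以 wordcount 为例演示开启 Map 输出端压缩(Snappy 需要本机原生库支持,参数仅为示例):

```bash
# 查看原生库及 zlib/snappy/lz4 等编解码器的支持情况
hadoop checknative -a

# 仅作示意:开启 MapTask 输出压缩,减少 shuffle 阶段的磁盘和网络 IO
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.4.jar wordcount \
  -D mapreduce.map.output.compress=true \
  -D mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
  /input /output_compress
```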
<entry>
<title>NameNode和SecondaryNameNode的工作机制</title>
<link href="/2020/11/07/47c26d75ab86.html"/>
<url>/2020/11/07/47c26d75ab86.html</url>
<content type="html"><![CDATA[<p><img src="https://oscimg.oschina.net/oscnet/up-e9146688167ee453fc810bd2ad75aa10659.png"></p><h1 id="1)第一阶段:NameNode启动"><a href="#1)第一阶段:NameNode启动" class="headerlink" title="1)第一阶段:NameNode启动"></a><strong>1)第一阶段:NameNode启动</strong></h1><p> (1)第一次启动NameNode格式化后,创建Fsimage和Edits文件。如果不是第一次启动,直接加载编辑日志和镜像文件到内存。</p><p>(2)客户端对元数据进行增删改的请求。</p><p>(3)NameNode记录操作日志,更新滚动日志。</p><p>(4)NameNode在内存中对元数据进行增删改。</p><h1 id="2)第二阶段:Secondary-NameNode工作"><a href="#2)第二阶段:Secondary-NameNode工作" class="headerlink" title="2)第二阶段:Secondary NameNode工作"></a><strong>2)第二阶段:Secondary NameNode工作</strong></h1><p>(1)Secondary NameNode询问NameNode是否需要CheckPoint。直接带回NameNode是否检查结果。</p><p>(2)Secondary NameNode请求执行CheckPoint。</p><p>(3)NameNode滚动正在写的Edits日志。</p><p>(4)将滚动前的编辑日志和镜像文件拷贝到Secondary NameNode。</p><p>(5)Secondary NameNode加载编辑日志和镜像文件到内存,并合并。</p><p>(6)生成新的镜像文件fsimage.chkpoint。</p><p>(7)拷贝fsimage.chkpoint到NameNode。</p><p>(8)NameNode将fsimage.chkpoint重新命名成fsimage。</p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> HDFS </tag>
</tags>
</entry>
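Fsimage 和 Edits 文件位于 NameNode 的元数据目录(dfs.namenode.name.dir)的 current 子目录下,可以用自带的离线查看工具验证 checkpoint 的效果(以下目录和文件名均为示例,需按实际环境替换):

```bash
# 进入 NameNode 元数据目录(路径为示例)
cd /opt/module/hadoop-3.1.4/data/dfs/name/current

# 用离线镜像查看器将 fsimage 导出为 XML(文件名为示例)
hdfs oiv -p XML -i fsimage_0000000000000000025 -o /tmp/fsimage.xml

# 用离线编辑日志查看器将 edits 导出为 XML(文件名为示例)
hdfs oev -p XML -i edits_0000000000000000012-0000000000000000025 -o /tmp/edits.xml
```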
<entry>
<title>DataNode工作机制</title>
<link href="/2020/11/07/7d14a19c4606.html"/>
<url>/2020/11/07/7d14a19c4606.html</url>
<content type="html"><![CDATA[<p><img src="https://oscimg.oschina.net/oscnet/up-0a831a2854d1859bc355a98a741ee74452b.png"></p><p>(1)一个数据块在DataNode上以文件形式存储在磁盘上,包括两个文件,一个是数据本身,一个是元数据包括数据块的长度,块数据的校验和,以及时间戳。</p><p>(2)DataNode启动后向NameNode注册,通过后,周期性(6小时)的向NameNode上报所有的块信息。</p><p>DN向NN汇报当前解读信息的时间间隔,默认6小时;</p><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.blockreport.intervalMsec<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>21600000<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">description</span>&gt;</span>Determines block reporting interval in milliseconds.<span class="tag">&lt;/<span class="name">description</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br></pre></td></tr></table></figure><p>DN扫描自己节点块信息列表的时间,默认6小时</p><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>dfs.datanode.directoryscan.interval<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>21600s<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">description</span>&gt;</span>Interval in seconds for Datanode to scan data directories and reconcile the difference between blocks in memory and on the disk.</span><br><span class="line"> Support multiple time unit suffix(case insensitive), as described</span><br><span class="line"> in dfs.heartbeat.interval.</span><br><span class="line"> <span class="tag">&lt;/<span class="name">description</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br></pre></td></tr></table></figure><p>(3)心跳是每3秒一次,心跳返回结果带有NameNode给该DataNode的命令如复制块数据到另一台机器,或删除某个数据块。如果超过10分钟没有收到某个DataNode的心跳,则认为该节点不可用。</p><p>(4)集群运行中可以安全加入和退出一些机器。</p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> HDFS </tag>
</tags>
</entry>
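心跳间隔和块上报周期都可以直接从当前配置中读出来,也可以通过 dfsadmin 查看各 DataNode 的最近心跳时间:

```bash
# 查看心跳间隔(默认 3 秒)和块上报间隔(默认 21600000 毫秒,即 6 小时)
hdfs getconf -confKey dfs.heartbeat.interval
hdfs getconf -confKey dfs.blockreport.intervalMsec

# 查看各 DataNode 的容量、Last contact(最近一次心跳)等状态
hdfs dfsadmin -report
```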
<entry>
<title>NameNode内存生产配置</title>
<link href="/2020/11/07/76f2b2981271.html"/>
<url>/2020/11/07/76f2b2981271.html</url>
<content type="html"><![CDATA[<h1 id="Hadoop2-x系列,配置NameNode内存"><a href="#Hadoop2-x系列,配置NameNode内存" class="headerlink" title="Hadoop2.x系列,配置NameNode内存"></a>Hadoop2.x系列,配置NameNode内存</h1><p>NameNode内存默认2000m,如果服务器内存4G,NameNode内存可以配置3g。在hadoop-env.sh文件中配置如下。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">HADOOP_NAMENODE_OPTS=-Xmx3072m</span><br></pre></td></tr></table></figure><h1 id="Hadoop3-x系列,配置NameNode内存"><a href="#Hadoop3-x系列,配置NameNode内存" class="headerlink" title="Hadoop3.x系列,配置NameNode内存"></a>Hadoop3.x系列,配置NameNode内存</h1><p><strong>经验分享:</strong></p><blockquote><p>namenode最小值1G,每增加100万个block,增加1G内存。<br>datanode最小值4G,block或者副本数升高,都应该调大datanode的值。<br>一个datanode上的副本总数低于400万时调为4G,超过400万后,每增加100万,增加1G。</p></blockquote><p>具体修改:hadoop-env.sh</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">export HDFS_NAMENODE_OPTS=&quot;-Dhadoop.security.logger=INFO,RFAS -Xmx1024m&quot;</span><br><span class="line"></span><br><span class="line">export HDFS_DATANODE_OPTS=&quot;-Dhadoop.security.logger=ERROR,RFAS -Xmx1024m&quot;</span><br></pre></td></tr></table></figure><p>namenode大约在280行。</p><p> <img src="https://oscimg.oschina.net/oscnet/up-9144464900899783cb2148ae49ec593740d.png"></p><p>datanode大约在310行。</p><p><img src="https://oscimg.oschina.net/oscnet/up-1e45f4a972bf756a586979af4c934940a46.png"></p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> HDFS </tag>
</tags>
</entry>
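修改 hadoop-env.sh 并重启后,可以按下面的方式确认堆内存是否生效(jmap 以 JDK 8 为例,更高版本 JDK 的命令可能不同):

```bash
# 找到 NameNode、DataNode 进程号
jps | grep -E 'NameNode|DataNode'

# 查看对应进程的堆配置与使用情况,<pid> 替换为上一步得到的进程号
jmap -heap <pid>
```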
<entry>
<title>hadoop开启回收站配置</title>
<link href="/2020/11/07/2e976324e14a.html"/>
<url>/2020/11/07/2e976324e14a.html</url>
<content type="html"><![CDATA[<p>开启回收站功能,可以将删除的文件在不超时的情况下,恢复原数据,起到防止误删除、备份等作用。</p><p><img src="https://oscimg.oschina.net/oscnet/up-1d5d94516aaffad5582342bb6d060d71fe8.png"></p><p><strong>开启回收站功能参数说明</strong></p><p><strong>(1)默认值<code>fs.trash.interval</code> = 0,0表示禁用回收站;其他值表示设置文件的存活时间。</strong></p><p><strong>(2)默认值<code>fs.trash.checkpoint.interval</code> = 0,检查回收站的间隔时间。如果该值为0,则该值设置和<code>fs.trash.interval</code>的参数值相等。</strong></p><p><strong>(3)要求<code>fs.trash.checkpoint.interval</code> &lt;= <code>fs.trash.interval</code>。</strong></p><p><strong>启用回收站</strong></p><p>修改<code>core-site.xml</code>,配置垃圾回收时间为30分钟。</p><p>回收站目录在<code>HDFS</code>集群中的路径:<code>/user/&#123;username&#125;/.Trash/….</code></p><p><strong>注意:通过网页上直接删除的文件也不会走回收站。</strong></p><p>只有在命令行利用<code>hadoop fs -rm</code>命令删除的文件才会走回收站。</p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> HDFS </tag>
</tags>
</entry>
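启用回收站(fs.trash.interval 大于 0)后,命令行删除与恢复文件的完整流程大致如下(文件路径为示例):

```bash
# 仅作示意:/user/lytfly/test.txt 为示例文件
# 删除文件,文件会被移动到当前用户的回收站目录
hadoop fs -rm /user/lytfly/test.txt

# 查看回收站中的内容
hadoop fs -ls /user/lytfly/.Trash/Current/user/lytfly

# 在超时之前把文件移回原位置即可恢复
hadoop fs -mv /user/lytfly/.Trash/Current/user/lytfly/test.txt /user/lytfly/test.txt

# 立即清空回收站(谨慎使用)
hadoop fs -expunge
```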
<entry>
<title>Hadoop集群数据均衡之磁盘间数据均衡</title>
<link href="/2020/11/07/2296d5b58502.html"/>
<url>/2020/11/07/2296d5b58502.html</url>
<content type="html"><![CDATA[<p>生产环境,由于硬盘空间不足,往往需要增加一块硬盘。刚加载的硬盘没有数据时,可以执行磁盘数据均衡命令。(Hadoop3.x新特性)</p><p><img src="https://oscimg.oschina.net/oscnet/up-91c922f361630fb00034d9eeb4d7ccb04e0.png"></p><p>plan后面带的节点的名字必须是已经存在的,并且是需要均衡的节点。</p><p>如果节点不存在,会报如下错误:</p><p><img src="https://oscimg.oschina.net/oscnet/up-09fe2e7ad9a5d10fb74705bd5887036b973.png"></p><p>如果节点只有一个硬盘的话,不会创建均衡计划:</p><p><img src="https://oscimg.oschina.net/oscnet/up-576b6c7da8f968a2df69e67bdf042b327d7.png"></p><h2 id="(1)生成均衡计划"><a href="#(1)生成均衡计划" class="headerlink" title="(1)生成均衡计划"></a>(1)生成均衡计划</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hdfs diskbalancer -plan hadoop102</span><br></pre></td></tr></table></figure><h2 id="(2)执行均衡计划"><a href="#(2)执行均衡计划" class="headerlink" title="(2)执行均衡计划"></a>(2)执行均衡计划</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hdfs diskbalancer -execute hadoop102.plan.json</span><br></pre></td></tr></table></figure><h2 id="(3)查看当前均衡任务的执行情况"><a href="#(3)查看当前均衡任务的执行情况" class="headerlink" title="(3)查看当前均衡任务的执行情况"></a>(3)查看当前均衡任务的执行情况</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hdfs diskbalancer -query hadoop103</span><br></pre></td></tr></table></figure><h2 id="(4)取消均衡任务"><a href="#(4)取消均衡任务" class="headerlink" title="(4)取消均衡任务"></a>(4)取消均衡任务</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hdfs diskbalancer -cancel hadoop102.plan.json</span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> HDFS </tag>
</tags>
</entry>
<entry>
<title>HDFS—集群扩容及缩容</title>
<link href="/2020/11/07/5f9b1cc664c1.html"/>
<url>/2020/11/07/5f9b1cc664c1.html</url>
<content type="html"><![CDATA[<p>白名单:表示在白名单的主机IP地址可以,用来存储数据。</p><p><img src="https://oscimg.oschina.net/oscnet/up-acb08d98e431429592cdb8802409bb29a19.png"></p><p>配置白名单步骤如下:</p><p><strong>1)在NameNode节点的/opt/module/hadoop-3.1.4/etc/hadoop目录下分别创建whitelist 和blacklist文件</strong></p><p><strong>(1)创建白名单</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop]$ vim whitelist</span><br></pre></td></tr></table></figure><p>在whitelist中添加如下主机名称,假如集群正常工作的节点为102 103</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hadoop102</span><br><span class="line">hadoop103</span><br></pre></td></tr></table></figure><p><strong>(2)创建黑名单</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop]$ touch blacklist</span><br></pre></td></tr></table></figure><p><strong>保持空的就可以</strong></p><p><strong>2)在hdfs-site.xml配置文件中增加dfs.hosts配置参数</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;!-- 白名单 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.hosts&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;/opt/module/hadoop-3.1.4/etc/hadoop/whitelist&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 黑名单 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.hosts.exclude&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;/opt/module/hadoop-3.1.4/etc/hadoop/blacklist&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure><p><strong>3)分发配置文件whitelist,hdfs-site.xml</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop]$ xsync hdfs-site.xml whitelist</span><br></pre></td></tr></table></figure><p><strong>4)第一次添加白名单必须重启集群,不是第一次,只需要刷新NameNode节点即可</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ myhadoop.sh stop</span><br><span class="line"></span><br><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ myhadoop.sh start</span><br></pre></td></tr></table></figure><p><strong>5)在web浏览器上查看DN,<a href="http://hadoop102:9870/dfshealth.html#tab-datanode">http://hadoop102:9870/dfshealth.html#tab-datanode</a></strong></p><p><img src="https://oscimg.oschina.net/oscnet/up-3bdcbaa4cce76c967e637d60a17c02bcb2b.png"></p><p><strong>至此白名单添加成功。下面进行白名单的实现:</strong></p><p><strong>6)在hadoop104上执行上传数据数据失败</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop104 hadoop-3.1.4]$ hadoop fs -put NOTICE.txt /</span><br></pre></td></tr></table></figure><p><strong>7)二次修改白名单,增加hadoop104</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop]$ vim whitelist</span><br></pre></td></tr></table></figure><p>修改为如下内容:</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hadoop102</span><br><span class="line">hadoop103</span><br><span class="line">hadoop104</span><br></pre></td></tr></table></figure><p><strong>8)刷新NameNode</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ hdfs dfsadmin 
-refreshNodes</span><br></pre></td></tr></table></figure><p><strong>9)在web浏览器上查看DN,<a href="http://hadoop102:9870/dfshealth.html#tab-datanode">http://hadoop102:9870/dfshealth.html#tab-datanode</a></strong></p><p><img src="https://oscimg.oschina.net/oscnet/up-06c710c7275694da6c01baa875aee82cad8.png"></p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> HDFS </tag>
</tags>
</entry>
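刷新 NameNode 之后,除了 Web 页面,也可以用命令行确认白名单是否生效:

```bash
# 查看注册到 NameNode 的 DataNode 及其状态(In Service / Decommissioned 等)
hdfs dfsadmin -report

# 按机架拓扑打印当前的 DataNode,确认新增节点已出现在列表中
hdfs dfsadmin -printTopology
```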
<entry>
<title>Hadoop服务器间数据均衡</title>
<link href="/2020/11/07/54ccdf4e0331.html"/>
<url>/2020/11/07/54ccdf4e0331.html</url>
    <content type="html"><![CDATA[<p>在企业开发中,如果经常在hadoop102和hadoop104上提交任务,且副本数为2,由于数据本地性原则,就会导致hadoop102和hadoop104数据过多,hadoop103存储的数据量小。</p><p>另一种情况,就是新服役的服务器数据量比较少,需要执行集群均衡命令。</p><p><img src="https://oscimg.oschina.net/oscnet/up-9a4fb4e45e0902123918f03c0bff131a3b8.png"></p><p><strong>开启数据均衡命令:</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop105 hadoop-3.1.4]$ sbin/start-balancer.sh -threshold 10</span><br></pre></td></tr></table></figure><p>对于参数10,代表的是集群中各个节点的磁盘空间利用率相差不超过10%,可根据实际情况进行调整。</p><p><strong>停止数据均衡命令:</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop105 hadoop-3.1.4]$ sbin/stop-balancer.sh</span><br></pre></td></tr></table></figure><p>注意:由于HDFS需要启动单独的Rebalance Server来执行Rebalance操作,所以尽量不要在NameNode上执行start-balancer.sh,而是找一台比较空闲的机器。</p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
</tags>
</entry>
<entry>
    <title>Hadoop服役新服务器和退役旧服务器</title>
<link href="/2020/11/07/cadaa92fbbe2.html"/>
<url>/2020/11/07/cadaa92fbbe2.html</url>
<content type="html"><![CDATA[<h1 id="服役新服务器"><a href="#服役新服务器" class="headerlink" title="服役新服务器"></a>服役新服务器</h1><h2 id="1)需求"><a href="#1)需求" class="headerlink" title="1)需求"></a><strong>1)需求</strong></h2><p>随着公司业务的增长,数据量越来越大,原有的数据节点的容量已经不能满足存储数据的需求,需要在原有集群基础上动态添加新的数据节点。</p><h2 id="2)环境准备"><a href="#2)环境准备" class="headerlink" title="2)环境准备"></a><strong>2)环境准备</strong></h2><p>新服务器安装<code>Hadoop</code>相关的环境,配置好<code>hostname</code>和<code>hosts</code>。<br>拷贝<code>/etc/profile.d/my_env.sh</code>并执行<code>source /etc/profile</code></p><blockquote><p>注:通常采用镜像拷贝或者克隆的方式在新机器上克隆就机器的镜像。如果是使用旧镜像的话,克隆完成后记得删除<code>Hadoop</code>目录下的<code>data</code>和<code>logs</code>。</p></blockquote><h2 id="3)服役新节点具体步骤"><a href="#3)服役新节点具体步骤" class="headerlink" title="3)服役新节点具体步骤"></a><strong>3)服役新节点具体步骤</strong></h2><p>(1)直接启动<code>DataNode</code>,即可关联到集群</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop105 hadoop-3.1.4]$ hdfs --daemon start datanode</span><br><span class="line">[lytfly@hadoop105 hadoop-3.1.4]$ yarn --daemon start nodemanager</span><br></pre></td></tr></table></figure><h2 id="4)在白名单中增加新服役的服务器"><a href="#4)在白名单中增加新服役的服务器" class="headerlink" title="4)在白名单中增加新服役的服务器"></a><strong>4)在白名单中增加新服役的服务器</strong></h2><p>(1)在白名单<code>whitelist</code>和<code>workers</code>文件中增加<code>hadoop104</code>、<code>hadoop105</code>节点,并重启集群</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop]$ vim whitelist</span><br></pre></td></tr></table></figure><p><strong>whitelist修改为如下内容:</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hadoop102</span><br><span class="line">hadoop103</span><br><span class="line">hadoop104</span><br><span class="line">hadoop105</span><br></pre></td></tr></table></figure><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop]$ vim workers</span><br></pre></td></tr></table></figure><p><strong>workers修改为如下内容:</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hadoop102</span><br><span class="line">hadoop103</span><br><span class="line">hadoop104</span><br><span class="line">hadoop105</span><br></pre></td></tr></table></figure><p>(2)分发</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop]$ xsync whitelist workers</span><br></pre></td></tr></table></figure><p>(3)刷新<code>NameNode</code></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ hdfs dfsadmin -refreshNodes</span><br></pre></td></tr></table></figure><h2 id="5)新服务器添加后执行集群中节点间的数据均衡"><a href="#5)新服务器添加后执行集群中节点间的数据均衡" class="headerlink" title="5)新服务器添加后执行集群中节点间的数据均衡"></a><strong>5)新服务器添加后执行集群中节点间的数据均衡</strong></h2><p>新服务器hadoop105数据少,需要执行数据均衡操作,参见《<a href="https://my.oschina.net/liuyuantao/blog/5270636">Hadoop服务器间数据均衡</a>》</p><h1 id="黑名单退役服务器"><a href="#黑名单退役服务器" class="headerlink" title="黑名单退役服务器"></a>黑名单退役服务器</h1><p>黑名单:表示在黑名单的主机IP地址不可以用来存储数据。</p><h2 id="1)编辑-opt-module-hadoop-3-1-4-etc-hadoop目录下的blacklist文件"><a href="#1)编辑-opt-module-hadoop-3-1-4-etc-hadoop目录下的blacklist文件" class="headerlink" title="1)编辑/opt/module/hadoop-3.1.4/etc/hadoop目录下的blacklist文件"></a><strong>1)编辑<code>/opt/module/hadoop-3.1.4/etc/hadoop</code>目录下的<code>blacklist</code>文件</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span 
class="line">[lytfly@hadoop102 hadoop] vim blacklist</span><br></pre></td></tr></table></figure><p>添加如下主机名称(要退役的节点)</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hadoop105</span><br></pre></td></tr></table></figure><blockquote><p>注意:如果黑名单中没有配置,需要在<code>hdfs-site.xml</code>配置文件中增加<code>dfs.hosts</code>配置参数</p></blockquote><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;!-- 黑名单 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.hosts.exclude&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;/opt/module/hadoop-3.1.4/etc/hadoop/blacklist&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure><h2 id="2)分发配置文件blacklist,hdfs-site-xml"><a href="#2)分发配置文件blacklist,hdfs-site-xml" class="headerlink" title="2)分发配置文件blacklist,hdfs-site.xml"></a><strong>2)分发配置文件<code>blacklist</code>,<code>hdfs-site.xml</code></strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop]$ xsync hdfs-site.xml blacklist</span><br></pre></td></tr></table></figure><h2 id="3)第一次添加黑名单必须重启集群,不是第一次,只需要刷新NameNode节点即可"><a href="#3)第一次添加黑名单必须重启集群,不是第一次,只需要刷新NameNode节点即可" class="headerlink" title="3)第一次添加黑名单必须重启集群,不是第一次,只需要刷新NameNode节点即可"></a><strong>3)第一次添加黑名单必须重启集群,不是第一次,只需要刷新<code>NameNode</code>节点即可</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ hdfs dfsadmin -refreshNodes</span><br><span class="line">Refresh nodes successful</span><br></pre></td></tr></table></figure><h2 id="4)检查Web浏览器,退役节点的状态为decommission-in-progress(退役中),说明数据节点正在复制块到其他节点"><a href="#4)检查Web浏览器,退役节点的状态为decommission-in-progress(退役中),说明数据节点正在复制块到其他节点" class="headerlink" title="4)检查Web浏览器,退役节点的状态为decommission in progress(退役中),说明数据节点正在复制块到其他节点"></a><strong>4)检查Web浏览器,退役节点的状态为<code>decommission in progress</code>(退役中),说明数据节点正在复制块到其他节点</strong></h2><p><img src="https://oscimg.oschina.net/oscnet/up-aef8605c4e9e787f4e8913a79b770e9fa3e.png"><br><img src="https://oscimg.oschina.net/oscnet/up-1a507db65f9bc23262dd9aabd1bfe4f855c.png"></p><h2 id="5)等待退役节点状态为decommissioned(所有块已经复制完成),停止该节点及节点资源管理器。注意:如果副本数是3,服役的节点小于等于3,是不能退役成功的,需要修改副本数后才能退役"><a href="#5)等待退役节点状态为decommissioned(所有块已经复制完成),停止该节点及节点资源管理器。注意:如果副本数是3,服役的节点小于等于3,是不能退役成功的,需要修改副本数后才能退役" class="headerlink" title="5)等待退役节点状态为decommissioned(所有块已经复制完成),停止该节点及节点资源管理器。注意:如果副本数是3,服役的节点小于等于3,是不能退役成功的,需要修改副本数后才能退役"></a><strong>5)等待退役节点状态为<code>decommissioned</code>(所有块已经复制完成),停止该节点及节点资源管理器。注意:如果副本数是3,服役的节点小于等于3,是不能退役成功的,需要修改副本数后才能退役</strong></h2><p><img src="https://oscimg.oschina.net/oscnet/up-c12a46ffbda6ebfc6920b4402c43ab172f6.png"></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop105 hadoop-3.1.4]$ hdfs --daemon stop datanode</span><br><span class="line">stopping datanode</span><br><span class="line"></span><br><span class="line">[lytfly@hadoop105 hadoop-3.1.4]$ yarn --daemon stop nodemanager</span><br><span class="line">stopping nodemanager</span><br></pre></td></tr></table></figure><h2 id="6)如果数据不均衡,可以用命令实现集群的再平衡"><a href="#6)如果数据不均衡,可以用命令实现集群的再平衡" class="headerlink" title="6)如果数据不均衡,可以用命令实现集群的再平衡"></a><strong>6)如果数据不均衡,可以用命令实现集群的再平衡</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ sbin/start-balancer.sh -threshold 
10</span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
</tags>
</entry>
<entry>
<title>HDFS—存储优化(纠删码)</title>
<link href="/2020/11/07/a8fda6be4c8d.html"/>
<url>/2020/11/07/a8fda6be4c8d.html</url>
<content type="html"><![CDATA[<h1 id="纠删码原理"><a href="#纠删码原理" class="headerlink" title="纠删码原理"></a>纠删码原理</h1><p><code>HDFS</code>默认情况下,一个文件有3个副本,这样提高了数据的可靠性,但也带来了2倍的冗余开销。</p><p><code>Hadoop3.x</code>引入了纠删码,采用计算的方式,可以节省约50%左右的存储空间。</p><p>此种方式节约了空间,但是会增加<code>cpu</code>的计算。</p><p>纠删码策略是给具体一个路径设置。所有往此路径下存储的文件,都会执行此策略。</p><p>默认只开启对<code>RS-6-3-1024k</code>策略的支持,如要使用别的策略需要提前启用。</p><p><img src="https://oscimg.oschina.net/oscnet/up-87ba27811580330155a29254c76223dd59f.png"></p><h2 id="1)纠删码操作相关的命令"><a href="#1)纠删码操作相关的命令" class="headerlink" title="1)纠删码操作相关的命令"></a><strong>1</strong>)纠删码操作相关的命令</h2><p><img src="https://oscimg.oschina.net/oscnet/up-589b0a76b3db25975b17805f25a3a5ea800.png"></p><h2 id="2)查看当前支持的纠删码策略"><a href="#2)查看当前支持的纠删码策略" class="headerlink" title="2)查看当前支持的纠删码策略"></a><strong>2</strong>)查看当前支持的纠删码策略</h2><p><img src="https://oscimg.oschina.net/oscnet/up-14fb9211fe77014f9c53f118d5d20bfdcb1.png"></p><h2 id="3)纠删码策略解释"><a href="#3)纠删码策略解释" class="headerlink" title="3)纠删码策略解释"></a><strong>3</strong>)纠删码策略解释</h2><p><strong><code>RS-3-2-1024k</code>***_:</strong>使用<code>RS</code>编码,每3个数据单元,生成2个校验单元,共5个单元,也就是说:这5个单元中,只要有任意的3个单元存在(不管是数据单元还是校验单元,只要总数=3),就可以得到原始数据。每个单元的大小是1024k=1024_1024=1048576。</p><p><img src="https://oscimg.oschina.net/oscnet/up-afca18dbfd54eed98fd8acbb474c1d66726.png"></p><p><strong><code>RS-10-4-1024k</code>***_:</strong>使用RS编码,每10个数据单元(cell),生成4个校验单元,共14个单元,也就是说:这14个单元中,只要有任意的10个单元存在(不管是数据单元还是校验单元,只要总数=10),就可以得到原始数据。每个单元的大小是1024k=1024_1024=1048576。</p><p><strong><code>RS-6-3-1024k</code>***_:</strong>使用RS编码,每6个数据单元,生成3个校验单元,共9个单元,也就是说:这9个单元中,只要有任意的6个单元存在(不管是数据单元还是校验单元,只要总数=6),就可以得到原始数据。每个单元的大小是1024k=1024_1024=1048576。</p><p>**<code>RS-LEGACY-6-3-1024k</code>**<strong>:</strong>策略和上面的<code>RS-6-3-1024k</code>一样,只是编码的算法用的是<code>rs-legacy</code>。</p><p><strong><code>XOR-2-1-1024k</code>***_:</strong>使用XOR编码(速度比RS编码快),每2个数据单元,生成1个校验单元,共3个单元,也就是说:这3个单元中,只要有任意的2个单元存在(不管是数据单元还是校验单元,只要总数= 2),就可以得到原始数据。每个单元的大小是1024k=1024_1024=1048576。</p><h2 id="4)具体步骤"><a href="#4)具体步骤" class="headerlink" title="4)具体步骤"></a><strong>4)具体步骤</strong></h2><h3 id="(1)开启对RS-3-2-1024k策略的支持"><a href="#(1)开启对RS-3-2-1024k策略的支持" class="headerlink" title="(1)开启对RS-3-2-1024k策略的支持"></a>(1)开启对<code>RS-3-2-1024k</code>策略的支持</h3><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@node101 hadoop]$ hdfs ec -enablePolicy -policy RS-3-2-1024k</span><br><span class="line">Erasure coding policy RS-3-2-1024k is enabled</span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="(2)在HDFS创建目录,并设置RS-3-2-1024k策略"><a href="#(2)在HDFS创建目录,并设置RS-3-2-1024k策略" class="headerlink" title="(2)在HDFS创建目录,并设置RS-3-2-1024k策略"></a>(2)在<code>HDFS</code>创建目录,并设置<code>RS-3-2-1024k</code>策略</h3><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@node101 hadoop]$ hdfs dfs -mkdir /input</span><br><span class="line"></span><br><span class="line">[lytfly@node101 hadoop]$ hdfs ec -setPolicy -path /input -policy RS-3-2-1024k</span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="(3)上传文件,并查看文件编码后的存储情况"><a href="#(3)上传文件,并查看文件编码后的存储情况" class="headerlink" title="(3)上传文件,并查看文件编码后的存储情况"></a>(3)上传文件,并查看文件编码后的存储情况</h3><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@node1 hadoop-3.1.4]$ hdfs dfs -put web.log /input</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>注:你所上传的文件需要大于2M才能看出效果。(低于2M,只有一个数据单元和两个校验单元) 
(4)查看存储路径的数据单元和校验单元,并作破坏实验</p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> HDFS </tag>
</tags>
</entry>
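对应文中第(4)步,可以用下面的命令查看数据单元和校验单元的分布,再到对应 DataNode 上手动删除块文件做破坏实验:

```bash
# 确认 /input 目录已应用 RS-3-2-1024k 策略
hdfs ec -getPolicy -path /input

# 查看文件块分布:RS-3-2-1024k 下一个块组会落在 5 个 DataNode 上(3 个数据单元 + 2 个校验单元)
hdfs fsck /input -files -blocks -locations
```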
<entry>
<title>异构存储(冷热数据分离)</title>
<link href="/2020/11/07/3f8298fdea6c.html"/>
<url>/2020/11/07/3f8298fdea6c.html</url>
<content type="html"><![CDATA[<p>异构存储主要解决不同的数据,存储在不同类型的硬盘中,达到最佳性能的问题。</p><p><img src="https://oscimg.oschina.net/oscnet/up-70960f4ba467fd934e460b7a8c958819f2f.png"></p><h1 id="异构存储Shell操作"><a href="#异构存储Shell操作" class="headerlink" title="异构存储Shell操作"></a>异构存储Shell操作</h1><h2 id="(1)查看当前有哪些存储策略可以用"><a href="#(1)查看当前有哪些存储策略可以用" class="headerlink" title="(1)查看当前有哪些存储策略可以用"></a><strong>(1)查看当前有哪些存储策略可以用</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ hdfs storagepolicies -listPolicies</span><br></pre></td></tr></table></figure><h2 id="(2)为指定路径(数据存储目录)设置指定的存储策略"><a href="#(2)为指定路径(数据存储目录)设置指定的存储策略" class="headerlink" title="(2)为指定路径(数据存储目录)设置指定的存储策略"></a><strong>(2)为指定路径(数据存储目录)设置指定的存储策略</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hdfs storagepolicies -setStoragePolicy -path xxx -policy xxx</span><br></pre></td></tr></table></figure><h2 id="(3)获取指定路径(数据存储目录或文件)的存储策略"><a href="#(3)获取指定路径(数据存储目录或文件)的存储策略" class="headerlink" title="(3)获取指定路径(数据存储目录或文件)的存储策略"></a><strong>(3)获取指定路径(数据存储目录或文件)的存储策略</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hdfs storagepolicies -getStoragePolicy -path xxx</span><br></pre></td></tr></table></figure><h2 id="(4)取消存储策略;执行改命令之后该目录或者文件,以其上级的目录为准,如果是根目录,那么就是HOT"><a href="#(4)取消存储策略;执行改命令之后该目录或者文件,以其上级的目录为准,如果是根目录,那么就是HOT" class="headerlink" title="(4)取消存储策略;执行改命令之后该目录或者文件,以其上级的目录为准,如果是根目录,那么就是HOT"></a><strong>(4)取消存储策略;执行改命令之后该目录或者文件,以其上级的目录为准,如果是根目录,那么就是HOT</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hdfs storagepolicies -unsetStoragePolicy -path xxx</span><br></pre></td></tr></table></figure><h2 id="(5)查看文件块的分布"><a href="#(5)查看文件块的分布" class="headerlink" title="(5)查看文件块的分布"></a><strong>(5)查看文件块的分布</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">bin/hdfs fsck xxx -files -blocks -locations</span><br></pre></td></tr></table></figure><h2 id="(6)查看集群节点"><a href="#(6)查看集群节点" class="headerlink" title="(6)查看集群节点"></a><strong>(6)查看集群节点</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">hadoop dfsadmin -report</span><br></pre></td></tr></table></figure><h1 id="配置文件信息"><a href="#配置文件信息" class="headerlink" title="配置文件信息"></a><strong>配置文件信息</strong></h1><h2 id="(1)为hadoop102节点的hdfs-site-xml添加如下信息"><a href="#(1)为hadoop102节点的hdfs-site-xml添加如下信息" class="headerlink" title="(1)为hadoop102节点的hdfs-site.xml添加如下信息"></a><strong>(1)为hadoop102节点的hdfs-site.xml添加如下信息</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.replication&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;2&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.storage.policy.enabled&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;true&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.datanode.data.dir&lt;/name&gt; </span><br><span class="line"> &lt;value&gt;[SSD]file:///opt/module/hadoop-3.1.4/hdfsdata/ssd,[RAM_DISK]file:///opt/module/hadoop-3.1.4/hdfsdata/ram_disk&lt;/value&gt;</span><br><span 
class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure><h2 id="(2)为hadoop103节点的hdfs-site-xml添加如下信息"><a href="#(2)为hadoop103节点的hdfs-site-xml添加如下信息" class="headerlink" title="(2)为hadoop103节点的hdfs-site.xml添加如下信息"></a><strong>(2)为hadoop103节点的hdfs-site.xml添加如下信息</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.replication&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;2&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.storage.policy.enabled&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;true&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.datanode.data.dir&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;[SSD]file:///opt/module/hadoop-3.1.4/hdfsdata/ssd,[DISK]file:///opt/module/hadoop-3.1.4/hdfsdata/disk&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure><h2 id="(3)为hadoop104节点的hdfs-site-xml添加如下信息"><a href="#(3)为hadoop104节点的hdfs-site-xml添加如下信息" class="headerlink" title="(3)为hadoop104节点的hdfs-site.xml添加如下信息"></a><strong>(3)为hadoop104节点的hdfs-site.xml添加如下信息</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.replication&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;2&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.storage.policy.enabled&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;true&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.datanode.data.dir&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;[RAM_DISK]file:///opt/module/hdfsdata/ram_disk,[DISK]file:///opt/module/hadoop-3.1.4/hdfsdata/disk&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure><h2 id="(4)为hadoop105节点的hdfs-site-xml添加如下信息"><a href="#(4)为hadoop105节点的hdfs-site-xml添加如下信息" class="headerlink" title="(4)为hadoop105节点的hdfs-site.xml添加如下信息"></a><strong>(4)为hadoop105节点的hdfs-site.xml添加如下信息</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.replication&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;2&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.storage.policy.enabled&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;true&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.datanode.data.dir&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;[ARCHIVE]file:///opt/module/hadoop-3.1.4/hdfsdata/archive&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure><h2 id="(5)为hadoop106节点的hdfs-site-xml添加如下信息"><a href="#(5)为hadoop106节点的hdfs-site-xml添加如下信息" class="headerlink" 
title="(5)为hadoop106节点的hdfs-site.xml添加如下信息"></a><strong>(5)为hadoop106节点的hdfs-site.xml添加如下信息</strong></h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.replication&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;2&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.storage.policy.enabled&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;true&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.datanode.data.dir&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;[ARCHIVE]file:///opt/module/hadoop-3.1.4/hdfsdata/archive&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure><p>注意:当我们将目录设置为COLD并且我们未配置ARCHIVE存储目录的情况下,不可以向该目录直接上传文件,会报出异常。</p><h1 id="数据准备"><a href="#数据准备" class="headerlink" title="数据准备"></a><strong>数据准备</strong></h1><h2 id="(1)启动集群"><a href="#(1)启动集群" class="headerlink" title="(1)启动集群"></a>(1)启动集群</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ hdfs namenode -format</span><br><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ myhadoop.sh start</span><br></pre></td></tr></table></figure><h2 id="(1)并在HDFS上创建文件目录"><a href="#(1)并在HDFS上创建文件目录" class="headerlink" title="(1)并在HDFS上创建文件目录"></a>(1)并在HDFS上创建文件目录</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ hadoop fs -mkdir /hdfsdata</span><br></pre></td></tr></table></figure><h2 id="(2)并将文件资料上传"><a href="#(2)并将文件资料上传" class="headerlink" title="(2)并将文件资料上传"></a>(2)并将文件资料上传</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ hadoop fs -put /opt/module/hadoop-3.1.4/NOTICE.txt /hdfsdata</span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
</tags>
</entry>
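完成数据准备后,可以给 /hdfsdata 指定一种存储策略并验证副本实际落盘的位置(以 WARM 策略为例,策略名以 hdfs storagepolicies -listPolicies 的输出为准;已有数据不会自动迁移,可用 mover 工具触发):

```bash
# 将 /hdfsdata 设置为 WARM 策略(一个副本放 DISK,其余副本放 ARCHIVE)
hdfs storagepolicies -setStoragePolicy -path /hdfsdata -policy WARM

# 查看生效的策略
hdfs storagepolicies -getStoragePolicy -path /hdfsdata

# 已有数据按新策略迁移(可选)
hdfs mover -p /hdfsdata

# 查看文件块实际落在哪些存储类型上
hdfs fsck /hdfsdata -files -blocks -locations
```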
<entry>
<title>Hadoop企业开发案例调优场景</title>
<link href="/2020/11/07/40a8b3f860f2.html"/>
<url>/2020/11/07/40a8b3f860f2.html</url>
<content type="html"><![CDATA[<h1 id="需求"><a href="#需求" class="headerlink" title="需求"></a>需求</h1><p>(1)需求:从1G数据中,统计每个单词出现次数。服务器3台,每台配置4G内存,4核CPU,4线程。<br>(2)需求分析:<br>1G / 128m = 8个MapTask;1个ReduceTask;1个mrAppMaster<br>平均每个节点运行10个 / 3台 ≈ 3个任务(4 3 3)</p><h1 id="HDFS参数调优"><a href="#HDFS参数调优" class="headerlink" title="HDFS参数调优"></a>HDFS参数调优</h1><h2 id="(1)修改:hadoop-env-sh"><a href="#(1)修改:hadoop-env-sh" class="headerlink" title="(1)修改:hadoop-env.sh"></a>(1)修改:hadoop-env.sh</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">export HDFS_NAMENODE_OPTS=&quot;-Dhadoop.security.logger=INFO,RFAS -Xmx1024m&quot;</span><br><span class="line"></span><br><span class="line">export HDFS_DATANODE_OPTS=&quot;-Dhadoop.security.logger=ERROR,RFAS -Xmx1024m&quot;</span><br></pre></td></tr></table></figure><h2 id="(2)修改hdfs-site-xml"><a href="#(2)修改hdfs-site-xml" class="headerlink" title="(2)修改hdfs-site.xml"></a>(2)修改hdfs-site.xml</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;!-- NameNode有一个工作线程池,默认值是10 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;dfs.namenode.handler.count&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;21&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure><h2 id="(3)修改core-site-xml"><a href="#(3)修改core-site-xml" class="headerlink" title="(3)修改core-site.xml"></a>(3)修改core-site.xml</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;!-- 配置垃圾回收时间为60分钟 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;fs.trash.interval&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;60&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure><h2 id="(4)分发配置"><a href="#(4)分发配置" class="headerlink" title="(4)分发配置"></a>(4)分发配置</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop]$ xsync hadoop-env.sh hdfs-site.xml core-site.xml</span><br></pre></td></tr></table></figure><h1 id="MapReduce参数调优"><a href="#MapReduce参数调优" class="headerlink" title="MapReduce参数调优"></a>MapReduce参数调优</h1><h2 id="(1)修改mapred-site-xml"><a href="#(1)修改mapred-site-xml" class="headerlink" title="(1)修改mapred-site.xml"></a>(1)修改mapred-site.xml</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;!-- 环形缓冲区大小,默认100m --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.task.io.sort.mb&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;100&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 环形缓冲区溢写阈值,默认0.8 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.map.sort.spill.percent&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;0.80&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- merge合并次数,默认10个 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.task.io.sort.factor&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;10&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- maptask内存,默认1g; 
maptask堆内存大小默认和该值大小一致mapreduce.map.java.opts --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.map.memory.mb&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;-1&lt;/value&gt;</span><br><span class="line"> &lt;description&gt;The amount of memory to request from the scheduler for each map task. If this is not specified or is non-positive, it is inferred from mapreduce.map.java.opts and mapreduce.job.heap.memory-mb.ratio. If java-opts are also not specified, we set it to 1024.</span><br><span class="line"> &lt;/description&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- matask的CPU核数,默认1个 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.map.cpu.vcores&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;1&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- matask异常重试次数,默认4次 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.map.maxattempts&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;4&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 每个Reduce去Map中拉取数据的并行数。默认值是5 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.reduce.shuffle.parallelcopies&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;5&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- Buffer大小占Reduce可用内存的比例,默认值0.7 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.reduce.shuffle.input.buffer.percent&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;0.70&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- Buffer中的数据达到多少比例开始写入磁盘,默认值0.66。 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.reduce.shuffle.merge.percent&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;0.66&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- reducetask内存,默认1g;reducetask堆内存大小默认和该值大小一致mapreduce.reduce.java.opts --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.reduce.memory.mb&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;-1&lt;/value&gt;</span><br><span class="line"> &lt;description&gt;The amount of memory to request from the scheduler for each reduce task. 
If this is not specified or is non-positive, it is inferred</span><br><span class="line"> from mapreduce.reduce.java.opts and mapreduce.job.heap.memory-mb.ratio.</span><br><span class="line"> If java-opts are also not specified, we set it to 1024.</span><br><span class="line"> &lt;/description&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- reducetask的CPU核数,默认1个 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.reduce.cpu.vcores&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;2&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- reducetask失败重试次数,默认4次 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.reduce.maxattempts&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;4&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 当MapTask完成的比例达到该值后才会为ReduceTask申请资源。默认是0.05 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.job.reduce.slowstart.completedmaps&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;0.05&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 如果程序在规定的默认10分钟内没有读到数据,将强制超时退出 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.task.timeout&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;600000&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure><h2 id="(2)分发配置"><a href="#(2)分发配置" class="headerlink" title="(2)分发配置"></a>(2)分发配置</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop]$ xsync mapred-site.xml</span><br></pre></td></tr></table></figure><h1 id="Yarn参数调优"><a href="#Yarn参数调优" class="headerlink" title="Yarn参数调优"></a>Yarn参数调优</h1><h2 id="(1)修改yarn-site-xml配置参数"><a href="#(1)修改yarn-site-xml配置参数" class="headerlink" title="(1)修改yarn-site.xml配置参数"></a>(1)修改yarn-site.xml配置参数</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;!-- 选择调度器,默认容量 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.resourcemanager.scheduler.class&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- ResourceManager处理调度器请求的线程数量,默认50;如果提交的任务数大于50,可以增加该值,但是不能超过3台 * 4线程 = 12线程(去除其他应用程序实际不能超过8) --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.resourcemanager.scheduler.client.thread-count&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;8&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 是否让yarn自动检测硬件进行配置,默认是false,如果该节点有很多其他应用程序,建议手动配置。如果该节点没有其他应用程序,可以采用自动 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.nodemanager.resource.detect-hardware-capabilities&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;false&lt;/value&gt;</span><br><span 
class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 是否将虚拟核数当作CPU核数,默认是false,采用物理CPU核数 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.nodemanager.resource.count-logical-processors-as-cores&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;false&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 虚拟核数和物理核数乘数,默认是1.0 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.nodemanager.resource.pcores-vcores-multiplier&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;1.0&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- NodeManager使用内存数,默认8G,修改为4G内存 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.nodemanager.resource.memory-mb&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;4096&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- nodemanager的CPU核数,不按照硬件环境自动设定时默认是8个,修改为4个 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.nodemanager.resource.cpu-vcores&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;4&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 容器最小内存,默认1G --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.scheduler.minimum-allocation-mb&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;1024&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 容器最大内存,默认8G,修改为2G --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.scheduler.maximum-allocation-mb&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;2048&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 容器最小CPU核数,默认1个 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.scheduler.minimum-allocation-vcores&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;1&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 容器最大CPU核数,默认4个,修改为2个 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.scheduler.maximum-allocation-vcores&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;2&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 虚拟内存检查,默认打开,修改为关闭 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.nodemanager.vmem-check-enabled&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;false&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- 虚拟内存和物理内存设置比例,默认2.1 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.nodemanager.vmem-pmem-ratio&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;2.1&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure><h2 
id="(2)分发配置-1"><a href="#(2)分发配置-1" class="headerlink" title="(2)分发配置"></a>(2)分发配置</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop]$ xsync yarn-site.xml</span><br></pre></td></tr></table></figure><h1 id="执行程序"><a href="#执行程序" class="headerlink" title="执行程序"></a>执行程序</h1><h2 id="(1)重启集群"><a href="#(1)重启集群" class="headerlink" title="(1)重启集群"></a>(1)重启集群</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ sbin/stop-yarn.sh</span><br><span class="line">[lytfly@hadoop103 hadoop-3.1.4]$ sbin/start-yarn.sh</span><br></pre></td></tr></table></figure><h2 id="(2)执行WordCount程序"><a href="#(2)执行WordCount程序" class="headerlink" title="(2)执行WordCount程序"></a>(2)执行WordCount程序</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.4.jar wordcount /input /output</span><br></pre></td></tr></table></figure><h2 id="(3)观察Yarn任务执行页面"><a href="#(3)观察Yarn任务执行页面" class="headerlink" title="(3)观察Yarn任务执行页面"></a>(3)观察Yarn任务执行页面</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">http://hadoop103:8088/cluster/apps</span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
</tags>
</entry>
<entry>
<title>Hadoop小文件优化方法</title>
<link href="/2020/11/07/3a181e232503.html"/>
<url>/2020/11/07/3a181e232503.html</url>
<content type="html"><![CDATA[<h1 id="Hadoop小文件弊端"><a href="#Hadoop小文件弊端" class="headerlink" title="Hadoop小文件弊端"></a>Hadoop小文件弊端</h1><p><img src="https://oscimg.oschina.net/oscnet/up-1681b15baa2d4bc66ceb813b9d6fcd4ad55.png"></p><p>每个文件均按块存储,每个块的元数据存储在NameNode的内存中,因此HDFS存储小文件会非常低效。因为大量的小文件会耗尽NameNode中的大部分内存。但注意,存储小文件所需要的磁盘容量和数据块的大小无关。例如,一个1MB的文件设置为128MB的块存储,实际使用的是1MB的磁盘空间,而不是128MB。</p><p>HDFS上每个文件都要在NameNode上创建对应的元数据,这个元数据的大小约为150byte,这样当小文件比较多的时候,就会产生很多的元数据文件,一方面会大量占用NameNode的内存空间,另一方面就是元数据文件过多,使得寻址索引速度变慢。<br>小文件过多,在进行MR计算时,会生成过多切片,需要启动过多的MapTask。每个MapTask处理的数据量小,导致MapTask的处理时间比启动时间还小,白白消耗资源。</p><h1 id="Hadoop小文件解决方案"><a href="#Hadoop小文件解决方案" class="headerlink" title="Hadoop小文件解决方案"></a>Hadoop小文件解决方案</h1><h2 id="1)(数据源头)"><a href="#1)(数据源头)" class="headerlink" title="1)(数据源头)"></a>1)(数据源头)</h2><p>在数据采集的时候,就将小文件或小批数据合成大文件再上传HDFS</p><h2 id="2)Hadoop-Archive(存储方向)"><a href="#2)Hadoop-Archive(存储方向)" class="headerlink" title="2)Hadoop Archive(存储方向)"></a>2)Hadoop Archive(存储方向)</h2><p>是一个高效的将小文件放入HDFS块中的文件存档工具,能够将多个小文件打包成一个HAR文件,从而达到减少NameNode的内存使用。具体说来,HDFS存档文件对内还是一个一个独立文件,对NameNode而言却是一个整体,减少了NameNode的内存。</p><p><img src="https://oscimg.oschina.net/oscnet/up-652c8cb7f70da24b184d8b8347dc715d836.png"></p><h2 id="3)CombineTextInputFormat(计算方向)"><a href="#3)CombineTextInputFormat(计算方向)" class="headerlink" title="3)CombineTextInputFormat(计算方向)"></a>3)CombineTextInputFormat(计算方向)</h2><p>CombineTextInputFormat用于将多个小文件在切片过程中生成一个单独的切片或者少量的切片。 </p><h2 id="4)开启uber模式,实现JVM重用(计算方向)"><a href="#4)开启uber模式,实现JVM重用(计算方向)" class="headerlink" title="4)开启uber模式,实现JVM重用(计算方向)"></a>4)开启uber模式,实现JVM重用(计算方向)</h2><p>默认情况下,每个Task任务都需要启动一个JVM来运行,如果Task任务计算的数据量很小,我们可以让同一个Job的多个Task运行在一个JVM中,不必为每个Task都开启一个JVM。</p><p><strong>(1)未开启uber模式,在/input路径上上传多个小文件并执行wordcount程序</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.4.jar wordcount /input /output2</span><br></pre></td></tr></table></figure><p><strong>(2)观察控制台</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">2020-02-14 16:13:50,607 INFO mapreduce.Job: Job job_1613281510851_0002 running in uber mode : false</span><br></pre></td></tr></table></figure><p><strong>(3)观察<a href="http://hadoop103:8088/cluster">http://hadoop103:8088/cluster</a></strong></p><p><img src="https://oscimg.oschina.net/oscnet/up-1e21dcacebd1948de7967cd9fc0de5ee825.png"> <img src="https://oscimg.oschina.net/oscnet/up-19e96a8d0061a351578be961c10ff5a007a.png"><br><img src="https://oscimg.oschina.net/oscnet/up-db7d922e877c934e1f3330cfe2514df2c72.png"><br><strong>(4)开启uber模式,在mapred-site.xml中添加如下配置</strong> </p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;!-- 开启uber模式,默认关闭 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.job.ubertask.enable&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;true&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br><span class="line">&lt;!-- uber模式中最大的mapTask数量,可向下修改 --&gt; </span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.job.ubertask.maxmaps&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;9&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;!-- uber模式中最大的reduce数量,可向下修改 --&gt;</span><br><span 
class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.job.ubertask.maxreduces&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;1&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;!-- uber模式中最大的输入数据量,默认使用dfs.blocksize 的值,可向下修改 --&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;mapreduce.job.ubertask.maxbytes&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;128&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure><h2 id="(5)分发配置"><a href="#(5)分发配置" class="headerlink" title="(5)分发配置"></a>(5)分发配置</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[atguigu@hadoop102 hadoop]$ xsync mapred-site.xml</span><br></pre></td></tr></table></figure><h2 id="(6)再次执行wordcount程序"><a href="#(6)再次执行wordcount程序" class="headerlink" title="(6)再次执行wordcount程序"></a>(6)再次执行wordcount程序</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[atguigu@hadoop102 hadoop-3.1.4]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.4.jar wordcount /input /output2</span><br></pre></td></tr></table></figure><h2 id="(7)观察控制台"><a href="#(7)观察控制台" class="headerlink" title="(7)观察控制台"></a>(7)观察控制台</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">2020-02-14 16:28:36,198 INFO mapreduce.Job: Job job_1613281510851_0003 running in uber mode : true</span><br></pre></td></tr></table></figure><h2 id="(8)观察http-hadoop103-8088-cluster"><a href="#(8)观察http-hadoop103-8088-cluster" class="headerlink" title="(8)观察http://hadoop103:8088/cluster"></a>(8)观察<a href="http://hadoop103:8088/cluster">http://hadoop103:8088/cluster</a></h2><h2 id=""><a href="#" class="headerlink" title=""></a><img src="https://oscimg.oschina.net/oscnet/up-e9a26a3c20f10dadd4c75d67ca8662589ec.png"></h2>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
</tags>
</entry>
<entry>
<title>Hadoop-Yarn常用的调优参数</title>
<link href="/2020/11/07/9cc4868ea813.html"/>
<url>/2020/11/07/9cc4868ea813.html</url>
<content type="html"><![CDATA[<h1 id="调优参数列表"><a href="#调优参数列表" class="headerlink" title="调优参数列表"></a>调优参数列表</h1><h2 id="(1)Resourcemanager相关"><a href="#(1)Resourcemanager相关" class="headerlink" title="(1)Resourcemanager相关"></a>(1)Resourcemanager相关</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">yarn.resourcemanager.scheduler.client.thread-count ResourceManager处理调度器请求的线程数量</span><br><span class="line"></span><br><span class="line">yarn.resourcemanager.scheduler.class 配置调度器</span><br></pre></td></tr></table></figure><h2 id="(2)Nodemanager相关"><a href="#(2)Nodemanager相关" class="headerlink" title="(2)Nodemanager相关"></a>(2)Nodemanager相关</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">yarn.nodemanager.resource.memory-mb NodeManager使用内存数</span><br><span class="line"></span><br><span class="line">yarn.nodemanager.resource.system-reserved-memory-mb NodeManager为系统保留多少内存,和上一个参数二者取一即可</span><br><span class="line"></span><br><span class="line">yarn.nodemanager.resource.cpu-vcores NodeManager使用CPU核数</span><br><span class="line"></span><br><span class="line">yarn.nodemanager.resource.count-logical-processors-as-cores 是否将虚拟核数当作CPU核数</span><br><span class="line"></span><br><span class="line">yarn.nodemanager.resource.pcores-vcores-multiplier 虚拟核数和物理核数乘数,例如:4核8线程,该参数就应设为2</span><br><span class="line"></span><br><span class="line">yarn.nodemanager.resource.detect-hardware-capabilities 是否让yarn自己检测硬件进行配置</span><br><span class="line"></span><br><span class="line">yarn.nodemanager.pmem-check-enabled 是否开启物理内存检查限制container</span><br><span class="line"></span><br><span class="line">yarn.nodemanager.vmem-check-enabled 是否开启虚拟内存检查限制container</span><br><span class="line"></span><br><span class="line">yarn.nodemanager.vmem-pmem-ratio 虚拟内存物理内存比例</span><br></pre></td></tr></table></figure><h2 id="(3)Container容器相关"><a href="#(3)Container容器相关" class="headerlink" title="(3)Container容器相关"></a>(3)Container容器相关</h2><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">yarn.scheduler.minimum-allocation-mb 容器最小内存</span><br><span class="line"></span><br><span class="line">yarn.scheduler.maximum-allocation-mb 容器最大内存</span><br><span class="line"></span><br><span class="line">yarn.scheduler.minimum-allocation-vcores 容器最小核数</span><br><span class="line"></span><br><span class="line">yarn.scheduler.maximum-allocation-vcores 容器最大核数</span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> Yarn </tag>
</tags>
</entry>
<entry>
<title>安装Hive3.1.2</title>
<link href="/2020/11/07/ec88b39f0ddd.html"/>
<url>/2020/11/07/ec88b39f0ddd.html</url>
<content type="html"><![CDATA[<p>下载地址:<a href="https://downloads.apache.org/hive/hive-3.1.2/">https://downloads.apache.org/hive/hive-3.1.2/</a></p><p><strong>解压在指定目录</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">tar -zxvf /opt/software/apache-hive-3.1.2-bin.tar.gz -C /opt/module/</span><br><span class="line">mv /opt/module/apache-hive-3.1.2-bin/ /opt/module/hive3.1.2</span><br></pre></td></tr></table></figure><p><strong>修改/etc/profile.d/my_env.sh,添加环境变量</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">#HIVE_HOME</span><br><span class="line">export HIVE_HOME=/opt/module/hive3.1.2</span><br><span class="line">export PATH=$PATH:$HIVE_HOME/bin</span><br></pre></td></tr></table></figure><p>3.1.2版本执行bin/schematool -dbType derby -initSchema的时候会报一个错误:</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]</span><br><span class="line">Exception in thread &quot;main&quot; java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V</span><br><span class="line"> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)</span><br></pre></td></tr></table></figure><p>删除掉hive里面的guava-19.0.jar,将hadoop里面的guava-27.0-jre.jar复制到hive里即可。</p><p><strong>默认的hive使用内置的derby,不好用,这里修改为mysql。</strong></p><p><code>vim $HIVE_HOME/conf/hive-site.xml</code></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;!--jdbc连接的URL--&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;javax.jdo.option.ConnectionURL&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;&lt;![CDATA[jdbc:mysql://139.199.106.57:3306/hive_metastore?characterEncoding=utf8&amp;serverTimezone=Asia/Shanghai&amp;useSSL=false&amp;tinyInt1isBit=false]]&gt;&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;!--jdbc连接的Driver--&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;javax.jdo.option.ConnectionDriverName&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;com.mysql.cj.jdbc.Driver&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;!--jdbc连接的username--&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;javax.jdo.option.ConnectionUserName&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;pupbook&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;!--jdbc连接的password--&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;javax.jdo.option.ConnectionPassword&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;pbFeZ^SCWmKg@Jar&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;!--Hive元数据存储版本的验证--&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;hive.metastore.schema.verification&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;false&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;!--元数据存储授权--&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;hive.metastore.event.db.notification.api.auth&lt;/name&gt;</span><br><span class="line"> 
&lt;value&gt;false&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure><p>初始化 Hive 元数据库 schematool -initSchema -dbType mysql -verbose</p><p><strong>完整的hive-site.xml的配置</strong></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;configuration&gt;</span><br><span class="line"> &lt;!--jdbc连接的URL--&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;javax.jdo.option.ConnectionURL&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;&lt;![CDATA[jdbc:mysql://139.199.106.57:3306/hive_metastore?characterEncoding=utf8&amp;serverTimezone=Asia/Shanghai&amp;useSSL=false&amp;tinyInt1isBit=false]]&gt;&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"> &lt;!--jdbc连接的Driver--&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;javax.jdo.option.ConnectionDriverName&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;com.mysql.cj.jdbc.Driver&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"> &lt;!--jdbc连接的username--&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;javax.jdo.option.ConnectionUserName&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;pupbook&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"> &lt;!--jdbc连接的password--&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;javax.jdo.option.ConnectionPassword&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;pbFeZ^SCWmKg@Jar&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"> &lt;!--Hive元数据存储版本的验证--&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;hive.metastore.schema.verification&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;false&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"> &lt;!--元数据存储授权--&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;hive.metastore.event.db.notification.api.auth&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;false&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;!--Hive默认在HDFS的工作目录--&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;hive.metastore.warehouse.dir&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;/user/hive/warehouse&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;!-- 指定存储元数据要连接的地址 --&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;hive.metastore.uris&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;thrift://node101:9083&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;!-- 打印表头 --&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;hive.cli.print.header&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;true&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"> &lt;!-- 打印当前库 --&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> 
&lt;name&gt;hive.cli.print.current.db&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;true&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"></span><br><span class="line"> &lt;!-- 指定 hiveserver2 连接的 host --&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;hive.server2.thrift.bind.host&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;node101&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line"> &lt;!-- 指定 hiveserver2 连接的端口号 --&gt;</span><br><span class="line"> &lt;property&gt;</span><br><span class="line"> &lt;name&gt;hive.server2.thrift.port&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;10000&lt;/value&gt;</span><br><span class="line"> &lt;/property&gt;</span><br><span class="line">&lt;/configuration&gt;</span><br></pre></td></tr></table></figure><p>创建hive启动脚本 <code>vim ~/bin/myhive.sh</code></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">#!/bin/bash</span><br><span class="line">HIVE_LOG_DIR=/data_disk/hive/logs</span><br><span class="line"># 配置hive日志目录</span><br><span class="line">if [ ! -d $HIVE_LOG_DIR ]</span><br><span class="line">then</span><br><span class="line">mkdir -p $HIVE_LOG_DIR</span><br><span class="line">fi</span><br><span class="line"></span><br><span class="line">#检查进程是否运行正常,参数 1 为进程名,参数 2 为进程端口</span><br><span class="line">function check_process()</span><br><span class="line">&#123;</span><br><span class="line"> pid=$(ps -ef 2&gt;/dev/null | grep -v grep | grep -i $1 | awk &#x27;&#123;print $2&#125;&#x27;)</span><br><span class="line"> ppid=$(netstat -nltp 2&gt;/dev/null | grep $2 | awk &#x27;&#123;print $7&#125;&#x27; | cut -d &#x27;/&#x27; -f 1)</span><br><span class="line"> echo $pid</span><br><span class="line"> [[ &quot;$pid&quot; =~ &quot;$ppid&quot; ]] &amp;&amp; [ &quot;$ppid&quot; ] &amp;&amp; return 0 || return 1</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">function hive_start()</span><br><span class="line">&#123;</span><br><span class="line"> metapid=$(check_process HiveMetastore 9083)</span><br><span class="line"> cmd=&quot;nohup /opt/module/hive3.1.2/bin/hive --service metastore &gt;$HIVE_LOG_DIR/metastore.log 2&gt;&amp;1 &amp;&quot;</span><br><span class="line"> cmd=$cmd&quot; sleep 3; hdfs dfsadmin -safemode wait &gt;/dev/null 2&gt;&amp;1&quot;</span><br><span class="line"> [ -z &quot;$metapid&quot; ] &amp;&amp; eval $cmd || echo &quot;Metastore 服务已启动&quot;</span><br><span class="line"> server2pid=$(check_process HiveServer2 10000)</span><br><span class="line"> cmd=&quot;nohup /opt/module/hive3.1.2/bin/hive --service hiveserver2 &gt;$HIVE_LOG_DIR/hiveServer2.log 2&gt;&amp;1 &amp;&quot;</span><br><span class="line"> [ -z &quot;$server2pid&quot; ] &amp;&amp; eval $cmd || echo &quot;HiveServer2 服务已启动&quot;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">function hive_stop()</span><br><span class="line">&#123;</span><br><span class="line"> metapid=$(check_process HiveMetastore 9083)</span><br><span class="line"> [ &quot;$metapid&quot; ] &amp;&amp; kill $metapid || echo &quot;Metastore 服务未启动&quot;</span><br><span class="line"> server2pid=$(check_process HiveServer2 10000)</span><br><span class="line"> [ &quot;$server2pid&quot; ] &amp;&amp; kill $server2pid || echo &quot;HiveServer2 服务未启动&quot;</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">case $1 in</span><br><span class="line">&quot;start&quot;)</span><br><span class="line"> hive_start</span><br><span class="line"> ;;</span><br><span class="line">&quot;stop&quot;)</span><br><span class="line"> hive_stop</span><br><span class="line"> ;;</span><br><span class="line">&quot;restart&quot;)</span><br><span class="line"> hive_stop</span><br><span class="line"> sleep 3</span><br><span class="line"> hive_start</span><br><span class="line"> ;;</span><br><span class="line">&quot;status&quot;)</span><br><span class="line"> check_process HiveMetastore 9083 &gt;/dev/null &amp;&amp; echo &quot;Metastore 服务运行正常&quot; || echo &quot;Metastore 服务运行异常&quot;</span><br><span class="line"> check_process HiveServer2 10000 &gt;/dev/null &amp;&amp; echo &quot;HiveServer2 服务运行正常&quot; || echo &quot;HiveServer2 服务运行异常&quot;</span><br><span class="line"> ;;</span><br><span class="line">*)</span><br><span class="line"> echo Invalid Args!</span><br><span class="line"> echo &#x27;Usage: &#x27;$(basename $0)&#x27; start|stop|restart|status&#x27;</span><br><span class="line"> ;;</span><br><span class="line">esac</span><br></pre></td></tr></table></figure><p><code>chmod +x myhive.sh</code></p>]]></content>
<tags>
<tag> Hadoop </tag>
<tag> Hive </tag>
</tags>
</entry>
<entry>
<title>记一次HiveServer2启动不起来问题</title>
<link href="/2020/11/07/fed9c14bf1ec.html"/>
<url>/2020/11/07/fed9c14bf1ec.html</url>
<content type="html"><![CDATA[<p>执行bin/hive –service hiveserver2<br>等待30秒后,Hadoop已经退出安全模式</p><p>但是绑定的10000端口未成功。</p><p>各种原因都尝试了,还是不行,查看hive日志。</p><p>出现权限的问题,我也很纳闷。</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">java.lang.RuntimeException: Error applying authorization policy on hive configuration: The dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx------</span><br></pre></td></tr></table></figure><p><img src="https://oscimg.oschina.net/oscnet/up-d53b671bdbcd201845e0f686b5057858118.png"></p><p>删除tmp目录后重新启动就可以了。</p>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> Hive </tag>
</tags>
</entry>
<entry>
<title>使用DBeaver链接Hive</title>
<link href="/2020/11/07/b95db6da5460.html"/>
<url>/2020/11/07/b95db6da5460.html</url>
<content type="html"><![CDATA[<p>1.<code>Hive</code>开启<code>HiveMetastore</code>和<code>HiveServer2</code></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">nohup /opt/module/hive3.1.2/bin/hive --service metastore &gt;$HIVE_LOG_DIR/metastore.log 2&gt;&amp;1 &amp;</span><br><span class="line">nohup /opt/module/hive3.1.2/bin/hive --service hiveserver2 &gt;$HIVE_LOG_DIR/hiveServer2.log 2&gt;&amp;1 &amp;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>2.配置链接的用户名和密码</p><p><code>vim conf/hive-site.xml</code></p><figure class="highlight xml"><table><tr><td class="code"><pre><span class="line"><span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hive.server2.thrift.client.user<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>lytfly<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">description</span>&gt;</span>Username to use against thrift client<span class="tag">&lt;/<span class="name">description</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;<span class="name">property</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">name</span>&gt;</span>hive.server2.thrift.client.password<span class="tag">&lt;/<span class="name">name</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">value</span>&gt;</span>199000<span class="tag">&lt;/<span class="name">value</span>&gt;</span></span><br><span class="line"> <span class="tag">&lt;<span class="name">description</span>&gt;</span>Password to use against thrift client<span class="tag">&lt;/<span class="name">description</span>&gt;</span></span><br><span class="line"><span class="tag">&lt;/<span class="name">property</span>&gt;</span></span><br><span class="line"></span><br></pre></td></tr></table></figure><p>确保添加的用户lytfly对HDFS文件系统<code>/tmp/hive</code>有权限,否会报以下错误:</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">java.lang.RuntimeException: Error applying authorization policy on hive configuration: The dir: /tmp/hive on HDFS should be writable. 
Current permissions are: rwx------</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><img src="https://oscimg.oschina.net/oscnet/up-5ffcc9e548737400a2800513ade085ce8ad.png"></p><p>解决方案:<a href="https://my.oschina.net/liuyuantao/blog/5276881">https://my.oschina.net/liuyuantao/blog/5276881</a></p><p>在<code>beeline</code>进行连接时,如果出现以下问题</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">Error: Could not open client transport with JDBC Uri: jdbc:hive2://node101:10000/default: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: lytfly is not allowed to impersonate lytfly (state=08S01,code=0)</span><br><span class="line"></span><br></pre></td></tr></table></figure><p><img src="https://oscimg.oschina.net/oscnet/up-830071c62cea7713b5cce38b8701a0e2680.png"></p><p>这是因为在<code>hadoop</code>的<code>core-site.xml</code>缺少了以下配置:</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;property&gt;</span><br><span class="line">&lt;name&gt;hadoop.proxyuser.lytfly.groups&lt;/name&gt;</span><br><span class="line">&lt;value&gt;*&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line">&lt;property&gt;</span><br><span class="line">&lt;name&gt;hadoop.proxyuser.lytfly.hosts&lt;/name&gt;</span><br><span class="line">&lt;value&gt;*&lt;/value&gt;</span><br><span class="line">&lt;/property&gt;</span><br><span class="line"></span><br></pre></td></tr></table></figure><p>重启<code>Hadoop</code>集群,重启<code>Hive</code>的<code>HiveMetastore</code>和<code>HiveServer2</code>即可。</p><p>错误一:提交一个Hive查询任务,用到了MapReduce,报下面错误</p><p>ERROR : Job Submission failed with exception ‘org.apache.hadoop.security.authorize.AuthorizationException(User: lytfly is not allowed to impersonate lytfly)’<br>org.apache.hadoop.security.authorize.AuthorizationException: User: lytfly is not allowed to impersonate lytfly</p><p>开始以为是权限的问题,尝试各种方法都不行,后来调整yarn调度的内存占用就可以了。</p><p>解决方法:调整hadoop配置文件yarn-site.xml中值:</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">&lt;property&gt;</span><br><span class="line"> &lt;name&gt;yarn.scheduler.minimum-allocation-mb&lt;/name&gt;</span><br><span class="line"> &lt;value&gt;2048&lt;/value&gt;</span><br><span class="line"> &lt;description&gt;default value is 1024&lt;/description&gt;</span><br><span class="line">&lt;/property&gt;</span><br></pre></td></tr></table></figure>]]></content>
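<p>注:修改 core-site.xml 中的 proxyuser 配置后,除了重启整个 Hadoop 集群,一般也可以用下面的命令在线刷新代理用户配置(参考做法);之后可以用 beeline 验证连接,用户名、密码即上文 hive-site.xml 中配置的值:</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line"># 在线刷新 NameNode 和 ResourceManager 的代理用户配置</span><br><span class="line">hdfs dfsadmin -refreshSuperUserGroupsConfiguration</span><br><span class="line">yarn rmadmin -refreshSuperUserGroupsConfiguration</span><br><span class="line"># 用 beeline 验证 HiveServer2 连接</span><br><span class="line">beeline -u jdbc:hive2://node101:10000 -n lytfly -p 199000</span><br></pre></td></tr></table></figure>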
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hive </tag>
</tags>
</entry>
<entry>
<title>Spring Cloud全家桶主要组件及简要介绍</title>
<link href="/2020/11/07/4a3534643ea9.html"/>
<url>/2020/11/07/4a3534643ea9.html</url>
<content type="html"><![CDATA[<p>一、微服务简介</p><p>微服务是最近的一两年的时间里是很火的一个概念。感觉不学习一下都快跟不上时代的步伐了,下边做一下简单的总结和介绍。</p><p>何为微服务?简而言之,微服务架构风格这种开发方法,是以开发一组小型服务的方式来开发一个独立的应用系统的。其中每个小型服务都运行在自己的进程中,并经常采用HTTP资源API这样轻量的机制来相互通信。这些服务围绕业务功能进行构建,并能通过全自动的部署机制来进行独立部署。这些微服务可以使用不同的语言来编写,并且可以使用不同的数据存储技术。对这些微服务我们仅做最低限度的集中管理。</p><p>一个微服务一般完成某个特定的功能,比如下单管理、客户管理等等。每一个微服务都是微型六角形应用,都有自己的业务逻辑和适配器。一些微服务还会发布API给其它微服务和应用客户端使用。其它微服务完成一个Web UI,运行时,每一个实例可能是一个云VM或者是Docker容器。</p><p>比如,一个前面描述系统可能的分解如下:</p><p><img src="https://oscimg.oschina.net/oscnet/98a0cd141b4e42da88ac780bc2bfdff3.jpg" alt="Spring Cloud全家桶主要组件及简要介绍"></p><p>总的来说,微服务的主旨是将一个原本独立的系统拆分成多个小型服务,这些小型服务都在各自独立的进程中运行,服务之间通过基于HTTP的RESTful API进行通信协作,并且每个服务都维护着自身的数据存储、业务开发、自动化测试以及独立部署机制。</p><p>二、微服务的特征</p><p>1、每个微服务可独立运行在自己的进程里;</p><p>2、一系列独立运行的微服务共同构建起了整个系统;</p><p>3、每个服务为独立的业务开发,一个微服务一般完成某个特定的功能,比如:订单管理、用户管理等;</p><p>4、微服务之间通过一些轻量的通信机制进行通信,例如通过REST API或者RPC的方式进行调用。</p><p>三、微服务的优缺点</p><p>1、易于开发和维护</p><p>2、启动较快</p><p>3、局部修改容易部署</p><p>4、技术栈不受限</p><p>5、按需伸缩</p><p>6、DevOps</p><p>四、常见微服务框架</p><p>1、服务治理框架</p><p>(1)Dubbo、Dubbox(当当网对Dubbo的扩展)</p><p>最近的好消息是Dubbo已近重新开始进行运维啦!</p><p>(2)Netflix的Eureka、Apache的Consul等。</p><p>Spring Cloud Eureka是对Netflix的Eureka的进一步封装。</p><p>2、分布式配置管理</p><p>(1)百度的Disconf</p><p><img src="https://oscimg.oschina.net/oscnet/08f219dc211e44c1999ded518f6753b4.jpg" alt="Spring Cloud全家桶主要组件及简要介绍"></p><p>(2)360的QConf</p><p>(3)Spring Cloud组件中的Config</p><p>(3)淘宝的Diamond</p><p>3、批量任务框架</p><p>(1)Spring Cloud组件中的Task</p><p>(2)LTS</p><p>4、服务追踪框架</p><p>。。。</p><p>五、Spring Cloud全家桶组件</p><p>在介绍Spring Cloud 全家桶之前,首先要介绍一下Netflix ,Netflix 是一个很伟大的公司,在Spring Cloud项目中占着重要的作用,Netflix 公司提供了包括Eureka、Hystrix、Zuul、Archaius等在内的很多组件,在微服务架构中至关重要,Spring在Netflix 的基础上,封装了一系列的组件,命名为:Spring Cloud Eureka、Spring Cloud Hystrix、Spring Cloud Zuul等,下边对各个组件进行分别得介绍:</p><p>(1)Spring Cloud Eureka</p><p>我们使用微服务,微服务的本质还是各种API接口的调用,那么我们怎么产生这些接口、产生了这些接口之后如何进行调用那?如何进行管理哪?</p><p>答案就是Spring Cloud Eureka,我们可以将自己定义的API 接口注册到Spring Cloud Eureka上,Eureka负责服务的注册于发现,如果学习过Zookeeper的话,就可以很好的理解,Eureka的角色和 Zookeeper的角色差不多,都是服务的注册和发现,构成Eureka体系的包括:服务注册中心、服务提供者、服务消费者。</p><p><img src="https://oscimg.oschina.net/oscnet/a696b86e3b244942a8ebe7283850010d.jpg" alt="Spring Cloud全家桶主要组件及简要介绍"></p><p>上图中描述了(图片来源于网络):</p><p>1、两台Eureka服务注册中心构成的服务注册中心的主从复制集群;</p><p>2、然后服务提供者向注册中心进行注册、续约、下线服务等;</p><p>3、服务消费者向Eureka注册中心拉去服务列表并维护在本地(这也是客户端发现模式的机制体现!);</p><p>4、然后服务消费者根据从Eureka服务注册中心获取的服务列表选取一个服务提供者进行消费服务。</p><p>(2)Spring Cloud Ribbon</p><p>在上Spring Cloud Eureka描述了服务如何进行注册,注册到哪里,服务消费者如何获取服务生产者的服务信息,但是Eureka只是维护了服务生产者、注册中心、服务消费者三者之间的关系,真正的服务消费者调用服务生产者提供的数据是通过Spring Cloud Ribbon来实现的。</p><p>在(1)中提到了服务消费者是将服务从注册中心获取服务生产者的服务列表并维护在本地的,这种客户端发现模式的方式是服务消费者选择合适的节点进行访问服务生产者提供的数据,这种选择合适节点的过程就是Spring Cloud Ribbon完成的。</p><p>Spring Cloud Ribbon客户端负载均衡器由此而来。</p><p>(3)Spring Cloud Feign</p><p>上述(1)、(2)中我们已经使用最简单的方式实现了服务的注册发现和服务的调用操作,如果具体的使用Ribbon调用服务的话,你就可以感受到使用Ribbon的方式还是有一些复杂,因此Spring Cloud Feign应运而生。</p><p>Spring Cloud Feign 是一个声明web服务客户端,这使得编写Web服务客户端更容易,使用Feign 创建一个接口并对它进行注解,它具有可插拔的注解支持包括Feign注解与JAX-RS注解,Feign还支持可插拔的编码器与解码器,Spring Cloud 增加了对 Spring MVC的注解,Spring Web 默认使用了HttpMessageConverters, Spring Cloud 集成 Ribbon 和 Eureka 提供的负载均衡的HTTP客户端 Feign。</p><p>简单的可以理解为:Spring Cloud Feign 的出现使得Eureka和Ribbon的使用更为简单。</p><p>(4)Spring Cloud Hystrix</p><p>我们在(1)、(2)、(3)中知道了使用Eureka进行服务的注册和发现,使用Ribbon实现服务的负载均衡调用,还知道了使用Feign可以简化我们的编码。但是,这些还不足以实现一个高可用的微服务架构。</p><p>例如:当有一个服务出现了故障,而服务的调用方不知道服务出现故障,若此时调用放的请求不断的增加,最后就会等待出现故障的依赖方 相应形成任务的积压,最终导致自身服务的瘫痪。</p><p>Spring Cloud 
Hystrix正是为了解决这种情况的,防止对某一故障服务持续进行访问。Hystrix的含义是:断路器,断路器本身是一种开关装置,用于我们家庭的电路保护,防止电流的过载,当线路中有电器发生短路的时候,断路器能够及时切换故障的电器,防止发生过载、发热甚至起火等严重后果。</p><p>(5)Spring Cloud Config</p><p>对于微服务还不是很多的时候,各种服务的配置管理起来还相对简单,但是当成百上千的微服务节点起来的时候,服务配置的管理变得会复杂起来。</p><p>分布式系统中,由于服务数量巨多,为了方便服务配置文件统一管理,实时更新,所以需要分布式配置中心组件。在Spring Cloud中,有分布式配置中心组件Spring Cloud Config ,它支持配置服务放在配置服务的内存中(即本地),也支持放在远程Git仓库中。在Cpring Cloud Config 组件中,分两个角色,一是Config Server,二是Config Client。</p><p>Config Server用于配置属性的存储,存储的位置可以为Git仓库、SVN仓库、本地文件等,Config Client用于服务属性的读取。</p><p><img src="https://oscimg.oschina.net/oscnet/e4cb7660933c443ebc2a1eeccda4cad9.jpg" alt="Spring Cloud全家桶主要组件及简要介绍"></p><p>(6)Spring Cloud Zuul</p><p>我们使用Spring Cloud Netflix中的Eureka实现了服务注册中心以及服务注册与发现;而服务间通过Ribbon或Feign实现服务的消费以及均衡负载;通过Spring Cloud Config实现了应用多环境的外部化配置以及版本管理。为了使得服务集群更为健壮,使用Hystrix的融断机制来避免在微服务架构中个别服务出现异常时引起的故障蔓延。</p><p><img src="https://oscimg.oschina.net/oscnet/ac56e80dc9e141b89e1164ad6be34c57.jpg" alt="Spring Cloud全家桶主要组件及简要介绍"></p><p>先来说说这样架构需要做的一些事儿以及存在的不足:</p><p>1、首先,破坏了服务无状态特点。为了保证对外服务的安全性,我们需要实现对服务访问的权限控制,而开放服务的权限控制机制将会贯穿并污染整个开放服务的业务逻辑,这会带来的最直接问题是,破坏了服务集群中REST API无状态的特点。从具体开发和测试的角度来说,在工作中除了要考虑实际的业务逻辑之外,还需要额外可续对接口访问的控制处理。</p><p>2、其次,无法直接复用既有接口。当我们需要对一个即有的集群内访问接口,实现外部服务访问时,我们不得不通过在原有接口上增加校验逻辑,或增加一个代理调用来实现权限控制,无法直接复用原有的接口。</p><p>面对类似上面的问题,我们要如何解决呢?下面进入本文的正题:服务网关!</p><p>为了解决上面这些问题,我们需要将权限控制这样的东西从我们的服务单元中抽离出去,而最适合这些逻辑的地方就是处于对外访问最前端的地方,我们需要一个更强大一些的均衡负载器,它就是本文将来介绍的:服务网关。</p><p>服务网关是微服务架构中一个不可或缺的部分。通过服务网关统一向外系统提供REST API的过程中,除了具备服务路由、均衡负载功能之外,它还具备了权限控制等功能。Spring Cloud Netflix中的Zuul就担任了这样的一个角色,为微服务架构提供了前门保护的作用,同时将权限控制这些较重的非业务逻辑内容迁移到服务路由层面,使得服务集群主体能够具备更高的可复用性和可测试性。</p><p>(7)Spring Cloud Bus</p><p>在(5)Spring Cloud Config中,我们知道的配置文件可以通过Config Server存储到Git等地方,通过Config Client进行读取,但是我们的配置文件不可能是一直不变的,当我们的配置文件放生变化的时候如何进行更新哪?</p><p>一种最简单的方式重新一下Config Client进行重新获取,但Spring Cloud绝对不会让你这样做的,Spring Cloud Bus正是提供一种操作使得我们在不关闭服务的情况下更新我们的配置。</p><p>Spring Cloud Bus官方意义:消息总线。</p><p>当然动态更新服务配置只是消息总线的一个用处,还有很多其他用处。</p><p><img src="https://oscimg.oschina.net/oscnet/7e76a78a37c4411ab79308d144128a86.jpg" alt="Spring Cloud全家桶主要组件及简要介绍"></p>]]></content>
<categories>
<category> Spring&amp;SpringBoot </category>
</categories>
</entry>
<entry>
<title>使用SecondaryNameNode恢复NameNode的数据</title>
<link href="/2020/09/28/5425819fb73d.html"/>
<url>/2020/09/28/5425819fb73d.html</url>
<content type="html"><![CDATA[<p><img src="https://oscimg.oschina.net/oscnet/up-3b629c3742503f1d69ff7221c9ace166ee0.png"></p><h1 id="1)需求:"><a href="#1)需求:" class="headerlink" title="1)需求:"></a>1)需求:</h1><p>NameNode进程挂了并且存储的数据也丢失了,如何恢复NameNode</p><p>此种方式恢复的数据可能存在小部分数据的丢失。</p><h1 id="2)故障模拟"><a href="#2)故障模拟" class="headerlink" title="2)故障模拟"></a>2)故障模拟</h1><p>(1)kill -9 NameNode进程</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 current]$ kill -9 19886</span><br></pre></td></tr></table></figure><p>(2)删除NameNode存储的数据(/opt/module/hadoop-3.1.4/data/tmp/dfs/name)</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ rm -rf /opt/module/hadoop-3.1.4/data/dfs/name/*</span><br></pre></td></tr></table></figure><h1 id="3)问题解决"><a href="#3)问题解决" class="headerlink" title="3)问题解决"></a>3)问题解决</h1><p>(1)拷贝SecondaryNameNode中数据到原NameNode存储数据目录</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 dfs]$ scp -r lytfly@hadoop104:/opt/module/hadoop-3.1.4/data/dfs/namesecondary/* ./name/</span><br></pre></td></tr></table></figure><p>(2)重新启动NameNode</p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[lytfly@hadoop102 hadoop-3.1.4]$ hdfs --daemon start namenode</span><br></pre></td></tr></table></figure>]]></content>
<categories>
<category> 大数据 </category>
</categories>
<tags>
<tag> Hadoop </tag>
<tag> HDFS </tag>
</tags>
</entry>
<entry>
<title>Hadoop生态圈</title>
<link href="/2020/09/07/f6e086c72cd4.html"/>
<url>/2020/09/07/f6e086c72cd4.html</url>
<content type="html"><![CDATA[<p><a href="https://static.oschina.net/uploads/space/2017/0525/091319_9tNE_2772739.png"><img src="https://static.oschina.net/uploads/space/2017/0525/091319_9tNE_2772739.png"></a></p>]]></content>
<tags>
<tag> Hadoop </tag>
</tags>
</entry>
<entry>
<title>Springboot以jar包方式启动、关闭、重启脚本</title>
<link href="/2019/11/09/9f4bb5de3c47.html"/>
<url>/2019/11/09/9f4bb5de3c47.html</url>
<content type="html"><![CDATA[<h2 id="启动"><a href="#启动" class="headerlink" title="启动"></a>启动</h2><p>编写启动脚本<code>startup.sh</code></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">#!/bin/bash</span><br><span class="line"></span><br><span class="line">echo Starting application</span><br><span class="line"></span><br><span class="line">nohup java -jar activiti_demo-0.0.1-SNAPSHOT.jar &amp;</span><br></pre></td></tr></table></figure><p>授权<br><code>chmod +x startup.sh</code></p><h2 id="关闭"><a href="#关闭" class="headerlink" title="关闭"></a>关闭</h2><p>编写关闭脚本<code>stop.sh</code></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">#!/bin/bash</span><br><span class="line"></span><br><span class="line">PID=$(ps -ef | grep activiti_demo-0.0.1-SNAPSHOT.jar | grep -v grep | awk &#x27;&#123; print $2 &#125;&#x27;)</span><br><span class="line"></span><br><span class="line">if \[ -z &quot;$PID&quot; \]</span><br><span class="line"></span><br><span class="line">then</span><br><span class="line"></span><br><span class="line">echo Application is already stopped</span><br><span class="line"></span><br><span class="line">else</span><br><span class="line"></span><br><span class="line">echo kill $PID</span><br><span class="line"></span><br><span class="line">kill $PID</span><br><span class="line"></span><br><span class="line">fi</span><br></pre></td></tr></table></figure><p>授权</p><p><code>chmod +x stop.sh</code></p><h2 id="重启"><a href="#重启" class="headerlink" title="重启"></a>重启</h2><p>编写重启脚本<code>restart.sh</code></p><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">#!/bin/bash</span><br><span class="line"></span><br><span class="line">echo Stopping application</span><br><span class="line"></span><br><span class="line">source ./stop.sh</span><br><span class="line"></span><br><span class="line">echo Starting application</span><br><span class="line"></span><br><span class="line">source ./startup.sh</span><br></pre></td></tr></table></figure><p>授权</p><p><code>chmod +x restart.sh</code></p>]]></content>
<categories>
<category> Spring&amp;SpringBoot </category>
</categories>
</entry>
</search>