Note: the Tomcat and Nutch paths below need to be changed to match your own setup:
# Nutch root directory
NUTCH_HOME=/cygdrive/e/java/CoreJava/IndexSearchAbout/nutch-1.0
# Tomcat directory
CATALINA_HOME=/cygdrive/d/JavaTools/apache-tomcat-6.0.14
Also batch-replace crawled/ throughout the script with your own index storage directory.
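Once the script is saved to a file (see below), that batch replacement can be done with a single sed command, for example (a sketch only; it assumes the file is named runbot and the target directory is /cygdrive/e/crawl-data, both of which you should adjust):
sed -i 's|crawled/|/cygdrive/e/crawl-data/|g' runbot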
Save this shell script into your Nutch root directory under any name you like (e.g. runbot), then run it from Cygwin by simply typing the file name.
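For example (assuming the file was saved as runbot in the Nutch root directory):
$ cd /cygdrive/e/java/CoreJava/IndexSearchAbout/nutch-1.0
$ chmod +x runbot
$ ./runbot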
#!/bin/sh
# runbot script to run the Nutch bot for crawling and re-crawling.
# Usage: bin/runbot [safe]
# If executed in 'safe' mode, it doesn't delete the temporary
# directories generated during crawl. This might be helpful for
# analysis and recovery in case a crawl fails.
#
# Author: Susam Pal
#
# Pay special attention during incremental crawls: if Crawl and
# Searcher run on the same machine, Tomcat (while running) holds open
# handles on the index files. An incremental crawl has to delete the
# old index and regenerate it (the crawl/index folder), so while that
# folder is held by Tomcat the crawler cannot touch it and the script
# fails. A simple way to check whether crawl/index is locked is to try
# deleting it by hand (back the folder up first!); if it cannot be
# deleted, it is locked.
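#
# A scripted version of that check might look like this (a sketch only;
# it backs up the folder first and assumes crawled/index exists):
#   cp -r crawled/index crawled/index.bak
#   if rm -rf crawled/index 2>/dev/null; then
#       mv crawled/index.bak crawled/index   # not locked; restore it
#   else
#       echo "crawled/index is locked (probably by Tomcat)"
#       # restore by hand from crawled/index.bak if partially deleted
#   fi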
#
# The recrawl script below solves the lock problem in principle, but
# because java.exe may not exit promptly after Tomcat shuts down, it
# still often throws a 'directory exists'-style exception.
# 1. Inject the crawl seed URLs
# 2. Crawl one depth at a time
# 3. Merge the fetched segments
# 4. Write segment link data into linkdb
# 5. Generate the new indexes
# 6. Deduplicate
# 7. Merge the indexes
# 8. Stop Tomcat to release the index, swap in the new index, then restart Tomcat
#
# Parameter settings
depth=5
threads=10
adddays=1
topN=30 # Comment this line out if you don't want to limit topN
# Arguments for rm and mv
RMARGS="-rf"
MVARGS="--verbose"
# Safe mode (hard-coded here rather than parsed from $1): when yes, the
# old indexes are backed up before being replaced; otherwise they are
# deleted and replaced directly.
safe=yes
# Nutch root directory
NUTCH_HOME=/cygdrive/e/java/CoreJava/IndexSearchAbout/nutch-1.0
# Tomcat directory
CATALINA_HOME=/cygdrive/d/JavaTools/apache-tomcat-6.0.14
if [ -z "$NUTCH_HOME" ]
then
echo runbot: $0 could not find environment variable NUTCH_HOME
echo runbot: NUTCH_HOME=$NUTCH_HOME has been set by the script
else
echo runbot: $0 found environment variable NUTCH_HOME=$NUTCH_HOME
fi
if [ -z "$CATALINA_HOME" ]
then
echo runbot: $0 could not find environment variable CATALINA_HOME
echo runbot: CATALINA_HOME=$CATALINA_HOME has been set by the script
else
echo runbot: $0 found environment variable CATALINA_HOME=$CATALINA_HOME
fi
if [ -n "$topN" ]
then
topN="-topN $topN"
else
topN=""
fi
steps=8
# 1. Inject the crawl seed URLs
echo "----- Inject (Step 1 of $steps) -----"
$NUTCH_HOME/bin/nutch inject crawled/crawldb urls/url.txt
# 2. Crawl one depth at a time
echo "----- Generate, Fetch, Parse, Update (Step 2 of $steps) -----"
for((i=0; i < $depth; i++))
do
echo "--- Beginning crawl at depth `expr $i + 1` of $depth ---"
$NUTCH_HOME/bin/nutch generate crawled/crawldb crawled/segments $topN \
-adddays $adddays
if [ $? -ne 0 ]
then
echo "runbot: Stopping at depth $depth. No more URLs to fetcfh."
break
fi
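# Pick the most recently generated segment (segment directory names
# are timestamps, so the last entry is the newest)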
segment=`ls -d crawled/segments/* | tail -1`
$NUTCH_HOME/bin/nutch fetch $segment -threads $threads
if [ $? -ne 0 ]
then
echo "runbot: fetch $segment at depth `expr $i + 1` failed."
echo "runbot: Deleting segment $segment."
rm $RMARGS $segment
continue
fi
$NUTCH_HOME/bin/nutch updatedb crawled/crawldb $segment
done
# 3. Merge the fetched segments
echo "----- Merge Segments (Step 3 of $steps) -----"
# Merge all segments into one and store it in MERGEDsegments
$NUTCH_HOME/bin/nutch mergesegs crawled/MERGEDsegments crawled/segments/*
#rm $RMARGS crawled/segments
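# Rotate: keep the pre-merge segments as BACKUPsegments, then replace
# crawled/segments with the merged result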
rm $RMARGS crawled/BACKUPsegments
mv $MVARGS crawled/segments crawled/BACKUPsegments
mkdir crawled/segments
mv $MVARGS crawled/MERGEDsegments/* crawled/segments
rm $RMARGS crawled/MERGEDsegments
# 4. Write segment link data into linkdb
echo "----- Invert Links (Step 4 of $steps) -----"
$NUTCH_HOME/bin/nutch invertlinks crawled/linkdb crawled/segments/*
# 5. Generate the new indexes
echo "----- Index (Step 5 of $steps) -----"
$NUTCH_HOME/bin/nutch index crawled/NEWindexes crawled/crawldb crawled/linkdb crawled/segments/*
# 6. Deduplicate
echo "----- Dedup (Step 6 of $steps) -----"
$NUTCH_HOME/bin/nutch dedup crawled/NEWindexes
# 7. Merge the indexes
echo "----- Merge Indexes (Step 7 of $steps) -----"
$NUTCH_HOME/bin/nutch merge crawled/NEWindex crawled/NEWindexes
# 8. Stop Tomcat to release the threads holding the index, swap in the
# new index, then restart Tomcat. Tomcat must be stopped first;
# otherwise it keeps the index folder open and the update fails
# (with a 'directory exists'-style exception).
echo "----- Loading New Index (Step 8 of $steps) -----"
#${CATALINA_HOME}/bin/shutdown.sh
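# If you enable the shutdown call above, give the Tomcat JVM a moment
# to exit and release its handles on crawled/index before the swap
# below (a sketch; the 10-second wait is an arbitrary guess):
#sleep 10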
# In safe mode, back up the old indexes before deleting them
if [ "$safe" != "yes" ]
then
rm $RMARGS crawled/NEWindexes
rm $RMARGS crawled/index
else
rm $RMARGS crawled/BACKUPindexes
rm $RMARGS crawled/BACKUPindex
mv $MVARGS crawled/NEWindexes crawled/BACKUPindexes
mv $MVARGS crawled/index crawled/BACKUPindex
fi
# The old index was deleted above; now move the new index into place
mv $MVARGS crawled/NEWindex crawled/index
# Restart Tomcat once the index update completes
#${CATALINA_HOME}/bin/startup.sh
echo "runbot: FINISHED: Crawl completed!"
echo ""