Changelog:
Added compatibility handling for the category-parsing issue.
Usage:
C:\Users\obaby>F:\Pycharm_Projects\meitulu-spider\dist\imn5_v2.exe
****************************************************************************************************
爱美女网爬虫[预览版] 23.07.02
当前服务器地址:https://www.imn5.cc/
Blog: http://oba.by
姐姐的上面的域名怎样啊?说不好的不让用!!哼!!
****************************************************************************************************
USAGE:
spider -h <help> -a <all> -q <search>
Arguments:
-a <download all site images>
-q <query the image with keywords>
-h <display help text, just this>
Option Arguments:
-p <image download path>
-r <random index category list>
-c <single category url>
-e <early stop, work in site crawl mode only>
-s <site url eg: https://www.xrmnw.cc (no last backslash "/")>
****************************************************************************************************
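Based on the flag list printed above, typical invocations might look like the following. This is a sketch only: the keyword, download path, and working directory are made-up examples, and the exe path assumes the build location shown in the prompt above.

```shell
:: Search mode: download images matching a keyword (-q) into a custom path (-p).
:: "keyword" and D:\spider_images are hypothetical placeholders.
F:\Pycharm_Projects\meitulu-spider\dist\imn5_v2.exe -q keyword -p D:\spider_images

:: Full-site crawl (-a) with early stop (-e), pointed at an alternate
:: mirror via -s. Note: per the help text, no trailing "/" on the URL.
F:\Pycharm_Projects\meitulu-spider\dist\imn5_v2.exe -a -e -s https://www.xrmnw.cc
```

The `-e` early-stop flag only takes effect in full-site crawl mode (`-a`), per the help text; combining it with `-q` has no effect.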