Compare commits
108 Commits
SHA1 |
---|
489e8bf0f4 | |||
86c31f9b5e | |||
6f20405f47 | |||
c0143548d1 | |||
114c364f03 | |||
af26482b6d | |||
b8ea917db2 | |||
963f4d9ddf | |||
ef36e012ce | |||
16e8ce6f45 | |||
0632826827 | |||
8d2cd1974b | |||
8c176cd2ad | |||
f2c88e8ade | |||
2300744c5c | |||
7f30c84eff | |||
dda849b770 | |||
14b3c82248 | |||
4577e9df9a | |||
de157ccb7f | |||
126bbe8d49 | |||
8546b9e759 | |||
6ff9751c30 | |||
ddc4a20251 | |||
206aa3710a | |||
b5b201f61c | |||
eb8b41cd1d | |||
98bf88d638 | |||
0bc83982e4 | |||
99edcef9ac | |||
3ddd474aab | |||
f2573d5f10 | |||
147eec57cf | |||
f316c3243b | |||
967e0b4ff5 | |||
22cf2592dd | |||
caa0753adb | |||
0e14dd62d5 | |||
7c9693785e | |||
08ad73b683 | |||
a56d3ca18c | |||
c1975897d2 | |||
4ed596ff98 | |||
debf287fb0 | |||
308c5277b8 | |||
b425c883c7 | |||
7bf9507bd2 | |||
5f5245f70f | |||
45fb35b950 | |||
2271b83d93 | |||
0ee000edeb | |||
a47359f411 | |||
48c6fadc98 | |||
dbc834ea2e | |||
71177ff94e | |||
d1ed9b6980 | |||
42a09e2c1e | |||
e306d50b7e | |||
043f391d04 | |||
9549c5f5a2 | |||
5592b30be4 | |||
12f7b2225b | |||
b0e71c9a6c | |||
ad64a5685a | |||
6bd0a6b96a | |||
3a80c233d5 | |||
69e0d1d6f1 | |||
c300a2777f | |||
d0d7fb7015 | |||
4ed91db60a | |||
4c11288d63 | |||
de476aac46 | |||
a3fb75eb11 | |||
bb5024f1d7 | |||
da5b860e5f | |||
8b63d41cbb | |||
55d24883be | |||
8f3bdc73bf | |||
cc2f0521b3 | |||
795e8b2bb8 | |||
97b2ba8fd2 | |||
6858bacd41 | |||
148b4a1a08 | |||
3ba8f62fe2 | |||
16d3b555c9 | |||
0d185f465d | |||
3eacd118ed | |||
e42f42d7db | |||
fd0b53ee36 | |||
35fec2e1f4 | |||
40e880cf77 | |||
2f756ecb5b | |||
441317c28c | |||
8442f00c6c | |||
43e59b724a | |||
5d6a773460 | |||
9fe43dc219 | |||
0f89ff4d63 | |||
5bb98aa007 | |||
a4ac1c9720 | |||
8d25673180 | |||
aab92bbc8e | |||
2b52e300d4 | |||
6e3299a08d | |||
e598c8686a | |||
dd7b2d493e | |||
3d481dbf13 | |||
3a52e8a8bc |
.gitignore (vendored, 2 changes)

@@ -4,4 +4,4 @@ build
dist/
*.egg-info
.python-version

.DS_Store
.travis.yml (11 changes)

@@ -1,15 +1,18 @@
os:
- linux
- os x

language: python
python:
- 2.7
- 2.6
- 3.6
- 3.5
- 3.4

install:
- python setup.py install

script:
- nhentai --search umaru
- nhentai --ids=152503,146134 -t 10 --download --path=/tmp/
- NHENTAI=https://nhentai.net nhentai --search umaru
- NHENTAI=https://nhentai.net nhentai --id=152503,146134 -t 10 --output=/tmp/
- NHENTAI=https://nhentai.net nhentai -l nhentai_test:nhentai --output=/tmp/
- NHENTAI=https://nhentai.net nhentai --tag lolicon
MANIFEST.in

@@ -1,2 +1,5 @@
include README.md
include requirements.txt
include README.md
include requirements.txt
include nhentai/viewer/index.html
include nhentai/viewer/styles.css
include nhentai/viewer/scripts.js
README.md (130 changes)

@@ -1,52 +1,78 @@
|
||||
nhentai
|
||||
=======
|
||||
_ _ _ _
|
||||
_ __ | | | | ___ _ __ | |_ __ _(_)
|
||||
| '_ \| |_| |/ _ \ '_ \| __/ _` | |
|
||||
| | | | _ | __/ | | | || (_| | |
|
||||
|_| |_|_| |_|\___|_| |_|\__\__,_|_|
|
||||
|
||||
あなたも変態。 いいね?
|
||||
[](https://travis-ci.org/RicterZ/nhentai)
|
||||
|
||||
Torrents downloaded from [http://nhentai.net](http://nhentai.net) are very slow, and the official site already supports reading doujinshi online, so this script can be used to download them directly.
|
||||
### Installation
|
||||
|
||||
git clone https://github.com/RicterZ/nhentai
|
||||
cd nhentai
|
||||
python setup.py install
|
||||
|
||||
|
||||
### Usage
|
||||
+ Download the doujinshi with a given id:
|
||||
|
||||
|
||||
nhentai --id=123855 --download
|
||||
|
||||
|
||||
+ Download the doujinshi for a list of ids:
|
||||
|
||||
|
||||
nhentai --ids=123855,123866 --download
|
||||
|
||||
|
||||
+ Download the first page of results for a keyword (not recommended):
|
||||
|
||||
|
||||
nhentai --search="tomori" --page=1 --download
|
||||
|
||||
|
||||
`-t, --thread` sets the number of download threads, up to a maximum of 10.
|
||||
`--path` sets the output path for downloaded files; defaults to the current directory.
|
||||
`--timeout` sets the timeout for downloading each image; defaults to 30 seconds.
|
||||
`--proxy` sets the proxy used for downloading, for example: http://127.0.0.1:8080/
|
||||
|
||||
|
||||

|
||||

|
||||
|
||||
### License
|
||||
MIT
|
||||
|
||||
### あなたも変態
|
||||

|
||||
nhentai
|
||||
=======
|
||||
_ _ _ _
|
||||
_ __ | | | | ___ _ __ | |_ __ _(_)
|
||||
| '_ \| |_| |/ _ \ '_ \| __/ _` | |
|
||||
| | | | _ | __/ | | | || (_| | |
|
||||
|_| |_|_| |_|\___|_| |_|\__\__,_|_|
|
||||
|
||||
あなたも変態。 いいね?
|
||||
[](https://travis-ci.org/RicterZ/nhentai) 
|
||||
|
||||
|
||||
nHentai is a CLI tool for downloading doujinshi from [nhentai.net](http://nhentai.net).
|
||||
|
||||
### Installation
|
||||
|
||||
git clone https://github.com/RicterZ/nhentai
|
||||
cd nhentai
|
||||
python setup.py install
|
||||
|
||||
### Gentoo
|
||||
|
||||
layman -fa glicOne
|
||||
sudo emerge net-misc/nhentai
|
||||
|
||||
### Usage
|
||||
Download specified doujinshi:
|
||||
```bash
|
||||
nhentai --id=123855,123866
|
||||
```
|
||||
|
||||
Search a keyword and download the first page:
|
||||
```bash
|
||||
nhentai --search="tomori" --page=1 --download
|
||||
```
|
||||
|
||||
Download your favourite doujinshi (login required):
|
||||
```bash
|
||||
nhentai --login "username:password" --download
|
||||
```
|
||||
|
||||
Download by tag name:
|
||||
```bash
|
||||
nhentai --tag lolicon --download
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
+ `-t, --thread`: Download threads, max: 10
|
||||
+ `--output`: Output directory for saving doujinshi
|
||||
+ `--tag`: Download by tag name
|
||||
+ `--timeout`: Timeout of downloading each image
|
||||
+ `--proxy`: Use proxy, example: http://127.0.0.1:8080/
|
||||
+ `--login`: username:password pair of your nhentai account
|
||||
+ `--nohtml`: Do not generate HTML
|
||||
+ `--cbz`: Generate Comic Book CBZ File
|
||||
|
||||
### nHentai Mirror
|
||||
If you want to use a mirror, you should set up reverse proxies for `nhentai.net` and `i.nhentai.net`.
|
||||
For example:
|
||||
|
||||
i.h.loli.club -> i.nhentai.net
|
||||
h.loli.club -> nhentai.net
|
||||
|
||||
Set the `NHENTAI` environment variable to your nhentai mirror.
|
||||
```bash
|
||||
NHENTAI=http://h.loli.club nhentai --id 123456
|
||||
```
|
||||
|
||||

|
||||

|
||||

|
||||
|
||||
### License
|
||||
MIT
|
||||
|
||||
### あなたも変態
|
||||

|
||||
|
(image diff) Size: 541 KiB → 189 KiB

images/image.jpg — Executable file → Normal file (0 changes), Size: 34 KiB → 34 KiB

(image diff) Size: 658 KiB → 173 KiB

images/viewer.png — new binary file, Size: 311 KiB
nhentai/__init__.py

@@ -1,3 +1,3 @@
__version__ = '0.1.4'
__author__ = 'Ricter'
__version__ = '0.2.16'
__author__ = 'RicterZ'
__email__ = 'ricterzheng@gmail.com'
nhentai/cmdline.py

@@ -1,72 +1,123 @@
|
||||
# coding: utf-8
|
||||
from __future__ import print_function
|
||||
import sys
|
||||
from optparse import OptionParser
|
||||
from logger import logger
|
||||
from nhentai import __version__
|
||||
try:
|
||||
from itertools import ifilter as filter
|
||||
except ImportError:
|
||||
pass
|
||||
|
||||
import nhentai.constant as constant
|
||||
from nhentai.utils import urlparse, generate_html
|
||||
from nhentai.logger import logger
|
||||
|
||||
import constant
|
||||
try:
|
||||
if sys.version_info < (3, 0, 0):
|
||||
import codecs
|
||||
import locale
|
||||
sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout)
|
||||
sys.stderr = codecs.getwriter(locale.getpreferredencoding())(sys.stderr)
|
||||
|
||||
except NameError:
|
||||
# python3
|
||||
pass
|
||||
|
||||
|
||||
def banner():
|
||||
logger.info('''nHentai: あなたも変態。 いいね?
|
||||
logger.info(u'''nHentai ver %s: あなたも変態。 いいね?
|
||||
_ _ _ _
|
||||
_ __ | | | | ___ _ __ | |_ __ _(_)
|
||||
| '_ \| |_| |/ _ \ '_ \| __/ _` | |
|
||||
| | | | _ | __/ | | | || (_| | |
|
||||
|_| |_|_| |_|\___|_| |_|\__\__,_|_|
|
||||
''')
|
||||
''' % __version__)
|
||||
|
||||
|
||||
def cmd_parser():
|
||||
parser = OptionParser()
|
||||
parser.add_option('--download', dest='is_download', action='store_true', help='download doujinshi or not')
|
||||
parser.add_option('--id', type='int', dest='id', action='store', help='doujinshi id of nhentai')
|
||||
parser.add_option('--ids', type='str', dest='ids', action='store', help='doujinshi id set, e.g. 1,2,3')
|
||||
parser.add_option('--search', type='string', dest='keyword', action='store', help='keyword searched')
|
||||
parser = OptionParser('\n nhentai --search [keyword] --download'
|
||||
'\n NHENTAI=http://h.loli.club nhentai --id [ID ...]'
|
||||
'\n\nEnvironment Variable:\n'
|
||||
' NHENTAI nhentai mirror url')
|
||||
parser.add_option('--download', dest='is_download', action='store_true',
|
||||
help='download doujinshi (for search results)')
|
||||
parser.add_option('--show-info', dest='is_show', action='store_true', help='just show the doujinshi information')
|
||||
parser.add_option('--id', type='string', dest='id', action='store', help='doujinshi ids set, e.g. 1,2,3')
|
||||
parser.add_option('--search', type='string', dest='keyword', action='store', help='search doujinshi by keyword')
|
||||
parser.add_option('--page', type='int', dest='page', action='store', default=1,
|
||||
help='page number of search result')
|
||||
parser.add_option('--path', type='string', dest='saved_path', action='store', default='',
|
||||
help='path which save the doujinshi')
|
||||
help='page number of search results')
|
||||
parser.add_option('--tag', type='string', dest='tag', action='store', help='download doujinshi by tag')
|
||||
parser.add_option('--max-page', type='int', dest='max_page', action='store', default=1,
|
||||
help='The max page when recursive download tagged doujinshi')
|
||||
parser.add_option('--output', type='string', dest='output_dir', action='store', default='',
|
||||
help='output dir')
|
||||
parser.add_option('--threads', '-t', type='int', dest='threads', action='store', default=5,
|
||||
help='thread count of download doujinshi')
|
||||
help='thread count for downloading doujinshi')
|
||||
parser.add_option('--timeout', type='int', dest='timeout', action='store', default=30,
|
||||
help='timeout of download doujinshi')
|
||||
help='timeout for downloading doujinshi')
|
||||
parser.add_option('--proxy', type='string', dest='proxy', action='store', default='',
|
||||
help='use proxy, example: http://127.0.0.1:1080')
|
||||
args, _ = parser.parse_args()
|
||||
help='uses a proxy, for example: http://127.0.0.1:1080')
|
||||
parser.add_option('--html', dest='html_viewer', action='store_true',
|
||||
help='generate a html viewer at current directory')
|
||||
|
||||
if args.ids:
|
||||
_ = map(lambda id: id.strip(), args.ids.split(','))
|
||||
args.ids = set(map(int, filter(lambda id: id.isdigit(), _)))
|
||||
parser.add_option('--login', '-l', type='str', dest='login', action='store',
|
||||
help='username:password pair of nhentai account')
|
||||
|
||||
if args.is_download and not args.id and not args.ids and not args.keyword:
|
||||
logger.critical('Doujinshi id/ids is required for downloading')
|
||||
parser.print_help()
|
||||
raise SystemExit
|
||||
parser.add_option('--nohtml', dest='is_nohtml', action='store_true',
|
||||
help='Don\'t generate HTML')
|
||||
|
||||
parser.add_option('--cbz', dest='is_cbz', action='store_true',
|
||||
help='Generate Comic Book CBZ File')
|
||||
|
||||
try:
|
||||
sys.argv = list(map(lambda x: unicode(x.decode(sys.stdin.encoding)), sys.argv))
|
||||
except (NameError, TypeError):
|
||||
pass
|
||||
except UnicodeDecodeError:
|
||||
exit(0)
|
||||
|
||||
args, _ = parser.parse_args(sys.argv[1:])
|
||||
|
||||
if args.html_viewer:
|
||||
generate_html()
|
||||
exit(0)
|
||||
|
||||
if args.login:
|
||||
try:
|
||||
_, _ = args.login.split(':', 1)
|
||||
except ValueError:
|
||||
logger.error('Invalid `username:password` pair.')
|
||||
exit(1)
|
||||
|
||||
if not args.is_download:
|
||||
logger.warning('YOU DO NOT SPECIFY `--download` OPTION !!!')
|
||||
|
||||
if args.id:
|
||||
args.ids = (args.id, ) if not args.ids else args.ids
|
||||
_ = map(lambda id: id.strip(), args.id.split(','))
|
||||
args.id = set(map(int, filter(lambda id_: id_.isdigit(), _)))
|
||||
|
||||
if not args.keyword and not args.ids:
|
||||
if (args.is_download or args.is_show) and not args.id and not args.keyword and \
|
||||
not args.login and not args.tag:
|
||||
logger.critical('Doujinshi id(s) are required for downloading')
|
||||
parser.print_help()
|
||||
raise SystemExit
|
||||
exit(1)
|
||||
|
||||
if not args.keyword and not args.id and not args.login and not args.tag:
|
||||
parser.print_help()
|
||||
exit(1)
|
||||
|
||||
if args.threads <= 0:
|
||||
args.threads = 1
|
||||
elif args.threads > 10:
|
||||
logger.critical('Maximum number of used threads is 10')
|
||||
raise SystemExit
|
||||
|
||||
elif args.threads > 15:
|
||||
logger.critical('Maximum number of used threads is 15')
|
||||
exit(1)
|
||||
|
||||
if args.proxy:
|
||||
import urlparse
|
||||
proxy_url = urlparse.urlparse(args.proxy)
|
||||
proxy_url = urlparse(args.proxy)
|
||||
if proxy_url.scheme not in ('http', 'https'):
|
||||
logger.error('Invalid protocol \'{}\' of proxy, ignored'.format(proxy_url.scheme))
|
||||
logger.error('Invalid protocol \'{0}\' of proxy, ignored'.format(proxy_url.scheme))
|
||||
else:
|
||||
constant.PROXY = {proxy_url.scheme: args.proxy}
|
||||
constant.PROXY = {'http': args.proxy, 'https': args.proxy}
|
||||
|
||||
return args
|
||||
|
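The reworked `--id` option above accepts a comma-separated list of ids and normalises it into a set of integers. A standalone sketch of that parsing (the sample string is made up):

```python
# Sketch of the --id parsing in cmd_parser(): strip whitespace, keep only
# numeric entries, and convert to a set of ints (duplicates collapse).
raw = '123855, 123866, 123855, abc'          # hypothetical --id value
parts = map(lambda i: i.strip(), raw.split(','))
ids = set(map(int, filter(lambda i: i.isdigit(), parts)))
print(sorted(ids))                           # -> [123855, 123866]
```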
nhentai/command.py

@@ -1,50 +1,80 @@
|
||||
#!/usr/bin/env python2.7
|
||||
# coding: utf-8
|
||||
from __future__ import unicode_literals, print_function
|
||||
import signal
|
||||
from cmdline import cmd_parser, banner
|
||||
from parser import doujinshi_parser, search_parser, print_doujinshi
|
||||
from doujinshi import Doujinshi
|
||||
from downloader import Downloader
|
||||
from logger import logger
|
||||
import platform
|
||||
|
||||
from nhentai.cmdline import cmd_parser, banner
|
||||
from nhentai.parser import doujinshi_parser, search_parser, print_doujinshi, login_parser, tag_guessing, tag_parser
|
||||
from nhentai.doujinshi import Doujinshi
|
||||
from nhentai.downloader import Downloader
|
||||
from nhentai.logger import logger
|
||||
from nhentai.constant import BASE_URL
|
||||
from nhentai.utils import generate_html, generate_cbz
|
||||
|
||||
|
||||
def main():
|
||||
banner()
|
||||
logger.info('Using mirror: {0}'.format(BASE_URL))
|
||||
options = cmd_parser()
|
||||
|
||||
doujinshi_ids = []
|
||||
doujinshi_list = []
|
||||
|
||||
|
||||
|
||||
if options.login:
|
||||
username, password = options.login.split(':', 1)
|
||||
logger.info('Logging in to nhentai using credential pair \'%s:%s\'' % (username, '*' * len(password)))
|
||||
for doujinshi_info in login_parser(username=username, password=password):
|
||||
doujinshi_list.append(Doujinshi(**doujinshi_info))
|
||||
|
||||
if options.tag:
|
||||
tag_id = tag_guessing(options.tag)
|
||||
if tag_id:
|
||||
doujinshis = tag_parser(tag_id, max_page=options.max_page)
|
||||
print_doujinshi(doujinshis)
|
||||
if options.is_download:
|
||||
doujinshi_ids = map(lambda d: d['id'], doujinshis)
|
||||
|
||||
if options.keyword:
|
||||
doujinshis = search_parser(options.keyword, options.page)
|
||||
print_doujinshi(doujinshis)
|
||||
if options.is_download:
|
||||
doujinshi_ids = map(lambda d: d['id'], doujinshis)
|
||||
else:
|
||||
doujinshi_ids = options.ids
|
||||
|
||||
if not doujinshi_ids:
|
||||
doujinshi_ids = options.id
|
||||
|
||||
if doujinshi_ids:
|
||||
for id in doujinshi_ids:
|
||||
doujinshi_info = doujinshi_parser(id)
|
||||
for id_ in doujinshi_ids:
|
||||
doujinshi_info = doujinshi_parser(id_)
|
||||
doujinshi_list.append(Doujinshi(**doujinshi_info))
|
||||
else:
|
||||
raise SystemExit
|
||||
|
||||
if options.is_download:
|
||||
downloader = Downloader(path=options.saved_path,
|
||||
if not options.is_show:
|
||||
downloader = Downloader(path=options.output_dir,
|
||||
thread=options.threads, timeout=options.timeout)
|
||||
|
||||
for doujinshi in doujinshi_list:
|
||||
doujinshi.downloader = downloader
|
||||
doujinshi.download()
|
||||
else:
|
||||
map(lambda doujinshi: doujinshi.show(), doujinshi_list)
|
||||
if not options.is_nohtml and not options.is_cbz:
|
||||
generate_html(options.output_dir, doujinshi)
|
||||
elif options.is_cbz:
|
||||
generate_cbz(options.output_dir, doujinshi)
|
||||
|
||||
logger.log(15, u'🍺 All done.')
|
||||
if not platform.system() == 'Windows':
|
||||
logger.log(15, '🍻 All done.')
|
||||
else:
|
||||
logger.log(15, 'All done.')
|
||||
|
||||
else:
|
||||
[doujinshi.show() for doujinshi in doujinshi_list]
|
||||
|
||||
|
||||
def signal_handler(signal, frame):
|
||||
logger.error('Ctrl-C signal received. Quit.')
|
||||
raise SystemExit
|
||||
logger.error('Ctrl-C signal received. Stopping...')
|
||||
exit(1)
|
||||
|
||||
|
||||
signal.signal(signal.SIGINT, signal_handler)
|
||||
|
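For reference, a small offline sketch of how `main()` above wraps parser output into `Doujinshi` objects; the dict below stands in for what `doujinshi_parser` would return from the API (all values are made up):

```python
from nhentai.doujinshi import Doujinshi

# Hypothetical result of doujinshi_parser(123855); no network involved.
doujinshi_info = {'name': 'Example Title', 'id': 123855,
                  'img_id': '987654', 'ext': 'jjp', 'pages': 3,
                  'language': 'japanese'}
doujinshi = Doujinshi(**doujinshi_info)
print(doujinshi)          # -> <Doujinshi: Example Title>
```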
nhentai/constant.py

@@ -1,6 +1,18 @@
SCHEMA = 'http://'
URL = '%snhentai.net' % SCHEMA
DETAIL_URL = '%s/g' % URL
SEARCH_URL = '%s/search/' % URL
IMAGE_URL = '%si.nhentai.net/galleries' % SCHEMA
# coding: utf-8
from __future__ import unicode_literals, print_function
import os
from nhentai.utils import urlparse

BASE_URL = os.getenv('NHENTAI', 'https://nhentai.net')

DETAIL_URL = '%s/api/gallery' % BASE_URL
SEARCH_URL = '%s/api/galleries/search' % BASE_URL
TAG_URL = '%s/tag' % BASE_URL
TAG_API_URL = '%s/api/galleries/tagged' % BASE_URL
LOGIN_URL = '%s/login/' % BASE_URL
FAV_URL = '%s/favorites/' % BASE_URL

u = urlparse(BASE_URL)
IMAGE_URL = '%s://i.%s/galleries' % (u.scheme, u.hostname)

PROXY = {}
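A minimal sketch of how the mirror override above works: `BASE_URL` is read from the `NHENTAI` environment variable and the image host is derived from it (the mirror URL below is just an example):

```python
import os
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2

os.environ['NHENTAI'] = 'http://h.loli.club'             # hypothetical mirror
BASE_URL = os.getenv('NHENTAI', 'https://nhentai.net')
u = urlparse(BASE_URL)
IMAGE_URL = '%s://i.%s/galleries' % (u.scheme, u.hostname)
print(IMAGE_URL)                       # -> http://i.h.loli.club/galleries
```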
nhentai/doujinshi.py

@@ -1,8 +1,18 @@
|
||||
# coding: utf-8
|
||||
from __future__ import print_function
|
||||
from __future__ import print_function, unicode_literals
|
||||
from tabulate import tabulate
|
||||
from constant import DETAIL_URL, IMAGE_URL
|
||||
from logger import logger
|
||||
from future.builtins import range
|
||||
|
||||
from nhentai.constant import DETAIL_URL, IMAGE_URL
|
||||
from nhentai.logger import logger
|
||||
from nhentai.utils import format_filename
|
||||
|
||||
|
||||
EXT_MAP = {
|
||||
'j': 'jpg',
|
||||
'p': 'png',
|
||||
'g': 'gif',
|
||||
}
|
||||
|
||||
|
||||
class DoujinshiInfo(dict):
|
||||
@@ -17,7 +27,7 @@ class DoujinshiInfo(dict):
|
||||
|
||||
|
||||
class Doujinshi(object):
|
||||
def __init__(self, name=None, id=None, img_id=None, ext='jpg', pages=0, **kwargs):
|
||||
def __init__(self, name=None, id=None, img_id=None, ext='', pages=0, **kwargs):
|
||||
self.name = name
|
||||
self.id = id
|
||||
self.img_id = img_id
|
||||
@@ -28,7 +38,7 @@ class Doujinshi(object):
|
||||
self.info = DoujinshiInfo(**kwargs)
|
||||
|
||||
def __repr__(self):
|
||||
return '<Doujinshi: {}>'.format(self.name)
|
||||
return '<Doujinshi: {0}>'.format(self.name)
|
||||
|
||||
def show(self):
|
||||
table = [
|
||||
@@ -41,17 +51,18 @@ class Doujinshi(object):
|
||||
["URL", self.url],
|
||||
["Pages", self.pages],
|
||||
]
|
||||
logger.info(u'Print doujinshi information\n{}'.format(tabulate(table)))
|
||||
logger.info(u'Print doujinshi information of {0}\n{1}'.format(self.id, tabulate(table)))
|
||||
|
||||
def download(self):
|
||||
logger.info('Start download doujinshi: %s' % self.name)
|
||||
logger.info('Starting to download doujinshi: %s' % self.name)
|
||||
if self.downloader:
|
||||
download_queue = []
|
||||
for i in xrange(1, self.pages + 1):
|
||||
download_queue.append('%s/%d/%d.%s' % (IMAGE_URL, int(self.img_id), i, self.ext))
|
||||
self.downloader.download(download_queue, self.id)
|
||||
for i in range(len(self.ext)):
|
||||
download_queue.append('%s/%d/%d.%s' % (IMAGE_URL, int(self.img_id), i+1, EXT_MAP[self.ext[i]]))
|
||||
|
||||
self.downloader.download(download_queue, format_filename('%s-%s' % (self.id, self.name[:200])))
|
||||
else:
|
||||
logger.critical('Downloader has not be loaded')
|
||||
logger.critical('Downloader has not been loaded')
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
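The new download logic above builds one URL per page, reading each page's file type from the `ext` string via `EXT_MAP`. A sketch with made-up values:

```python
EXT_MAP = {'j': 'jpg', 'p': 'png', 'g': 'gif'}
IMAGE_URL = 'https://i.nhentai.net/galleries'

img_id = 987654          # hypothetical media_id from the gallery API
ext = 'jjpj'             # one type code per page

download_queue = ['%s/%d/%d.%s' % (IMAGE_URL, img_id, i + 1, EXT_MAP[ext[i]])
                  for i in range(len(ext))]
print(download_queue[0])   # -> https://i.nhentai.net/galleries/987654/1.jpg
print(download_queue[2])   # -> https://i.nhentai.net/galleries/987654/3.png
```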
nhentai/downloader.py

@@ -1,73 +1,110 @@
|
||||
# coding: utf-8
|
||||
# coding: utf-8
|
||||
from __future__ import unicode_literals, print_function
|
||||
from future.builtins import str as text
|
||||
import os
|
||||
import requests
|
||||
import threadpool
|
||||
from urlparse import urlparse
|
||||
from logger import logger
|
||||
from parser import request
|
||||
try:
|
||||
from urllib.parse import urlparse
|
||||
except ImportError:
|
||||
from urlparse import urlparse
|
||||
|
||||
from nhentai.logger import logger
|
||||
from nhentai.parser import request
|
||||
from nhentai.utils import Singleton
|
||||
|
||||
|
||||
class Downloader(object):
|
||||
_instance = None
|
||||
requests.packages.urllib3.disable_warnings()
|
||||
|
||||
def __new__(cls, *args, **kwargs):
|
||||
if not cls._instance:
|
||||
cls._instance = super(Downloader, cls).__new__(cls, *args, **kwargs)
|
||||
return cls._instance
|
||||
|
||||
class NhentaiImageNotExistException(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class Downloader(Singleton):
|
||||
|
||||
def __init__(self, path='', thread=1, timeout=30):
|
||||
if not isinstance(thread, (int, )) or thread < 1 or thread > 10:
|
||||
if not isinstance(thread, (int, )) or thread < 1 or thread > 15:
|
||||
raise ValueError('Invalid threads count')
|
||||
self.path = str(path)
|
||||
self.thread_count = thread
|
||||
self.threads = []
|
||||
self.timeout = timeout
|
||||
|
||||
def _download(self, url, folder='', filename='', retried=False):
|
||||
logger.info('Start downloading: {} ...'.format(url))
|
||||
def _download(self, url, folder='', filename='', retried=0):
|
||||
logger.info('Starting to download {0} ...'.format(url))
|
||||
filename = filename if filename else os.path.basename(urlparse(url).path)
|
||||
base_filename, extension = os.path.splitext(filename)
|
||||
try:
|
||||
with open(os.path.join(folder, filename), "wb") as f:
|
||||
response = request('get', url, stream=True, timeout=self.timeout)
|
||||
if os.path.exists(os.path.join(folder, base_filename.zfill(3) + extension)):
|
||||
logger.warning('File: {0} exists, ignoring'.format(os.path.join(folder, base_filename.zfill(3) +
|
||||
extension)))
|
||||
return 1, url
|
||||
|
||||
with open(os.path.join(folder, base_filename.zfill(3) + extension), "wb") as f:
|
||||
i=0
|
||||
while i<10:
|
||||
try:
|
||||
response = request('get', url, stream=True, timeout=self.timeout)
|
||||
except Exception as e:
|
||||
i+=1
|
||||
if not i<10:
|
||||
logger.critical(str(e))
|
||||
return 0, None
|
||||
continue
|
||||
break
|
||||
if response.status_code != 200:
|
||||
raise NhentaiImageNotExistException
|
||||
length = response.headers.get('content-length')
|
||||
if length is None:
|
||||
f.write(response.content)
|
||||
else:
|
||||
for chunk in response.iter_content(2048):
|
||||
f.write(chunk)
|
||||
except requests.HTTPError as e:
|
||||
if not retried:
|
||||
logger.error('Error: {}, retrying'.format(str(e)))
|
||||
return self._download(url=url, folder=folder, filename=filename, retried=True)
|
||||
|
||||
except (requests.HTTPError, requests.Timeout) as e:
|
||||
if retried < 3:
|
||||
logger.warning('Warning: {0}, retrying({1}) ...'.format(str(e), retried))
|
||||
return 0, self._download(url=url, folder=folder, filename=filename, retried=retried+1)
|
||||
else:
|
||||
return None
|
||||
return 0, None
|
||||
|
||||
except NhentaiImageNotExistException as e:
|
||||
os.remove(os.path.join(folder, base_filename.zfill(3) + extension))
|
||||
return -1, url
|
||||
|
||||
except Exception as e:
|
||||
logger.critical(str(e))
|
||||
return None
|
||||
return url
|
||||
return 0, None
|
||||
|
||||
return 1, url
|
||||
|
||||
def _download_callback(self, request, result):
|
||||
if not result:
|
||||
logger.critical('Too many errors occurred, quit.')
|
||||
raise SystemExit
|
||||
logger.log(15, '{} download successfully'.format(result))
|
||||
result, data = result
|
||||
if result == 0:
|
||||
logger.warning('fatal errors occurred, ignored')
|
||||
# exit(1)
|
||||
elif result == -1:
|
||||
logger.warning('url {} return status code 404'.format(data))
|
||||
else:
|
||||
logger.log(15, '{0} downloaded successfully'.format(data))
|
||||
|
||||
def download(self, queue, folder=''):
|
||||
if not isinstance(folder, (str, unicode)):
|
||||
if not isinstance(folder, text):
|
||||
folder = str(folder)
|
||||
|
||||
if self.path:
|
||||
folder = os.path.join(self.path, folder)
|
||||
|
||||
if not os.path.exists(folder):
|
||||
logger.warn('Path \'{}\' not exist.'.format(folder))
|
||||
logger.warn('Path \'{0}\' does not exist, creating.'.format(folder))
|
||||
try:
|
||||
os.makedirs(folder)
|
||||
except EnvironmentError as e:
|
||||
logger.critical('Error: {}'.format(str(e)))
|
||||
raise SystemExit
|
||||
logger.critical('{0}'.format(str(e)))
|
||||
exit(1)
|
||||
else:
|
||||
logger.warn('Path \'{}\' already exist.'.format(folder))
|
||||
logger.warn('Path \'{0}\' already exists.'.format(folder))
|
||||
|
||||
queue = [([url], {'folder': folder}) for url in queue]
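Downloaded pages are saved with zero-padded basenames (see `base_filename.zfill(3)` above) so that files sort in page order. A tiny illustration:

```python
import os

filename = '2.jpg'                      # basename taken from the image URL
base, ext = os.path.splitext(filename)
print(base.zfill(3) + ext)              # -> 002.jpg
```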
|
||||
|
||||
|
nhentai/logger.py

@@ -1,13 +1,24 @@
|
||||
import logging
|
||||
#
|
||||
# Copyright (C) 2010-2012 Vinay Sajip. All rights reserved. Licensed under the new BSD license.
|
||||
#
|
||||
from __future__ import print_function, unicode_literals
|
||||
import logging
|
||||
import os
|
||||
import re
|
||||
import platform
|
||||
import sys
|
||||
|
||||
|
||||
if platform.system() == 'Windows':
|
||||
import ctypes
|
||||
import ctypes.wintypes
|
||||
|
||||
# Reference: https://gist.github.com/vsajip/758430
|
||||
# https://github.com/ipython/ipython/issues/4252
|
||||
# https://msdn.microsoft.com/en-us/library/windows/desktop/ms686047%28v=vs.85%29.aspx
|
||||
ctypes.windll.kernel32.SetConsoleTextAttribute.argtypes = [ctypes.wintypes.HANDLE, ctypes.wintypes.WORD]
|
||||
ctypes.windll.kernel32.SetConsoleTextAttribute.restype = ctypes.wintypes.BOOL
|
||||
|
||||
|
||||
class ColorizingStreamHandler(logging.StreamHandler):
|
||||
# color names to indices
|
||||
color_map = {
|
||||
@@ -22,22 +33,13 @@ class ColorizingStreamHandler(logging.StreamHandler):
|
||||
}
|
||||
|
||||
# levels to (background, foreground, bold/intense)
|
||||
if os.name == 'nt':
|
||||
level_map = {
|
||||
logging.DEBUG: (None, 'white', False),
|
||||
logging.INFO: (None, 'green', False),
|
||||
logging.WARNING: (None, 'yellow', False),
|
||||
logging.ERROR: (None, 'red', False),
|
||||
logging.CRITICAL: ('red', 'white', False)
|
||||
}
|
||||
else:
|
||||
level_map = {
|
||||
logging.DEBUG: (None, 'white', False),
|
||||
logging.INFO: (None, 'green', False),
|
||||
logging.WARNING: (None, 'yellow', False),
|
||||
logging.ERROR: (None, 'red', False),
|
||||
logging.CRITICAL: ('red', 'white', False)
|
||||
}
|
||||
level_map = {
|
||||
logging.DEBUG: (None, 'blue', False),
|
||||
logging.INFO: (None, 'green', False),
|
||||
logging.WARNING: (None, 'yellow', False),
|
||||
logging.ERROR: (None, 'red', False),
|
||||
logging.CRITICAL: ('red', 'white', False)
|
||||
}
|
||||
csi = '\x1b['
|
||||
reset = '\x1b[0m'
|
||||
disable_coloring = False
|
||||
@@ -47,7 +49,29 @@ class ColorizingStreamHandler(logging.StreamHandler):
|
||||
isatty = getattr(self.stream, 'isatty', None)
|
||||
return isatty and isatty() and not self.disable_coloring
|
||||
|
||||
if os.name != 'nt':
|
||||
def emit(self, record):
|
||||
try:
|
||||
message = self.format(record)
|
||||
stream = self.stream
|
||||
|
||||
if not self.is_tty:
|
||||
if message and message[0] == "\r":
|
||||
message = message[1:]
|
||||
stream.write(message)
|
||||
else:
|
||||
self.output_colorized(message)
|
||||
stream.write(getattr(self, 'terminator', '\n'))
|
||||
|
||||
self.flush()
|
||||
except (KeyboardInterrupt, SystemExit):
|
||||
raise
|
||||
except IOError:
|
||||
pass
|
||||
except:
|
||||
self.handleError(record)
|
||||
|
||||
|
||||
if not platform.system() == 'Windows':
|
||||
def output_colorized(self, message):
|
||||
self.stream.write(message)
|
||||
else:
|
||||
@@ -65,8 +89,6 @@ class ColorizingStreamHandler(logging.StreamHandler):
|
||||
}
|
||||
|
||||
def output_colorized(self, message):
|
||||
import ctypes
|
||||
|
||||
parts = self.ansi_esc.split(message)
|
||||
write = self.stream.write
|
||||
h = None
|
||||
@@ -75,14 +97,17 @@ class ColorizingStreamHandler(logging.StreamHandler):
|
||||
if fd is not None:
|
||||
fd = fd()
|
||||
|
||||
if fd in (1, 2): # stdout or stderr
|
||||
if fd in (1, 2): # stdout or stderr
|
||||
h = ctypes.windll.kernel32.GetStdHandle(-10 - fd)
|
||||
|
||||
while parts:
|
||||
text = parts.pop(0)
|
||||
|
||||
if text:
|
||||
write(text)
|
||||
if sys.version_info < (3, 0, 0):
|
||||
write(text.encode('utf-8'))
|
||||
else:
|
||||
write(text)
|
||||
|
||||
if parts:
|
||||
params = parts.pop(0)
|
||||
@@ -97,11 +122,11 @@ class ColorizingStreamHandler(logging.StreamHandler):
|
||||
elif 30 <= p <= 37:
|
||||
color |= self.nt_color_map[p - 30]
|
||||
elif p == 1:
|
||||
color |= 0x08 # foreground intensity on
|
||||
elif p == 0: # reset to default color
|
||||
color |= 0x08 # foreground intensity on
|
||||
elif p == 0: # reset to default color
|
||||
color = 0x07
|
||||
else:
|
||||
pass # error condition ignored
|
||||
pass # error condition ignored
|
||||
|
||||
ctypes.windll.kernel32.SetConsoleTextAttribute(h, color)
|
||||
|
||||
@@ -135,6 +160,7 @@ class ColorizingStreamHandler(logging.StreamHandler):
|
||||
message = logging.StreamHandler.format(self, record)
|
||||
return self.colorize(message, record)
|
||||
|
||||
|
||||
logging.addLevelName(15, "INFO")
|
||||
logger = logging.getLogger('nhentai')
|
||||
LOGGER_HANDLER = ColorizingStreamHandler(sys.stdout)
|
||||
|
nhentai/parser.py

@@ -1,103 +1,258 @@
|
||||
# coding: utf-8
|
||||
from __future__ import print_function
|
||||
import sys
|
||||
from __future__ import unicode_literals, print_function
|
||||
|
||||
import os
|
||||
import re
|
||||
import threadpool
|
||||
import requests
|
||||
import time
|
||||
from bs4 import BeautifulSoup
|
||||
import constant
|
||||
from logger import logger
|
||||
from tabulate import tabulate
|
||||
|
||||
import nhentai.constant as constant
|
||||
from nhentai.logger import logger
|
||||
|
||||
|
||||
def request(method, url, **kwargs):
|
||||
if not hasattr(requests, method):
|
||||
raise AttributeError('\'requests\' object has no attribute \'{}\''.format(method))
|
||||
raise AttributeError('\'requests\' object has no attribute \'{0}\''.format(method))
|
||||
|
||||
return requests.__dict__[method](url, proxies=constant.PROXY, **kwargs)
|
||||
return requests.__dict__[method](url, proxies=constant.PROXY, verify=False, **kwargs)
|
||||
|
||||
|
||||
def login_parser(username, password):
|
||||
s = requests.Session()
|
||||
s.proxies = constant.PROXY
|
||||
s.verify = False
|
||||
s.headers.update({'Referer': constant.LOGIN_URL})
|
||||
|
||||
s.get(constant.LOGIN_URL)
|
||||
content = s.get(constant.LOGIN_URL).content
|
||||
html = BeautifulSoup(content, 'html.parser')
|
||||
csrf_token_elem = html.find('input', attrs={'name': 'csrfmiddlewaretoken'})
|
||||
|
||||
if not csrf_token_elem:
|
||||
raise Exception('Cannot find csrf token to login')
|
||||
csrf_token = csrf_token_elem.attrs['value']
|
||||
|
||||
login_dict = {
|
||||
'csrfmiddlewaretoken': csrf_token,
|
||||
'username_or_email': username,
|
||||
'password': password,
|
||||
}
|
||||
resp = s.post(constant.LOGIN_URL, data=login_dict)
|
||||
if 'Invalid username/email or password' in resp.text:
|
||||
logger.error('Login failed, please check your username and password')
|
||||
exit(1)
|
||||
|
||||
html = BeautifulSoup(s.get(constant.FAV_URL).content, 'html.parser')
|
||||
count = html.find('span', attrs={'class': 'count'})
|
||||
if not count:
|
||||
logger.error("Can't get your number of favorited doujins. Did the login fail?")
|
||||
|
||||
count = int(count.text.strip('(').strip(')'))
|
||||
if count == 0:
|
||||
logger.warning('No favorites found')
|
||||
return []
|
||||
pages = int(count / 25)
|
||||
|
||||
if pages:
|
||||
pages += 1 if count % (25 * pages) else 0
|
||||
else:
|
||||
pages = 1
|
||||
|
||||
logger.info('You have %d favorites in %d pages.' % (count, pages))
|
||||
|
||||
if os.getenv('DEBUG'):
|
||||
pages = 1
|
||||
|
||||
ret = []
|
||||
doujinshi_id = re.compile('data-id="([\d]+)"')
|
||||
|
||||
def _callback(request, result):
|
||||
ret.append(result)
|
||||
|
||||
thread_pool = threadpool.ThreadPool(5)
|
||||
|
||||
for page in range(1, pages+1):
|
||||
try:
|
||||
logger.info('Getting doujinshi ids of page %d' % page)
|
||||
resp = s.get(constant.FAV_URL + '?page=%d' % page).text
|
||||
ids = doujinshi_id.findall(resp)
|
||||
requests_ = threadpool.makeRequests(doujinshi_parser, ids, _callback)
|
||||
[thread_pool.putRequest(req) for req in requests_]
|
||||
thread_pool.wait()
|
||||
except Exception as e:
|
||||
logger.error('Error: %s, continue', str(e))
|
||||
|
||||
return ret
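The favourites page count above assumes 25 galleries per listing page; a worked example of that arithmetic:

```python
count = 83                                       # hypothetical favourite count
pages = int(count / 25)                          # 3 full pages
if pages:
    pages += 1 if count % (25 * pages) else 0    # 83 - 75 = 8 left over
else:
    pages = 1
print(pages)                                     # -> 4
```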
|
||||
|
||||
|
||||
def doujinshi_parser(id_):
|
||||
if not isinstance(id_, (int,)) and (isinstance(id_, (str,)) and not id_.isdigit()):
|
||||
raise Exception('Doujinshi id({}) is not valid'.format(id_))
|
||||
raise Exception('Doujinshi id({0}) is not valid'.format(id_))
|
||||
|
||||
id_ = int(id_)
|
||||
logger.log(15, 'Fetching doujinshi information of id {}'.format(id_))
|
||||
logger.log(15, 'Fetching information of doujinshi id {0}'.format(id_))
|
||||
doujinshi = dict()
|
||||
doujinshi['id'] = id_
|
||||
url = '{}/{}/'.format(constant.DETAIL_URL, id_)
|
||||
url = '{0}/{1}'.format(constant.DETAIL_URL, id_)
|
||||
i=0
|
||||
while i<5:
|
||||
try:
|
||||
response = request('get', url).json()
|
||||
except Exception as e:
|
||||
i+=1
|
||||
if not i<5:
|
||||
logger.critical(str(e))
|
||||
exit(1)
|
||||
continue
|
||||
break
|
||||
|
||||
try:
|
||||
response = request('get', url).content
|
||||
except Exception as e:
|
||||
logger.critical(str(e))
|
||||
sys.exit()
|
||||
|
||||
html = BeautifulSoup(response)
|
||||
doujinshi_info = html.find('div', attrs={'id': 'info'})
|
||||
|
||||
title = doujinshi_info.find('h1').text
|
||||
subtitle = doujinshi_info.find('h2')
|
||||
|
||||
doujinshi['name'] = title
|
||||
doujinshi['subtitle'] = subtitle.text if subtitle else ''
|
||||
|
||||
doujinshi_cover = html.find('div', attrs={'id': 'cover'})
|
||||
img_id = re.search('/galleries/([\d]+)/cover\.(jpg|png)$', doujinshi_cover.a.img['src'])
|
||||
if not img_id:
|
||||
logger.critical('Tried yo get image id failed')
|
||||
sys.exit()
|
||||
doujinshi['img_id'] = img_id.group(1)
|
||||
doujinshi['ext'] = img_id.group(2)
|
||||
|
||||
pages = 0
|
||||
for _ in doujinshi_info.find_all('div', class_=''):
|
||||
pages = re.search('([\d]+) pages', _.text)
|
||||
if pages:
|
||||
pages = pages.group(1)
|
||||
break
|
||||
doujinshi['pages'] = int(pages)
|
||||
doujinshi['name'] = response['title']['english']
|
||||
doujinshi['subtitle'] = response['title']['japanese']
|
||||
doujinshi['img_id'] = response['media_id']
|
||||
doujinshi['ext'] = ''.join(map(lambda s: s['t'], response['images']['pages']))
|
||||
doujinshi['pages'] = len(response['images']['pages'])
|
||||
|
||||
# gain information of the doujinshi
|
||||
information_fields = doujinshi_info.find_all('div', attrs={'class': 'field-name'})
|
||||
needed_fields = ['Characters', 'Artists', 'Language', 'Tags']
|
||||
for field in information_fields:
|
||||
field_name = field.contents[0].strip().strip(':')
|
||||
if field_name in needed_fields:
|
||||
data = [sub_field.contents[0].strip() for sub_field in
|
||||
field.find_all('a', attrs={'class': 'tag'})]
|
||||
doujinshi[field_name.lower()] = ', '.join(data)
|
||||
needed_fields = ['character', 'artist', 'language', 'tag']
|
||||
for tag in response['tags']:
|
||||
tag_type = tag['type']
|
||||
if tag_type in needed_fields:
|
||||
if tag_type == 'tag':
|
||||
if tag_type not in doujinshi:
|
||||
doujinshi[tag_type] = {}
|
||||
|
||||
tag['name'] = tag['name'].replace(' ', '-')
|
||||
tag['name'] = tag['name'].lower()
|
||||
doujinshi[tag_type][tag['name']] = tag['id']
|
||||
elif tag_type not in doujinshi:
|
||||
doujinshi[tag_type] = tag['name']
|
||||
else:
|
||||
doujinshi[tag_type] += tag['name']
|
||||
|
||||
return doujinshi
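The rewritten `doujinshi_parser` above reads the JSON gallery API instead of scraping HTML. A sketch of the field mapping, run on a tiny hand-written response (field values are illustrative only):

```python
response = {
    'title': {'english': 'Example Title', 'japanese': 'Example Subtitle'},
    'media_id': '987654',
    'images': {'pages': [{'t': 'j'}, {'t': 'j'}, {'t': 'p'}]},
    'tags': [{'type': 'language', 'name': 'english', 'id': 12227},
             {'type': 'tag', 'name': 'full color', 'id': 1}],
}

doujinshi = {
    'name': response['title']['english'],
    'subtitle': response['title']['japanese'],
    'img_id': response['media_id'],
    'ext': ''.join(p['t'] for p in response['images']['pages']),   # 'jjp'
    'pages': len(response['images']['pages']),                     # 3
}
print(doujinshi['ext'], doujinshi['pages'])                        # -> jjp 3
```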
|
||||
|
||||
|
||||
def search_parser(keyword, page):
|
||||
logger.debug('Searching doujinshis of keyword {}'.format(keyword))
|
||||
logger.debug('Searching doujinshis using keywords {0}'.format(keyword))
|
||||
result = []
|
||||
try:
|
||||
response = request('get', url=constant.SEARCH_URL, params={'q': keyword, 'page': page}).content
|
||||
except requests.ConnectionError as e:
|
||||
logger.critical(e)
|
||||
logger.warn('If you are in China, please configure the proxy to fu*k GFW.')
|
||||
raise SystemExit
|
||||
i=0
|
||||
while i<5:
|
||||
try:
|
||||
response = request('get', url=constant.SEARCH_URL, params={'query': keyword, 'page': page}).json()
|
||||
except Exception as e:
|
||||
i+=1
|
||||
if not i<5:
|
||||
logger.critical(str(e))
|
||||
logger.warn('If you are in China, please configure the proxy to fu*k GFW.')
|
||||
exit(1)
|
||||
continue
|
||||
break
|
||||
|
||||
if 'result' not in response:
|
||||
raise Exception('No result in response')
|
||||
|
||||
for row in response['result']:
|
||||
title = row['title']['english']
|
||||
title = title[:85] + '..' if len(title) > 85 else title
|
||||
result.append({'id': row['id'], 'title': title})
|
||||
|
||||
if not result:
|
||||
logger.warn('No results for keywords {}'.format(keyword))
|
||||
|
||||
html = BeautifulSoup(response)
|
||||
doujinshi_search_result = html.find_all('div', attrs={'class': 'gallery'})
|
||||
for doujinshi in doujinshi_search_result:
|
||||
doujinshi_container = doujinshi.find('div', attrs={'class': 'caption'})
|
||||
title = doujinshi_container.text.strip()
|
||||
title = (title[:85] + '..') if len(title) > 85 else title
|
||||
id_ = re.search('/g/(\d+)/', doujinshi.a['href']).group(1)
|
||||
result.append({'id': id_, 'title': title})
|
||||
return result
|
||||
|
||||
|
||||
def print_doujinshi(doujinshi_list):
|
||||
if not doujinshi_list:
|
||||
return
|
||||
doujinshi_list = [i.values() for i in doujinshi_list]
|
||||
doujinshi_list = [(i['id'], i['title']) for i in doujinshi_list]
|
||||
headers = ['id', 'doujinshi']
|
||||
logger.info('Search Result\n' +
|
||||
tabulate(tabular_data=doujinshi_list, headers=headers, tablefmt='rst'))
|
||||
|
||||
|
||||
def tag_parser(tag_id, max_page=1):
|
||||
logger.info('Searching for doujinshi with tag id {0}'.format(tag_id))
|
||||
result = []
|
||||
i=0
|
||||
while i<5:
|
||||
try:
|
||||
response = request('get', url=constant.TAG_API_URL, params={'sort': 'popular', 'tag_id': tag_id}).json()
|
||||
except Exception as e:
|
||||
i+=1
|
||||
if not i<5:
|
||||
logger.critical(str(e))
|
||||
exit(1)
|
||||
continue
|
||||
break
|
||||
page = max_page if max_page <= response['num_pages'] else int(response['num_pages'])
|
||||
|
||||
for i in range(1, page+1):
|
||||
logger.info('Getting page {} ...'.format(i))
|
||||
|
||||
if page != 1:
|
||||
i=0
|
||||
while i<5:
|
||||
try:
|
||||
response = request('get', url=constant.TAG_API_URL, params={'sort': 'popular', 'tag_id': tag_id}).json()
|
||||
except Exception as e:
|
||||
i+=1
|
||||
if not i<5:
|
||||
logger.critical(str(e))
|
||||
exit(1)
|
||||
continue
|
||||
break
|
||||
for row in response['result']:
|
||||
title = row['title']['english']
|
||||
title = title[:85] + '..' if len(title) > 85 else title
|
||||
result.append({'id': row['id'], 'title': title})
|
||||
|
||||
if not result:
|
||||
logger.warn('No results for tag id {}'.format(tag_id))
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def tag_guessing(tag_name):
|
||||
tag_name = tag_name.lower()
|
||||
tag_name = tag_name.replace(' ', '-')
|
||||
logger.info('Trying to get tag_id of tag \'{0}\''.format(tag_name))
|
||||
i=0
|
||||
while i<5:
|
||||
try:
|
||||
response = request('get', url='%s/%s' % (constant.TAG_URL, tag_name)).content
|
||||
except Exception as e:
|
||||
i+=1
|
||||
if not i<5:
|
||||
logger.critical(str(e))
|
||||
exit(1)
|
||||
continue
|
||||
break
|
||||
|
||||
html = BeautifulSoup(response, 'html.parser')
|
||||
first_item = html.find('div', attrs={'class': 'gallery'})
|
||||
if not first_item:
|
||||
logger.error('Cannot find doujinshi id of tag \'{0}\''.format(tag_name))
|
||||
return
|
||||
|
||||
doujinshi_id = re.findall('(\d+)', first_item.a.attrs['href'])
|
||||
if not doujinshi_id:
|
||||
logger.error('Cannot find doujinshi id of tag \'{0}\''.format(tag_name))
|
||||
return
|
||||
|
||||
ret = doujinshi_parser(doujinshi_id[0])
|
||||
if 'tag' in ret and tag_name in ret['tag']:
|
||||
tag_id = ret['tag'][tag_name]
|
||||
logger.info('Tag id of tag \'{0}\' is {1}'.format(tag_name, tag_id))
|
||||
else:
|
||||
logger.error('Cannot find doujinshi id of tag \'{0}\''.format(tag_name))
|
||||
return
|
||||
|
||||
return tag_id
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
print(doujinshi_parser("32271"))
|
||||
|
nhentai/utils.py (new file, 127 lines)

@@ -0,0 +1,127 @@
|
||||
# coding: utf-8
|
||||
from __future__ import unicode_literals, print_function
|
||||
|
||||
import sys
|
||||
import os
|
||||
import string
|
||||
import zipfile
|
||||
import shutil
|
||||
from nhentai.logger import logger
|
||||
|
||||
|
||||
class _Singleton(type):
|
||||
""" A metaclass that creates a Singleton base class when called. """
|
||||
_instances = {}
|
||||
|
||||
def __call__(cls, *args, **kwargs):
|
||||
if cls not in cls._instances:
|
||||
cls._instances[cls] = super(_Singleton, cls).__call__(*args, **kwargs)
|
||||
return cls._instances[cls]
|
||||
|
||||
|
||||
class Singleton(_Singleton(str('SingletonMeta'), (object,), {})):
|
||||
pass
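A quick illustration of what the `Singleton` base class above provides (the `Config` class is a hypothetical example; `Downloader` uses the same mechanism):

```python
from nhentai.utils import Singleton

class Config(Singleton):        # hypothetical subclass for illustration
    def __init__(self):
        self.values = {}

a = Config()
b = Config()
print(a is b)                   # -> True: both names share one instance
```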
|
||||
|
||||
|
||||
def urlparse(url):
|
||||
try:
|
||||
from urlparse import urlparse
|
||||
except ImportError:
|
||||
from urllib.parse import urlparse
|
||||
|
||||
return urlparse(url)
|
||||
|
||||
|
||||
def readfile(path):
|
||||
loc = os.path.dirname(__file__)
|
||||
|
||||
with open(os.path.join(loc, path), 'r') as file:
|
||||
return file.read()
|
||||
|
||||
|
||||
def generate_html(output_dir='.', doujinshi_obj=None):
|
||||
image_html = ''
|
||||
|
||||
if doujinshi_obj is not None:
|
||||
doujinshi_dir = os.path.join(output_dir, format_filename('%s-%s' % (doujinshi_obj.id,
|
||||
doujinshi_obj.name)))
|
||||
else:
|
||||
doujinshi_dir = '.'
|
||||
|
||||
file_list = os.listdir(doujinshi_dir)
|
||||
file_list.sort()
|
||||
|
||||
for image in file_list:
|
||||
if not os.path.splitext(image)[1] in ('.jpg', '.png'):
|
||||
continue
|
||||
|
||||
image_html += '<img src="{0}" class="image-item"/>\n'\
|
||||
.format(image)
|
||||
|
||||
html = readfile('viewer/index.html')
|
||||
css = readfile('viewer/styles.css')
|
||||
js = readfile('viewer/scripts.js')
|
||||
|
||||
if doujinshi_obj is not None:
|
||||
title = doujinshi_obj.name
|
||||
if sys.version_info < (3, 0):
|
||||
title = title.encode('utf-8')
|
||||
else:
|
||||
title = 'nHentai HTML Viewer'
|
||||
|
||||
data = html.format(TITLE=title, IMAGES=image_html, SCRIPTS=js, STYLES=css)
|
||||
try:
|
||||
if sys.version_info < (3, 0):
|
||||
with open(os.path.join(doujinshi_dir, 'index.html'), 'w') as f:
|
||||
f.write(data)
|
||||
else:
|
||||
with open(os.path.join(doujinshi_dir, 'index.html'), 'wb') as f:
|
||||
f.write(data.encode('utf-8'))
|
||||
|
||||
logger.log(15, 'HTML Viewer has been written to \'{0}\''.format(os.path.join(doujinshi_dir, 'index.html')))
|
||||
except Exception as e:
|
||||
logger.warning('Writing HTML Viewer failed ({})'.format(str(e)))
|
||||
|
||||
|
||||
def generate_cbz(output_dir='.', doujinshi_obj=None):
|
||||
if doujinshi_obj is not None:
|
||||
doujinshi_dir = os.path.join(output_dir, format_filename('%s-%s' % (doujinshi_obj.id,
|
||||
str(doujinshi_obj.name[:200]))))
|
||||
cbz_filename = os.path.join(output_dir, format_filename('%s-%s.cbz' % (doujinshi_obj.id,
|
||||
str(doujinshi_obj.name[:200]))))
|
||||
else:
|
||||
cbz_filename = './doujinshi.cbz'
|
||||
doujinshi_dir = '.'
|
||||
|
||||
file_list = os.listdir(doujinshi_dir)
|
||||
file_list.sort()
|
||||
|
||||
with zipfile.ZipFile(cbz_filename, 'w') as cbz_pf:
|
||||
for image in file_list:
|
||||
image_path = os.path.join(doujinshi_dir, image)
|
||||
cbz_pf.write(image_path, image)
|
||||
|
||||
shutil.rmtree(doujinshi_dir, ignore_errors=True)
|
||||
logger.log(15, 'Comic Book CBZ file has been written to \'{0}\''.format(cbz_filename))
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
def format_filename(s):
|
||||
"""Take a string and return a valid filename constructed from the string.
|
||||
Uses a whitelist approach: any characters not present in valid_chars are
|
||||
removed. Also spaces are replaced with underscores.
|
||||
|
||||
Note: this method may produce invalid filenames such as ``, `.` or `..`
|
||||
When I use this method I prepend a date string like '2009_01_15_19_46_32_'
|
||||
and append a file extension like '.txt', so I avoid the potential of using
|
||||
an invalid filename.
|
||||
|
||||
"""
|
||||
valid_chars = "-_.() %s%s" % (string.ascii_letters, string.digits)
|
||||
filename = ''.join(c for c in s if c in valid_chars)
|
||||
filename = filename.replace(' ', '_') # I don't like spaces in filenames.
|
||||
return filename
|
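Example of `format_filename` above, which whitelists characters and replaces spaces with underscores so gallery titles become safe directory names (the title is made up):

```python
from nhentai.utils import format_filename

print(format_filename('92066-(C90) [Example Circle] Some Title?!'))
# -> 92066-(C90)_Example_Circle_Some_Title
```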
nhentai/viewer/index.html (new file, 24 lines)

@@ -0,0 +1,24 @@
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
<title>{TITLE}</title>
|
||||
<style>
|
||||
{STYLES}
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
|
||||
<nav id="list">
|
||||
{IMAGES}</nav>
|
||||
|
||||
<div id="image-container">
|
||||
<span id="page-num"></span>
|
||||
<div id="dest"></div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
{SCRIPTS}
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
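The viewer template above is filled in by `generate_html()` in `nhentai/utils.py`, which calls `str.format` on the `{TITLE}`, `{STYLES}`, `{IMAGES}` and `{SCRIPTS}` placeholders. A toy version of that substitution:

```python
# Minimal stand-in for the real template; only two placeholders shown.
template = '<title>{TITLE}</title>\n<nav id="list">{IMAGES}</nav>'
print(template.format(TITLE='nHentai HTML Viewer',
                      IMAGES='<img src="001.jpg" class="image-item"/>'))
```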
nhentai/viewer/scripts.js (new file, 62 lines)

@@ -0,0 +1,62 @@
|
||||
const pages = Array.from(document.querySelectorAll('img.image-item'));
|
||||
let currentPage = 0;
|
||||
|
||||
function changePage(pageNum) {
|
||||
const previous = pages[currentPage];
|
||||
const current = pages[pageNum];
|
||||
|
||||
if (current == null) {
|
||||
return;
|
||||
}
|
||||
|
||||
previous.classList.remove('current');
|
||||
current.classList.add('current');
|
||||
|
||||
currentPage = pageNum;
|
||||
|
||||
const display = document.getElementById('dest');
|
||||
display.style.backgroundImage = `url("${current.src}")`;
|
||||
|
||||
document.getElementById('page-num')
|
||||
.innerText = [
|
||||
(pageNum + 1).toLocaleString(),
|
||||
pages.length.toLocaleString()
|
||||
].join('\u200a/\u200a');
|
||||
}
|
||||
|
||||
changePage(0);
|
||||
|
||||
document.getElementById('list').onclick = event => {
|
||||
if (pages.includes(event.target)) {
|
||||
changePage(pages.indexOf(event.target));
|
||||
}
|
||||
};
|
||||
|
||||
document.getElementById('image-container').onclick = event => {
|
||||
const width = document.getElementById('image-container').clientWidth;
|
||||
const clickPos = event.clientX / width;
|
||||
|
||||
if (clickPos < 0.5) {
|
||||
changePage(currentPage - 1);
|
||||
} else {
|
||||
changePage(currentPage + 1);
|
||||
}
|
||||
};
|
||||
|
||||
document.onkeypress = event => {
|
||||
switch (event.key.toLowerCase()) {
|
||||
// Previous Image
|
||||
case 'arrowleft':
|
||||
case 'a':
|
||||
changePage(currentPage - 1);
|
||||
break;
|
||||
|
||||
// Next Image
|
||||
case ' ':
|
||||
case 'enter':
|
||||
case 'arrowright':
|
||||
case 'd':
|
||||
changePage(currentPage + 1);
|
||||
break;
|
||||
}
|
||||
};
|
nhentai/viewer/styles.css (new file, 69 lines)

@@ -0,0 +1,69 @@
|
||||
*, *::after, *::before {
|
||||
box-sizing: border-box;
|
||||
}
|
||||
|
||||
img {
|
||||
vertical-align: middle;
|
||||
}
|
||||
|
||||
html, body {
|
||||
display: flex;
|
||||
background-color: #e8e6e6;
|
||||
height: 100%;
|
||||
width: 100%;
|
||||
padding: 0;
|
||||
margin: 0;
|
||||
font-family: sans-serif;
|
||||
}
|
||||
|
||||
#list {
|
||||
height: 100%;
|
||||
overflow: auto;
|
||||
width: 260px;
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
#list img {
|
||||
width: 200px;
|
||||
padding: 10px;
|
||||
border-radius: 10px;
|
||||
margin: 15px 0;
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
#list img.current {
|
||||
background: #0003;
|
||||
}
|
||||
|
||||
#image-container {
|
||||
flex: auto;
|
||||
height: 100vh;
|
||||
background: #222;
|
||||
color: #fff;
|
||||
text-align: center;
|
||||
cursor: pointer;
|
||||
-webkit-user-select: none;
|
||||
user-select: none;
|
||||
position: relative;
|
||||
}
|
||||
|
||||
#image-container #dest {
|
||||
height: 100%;
|
||||
width: 100%;
|
||||
background-size: contain;
|
||||
background-repeat: no-repeat;
|
||||
background-position: center;
|
||||
}
|
||||
|
||||
#image-container #page-num {
|
||||
position: absolute;
|
||||
font-size: 18pt;
|
||||
left: 10px;
|
||||
bottom: 5px;
|
||||
font-weight: bold;
|
||||
opacity: 0.75;
|
||||
text-shadow: /* Duplicate the same shadow to make it very strong */
|
||||
0 0 2px #222,
|
||||
0 0 2px #222,
|
||||
0 0 2px #222;
|
||||
}
|
requirements.txt

@@ -2,3 +2,4 @@ requests>=2.5.0
BeautifulSoup4>=4.0.0
threadpool>=1.2.7
tabulate>=0.7.5
future>=0.15.2
threadpool==1.3.2
setup.py (13 changes)

@@ -1,9 +1,20 @@
|
||||
# coding: utf-8
|
||||
from __future__ import print_function, unicode_literals
|
||||
import sys
|
||||
import codecs
|
||||
from setuptools import setup, find_packages
|
||||
from nhentai import __version__, __author__, __email__
|
||||
|
||||
|
||||
with open('requirements.txt') as f:
|
||||
requirements = [l for l in f.read().splitlines() if l]
|
||||
|
||||
|
||||
def long_description():
|
||||
with codecs.open('README.md', 'rb') as f:
|
||||
if sys.version_info >= (3, 0, 0):
|
||||
return str(f.read())
|
||||
|
||||
setup(
|
||||
name='nhentai',
|
||||
version=__version__,
|
||||
@@ -13,7 +24,9 @@ setup(
|
||||
author_email=__email__,
|
||||
keywords='nhentai, doujinshi',
|
||||
description='nhentai.net doujinshis downloader',
|
||||
long_description=long_description(),
|
||||
url='https://github.com/RicterZ/nhentai',
|
||||
download_url='https://github.com/RicterZ/nhentai/tarball/master',
|
||||
include_package_data=True,
|
||||
zip_safe=False,
|
||||
|
||||
|