Compare commits


47 Commits
0.3.2 ... 0.3.5

SHA1 Message Date
92640d9767 0.3.5 2019-06-12 22:54:22 +08:00
6b97777b7d fix bug 2019-06-12 22:48:41 +08:00
1af195d727 add cookie check 2019-06-12 22:45:44 +08:00
58b2b644c1 fix #64 2019-06-12 22:37:25 +08:00
0cfec34e9e modify cookie 2019-06-12 22:08:32 +08:00
1172282362 fix #50 2019-06-04 08:38:42 +08:00
a909ad6d92 fix --gen-main bugs 2019-06-04 08:35:13 +08:00
440bb0dc38 Merge pull request #58 from symant233/master (fix show info) 2019-06-03 17:53:27 +08:00
f5b7d89fb0 fix show info 2019-06-01 11:31:53 +08:00
535b804ef6 Merge pull request #53 from symant233/master (Create a main viewer contains all the sub index.html and thumb pic) 2019-05-30 20:10:22 +08:00
9b65544942 add travis-ci test 2019-05-30 20:05:46 +08:00
0935d609c3 fix --gen-main action 2019-05-29 13:43:47 +08:00
f10ae3cf58 store proxy config 2019-05-28 19:47:48 +08:00
86b3a092c7 ignore other folders 2019-05-26 15:57:50 +08:00
710cc86eaf fix codec error for py2 2019-05-21 17:06:42 +08:00
2d327359aa small fix 2019-05-21 16:16:58 +08:00
f78b8bc2cd fix conflict 2019-05-21 15:53:43 +08:00
a95396033b Update README.rst 2019-05-18 22:36:03 +08:00
01c0e73849 fix bug while installing on windows / python3 2019-05-18 22:30:20 +08:00
57e9305849 0.3.3 2019-05-18 22:15:42 +08:00
6bd37f384c fix 2019-05-18 22:14:08 +08:00
2c61fd3a3f add doujinshi folder formatter 2019-05-18 22:13:23 +08:00
cf4291d3c2 new line 2019-05-18 22:01:29 +08:00
450e3689a0 fix 2019-05-18 22:00:33 +08:00
b5deca2704 fix 2019-05-18 21:57:43 +08:00
57dc4a58b9 remove Options block 2019-05-18 21:56:59 +08:00
1e1d03064b readme 2019-05-18 21:56:35 +08:00
40a98881c6 add some shortcut options 2019-05-18 21:53:40 +08:00
a7848c3cd0 fix bug 2019-05-18 21:52:36 +08:00
5df58780d9 add delay #55 2019-05-18 21:51:38 +08:00
56dace81f1 remove readme.md 2019-05-18 20:31:18 +08:00
086e469275 Update README.rst 2019-05-18 20:27:08 +08:00
1f76a8a70e Update README.rst 2019-05-18 20:24:49 +08:00
5d294212e6 Update README.rst 2019-05-18 20:24:15 +08:00
ef274a672b Update README.rst 2019-05-18 20:23:19 +08:00
795f80752f Update README.rst 2019-05-18 20:22:55 +08:00
53c23bb6dc Update README.rst 2019-05-18 20:07:45 +08:00
8d5f12292c update rst 2019-05-18 20:06:10 +08:00
f3141d5726 add rst 2019-05-18 20:04:16 +08:00
475e4db9af 0.3.2 #54 2019-05-18 19:47:04 +08:00
a5eba94064 clean unused style for main.css 2019-05-06 15:41:26 +08:00
6053e302ee fix output_dir make gen-main error 2019-05-05 22:02:24 +08:00
c32b516575 js return to prev page press 'q' 2019-05-05 21:47:23 +08:00
0150e79c49 Add main viewer sources 2019-05-05 21:10:24 +08:00
0cda30385b Main viewer generator 2019-05-05 21:01:49 +08:00
18bdab1962 add main viewer 2019-05-05 21:01:49 +08:00
8e8f935a9b set alias for local:1080 proxy 2019-05-05 21:01:49 +08:00
16 changed files with 765 additions and 269 deletions


@@ -13,9 +13,10 @@ install:
 script:
   - echo 268642 > /tmp/test.txt
-  - NHENTAI=https://nhentai.net nhentai --cookie '__cfduid=da09f237ceb0f51c75980b0b3fda3ce571558179357; _ga=GA1.2.2000087053.1558179358; _gid=GA1.2.717818542.1558179358; csrftoken=iSxrTFOjrujJqauhAqWvTTI9dl3sfWnxdEFoMuqgmlBrbMin5Gj9wJW4r61cmH1X; sessionid=ewuaayfewbzpiukrarx9d52oxwlz2esd'
+  - NHENTAI=https://nhentai.net nhentai --cookie '__cfduid=da09f237ceb0f51c75980b0b3fda3ce571558179357; _ga=GA1.2.2000087053.1558179358; _gid=GA1.2.782652201.1560348447; csrftoken=E2O8wfriFkcXUgN1AC41DoLqfRaBbggIUdvy46yC45PKCRCmCHQHQ7YRUy0d7FXZ; sessionid=0rapzxkt6yl1wjhdxm9whtfdc7gvw0iu'
   - NHENTAI=https://nhentai.net nhentai --search umaru
   - NHENTAI=https://nhentai.net nhentai --id=152503,146134 -t 10 --output=/tmp/ --cbz
   - NHENTAI=https://nhentai.net nhentai --tag lolicon
   - NHENTAI=https://nhentai.net nhentai -F
   - NHENTAI=https://nhentai.net nhentai --file /tmp/test.txt
+  - nhentai --id=152503,146134 --gen-main --output=/tmp/


@@ -3,3 +3,5 @@ include requirements.txt
 include nhentai/viewer/index.html
 include nhentai/viewer/styles.css
 include nhentai/viewer/scripts.js
+include nhentai/viewer/main.html
+include nhentai/viewer/main.css


@@ -1,84 +0,0 @@
nhentai
=======
_ _ _ _
_ __ | | | | ___ _ __ | |_ __ _(_)
| '_ \| |_| |/ _ \ '_ \| __/ _` | |
| | | | _ | __/ | | | || (_| | |
|_| |_|_| |_|\___|_| |_|\__\__,_|_|
あなたも変態。 いいね?
[![Build Status](https://travis-ci.org/RicterZ/nhentai.svg?branch=master)](https://travis-ci.org/RicterZ/nhentai) ![nhentai PyPI Downloads](https://img.shields.io/pypi/dm/nhentai.svg) [![license](https://img.shields.io/cocoapods/l/AFNetworking.svg)](https://github.com/RicterZ/nhentai/blob/master/LICENSE)
nHentai is a CLI tool for downloading doujinshi from [nhentai.net](http://nhentai.net).
### Installation
git clone https://github.com/RicterZ/nhentai
cd nhentai
python setup.py install
### Installation (Gentoo)
layman -fa glicOne
sudo emerge net-misc/nhentai
### Usage
**IMPORTANT**: To bypass the nhentai frequency limit, you should use `--login` option to log into nhentai.net.
*The default download folder will be the path where you run the command (CLI path).*
Download specified doujinshi:
```bash
nhentai --id=123855,123866
```
Download doujinshi with ids specified in a file:
```bash
nhentai --file=doujinshi.txt
```
Search a keyword and download the first page:
```bash
nhentai --search="tomori" --page=1 --download
```
Download your favourite doujinshi (login required):
```bash
nhentai --login "username:password" --download
```
Download by tag name:
```bash
nhentai --tag lolicon --download
```
### Options
+ `-t, --thread`: Download threads, max: 10
+ `--output`:Output dir of saving doujinshi
+ `--tag`:Download by tag name
+ `--timeout`: Timeout of downloading each image
+ `--proxy`: Use proxy, example: http://127.0.0.1:8080/
+ `--login`: username:password pair of your nhentai account
+ `--nohtml`: Do not generate HTML
+ `--cbz`: Generate Comic Book CBZ File
### nHentai Mirror
If you want to use a mirror, you should set up a reverse proxy of `nhentai.net` and `i.nhentai.net`.
For example:
i.h.loli.club -> i.nhentai.net
h.loli.club -> nhentai.net
Set `NHENTAI` env var to your nhentai mirror.
```bash
NHENTAI=http://h.loli.club nhentai --id 123456
```
![](./images/search.png)
![](./images/download.png)
![](./images/viewer.png)
### あなたも変態
![](./images/image.jpg)

README.rst (new file, 187 lines)

@@ -0,0 +1,187 @@
nhentai
=======
.. code-block::
_ _ _ _
_ __ | | | | ___ _ __ | |_ __ _(_)
| '_ \| |_| |/ _ \ '_ \| __/ _` | |
| | | | _ | __/ | | | || (_| | |
|_| |_|_| |_|\___|_| |_|\__\__,_|_|
あなたも変態。 いいね?
|travis|
|pypi|
|license|
nHentai is a CLI tool for downloading doujinshi from <http://nhentai.net>
============
Installation
============
.. code-block::
git clone https://github.com/RicterZ/nhentai
cd nhentai
python setup.py install
=====================
Installation (Gentoo)
=====================
.. code-block::
layman -fa glicOne
sudo emerge net-misc/nhentai
=====
Usage
=====
**IMPORTANT**: To bypass the nhentai frequency limit, you should use `--cookie` option to store your cookie.
*The default download folder will be the path where you run the command (CLI path).*
Set your nhentai cookie against captcha:
.. code-block:: bash
nhentai --cookie 'YOUR COOKIE FROM nhentai.net'
Download specified doujinshi:
.. code-block:: bash
nhentai --id=123855,123866
Download doujinshi with ids specified in a file (doujinshi ids split by line):
.. code-block:: bash
nhentai --file=doujinshi.txt
Search a keyword and download the first page:
.. code-block:: bash
nhentai --search="tomori" --page=1 --download
Download by tag name:
.. code-block:: bash
nhentai --tag lolicon --download --page=2
Download your favorites with delay:
.. code-block:: bash
nhentai --favorites --download --delay 1
Format output doujinshi folder name:
.. code-block:: bash
nhentai --id 261100 --format '[%i]%s'
Supported doujinshi folder formatter:
- %i: Doujinshi id
- %t: Doujinshi name
- %s: Doujinshi subtitle (translated name)
- %a: Doujinshi authors' name
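For example, a minimal sketch of how these placeholders expand, using the default format ``[%i][%a][%t]`` and hypothetical artist/title values:
.. code-block:: bash
nhentai --id 261100 --format '[%i][%a][%t]'
# hypothetical result: a folder named [261100][some artist][some title]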
Other options:
.. code-block::
Options:
# Operation options
-h, --help show this help message and exit
-D, --download download doujinshi (for search results)
-S, --show just show the doujinshi information
# Doujinshi options
--id=ID doujinshi ids set, e.g. 1,2,3
-s KEYWORD, --search=KEYWORD
search doujinshi by keyword
--tag=TAG download doujinshi by tag
-F, --favorites list or download your favorites.
# Multi-page options
--page=PAGE page number of search results
--max-page=MAX_PAGE The max page when recursive download tagged doujinshi
# Download options
-o OUTPUT_DIR, --output=OUTPUT_DIR
output dir
-t THREADS, --threads=THREADS
thread count for downloading doujinshi
-T TIMEOUT, --timeout=TIMEOUT
timeout for downloading doujinshi
-d DELAY, --delay=DELAY
slow down between downloading every doujinshi
-p PROXY, --proxy=PROXY
uses a proxy, for example: http://127.0.0.1:1080
-f FILE, --file=FILE read gallery IDs from file.
--format=NAME_FORMAT format the saved folder name
# Generating options
--html generate a html viewer at current directory
--no-html don't generate HTML after downloading
-C, --cbz generate Comic Book CBZ File
--rm-origin-dir remove downloaded doujinshi dir when generated CBZ
file.
# nHentai options
--cookie=COOKIE set cookie of nhentai to bypass Google recaptcha
==============
nHentai Mirror
==============
If you want to use a mirror, you should set up a reverse proxy of `nhentai.net` and `i.nhentai.net`.
For example:
.. code-block::
i.h.loli.club -> i.nhentai.net
h.loli.club -> nhentai.net
Set `NHENTAI` env var to your nhentai mirror.
.. code-block:: bash
NHENTAI=http://h.loli.club nhentai --id 123456
.. image:: ./images/search.png?raw=true
:alt: nhentai
:align: center
.. image:: ./images/download.png?raw=true
:alt: nhentai
:align: center
.. image:: ./images/viewer.png?raw=true
:alt: nhentai
:align: center
============
あなたも変態
============
.. image:: ./images/image.jpg?raw=true
:alt: nhentai
:align: center
.. |travis| image:: https://travis-ci.org/RicterZ/nhentai.svg?branch=master
:target: https://travis-ci.org/RicterZ/nhentai
.. |pypi| image:: https://img.shields.io/pypi/dm/nhentai.svg
:target: https://pypi.org/project/nhentai/
.. |license| image:: https://img.shields.io/github/license/ricterz/nhentai.svg
:target: https://github.com/RicterZ/nhentai/blob/master/LICENSE


@@ -1,3 +1,3 @@
-__version__ = '0.3.1'
+__version__ = '0.3.5'
 __author__ = 'RicterZ'
 __email__ = 'ricterzheng@gmail.com'


@@ -10,7 +10,7 @@ except ImportError:
 import nhentai.constant as constant
 from nhentai import __version__
-from nhentai.utils import urlparse, generate_html
+from nhentai.utils import urlparse, generate_html, generate_main_html
 from nhentai.logger import logger
 try:
@@ -42,13 +42,13 @@ def cmd_parser():
                      '\n\nEnvironment Variable:\n'
                      ' NHENTAI nhentai mirror url')
     # operation options
-    parser.add_option('--download', dest='is_download', action='store_true',
+    parser.add_option('--download', '-D', dest='is_download', action='store_true',
                       help='download doujinshi (for search results)')
-    parser.add_option('--show', dest='is_show', action='store_true', help='just show the doujinshi information')
+    parser.add_option('--show', '-S', dest='is_show', action='store_true', help='just show the doujinshi information')
     # doujinshi options
     parser.add_option('--id', type='string', dest='id', action='store', help='doujinshi ids set, e.g. 1,2,3')
-    parser.add_option('--search', type='string', dest='keyword', action='store', help='search doujinshi by keyword')
+    parser.add_option('--search', '-s', type='string', dest='keyword', action='store', help='search doujinshi by keyword')
     parser.add_option('--tag', type='string', dest='tag', action='store', help='download doujinshi by tag')
     parser.add_option('--favorites', '-F', action='store_true', dest='favorites',
                       help='list or download your favorites.')
@@ -60,22 +60,28 @@ def cmd_parser():
                       help='The max page when recursive download tagged doujinshi')
     # download options
-    parser.add_option('--output', type='string', dest='output_dir', action='store', default='',
+    parser.add_option('--output', '-o', type='string', dest='output_dir', action='store', default='',
                       help='output dir')
     parser.add_option('--threads', '-t', type='int', dest='threads', action='store', default=5,
                       help='thread count for downloading doujinshi')
-    parser.add_option('--timeout', type='int', dest='timeout', action='store', default=30,
+    parser.add_option('--timeout', '-T', type='int', dest='timeout', action='store', default=30,
                       help='timeout for downloading doujinshi')
-    parser.add_option('--proxy', type='string', dest='proxy', action='store', default='',
-                      help='uses a proxy, for example: http://127.0.0.1:1080')
+    parser.add_option('--delay', '-d', type='int', dest='delay', action='store', default=0,
+                      help='slow down between downloading every doujinshi')
+    parser.add_option('--proxy', '-p', type='string', dest='proxy', action='store', default='',
+                      help='store a proxy, for example: -p \'http://127.0.0.1:1080\'')
     parser.add_option('--file', '-f', type='string', dest='file', action='store', help='read gallery IDs from file.')
+    parser.add_option('--format', type='string', dest='name_format', action='store',
+                      help='format the saved folder name', default='[%i][%a][%t]')
     # generate options
     parser.add_option('--html', dest='html_viewer', action='store_true',
                       help='generate a html viewer at current directory')
-    parser.add_option('--nohtml', dest='is_nohtml', action='store_true',
-                      help='don\'t generate HTML')
-    parser.add_option('--cbz', dest='is_cbz', action='store_true',
+    parser.add_option('--no-html', dest='is_nohtml', action='store_true',
+                      help='don\'t generate HTML after downloading')
+    parser.add_option('--gen-main', dest='main_viewer', action='store_true',
+                      help='generate a main viewer contain all the doujin in the folder')
+    parser.add_option('--cbz', '-C', dest='is_cbz', action='store_true',
                       help='generate Comic Book CBZ File')
     parser.add_option('--rm-origin-dir', dest='rm_origin_dir', action='store_true', default=False,
                       help='remove downloaded doujinshi dir when generated CBZ file.')
@@ -97,6 +103,11 @@ def cmd_parser():
         generate_html()
         exit(0)
+    if args.main_viewer and not args.id and not args.keyword and \
+            not args.tag and not args.favorites:
+        generate_main_html()
+        exit(0)
     if os.path.exists(os.path.join(constant.NHENTAI_HOME, 'cookie')):
         with open(os.path.join(constant.NHENTAI_HOME, 'cookie'), 'r') as f:
             constant.COOKIE = f.read()
@@ -115,17 +126,28 @@ def cmd_parser():
         logger.info('Cookie saved.')
         exit(0)
-    '''
-    if args.login:
+    if os.path.exists(os.path.join(constant.NHENTAI_HOME, 'proxy')):
+        with open(os.path.join(constant.NHENTAI_HOME, 'proxy'), 'r') as f:
+            link = f.read()
+            constant.PROXY = {'http': link, 'https': link}
+    if args.proxy:
         try:
-            _, _ = args.login.split(':', 1)
-        except ValueError:
-            logger.error('Invalid `username:password` pair.')
+            if not os.path.exists(constant.NHENTAI_HOME):
+                os.mkdir(constant.NHENTAI_HOME)
+            proxy_url = urlparse(args.proxy)
+            if proxy_url.scheme not in ('http', 'https'):
+                logger.error('Invalid protocol \'{0}\' of proxy, ignored'.format(proxy_url.scheme))
+            else:
+                with open(os.path.join(constant.NHENTAI_HOME, 'proxy'), 'w') as f:
+                    f.write(args.proxy)
+        except Exception as e:
+            logger.error('Cannot create NHENTAI_HOME: {}'.format(str(e)))
             exit(1)
-    if not args.is_download:
-        logger.warning('YOU DO NOT SPECIFY `--download` OPTION !!!')
-    '''
+        logger.info('Proxy \'{0}\' saved.'.format(args.proxy))
+        exit(0)
     if args.favorites:
         if not constant.COOKIE:
@@ -158,11 +180,4 @@ def cmd_parser():
         logger.critical('Maximum number of used threads is 15')
         exit(1)
-    if args.proxy:
-        proxy_url = urlparse(args.proxy)
-        if proxy_url.scheme not in ('http', 'https'):
-            logger.error('Invalid protocol \'{0}\' of proxy, ignored'.format(proxy_url.scheme))
-        else:
-            constant.PROXY = {'http': args.proxy, 'https': args.proxy}
     return args
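The proxy option above now persists its value: `cmd_parser()` writes `args.proxy` into the `NHENTAI_HOME` directory and every later run loads it back into `constant.PROXY`, instead of applying the proxy only to the current invocation. A minimal usage sketch of that flow, assuming a local proxy listening on 127.0.0.1:1080:

```bash
# store the proxy once; per the change above this saves it and exits
nhentai --proxy 'http://127.0.0.1:1080'

# subsequent runs pick up the stored proxy automatically
nhentai --id=152503
```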


@@ -3,6 +3,7 @@
 from __future__ import unicode_literals, print_function
 import signal
 import platform
+import time
 from nhentai.cmdline import cmd_parser, banner
 from nhentai.parser import doujinshi_parser, search_parser, print_doujinshi, favorites_parser, tag_parser, login
@@ -10,13 +11,21 @@ from nhentai.doujinshi import Doujinshi
 from nhentai.downloader import Downloader
 from nhentai.logger import logger
 from nhentai.constant import BASE_URL
-from nhentai.utils import generate_html, generate_cbz
+from nhentai.utils import generate_html, generate_cbz, generate_main_html, check_cookie
 def main():
     banner()
-    logger.info('Using mirror: {0}'.format(BASE_URL))
     options = cmd_parser()
+    logger.info('Using mirror: {0}'.format(BASE_URL))
+    from nhentai.constant import PROXY
+    # constant.PROXY will be changed after cmd_parser()
+    if PROXY != {}:
+        logger.info('Using proxy: {0}'.format(PROXY))
+    # check your cookie
+    check_cookie()
     doujinshi_ids = []
     doujinshi_list = []
@@ -25,36 +34,36 @@ def main():
         if not options.is_download:
             logger.warning('You do not specify --download option')
-        for doujinshi_info in favorites_parser():
-            doujinshi_list.append(Doujinshi(**doujinshi_info))
-        if not options.is_download:
-            print_doujinshi([{'id': i.id, 'title': i.name} for i in doujinshi_list])
-            exit(0)
-    if options.tag:
+        doujinshis = favorites_parser()
+        print_doujinshi(doujinshis)
+        if options.is_download and doujinshis:
+            doujinshi_ids = map(lambda d: d['id'], doujinshis)
+    elif options.tag:
         doujinshis = tag_parser(options.tag, max_page=options.max_page)
         print_doujinshi(doujinshis)
         if options.is_download and doujinshis:
             doujinshi_ids = map(lambda d: d['id'], doujinshis)
-    if options.keyword:
+    elif options.keyword:
         doujinshis = search_parser(options.keyword, options.page)
         print_doujinshi(doujinshis)
         if options.is_download:
             doujinshi_ids = map(lambda d: d['id'], doujinshis)
-    if not doujinshi_ids:
+    elif not doujinshi_ids:
         doujinshi_ids = options.id
     if doujinshi_ids:
         for id_ in doujinshi_ids:
+            if options.delay:
+                time.sleep(options.delay)
             doujinshi_info = doujinshi_parser(id_)
-            doujinshi_list.append(Doujinshi(**doujinshi_info))
+            doujinshi_list.append(Doujinshi(name_format=options.name_format, **doujinshi_info))
     if not options.is_show:
         downloader = Downloader(path=options.output_dir,
-                                thread=options.threads, timeout=options.timeout)
+                                thread=options.threads, timeout=options.timeout, delay=options.delay)
         for doujinshi in doujinshi_list:
             doujinshi.downloader = downloader
@@ -63,7 +72,8 @@ def main():
                 generate_html(options.output_dir, doujinshi)
             elif options.is_cbz:
                 generate_cbz(options.output_dir, doujinshi, options.rm_origin_dir)
+        if options.main_viewer:
+            generate_main_html(options.output_dir)
         if not platform.system() == 'Windows':
             logger.log(15, '🍻 All done.')
         else:
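With these changes the main loop can pause between doujinshi (`--delay`) and, once downloads finish, emit the new main viewer (`--gen-main`). A hypothetical invocation combining the options introduced in this range, reusing the ids from the CI script above:

```bash
nhentai --id=152503,146134 --delay 1 --gen-main --output=/tmp/
```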


@@ -2,7 +2,12 @@
 from __future__ import unicode_literals, print_function
 import os
 import tempfile
-from nhentai.utils import urlparse
+try:
+    from urlparse import urlparse
+except ImportError:
+    from urllib.parse import urlparse
 BASE_URL = os.getenv('NHENTAI', 'https://nhentai.net')


@@ -27,7 +27,7 @@ class DoujinshiInfo(dict):
 class Doujinshi(object):
-    def __init__(self, name=None, id=None, img_id=None, ext='', pages=0, **kwargs):
+    def __init__(self, name=None, id=None, img_id=None, ext='', pages=0, name_format='[%i][%a][%t]', **kwargs):
         self.name = name
         self.id = id
         self.img_id = img_id
@@ -36,7 +36,12 @@ class Doujinshi(object):
         self.downloader = None
         self.url = '%s/%d' % (DETAIL_URL, self.id)
         self.info = DoujinshiInfo(**kwargs)
-        self.filename = format_filename('[%s][%s][%s]' % (self.id, self.info.artist, self.name))
+        name_format = name_format.replace('%i', str(self.id))
+        name_format = name_format.replace('%a', self.info.artists)
+        name_format = name_format.replace('%t', self.name)
+        name_format = name_format.replace('%s', self.info.subtitle)
+        self.filename = format_filename(name_format)
     def __repr__(self):
         return '<Doujinshi: {0}>'.format(self.name)
@@ -45,9 +50,9 @@ class Doujinshi(object):
         table = [
             ["Doujinshi", self.name],
             ["Subtitle", self.info.subtitle],
-            ["Characters", self.info.character],
-            ["Authors", self.info.artist],
-            ["Language", self.info.language],
+            ["Characters", self.info.characters],
+            ["Authors", self.info.artists],
+            ["Languages", self.info.languages],
             ["Tags", self.info.tags],
             ["URL", self.url],
             ["Pages", self.pages],


@@ -4,6 +4,8 @@ from future.builtins import str as text
 import os
 import requests
 import threadpool
+import time
 try:
     from urllib.parse import urlparse
 except ImportError:
@@ -23,7 +25,7 @@ class NhentaiImageNotExistException(Exception):
 class Downloader(Singleton):
-    def __init__(self, path='', thread=1, timeout=30):
+    def __init__(self, path='', thread=1, timeout=30, delay=0):
         if not isinstance(thread, (int, )) or thread < 1 or thread > 15:
             raise ValueError('Invalid threads count')
         self.path = str(path)
@@ -31,8 +33,11 @@ class Downloader(Singleton):
         self.threads = []
         self.thread_pool = None
         self.timeout = timeout
+        self.delay = delay
     def _download(self, url, folder='', filename='', retried=0):
+        if self.delay:
+            time.sleep(self.delay)
         logger.info('Starting to download {0} ...'.format(url))
         filename = filename if filename else os.path.basename(urlparse(url).path)
         base_filename, extension = os.path.splitext(filename)


@@ -10,25 +10,10 @@ from bs4 import BeautifulSoup
 from tabulate import tabulate
 import nhentai.constant as constant
+from nhentai.utils import request
 from nhentai.logger import logger
-session = requests.Session()
-session.headers.update({
-    'Referer': constant.LOGIN_URL,
-    'User-Agent': 'nhentai command line client (https://github.com/RicterZ/nhentai)',
-})
-def request(method, url, **kwargs):
-    global session
-    if not hasattr(session, method):
-        raise AttributeError('\'requests.Session\' object has no attribute \'{0}\''.format(method))
-    session.headers.update({'Cookie': constant.COOKIE})
-    return getattr(session, method)(url, proxies=constant.PROXY, verify=False, **kwargs)
 def _get_csrf_token(content):
     html = BeautifulSoup(content, 'html.parser')
     csrf_token_elem = html.find('input', attrs={'name': 'csrfmiddlewaretoken'})
@@ -66,7 +51,22 @@ def login(username, password):
         exit(2)
+def _get_title_and_id(response):
+    result = []
+    html = BeautifulSoup(response, 'html.parser')
+    doujinshi_search_result = html.find_all('div', attrs={'class': 'gallery'})
+    for doujinshi in doujinshi_search_result:
+        doujinshi_container = doujinshi.find('div', attrs={'class': 'caption'})
+        title = doujinshi_container.text.strip()
+        title = title if len(title) < 85 else title[:82] + '...'
+        id_ = re.search('/g/(\d+)/', doujinshi.a['href']).group(1)
+        result.append({'id': id_, 'title': title})
+    return result
 def favorites_parser():
+    result = []
     html = BeautifulSoup(request('get', constant.FAV_URL).content, 'html.parser')
     count = html.find('span', attrs={'class': 'count'})
     if not count:
@@ -89,22 +89,16 @@ def favorites_parser():
     if os.getenv('DEBUG'):
         pages = 1
-    ret = []
-    doujinshi_id = re.compile('data-id="([\d]+)"')
     for page in range(1, pages + 1):
         try:
             logger.info('Getting doujinshi ids of page %d' % page)
-            resp = request('get', constant.FAV_URL + '?page=%d' % page).text
-            ids = doujinshi_id.findall(resp)
-            for i in ids:
-                ret.append(doujinshi_parser(i))
+            resp = request('get', constant.FAV_URL + '?page=%d' % page).content
+            result.extend(_get_title_and_id(resp))
         except Exception as e:
             logger.error('Error: %s, continue', str(e))
-    return ret
+    return result
 def doujinshi_parser(id_):
@@ -164,7 +158,7 @@ def doujinshi_parser(id_):
     # gain information of the doujinshi
     information_fields = doujinshi_info.find_all('div', attrs={'class': 'field-name'})
-    needed_fields = ['Characters', 'Artists', 'Language', 'Tags']
+    needed_fields = ['Characters', 'Artists', 'Languages', 'Tags']
     for field in information_fields:
         field_name = field.contents[0].strip().strip(':')
         if field_name in needed_fields:
@@ -177,7 +171,6 @@ def doujinshi_parser(id_):
 def search_parser(keyword, page):
     logger.debug('Searching doujinshis of keyword {0}'.format(keyword))
-    result = []
     try:
         response = request('get', url=constant.SEARCH_URL, params={'q': keyword, 'page': page}).content
     except requests.ConnectionError as e:
@@ -185,20 +178,95 @@ def search_parser(keyword, page):
         logger.warn('If you are in China, please configure the proxy to fu*k GFW.')
         raise SystemExit
-    html = BeautifulSoup(response, 'html.parser')
-    doujinshi_search_result = html.find_all('div', attrs={'class': 'gallery'})
-    for doujinshi in doujinshi_search_result:
-        doujinshi_container = doujinshi.find('div', attrs={'class': 'caption'})
-        title = doujinshi_container.text.strip()
-        title = title if len(title) < 85 else title[:82] + '...'
-        id_ = re.search('/g/(\d+)/', doujinshi.a['href']).group(1)
-        result.append({'id': id_, 'title': title})
+    result = _get_title_and_id(response)
     if not result:
         logger.warn('Not found anything of keyword {}'.format(keyword))
     return result
def print_doujinshi(doujinshi_list):
if not doujinshi_list:
return
doujinshi_list = [(i['id'], i['title']) for i in doujinshi_list]
headers = ['id', 'doujinshi']
logger.info('Search Result\n' +
tabulate(tabular_data=doujinshi_list, headers=headers, tablefmt='rst'))
def tag_parser(tag_name, max_page=1):
result = []
tag_name = tag_name.lower()
tag_name = tag_name.replace(' ', '-')
for p in range(1, max_page + 1):
logger.debug('Fetching page {0} for doujinshi with tag \'{1}\''.format(p, tag_name))
response = request('get', url='%s/%s?page=%d' % (constant.TAG_URL, tag_name, p)).content
result = _get_title_and_id(response)
if not result:
logger.error('Cannot find doujinshi id of tag \'{0}\''.format(tag_name))
return
if not result:
logger.warn('No results for tag \'{}\''.format(tag_name))
return result
def __api_suspended_search_parser(keyword, page):
logger.debug('Searching doujinshis using keywords {0}'.format(keyword))
result = []
i = 0
while i < 5:
try:
response = request('get', url=constant.SEARCH_URL, params={'query': keyword, 'page': page}).json()
except Exception as e:
i += 1
if not i < 5:
logger.critical(str(e))
logger.warn('If you are in China, please configure the proxy to fu*k GFW.')
exit(1)
continue
break
if 'result' not in response:
raise Exception('No result in response')
for row in response['result']:
title = row['title']['english']
title = title[:85] + '..' if len(title) > 85 else title
result.append({'id': row['id'], 'title': title})
if not result:
logger.warn('No results for keywords {}'.format(keyword))
return result
def __api_suspended_tag_parser(tag_id, max_page=1):
logger.info('Searching for doujinshi with tag id {0}'.format(tag_id))
result = []
response = request('get', url=constant.TAG_API_URL, params={'sort': 'popular', 'tag_id': tag_id}).json()
page = max_page if max_page <= response['num_pages'] else int(response['num_pages'])
for i in range(1, page + 1):
logger.info('Getting page {} ...'.format(i))
if page != 1:
response = request('get', url=constant.TAG_API_URL,
params={'sort': 'popular', 'tag_id': tag_id}).json()
for row in response['result']:
title = row['title']['english']
title = title[:85] + '..' if len(title) > 85 else title
result.append({'id': row['id'], 'title': title})
if not result:
logger.warn('No results for tag id {}'.format(tag_id))
return result
 def __api_suspended_doujinshi_parser(id_):
     if not isinstance(id_, (int,)) and (isinstance(id_, (str,)) and not id_.isdigit()):
         raise Exception('Doujinshi id({0}) is not valid'.format(id_))
@@ -246,94 +314,5 @@ def __api_suspended_doujinshi_parser(id_):
     return doujinshi
def __api_suspended_search_parser(keyword, page):
logger.debug('Searching doujinshis using keywords {0}'.format(keyword))
result = []
i = 0
while i < 5:
try:
response = request('get', url=constant.SEARCH_URL, params={'query': keyword, 'page': page}).json()
except Exception as e:
i += 1
if not i < 5:
logger.critical(str(e))
logger.warn('If you are in China, please configure the proxy to fu*k GFW.')
exit(1)
continue
break
if 'result' not in response:
raise Exception('No result in response')
for row in response['result']:
title = row['title']['english']
title = title[:85] + '..' if len(title) > 85 else title
result.append({'id': row['id'], 'title': title})
if not result:
logger.warn('No results for keywords {}'.format(keyword))
return result
def print_doujinshi(doujinshi_list):
if not doujinshi_list:
return
doujinshi_list = [(i['id'], i['title']) for i in doujinshi_list]
headers = ['id', 'doujinshi']
logger.info('Search Result\n' +
tabulate(tabular_data=doujinshi_list, headers=headers, tablefmt='rst'))
def __api_suspended_tag_parser(tag_id, max_page=1):
logger.info('Searching for doujinshi with tag id {0}'.format(tag_id))
result = []
response = request('get', url=constant.TAG_API_URL, params={'sort': 'popular', 'tag_id': tag_id}).json()
page = max_page if max_page <= response['num_pages'] else int(response['num_pages'])
for i in range(1, page + 1):
logger.info('Getting page {} ...'.format(i))
if page != 1:
response = request('get', url=constant.TAG_API_URL,
params={'sort': 'popular', 'tag_id': tag_id}).json()
for row in response['result']:
title = row['title']['english']
title = title[:85] + '..' if len(title) > 85 else title
result.append({'id': row['id'], 'title': title})
if not result:
logger.warn('No results for tag id {}'.format(tag_id))
return result
def tag_parser(tag_name, max_page=1):
result = []
tag_name = tag_name.lower()
tag_name = tag_name.replace(' ', '-')
for p in range(1, max_page + 1):
logger.debug('Fetching page {0} for doujinshi with tag \'{1}\''.format(p, tag_name))
response = request('get', url='%s/%s?page=%d' % (constant.TAG_URL, tag_name, p)).content
html = BeautifulSoup(response, 'html.parser')
doujinshi_items = html.find_all('div', attrs={'class': 'gallery'})
if not doujinshi_items:
logger.error('Cannot find doujinshi id of tag \'{0}\''.format(tag_name))
return
for i in doujinshi_items:
doujinshi_id = i.a.attrs['href'].strip('/g')
doujinshi_title = i.a.text.strip()
doujinshi_title = doujinshi_title if len(doujinshi_title) < 85 else doujinshi_title[:82] + '...'
result.append({'title': doujinshi_title, 'id': doujinshi_id})
if not result:
logger.warn('No results for tag \'{}\''.format(tag_name))
return result
 if __name__ == '__main__':
     print(doujinshi_parser("32271"))


@@ -2,13 +2,36 @@
 from __future__ import unicode_literals, print_function
 import sys
+import re
 import os
 import string
 import zipfile
 import shutil
+import requests
+from nhentai import constant
 from nhentai.logger import logger
+def request(method, url, **kwargs):
+    session = requests.Session()
+    session.headers.update({
+        'Referer': constant.LOGIN_URL,
+        'User-Agent': 'nhentai command line client (https://github.com/RicterZ/nhentai)',
+        'Cookie': constant.COOKIE
+    })
+    return getattr(session, method)(url, proxies=constant.PROXY, verify=False, **kwargs)
+def check_cookie():
+    response = request('get', constant.BASE_URL).text
+    username = re.findall('"/users/\d+/(.*?)"', response)
+    if not username:
+        logger.error('Cannot get your username, please check your cookie or use `nhentai --cookie` to set your cookie')
+    else:
+        logger.info('Login successfully! Your username: {}'.format(username[0]))
 class _Singleton(type):
     """ A metaclass that creates a Singleton base class when called. """
     _instances = {}
@@ -82,6 +105,66 @@ def generate_html(output_dir='.', doujinshi_obj=None):
         logger.warning('Writen HTML Viewer failed ({})'.format(str(e)))
def generate_main_html(output_dir='./'):
"""
Generate a main html to show all the contain doujinshi.
With a link to their `index.html`.
Default output folder will be the CLI path.
"""
count = 0
image_html = ''
main = readfile('viewer/main.html')
css = readfile('viewer/main.css')
element = '\n\
<div class="gallery-favorite">\n\
<div class="gallery">\n\
<a href="./{FOLDER}/index.html" class="cover" style="padding:0 0 141.6% 0"><img\n\
src="./{FOLDER}/{IMAGE}" />\n\
<div class="caption">{TITLE}</div>\n\
</a>\n\
</div>\n\
</div>\n'
os.chdir(output_dir)
doujinshi_dirs = next(os.walk('.'))[1]
for folder in doujinshi_dirs:
files = os.listdir(folder)
files.sort()
if 'index.html' in files:
count += 1
logger.info('Add doujinshi \'{}\''.format(folder))
else:
continue
image = files[0] # 001.jpg or 001.png
if folder is not None:
title = folder.replace('_', ' ')
else:
title = 'nHentai HTML Viewer'
image_html += element.format(FOLDER=folder, IMAGE=image, TITLE=title)
if image_html == '':
logger.warning('None index.html found, --gen-main paused.')
return
try:
data = main.format(STYLES=css, COUNT=count, PICTURE=image_html)
if sys.version_info < (3, 0):
with open('./main.html', 'w') as f:
f.write(data)
else:
with open('./main.html', 'wb') as f:
f.write(data.encode('utf-8'))
logger.log(
15, 'Main Viewer has been write to \'{0}main.html\''.format(output_dir))
except Exception as e:
logger.warning('Writen Main Viewer failed ({})'.format(str(e)))
 def generate_cbz(output_dir='.', doujinshi_obj=None, rm_origin_dir=False):
     if doujinshi_obj is not None:
         doujinshi_dir = os.path.join(output_dir, doujinshi_obj.filename)
@@ -118,7 +201,6 @@ an invalid filename.
     """
     valid_chars = "-_.()[] %s%s" % (string.ascii_letters, string.digits)
     filename = ''.join(c for c in s if c in valid_chars)
-    filename = filename.replace(' ', '_') # I don't like spaces in filenames.
     if len(filename) > 100:
         filename = filename[:100] + '...]'

nhentai/viewer/main.css (new file, 255 lines)

@@ -0,0 +1,255 @@
/*! normalize.css v5.0.0 | MIT License | github.com/necolas/normalize.css */
/* Original from https://static.nhentai.net/css/main_style.9bb9b703e601.css */
html {
font-family: sans-serif;
line-height: 1.15;
-ms-text-size-adjust: 100%;
-webkit-text-size-adjust: 100%
}
body {
margin: 0
}
h1 {
font-size: 2em;
margin: .67em 0
}
a {
background-color: transparent;
-webkit-text-decoration-skip: objects
}
a:active,a:hover {
outline-width: 0
}
abbr[title] {
border-bottom: none;
text-decoration: underline;
text-decoration: underline dotted
}
b,strong {
font-weight: inherit
}
b,strong {
font-weight: bolder
}
code,kbd,samp {
font-family: monospace,monospace;
font-size: 1em
}
small {
font-size: 80%
}
sub,sup {
font-size: 75%;
line-height: 0;
position: relative;
vertical-align: baseline
}
sub {
bottom: -.25em
}
sup {
top: -.5em
}
img {
border-style: none
}
label {
display: block;
font-weight: 700;
text-align: justify;
white-space: nowrap
}
html {
box-sizing: border-box
}
*,:after,:before {
box-sizing: inherit
}
h1,h2,h3,h4,h5,h6 {
font-weight: 700
}
body,html {
font-family: 'Noto Sans',sans-serif;
font-size: 14px;
line-height: 1.42857143;
height: 100%;
margin: 0;
text-align: center;
color: #34495e;
background-color: #fff;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale
}
a {
text-decoration: none;
color: #34495e
}
a:hover {
text-decoration: none;
color: #ed2553
}
a.count {
color: #999
}
a.bold {
font-weight: 700
}
code {
color: #ed2553;
border: 1px solid #fbd3dd;
background-color: #fef0f3
}
blockquote {
border: 0
}
.container {
display: block;
clear: both;
margin-left: auto;
margin-right: auto;
margin-bottom: 10px;
margin-top: 10px;
padding: 10px;
border-radius: 9px;
background-color: #ecf0f1;
width: 100%;
max-width: 1200px
}
.gallery,.gallery-favorite,.thumb-container {
display: inline-block;
vertical-align: top
}
.gallery img,.gallery-favorite img,.thumb-container img {
display: block;
max-width: 100%;
height: auto
}
@media screen and (min-width: 980px) {
.gallery,.gallery-favorite,.thumb-container {
width:19%;
margin: 3px;
margin-bottom: 8px
}
}
@media screen and (max-width: 979px) {
.gallery,.gallery-favorite,.thumb-container {
width:24%;
margin: 2px
}
}
@media screen and (max-width: 772px) {
.gallery,.gallery-favorite,.thumb-container {
width:32%;
margin: 1.5px
}
}
@media screen and (max-width: 500px) {
.gallery,.gallery-favorite,.thumb-container {
width:49%;
margin: .5px
}
}
.gallery a,.gallery-favorite a {
display: block
}
.gallery a img,.gallery-favorite a img {
position: absolute
}
.caption {
line-height: 15px;
left: 0;
right: 0;
top: 100%;
position: absolute;
z-index: 10;
overflow: hidden;
width: 100%;
max-height: 34px;
padding: 3px;
background-color: #fff;
font-weight: 700;
display: block;
text-align: center;
text-decoration: none;
color: #34495e
}
.gallery {
position: relative;
margin-bottom: 3em
}
.gallery:hover .caption {
max-height: 100%;
box-shadow: 0 10px 20px rgba(100,100,100,.5)
}
.gallery-favorite .gallery {
width: 100%
}
html.theme-black,html.theme-black body {
color: #d9d9d9;
background-color: #0d0d0d
}
html.theme-black #thumbnail-container,html.theme-black .container {
background-color: #1f1f1f
}
html.theme-black #thumbnail-container .lazyload,html.theme-black .lazyload {
background-color: #262626
}
html.theme-black #thumbnail-container .lazyload-loading,html.theme-black .lazyload-loading {
background-color: #2e2e2e
}
html.theme-black .gallery:hover .caption {
box-shadow: 0 10px 20px rgba(0,0,0,.5)
}
html.theme-black .caption {
background-color: #404040;
color: #d9d9d9
}
html.theme-black code {
color: #ed2553;
border: none;
background-color: #292929
}

nhentai/viewer/main.html (new file, 30 lines)

@@ -0,0 +1,30 @@
<!doctype html>
<html lang="en" class=" theme-black">
<head>
<meta charset="utf-8" />
<meta name="theme-color" content="#1f1f1f" />
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, viewport-fit=cover" />
<title>nHentai &raquo; Viewer</title>
<!-- <link rel="stylesheet" href="./main.css"> -->
<style>
{STYLES}
</style>
</head>
<body>
<div id="content">
<h1>Main Folder({COUNT})</h1>
<div class="container" id="favcontainer">
{PICTURE}
</div> <!-- container -->
</div>
</body>
</html>


@@ -46,14 +46,17 @@ document.getElementById('image-container').onclick = event => {
 document.onkeypress = event => {
     switch (event.key.toLowerCase()) {
         // Previous Image
+        case 'w':
         case 'a':
             changePage(currentPage - 1);
             break;
+        // Return to previous page
+        case 'q':
+            window.history.go(-1);
+            break;
         // Next Image
         case ' ':
-        case 'esc': // future close page function
+        case 's':
         case 'enter':
         case 'd':
             changePage(currentPage + 1);
             break;


@@ -11,9 +11,10 @@ with open('requirements.txt') as f:
 def long_description():
-    with codecs.open('README.md', 'rb') as f:
-        if sys.version_info >= (3, 0, 0):
-            return str(f.read())
+    with codecs.open('README.rst', 'rb') as readme:
+        if not sys.version_info < (3, 0, 0):
+            return readme.read().decode('utf-8')
 setup(
     name='nhentai',