Merge pull request #66 from RicterZ/dev

0.3.5
Ricter Zheng 2019-06-12 23:04:08 +08:00 committed by GitHub
commit 158b15bda8
13 changed files with 727 additions and 345 deletions


@ -13,9 +13,10 @@ install:
script:
- echo 268642 > /tmp/test.txt
- NHENTAI=https://nhentai.net nhentai --cookie '__cfduid=da09f237ceb0f51c75980b0b3fda3ce571558179357; _ga=GA1.2.2000087053.1558179358; _gid=GA1.2.717818542.1558179358; csrftoken=iSxrTFOjrujJqauhAqWvTTI9dl3sfWnxdEFoMuqgmlBrbMin5Gj9wJW4r61cmH1X; sessionid=ewuaayfewbzpiukrarx9d52oxwlz2esd'
- NHENTAI=https://nhentai.net nhentai --cookie '__cfduid=da09f237ceb0f51c75980b0b3fda3ce571558179357; _ga=GA1.2.2000087053.1558179358; _gid=GA1.2.782652201.1560348447; csrftoken=E2O8wfriFkcXUgN1AC41DoLqfRaBbggIUdvy46yC45PKCRCmCHQHQ7YRUy0d7FXZ; sessionid=0rapzxkt6yl1wjhdxm9whtfdc7gvw0iu'
- NHENTAI=https://nhentai.net nhentai --search umaru
- NHENTAI=https://nhentai.net nhentai --id=152503,146134 -t 10 --output=/tmp/ --cbz
- NHENTAI=https://nhentai.net nhentai --tag lolicon
- NHENTAI=https://nhentai.net nhentai -F
- NHENTAI=https://nhentai.net nhentai --file /tmp/test.txt
- nhentai --id=152503,146134 --gen-main --output=/tmp/


@ -1,5 +1,7 @@
include README.md
include requirements.txt
include nhentai/viewer/index.html
include nhentai/viewer/styles.css
include nhentai/viewer/scripts.js
include nhentai/viewer/main.html
include nhentai/viewer/main.css


@ -1,187 +1,187 @@
nhentai
=======
.. code-block::
_ _ _ _
_ __ | | | | ___ _ __ | |_ __ _(_)
| '_ \| |_| |/ _ \ '_ \| __/ _` | |
| | | | _ | __/ | | | || (_| | |
|_| |_|_| |_|\___|_| |_|\__\__,_|_|
あなたも変態。 いいね?
|travis|
|pypi|
|license|
nHentai is a CLI tool for downloading doujinshi from <http://nhentai.net>
============
Installation
============
.. code-block::
git clone https://github.com/RicterZ/nhentai
cd nhentai
python setup.py install
=====================
Installation (Gentoo)
=====================
.. code-block::
layman -fa glicOne
sudo emerge net-misc/nhentai
=====
Usage
=====
**IMPORTANT**: To bypass the nhentai frequency limit, you should use the `--cookie` option to store your cookie.
*The default download folder will be the path where you run the command (CLI path).*
Set your nhentai cookie to get past the captcha:
.. code-block:: bash
nhentai --cookie 'YOUR COOKIE FROM nhentai.net'
Download specified doujinshi:
.. code-block:: bash
nhentai --id=123855,123866
Download doujinshi with ids listed in a file (one id per line):
.. code-block:: bash
nhentai --file=doujinshi.txt
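For example (the ids below are the same ones used elsewhere in this README; any valid ids work), you could create the file and download from it in one go:
.. code-block:: bash
# one doujinshi id per line
echo 123855 >  doujinshi.txt
echo 123866 >> doujinshi.txt
nhentai --file=doujinshi.txt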
Search a keyword and download the first page:
.. code-block:: bash
nhentai --search="tomori" --page=1 --download
Download by tag name:
.. code-block:: bash
nhentai --tag lolicon --download --page=2
Download your favorites with delay:
.. code-block:: bash
nhentai --favorites --download --delay 1
Format output doujinshi folder name:
.. code-block:: bash
nhentai --id 261100 --format '[%i]%s'
Supported doujinshi folder name format specifiers (combined example below):
- %i: Doujinshi id
- %t: Doujinshi name
- %s: Doujinshi subtitle (translated name)
- %a: Doujinshi authors' name
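For example, combining all of them (this is also the tool's default format); the resulting folder name is sketched, not literal:
.. code-block:: bash
nhentai --id 261100 --format '[%i][%a][%t]'
# -> a folder named like [261100][<authors>][<name>]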
Other options:
.. code-block::
Options:
# Operation options
-h, --help show this help message and exit
-D, --download download doujinshi (for search results)
-S, --show just show the doujinshi information
# Doujinshi options
--id=ID doujinshi ids set, e.g. 1,2,3
-s KEYWORD, --search=KEYWORD
search doujinshi by keyword
--tag=TAG download doujinshi by tag
-F, --favorites list or download your favorites.
# Multi-page options
--page=PAGE page number of search results
--max-page=MAX_PAGE The max page when recursive download tagged doujinshi
# Download options
-o OUTPUT_DIR, --output=OUTPUT_DIR
output dir
-t THREADS, --threads=THREADS
thread count for downloading doujinshi
-T TIMEOUT, --timeout=TIMEOUT
timeout for downloading doujinshi
-d DELAY, --delay=DELAY
slow down between downloading every doujinshi
-p PROXY, --proxy=PROXY
uses a proxy, for example: http://127.0.0.1:1080
-f FILE, --file=FILE read gallery IDs from file.
--format=NAME_FORMAT format the saved folder name
# Generating options
--html generate a html viewer at current directory
--no-html don't generate HTML after downloading
-C, --cbz generate Comic Book CBZ File
--rm-origin-dir remove downloaded doujinshi dir when generated CBZ
file.
# nHentai options
--cookie=COOKIE set cookie of nhentai to bypass Google recaptcha
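A combined invocation that uses several of the options above (the keyword and values are reused from the other examples in this README):
.. code-block:: bash
nhentai --search="tomori" --page=1 --download --threads=10 --output=/tmp/ --cbz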
==============
nHentai Mirror
==============
If you want to use a mirror, set up reverse proxies for `nhentai.net` and `i.nhentai.net`.
For example:
.. code-block::
i.h.loli.club -> i.nhentai.net
h.loli.club -> nhentai.net
Set `NHENTAI` env var to your nhentai mirror.
.. code-block:: bash
NHENTAI=http://h.loli.club nhentai --id 123456
.. image:: ./images/search.png?raw=true
:alt: nhentai
:align: center
.. image:: ./images/download.png?raw=true
:alt: nhentai
:align: center
.. image:: ./images/viewer.png?raw=true
:alt: nhentai
:align: center
============
あなたも変態
============
.. image:: ./images/image.jpg?raw=true
:alt: nhentai
:align: center
.. |travis| image:: https://travis-ci.org/RicterZ/nhentai.svg?branch=master
:target: https://travis-ci.org/RicterZ/nhentai
.. |pypi| image:: https://img.shields.io/pypi/dm/nhentai.svg
:target: https://pypi.org/project/nhentai/
.. |license| image:: https://img.shields.io/github/license/ricterz/nhentai.svg
:target: https://github.com/RicterZ/nhentai/blob/master/LICENSE


@ -1,3 +1,3 @@
__version__ = '0.3.4'
__version__ = '0.3.5'
__author__ = 'RicterZ'
__email__ = 'ricterzheng@gmail.com'


@ -10,7 +10,7 @@ except ImportError:
import nhentai.constant as constant
from nhentai import __version__
from nhentai.utils import urlparse, generate_html
from nhentai.utils import urlparse, generate_html, generate_main_html
from nhentai.logger import logger
try:
@ -69,7 +69,7 @@ def cmd_parser():
parser.add_option('--delay', '-d', type='int', dest='delay', action='store', default=0,
help='slow down between downloading every doujinshi')
parser.add_option('--proxy', '-p', type='string', dest='proxy', action='store', default='',
help='uses a proxy, for example: http://127.0.0.1:1080')
help='store a proxy, for example: -p \'http://127.0.0.1:1080\'')
parser.add_option('--file', '-f', type='string', dest='file', action='store', help='read gallery IDs from file.')
parser.add_option('--format', type='string', dest='name_format', action='store',
help='format the saved folder name', default='[%i][%a][%t]')
@ -79,6 +79,8 @@ def cmd_parser():
help='generate a html viewer at current directory')
parser.add_option('--no-html', dest='is_nohtml', action='store_true',
help='don\'t generate HTML after downloading')
parser.add_option('--gen-main', dest='main_viewer', action='store_true',
help='generate a main viewer contain all the doujin in the folder')
parser.add_option('--cbz', '-C', dest='is_cbz', action='store_true',
help='generate Comic Book CBZ File')
parser.add_option('--rm-origin-dir', dest='rm_origin_dir', action='store_true', default=False,
@ -101,6 +103,11 @@ def cmd_parser():
generate_html()
exit(0)
if args.main_viewer and not args.id and not args.keyword and \
not args.tag and not args.favorites:
generate_main_html()
exit(0)
if os.path.exists(os.path.join(constant.NHENTAI_HOME, 'cookie')):
with open(os.path.join(constant.NHENTAI_HOME, 'cookie'), 'r') as f:
constant.COOKIE = f.read()
@ -119,17 +126,28 @@ def cmd_parser():
logger.info('Cookie saved.')
exit(0)
'''
if args.login:
if os.path.exists(os.path.join(constant.NHENTAI_HOME, 'proxy')):
with open(os.path.join(constant.NHENTAI_HOME, 'proxy'), 'r') as f:
link = f.read()
constant.PROXY = {'http': link, 'https': link}
if args.proxy:
try:
_, _ = args.login.split(':', 1)
except ValueError:
logger.error('Invalid `username:password` pair.')
if not os.path.exists(constant.NHENTAI_HOME):
os.mkdir(constant.NHENTAI_HOME)
proxy_url = urlparse(args.proxy)
if proxy_url.scheme not in ('http', 'https'):
logger.error('Invalid protocol \'{0}\' of proxy, ignored'.format(proxy_url.scheme))
else:
with open(os.path.join(constant.NHENTAI_HOME, 'proxy'), 'w') as f:
f.write(args.proxy)
except Exception as e:
logger.error('Cannot create NHENTAI_HOME: {}'.format(str(e)))
exit(1)
if not args.is_download:
logger.warning('YOU DO NOT SPECIFY `--download` OPTION !!!')
'''
logger.info('Proxy \'{0}\' saved.'.format(args.proxy))
exit(0)
if args.favorites:
if not constant.COOKIE:
@ -162,11 +180,4 @@ def cmd_parser():
logger.critical('Maximum number of used threads is 15')
exit(1)
if args.proxy:
proxy_url = urlparse(args.proxy)
if proxy_url.scheme not in ('http', 'https'):
logger.error('Invalid protocol \'{0}\' of proxy, ignored'.format(proxy_url.scheme))
else:
constant.PROXY = {'http': args.proxy, 'https': args.proxy}
return args
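A usage sketch for the new `--gen-main` flag wired up above; this is the same invocation the CI script in this PR uses:

    # download two galleries and also build the top-level main.html viewer
    nhentai --id=152503,146134 --gen-main --output=/tmp/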


@ -11,13 +11,21 @@ from nhentai.doujinshi import Doujinshi
from nhentai.downloader import Downloader
from nhentai.logger import logger
from nhentai.constant import BASE_URL
from nhentai.utils import generate_html, generate_cbz
from nhentai.utils import generate_html, generate_cbz, generate_main_html, check_cookie
def main():
banner()
logger.info('Using mirror: {0}'.format(BASE_URL))
options = cmd_parser()
logger.info('Using mirror: {0}'.format(BASE_URL))
from nhentai.constant import PROXY
# constant.PROXY will be changed after cmd_parser()
if PROXY != {}:
logger.info('Using proxy: {0}'.format(PROXY))
# check your cookie
check_cookie()
doujinshi_ids = []
doujinshi_list = []
@ -26,7 +34,10 @@ def main():
if not options.is_download:
logger.warning('You do not specify --download option')
doujinshi_ids = favorites_parser()
doujinshis = favorites_parser()
print_doujinshi(doujinshis)
if options.is_download and doujinshis:
doujinshi_ids = map(lambda d: d['id'], doujinshis)
elif options.tag:
doujinshis = tag_parser(options.tag, max_page=options.max_page)
@ -61,7 +72,8 @@ def main():
generate_html(options.output_dir, doujinshi)
elif options.is_cbz:
generate_cbz(options.output_dir, doujinshi, options.rm_origin_dir)
if options.main_viewer:
generate_main_html(options.output_dir)
if not platform.system() == 'Windows':
logger.log(15, '🍻 All done.')
else:
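With the reworked favorites flow above, `-F` by itself now just lists your favorites (it prints the id/title table and warns that `--download` was not given), and adding `--download` actually fetches them. A sketch, assuming the cookie is already configured:

    nhentai -F                        # list favorites only
    nhentai -F --download --delay 1   # list, then download with a 1-second delay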


@ -2,7 +2,12 @@
from __future__ import unicode_literals, print_function
import os
import tempfile
from nhentai.utils import urlparse
try:
from urlparse import urlparse
except ImportError:
from urllib.parse import urlparse
BASE_URL = os.getenv('NHENTAI', 'https://nhentai.net')


@ -41,7 +41,7 @@ class Doujinshi(object):
name_format = name_format.replace('%a', self.info.artists)
name_format = name_format.replace('%t', self.name)
name_format = name_format.replace('%s', self.info.subtitle)
self.filename = name_format
self.filename = format_filename(name_format)
def __repr__(self):
return '<Doujinshi: {0}>'.format(self.name)
@ -50,9 +50,9 @@ class Doujinshi(object):
table = [
["Doujinshi", self.name],
["Subtitle", self.info.subtitle],
["Characters", self.info.character],
["Characters", self.info.characters],
["Authors", self.info.artists],
["Language", self.info.language],
["Languages", self.info.languages],
["Tags", self.info.tags],
["URL", self.url],
["Pages", self.pages],


@ -10,25 +10,10 @@ from bs4 import BeautifulSoup
from tabulate import tabulate
import nhentai.constant as constant
from nhentai.utils import request
from nhentai.logger import logger
session = requests.Session()
session.headers.update({
'Referer': constant.LOGIN_URL,
'User-Agent': 'nhentai command line client (https://github.com/RicterZ/nhentai)',
})
def request(method, url, **kwargs):
global session
if not hasattr(session, method):
raise AttributeError('\'requests.Session\' object has no attribute \'{0}\''.format(method))
session.headers.update({'Cookie': constant.COOKIE})
return getattr(session, method)(url, proxies=constant.PROXY, verify=False, **kwargs)
def _get_csrf_token(content):
html = BeautifulSoup(content, 'html.parser')
csrf_token_elem = html.find('input', attrs={'name': 'csrfmiddlewaretoken'})
@ -66,7 +51,22 @@ def login(username, password):
exit(2)
def _get_title_and_id(response):
result = []
html = BeautifulSoup(response, 'html.parser')
doujinshi_search_result = html.find_all('div', attrs={'class': 'gallery'})
for doujinshi in doujinshi_search_result:
doujinshi_container = doujinshi.find('div', attrs={'class': 'caption'})
title = doujinshi_container.text.strip()
title = title if len(title) < 85 else title[:82] + '...'
id_ = re.search('/g/(\d+)/', doujinshi.a['href']).group(1)
result.append({'id': id_, 'title': title})
return result
def favorites_parser():
result = []
html = BeautifulSoup(request('get', constant.FAV_URL).content, 'html.parser')
count = html.find('span', attrs={'class': 'count'})
if not count:
@ -89,20 +89,16 @@ def favorites_parser():
if os.getenv('DEBUG'):
pages = 1
ret = []
doujinshi_id = re.compile('data-id="([\d]+)"')
for page in range(1, pages + 1):
try:
logger.info('Getting doujinshi ids of page %d' % page)
resp = request('get', constant.FAV_URL + '?page=%d' % page).text
ids = doujinshi_id.findall(resp)
ret.extend(ids)
resp = request('get', constant.FAV_URL + '?page=%d' % page).content
result.extend(_get_title_and_id(resp))
except Exception as e:
logger.error('Error: %s, continue', str(e))
return ret
return result
def doujinshi_parser(id_):
@ -162,7 +158,7 @@ def doujinshi_parser(id_):
# gain information of the doujinshi
information_fields = doujinshi_info.find_all('div', attrs={'class': 'field-name'})
needed_fields = ['Characters', 'Artists', 'Language', 'Tags']
needed_fields = ['Characters', 'Artists', 'Languages', 'Tags']
for field in information_fields:
field_name = field.contents[0].strip().strip(':')
if field_name in needed_fields:
@ -175,7 +171,6 @@ def doujinshi_parser(id_):
def search_parser(keyword, page):
logger.debug('Searching doujinshis of keyword {0}'.format(keyword))
result = []
try:
response = request('get', url=constant.SEARCH_URL, params={'q': keyword, 'page': page}).content
except requests.ConnectionError as e:
@ -183,20 +178,95 @@ def search_parser(keyword, page):
logger.warn('If you are in China, please configure the proxy to fu*k GFW.')
raise SystemExit
html = BeautifulSoup(response, 'html.parser')
doujinshi_search_result = html.find_all('div', attrs={'class': 'gallery'})
for doujinshi in doujinshi_search_result:
doujinshi_container = doujinshi.find('div', attrs={'class': 'caption'})
title = doujinshi_container.text.strip()
title = title if len(title) < 85 else title[:82] + '...'
id_ = re.search('/g/(\d+)/', doujinshi.a['href']).group(1)
result.append({'id': id_, 'title': title})
result = _get_title_and_id(response)
if not result:
logger.warn('Not found anything of keyword {}'.format(keyword))
return result
def print_doujinshi(doujinshi_list):
if not doujinshi_list:
return
doujinshi_list = [(i['id'], i['title']) for i in doujinshi_list]
headers = ['id', 'doujinshi']
logger.info('Search Result\n' +
tabulate(tabular_data=doujinshi_list, headers=headers, tablefmt='rst'))
def tag_parser(tag_name, max_page=1):
result = []
tag_name = tag_name.lower()
tag_name = tag_name.replace(' ', '-')
for p in range(1, max_page + 1):
logger.debug('Fetching page {0} for doujinshi with tag \'{1}\''.format(p, tag_name))
response = request('get', url='%s/%s?page=%d' % (constant.TAG_URL, tag_name, p)).content
result = _get_title_and_id(response)
if not result:
logger.error('Cannot find doujinshi id of tag \'{0}\''.format(tag_name))
return
if not result:
logger.warn('No results for tag \'{}\''.format(tag_name))
return result
def __api_suspended_search_parser(keyword, page):
logger.debug('Searching doujinshis using keywords {0}'.format(keyword))
result = []
i = 0
while i < 5:
try:
response = request('get', url=constant.SEARCH_URL, params={'query': keyword, 'page': page}).json()
except Exception as e:
i += 1
if not i < 5:
logger.critical(str(e))
logger.warn('If you are in China, please configure the proxy to fu*k GFW.')
exit(1)
continue
break
if 'result' not in response:
raise Exception('No result in response')
for row in response['result']:
title = row['title']['english']
title = title[:85] + '..' if len(title) > 85 else title
result.append({'id': row['id'], 'title': title})
if not result:
logger.warn('No results for keywords {}'.format(keyword))
return result
def __api_suspended_tag_parser(tag_id, max_page=1):
logger.info('Searching for doujinshi with tag id {0}'.format(tag_id))
result = []
response = request('get', url=constant.TAG_API_URL, params={'sort': 'popular', 'tag_id': tag_id}).json()
page = max_page if max_page <= response['num_pages'] else int(response['num_pages'])
for i in range(1, page + 1):
logger.info('Getting page {} ...'.format(i))
if page != 1:
response = request('get', url=constant.TAG_API_URL,
params={'sort': 'popular', 'tag_id': tag_id}).json()
for row in response['result']:
title = row['title']['english']
title = title[:85] + '..' if len(title) > 85 else title
result.append({'id': row['id'], 'title': title})
if not result:
logger.warn('No results for tag id {}'.format(tag_id))
return result
def __api_suspended_doujinshi_parser(id_):
if not isinstance(id_, (int,)) and (isinstance(id_, (str,)) and not id_.isdigit()):
raise Exception('Doujinshi id({0}) is not valid'.format(id_))
@ -244,94 +314,5 @@ def __api_suspended_doujinshi_parser(id_):
return doujinshi
def __api_suspended_search_parser(keyword, page):
logger.debug('Searching doujinshis using keywords {0}'.format(keyword))
result = []
i = 0
while i < 5:
try:
response = request('get', url=constant.SEARCH_URL, params={'query': keyword, 'page': page}).json()
except Exception as e:
i += 1
if not i < 5:
logger.critical(str(e))
logger.warn('If you are in China, please configure the proxy to fu*k GFW.')
exit(1)
continue
break
if 'result' not in response:
raise Exception('No result in response')
for row in response['result']:
title = row['title']['english']
title = title[:85] + '..' if len(title) > 85 else title
result.append({'id': row['id'], 'title': title})
if not result:
logger.warn('No results for keywords {}'.format(keyword))
return result
def print_doujinshi(doujinshi_list):
if not doujinshi_list:
return
doujinshi_list = [(i['id'], i['title']) for i in doujinshi_list]
headers = ['id', 'doujinshi']
logger.info('Search Result\n' +
tabulate(tabular_data=doujinshi_list, headers=headers, tablefmt='rst'))
def __api_suspended_tag_parser(tag_id, max_page=1):
logger.info('Searching for doujinshi with tag id {0}'.format(tag_id))
result = []
response = request('get', url=constant.TAG_API_URL, params={'sort': 'popular', 'tag_id': tag_id}).json()
page = max_page if max_page <= response['num_pages'] else int(response['num_pages'])
for i in range(1, page + 1):
logger.info('Getting page {} ...'.format(i))
if page != 1:
response = request('get', url=constant.TAG_API_URL,
params={'sort': 'popular', 'tag_id': tag_id}).json()
for row in response['result']:
title = row['title']['english']
title = title[:85] + '..' if len(title) > 85 else title
result.append({'id': row['id'], 'title': title})
if not result:
logger.warn('No results for tag id {}'.format(tag_id))
return result
def tag_parser(tag_name, max_page=1):
result = []
tag_name = tag_name.lower()
tag_name = tag_name.replace(' ', '-')
for p in range(1, max_page + 1):
logger.debug('Fetching page {0} for doujinshi with tag \'{1}\''.format(p, tag_name))
response = request('get', url='%s/%s?page=%d' % (constant.TAG_URL, tag_name, p)).content
html = BeautifulSoup(response, 'html.parser')
doujinshi_items = html.find_all('div', attrs={'class': 'gallery'})
if not doujinshi_items:
logger.error('Cannot find doujinshi id of tag \'{0}\''.format(tag_name))
return
for i in doujinshi_items:
doujinshi_id = i.a.attrs['href'].strip('/g')
doujinshi_title = i.a.text.strip()
doujinshi_title = doujinshi_title if len(doujinshi_title) < 85 else doujinshi_title[:82] + '...'
result.append({'title': doujinshi_title, 'id': doujinshi_id})
if not result:
logger.warn('No results for tag \'{}\''.format(tag_name))
return result
if __name__ == '__main__':
print(doujinshi_parser("32271"))
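The rewritten `tag_parser` above walks result pages up to `--max-page`; from the CLI that looks like the following (tag name taken from the CI script, page count illustrative):

    # fetch up to 3 pages of results for the tag, then download them
    nhentai --tag lolicon --max-page=3 --download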


@ -2,13 +2,36 @@
from __future__ import unicode_literals, print_function
import sys
import re
import os
import string
import zipfile
import shutil
import requests
from nhentai import constant
from nhentai.logger import logger
def request(method, url, **kwargs):
session = requests.Session()
session.headers.update({
'Referer': constant.LOGIN_URL,
'User-Agent': 'nhentai command line client (https://github.com/RicterZ/nhentai)',
'Cookie': constant.COOKIE
})
return getattr(session, method)(url, proxies=constant.PROXY, verify=False, **kwargs)
def check_cookie():
response = request('get', constant.BASE_URL).text
username = re.findall('"/users/\d+/(.*?)"', response)
if not username:
logger.error('Cannot get your username, please check your cookie or use `nhentai --cookie` to set your cookie')
else:
logger.info('Login successfully! Your username: {}'.format(username[0]))
class _Singleton(type):
""" A metaclass that creates a Singleton base class when called. """
_instances = {}
@ -82,6 +105,66 @@ def generate_html(output_dir='.', doujinshi_obj=None):
logger.warning('Writen HTML Viewer failed ({})'.format(str(e)))
def generate_main_html(output_dir='./'):
"""
Generate a main HTML page listing all downloaded doujinshi,
with a link to each one's `index.html`.
The default output folder is the CLI path.
"""
count = 0
image_html = ''
main = readfile('viewer/main.html')
css = readfile('viewer/main.css')
element = '\n\
<div class="gallery-favorite">\n\
<div class="gallery">\n\
<a href="./{FOLDER}/index.html" class="cover" style="padding:0 0 141.6% 0"><img\n\
src="./{FOLDER}/{IMAGE}" />\n\
<div class="caption">{TITLE}</div>\n\
</a>\n\
</div>\n\
</div>\n'
os.chdir(output_dir)
doujinshi_dirs = next(os.walk('.'))[1]
for folder in doujinshi_dirs:
files = os.listdir(folder)
files.sort()
if 'index.html' in files:
count += 1
logger.info('Add doujinshi \'{}\''.format(folder))
else:
continue
image = files[0] # 001.jpg or 001.png
if folder is not None:
title = folder.replace('_', ' ')
else:
title = 'nHentai HTML Viewer'
image_html += element.format(FOLDER=folder, IMAGE=image, TITLE=title)
if image_html == '':
logger.warning('None index.html found, --gen-main paused.')
return
try:
data = main.format(STYLES=css, COUNT=count, PICTURE=image_html)
if sys.version_info < (3, 0):
with open('./main.html', 'w') as f:
f.write(data)
else:
with open('./main.html', 'wb') as f:
f.write(data.encode('utf-8'))
logger.log(
15, 'Main Viewer has been write to \'{0}main.html\''.format(output_dir))
except Exception as e:
logger.warning('Writen Main Viewer failed ({})'.format(str(e)))
def generate_cbz(output_dir='.', doujinshi_obj=None, rm_origin_dir=False):
if doujinshi_obj is not None:
doujinshi_dir = os.path.join(output_dir, doujinshi_obj.filename)
@ -118,7 +201,6 @@ an invalid filename.
"""
valid_chars = "-_.()[] %s%s" % (string.ascii_letters, string.digits)
filename = ''.join(c for c in s if c in valid_chars)
filename = filename.replace(' ', '_') # I don't like spaces in filenames.
if len(filename) > 100:
filename = filename[:100] + '...]'
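`generate_main_html` above scans the immediate subfolders of the output directory, skips any folder without an `index.html`, and uses the first file in each folder as its cover. A sketch of the layout it expects, plus a standalone rebuild (folder names are illustrative, following the default `[%i][%a][%t]` format):

    # /tmp/[152503][authors][title]/001.jpg     <- first file, used as the cover thumbnail
    # /tmp/[152503][authors][title]/index.html  <- per-gallery viewer; folders without it are skipped
    # /tmp/[146134][authors][title]/...
    cd /tmp && nhentai --gen-main                # writes ./main.html linking every gallery found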

nhentai/viewer/main.css (new file, 255 lines)

@ -0,0 +1,255 @@
/*! normalize.css v5.0.0 | MIT License | github.com/necolas/normalize.css */
/* Original from https://static.nhentai.net/css/main_style.9bb9b703e601.css */
html {
font-family: sans-serif;
line-height: 1.15;
-ms-text-size-adjust: 100%;
-webkit-text-size-adjust: 100%
}
body {
margin: 0
}
h1 {
font-size: 2em;
margin: .67em 0
}
a {
background-color: transparent;
-webkit-text-decoration-skip: objects
}
a:active,a:hover {
outline-width: 0
}
abbr[title] {
border-bottom: none;
text-decoration: underline;
text-decoration: underline dotted
}
b,strong {
font-weight: inherit
}
b,strong {
font-weight: bolder
}
code,kbd,samp {
font-family: monospace,monospace;
font-size: 1em
}
small {
font-size: 80%
}
sub,sup {
font-size: 75%;
line-height: 0;
position: relative;
vertical-align: baseline
}
sub {
bottom: -.25em
}
sup {
top: -.5em
}
img {
border-style: none
}
label {
display: block;
font-weight: 700;
text-align: justify;
white-space: nowrap
}
html {
box-sizing: border-box
}
*,:after,:before {
box-sizing: inherit
}
h1,h2,h3,h4,h5,h6 {
font-weight: 700
}
body,html {
font-family: 'Noto Sans',sans-serif;
font-size: 14px;
line-height: 1.42857143;
height: 100%;
margin: 0;
text-align: center;
color: #34495e;
background-color: #fff;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale
}
a {
text-decoration: none;
color: #34495e
}
a:hover {
text-decoration: none;
color: #ed2553
}
a.count {
color: #999
}
a.bold {
font-weight: 700
}
code {
color: #ed2553;
border: 1px solid #fbd3dd;
background-color: #fef0f3
}
blockquote {
border: 0
}
.container {
display: block;
clear: both;
margin-left: auto;
margin-right: auto;
margin-bottom: 10px;
margin-top: 10px;
padding: 10px;
border-radius: 9px;
background-color: #ecf0f1;
width: 100%;
max-width: 1200px
}
.gallery,.gallery-favorite,.thumb-container {
display: inline-block;
vertical-align: top
}
.gallery img,.gallery-favorite img,.thumb-container img {
display: block;
max-width: 100%;
height: auto
}
@media screen and (min-width: 980px) {
.gallery,.gallery-favorite,.thumb-container {
width:19%;
margin: 3px;
margin-bottom: 8px
}
}
@media screen and (max-width: 979px) {
.gallery,.gallery-favorite,.thumb-container {
width:24%;
margin: 2px
}
}
@media screen and (max-width: 772px) {
.gallery,.gallery-favorite,.thumb-container {
width:32%;
margin: 1.5px
}
}
@media screen and (max-width: 500px) {
.gallery,.gallery-favorite,.thumb-container {
width:49%;
margin: .5px
}
}
.gallery a,.gallery-favorite a {
display: block
}
.gallery a img,.gallery-favorite a img {
position: absolute
}
.caption {
line-height: 15px;
left: 0;
right: 0;
top: 100%;
position: absolute;
z-index: 10;
overflow: hidden;
width: 100%;
max-height: 34px;
padding: 3px;
background-color: #fff;
font-weight: 700;
display: block;
text-align: center;
text-decoration: none;
color: #34495e
}
.gallery {
position: relative;
margin-bottom: 3em
}
.gallery:hover .caption {
max-height: 100%;
box-shadow: 0 10px 20px rgba(100,100,100,.5)
}
.gallery-favorite .gallery {
width: 100%
}
html.theme-black,html.theme-black body {
color: #d9d9d9;
background-color: #0d0d0d
}
html.theme-black #thumbnail-container,html.theme-black .container {
background-color: #1f1f1f
}
html.theme-black #thumbnail-container .lazyload,html.theme-black .lazyload {
background-color: #262626
}
html.theme-black #thumbnail-container .lazyload-loading,html.theme-black .lazyload-loading {
background-color: #2e2e2e
}
html.theme-black .gallery:hover .caption {
box-shadow: 0 10px 20px rgba(0,0,0,.5)
}
html.theme-black .caption {
background-color: #404040;
color: #d9d9d9
}
html.theme-black code {
color: #ed2553;
border: none;
background-color: #292929
}

nhentai/viewer/main.html (new file, 30 lines)

@ -0,0 +1,30 @@
<!doctype html>
<html lang="en" class=" theme-black">
<head>
<meta charset="utf-8" />
<meta name="theme-color" content="#1f1f1f" />
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, viewport-fit=cover" />
<title>nHentai &raquo; Viewer</title>
<!-- <link rel="stylesheet" href="./main.css"> -->
<style>
{STYLES}
</style>
</head>
<body>
<div id="content">
<h1>Main Folder({COUNT})</h1>
<div class="container" id="favcontainer">
{PICTURE}
</div> <!-- container -->
</div>
</body>
</html>


@ -46,14 +46,17 @@ document.getElementById('image-container').onclick = event => {
document.onkeypress = event => {
switch (event.key.toLowerCase()) {
// Previous Image
case 'w':
case 'a':
changePage(currentPage - 1);
break;
// Return to previous page
case 'q':
window.history.go(-1);
break;
// Next Image
case ' ':
case 'esc': // future close page function
case 'enter':
case 's':
case 'd':
changePage(currentPage + 1);
break;