Compare commits

69 Commits
0.3.6 ... 0.3.7

SHA1 Message Date
9f747dad7e 0.3.8 2020-04-09 21:12:24 +08:00
ca713197cc add sqlite3 db to save download history 2020-04-09 21:07:20 +08:00
49f07de95d remove repeat code 2020-04-09 20:37:13 +08:00
5c7bdae0d7 add a new option #111 2020-04-09 20:32:20 +08:00
d5f41bf37c fix bug of --tag in python2.7 2020-03-15 00:41:40 +08:00
56153015b1 update cookie 2020-03-15 00:25:02 +08:00
140249217a fix 2020-03-15 00:24:12 +08:00
9e537e60f2 reformat file 2020-03-15 00:03:48 +08:00
4df8e1bae0 update tests 2020-03-14 23:59:18 +08:00
c250d9c787 fix #106 2020-03-14 23:56:22 +08:00
a5547696eb Merge pull request #108 from RicterZ/dev
Merge dev to master
2020-03-14 23:35:02 +08:00
49ac1d035d Merge branch 'master' into dev 2020-03-14 23:34:49 +08:00
f234b7234e Merge pull request #104 from myzWILLmake/master
add page_range option for favorites
2020-02-08 16:12:25 +08:00
43a9b981dd add page_range option for favorites 2020-02-07 01:32:51 +08:00
bc29869a8b Merge pull request #101 from reynog/patch-1
Suggested change to viewer
2020-01-18 19:50:04 +08:00
53e1923e67 Changed keyboard nav
In conjunction with the styles.css change, changed the W and S keys to scroll the image vertically and removed page changing from Up and Down, leaving A, D, Left, and Right as the keys for changing pages. The page returns to the top on a page change. W and S scrolling is not smooth; Up and Down scrolling relies on the browser's built-in keyboard scrolling.
2020-01-16 20:20:42 +01:00
ba6d4047e2 Larger image display
Bodged file edit. Changed the image to extend off the screen and be scrollable, making speech and other text easier to read on smaller displays. Moved the page counter to the top center. Not quite as nice looking.
2020-01-16 20:12:27 +01:00
dcf22b30a5 Merge pull request #96 from symant233/dev
Add @media to html_viewer (mobile friendly)
2019-12-16 10:41:53 +08:00
0208d9b9e6 remove... 2019-12-13 11:57:42 +08:00
0115285e10 trying to fix conflict 2019-12-13 11:56:36 +08:00
ea8a576f7e remove webkit tap color and outline 2019-12-11 18:52:27 +08:00
05eaa9eebc fix 'max-width' not working 2019-12-11 18:35:53 +08:00
ab2dff4859 Merge remote-tracking branch 'upstream/master' into dev 2019-12-11 11:02:43 +08:00
9592870d85 add html viewer @media 2019-12-11 10:55:50 +08:00
c1a82635bd Merge pull request #94 from Alocks/dev
added filter for main.html and #95 fix
2019-12-09 11:27:32 +08:00
1974644513 download gif images 2019-12-08 20:59:37 -03:00
fe4fb46e30 fixed language tag 2019-12-07 17:50:23 -03:00
6156cf5914 added zoom in index.html and some increments in main.html 2019-12-07 14:36:19 -03:00
75b00fc523 Merge remote-tracking branch 'origin/dev' into dev 2019-12-07 12:58:59 -03:00
ff8af8279f fixed html and removed unused .css properties 2019-12-07 12:58:19 -03:00
e1556b09cc fixed unicode issues with japanese characters 2019-12-07 11:19:49 -03:00
110a2acb7c main page filter fixes 2019-12-06 13:08:16 -03:00
c60f1f34d5 main page filter(2/2) 2019-12-05 18:02:03 -03:00
4f2db83a13 almost gave up 2019-12-04 18:54:40 -03:00
bd8bb42ecd main page filter(1/2) 2019-12-04 00:45:14 -03:00
0abcb048b4 filter for main page(1/2) 2019-12-02 16:46:22 -03:00
411d6c2f30 Merge pull request #93 from Alocks/dev
Added language option and metadata serializer
2019-12-02 11:38:09 +08:00
88c0c1e021 Added language option and metadata serializer 2019-12-01 21:23:41 -03:00
86c43e5d8c Merge pull request #92 from RicterZ/dev
merge & update
2019-11-22 10:49:13 +08:00
39f8729d51 Merge pull request #91 from jwfiredragon/patch-1
Documenting --gen-main
2019-11-22 10:48:14 +08:00
d6461335f8 Adding --gen-main to documentation
--gen-main exists as an option in cmdline.py but is not documented in README
2019-11-21 08:40:57 -08:00
c0c7b33909 Merge pull request #88 from Alocks/dev
changed all map(lambda) to listcomp
2019-11-12 14:47:49 +08:00
893a8c194e removed list(). stupid mistake 2019-11-05 10:41:20 -03:00
e6d2eb554d Merge remote-tracking branch 'Alocks/dev' into dev 2019-11-04 16:17:20 -03:00
25e5acf671 changed every map(lambda) to listcomp 2019-11-04 16:14:52 -03:00
4f33228cec Merge pull request #86 from Alocks/dev
Fixed parser to work with new options, and updated readme
2019-10-23 10:16:09 +08:00
f227c9e897 Update README.rst 2019-10-22 14:18:38 -03:00
9f2f57248b Added commands in README and fixer parser 2019-10-22 14:14:50 -03:00
024f08ca97 Merge pull request #84 from Alocks/master
new options added [--artist, --character, --parody, --group]
2019-10-10 12:42:33 +08:00
3017fff823 Merge branch 'dev' into master 2019-10-08 15:42:35 -03:00
070e8917f4 Fixed whitespaces when using comma² 2019-10-05 15:07:49 -03:00
01caa8d4e5 Fixed if user add white spaces 2019-10-05 15:00:33 -03:00
35e724e206 xablau
Signed-off-by: Alocks <alocksmasao@gmail.com>
2019-10-03 18:26:28 -03:00
d045adfd6a 0.3.6 2019-08-04 22:39:31 +08:00
62e3552c84 update cookiewq 2019-08-04 22:39:31 +08:00
6e2a25cf55 fix bug in tag parser #70 2019-08-04 22:39:31 +08:00
44178a8cfb remove comment 2019-08-04 22:39:31 +08:00
4ca582c104 fix #74 2019-08-04 22:39:31 +08:00
97857b8dc6 "" :) 2019-08-04 22:39:31 +08:00
23774d9526 fix bugs 2019-08-01 21:06:40 +08:00
8dc7a1f40b singleton pool 2019-08-01 18:52:30 +08:00
349e21193b remove print 2019-07-31 19:04:25 +08:00
7e826c5255 use multiprocess instead of threadpool #78 2019-07-31 01:22:54 +08:00
bc70a2071b add test for sorting 2019-07-30 23:04:23 +08:00
1b49911166 code style 2019-07-30 23:03:29 +08:00
7eeed17ea5 Merge pull request #79 from Waiifu/added-sorting
sorting option
2019-07-30 22:53:40 +08:00
f4afcd549e Added sorting option 2019-07-29 09:11:45 +02:00
4fc6303db2 Merge pull request #76 from RicterZ/dev
0.3.6
2019-07-28 12:00:54 +08:00
158b15bda8 Merge pull request #66 from RicterZ/dev
0.3.5
2019-06-12 23:04:08 +08:00
18 changed files with 940 additions and 369 deletions

.travis.yml

@ -4,19 +4,17 @@ os:
language: python
python:
- 2.7
- 3.6
- 3.5
- 3.4
- 3.7
install:
- python setup.py install
script:
- echo 268642 > /tmp/test.txt
- nhentai --cookie "csrftoken=3c4Mzn4f6NAI1awFqfIh495G3pv5Wade9n63Kx03mkSac8c2QR5vRR4jCwVzb3OR; sessionid=m034e2dyyxgbl9s07hbzgfhvadbap2tk"
- nhentai --cookie "csrftoken=xIh7s9d4NB8qSLN7eJZG9064zsV84aHEYFoAU49Ib9anqmoT0pZRw6TIdayLzQuT; sessionid=un101zfgpglsyffdnsm72le4euuisp7t"
- nhentai --search umaru
- nhentai --id=152503,146134 -t 10 --output=/tmp/ --cbz
- nhentai --tag lolicon
- nhentai --tag lolicon --sorting popular
- nhentai -F
- nhentai --file /tmp/test.txt
- nhentai --id=152503,146134 --gen-main --output=/tmp/

MANIFEST.in

@ -5,3 +5,4 @@ include nhentai/viewer/styles.css
include nhentai/viewer/scripts.js
include nhentai/viewer/main.html
include nhentai/viewer/main.css
include nhentai/viewer/main.js

README.rst

@ -74,6 +74,42 @@ Download by tag name:
nhentai --tag lolicon --download --page=2
Download by language:
.. code-block:: bash
nhentai --language english --download --page=2
Download by artist name:
.. code-block:: bash
nhentai --artist henreader --download
Download by character name:
.. code-block:: bash
nhentai --character "kuro von einsbern" --download
Download by parody name:
.. code-block:: bash
nhentai --parody "the idolmaster" --download
Download by group name:
.. code-block:: bash
nhentai --group clesta --download
Download using multiple tags (--tag, --character, --paordy and --group supported):
.. code-block:: bash
nhentai --tag "lolicon, teasing" --artist "tamano kedama, atte nanakusa"
Download your favorites with delay:
.. code-block:: bash
@ -132,6 +168,7 @@ Other options:
# Generating options
--html generate a html viewer at current directory
--no-html don't generate HTML after downloading
--gen-main generate a main viewer contain all the doujin in the folder
-C, --cbz generate Comic Book CBZ File
--rm-origin-dir remove downloaded doujinshi dir when generated CBZ
file.

nhentai/__init__.py

@ -1,3 +1,3 @@
__version__ = '0.3.6'
__version__ = '0.3.8'
__author__ = 'RicterZ'
__email__ = 'ricterzheng@gmail.com'

nhentai/cmdline.py

@ -10,7 +10,7 @@ except ImportError:
import nhentai.constant as constant
from nhentai import __version__
from nhentai.utils import urlparse, generate_html, generate_main_html
from nhentai.utils import urlparse, generate_html, generate_main_html, DB
from nhentai.logger import logger
try:
@ -48,8 +48,16 @@ def cmd_parser():
# doujinshi options
parser.add_option('--id', type='string', dest='id', action='store', help='doujinshi ids set, e.g. 1,2,3')
parser.add_option('--search', '-s', type='string', dest='keyword', action='store', help='search doujinshi by keyword')
parser.add_option('--search', '-s', type='string', dest='keyword', action='store',
help='search doujinshi by keyword')
parser.add_option('--tag', type='string', dest='tag', action='store', help='download doujinshi by tag')
parser.add_option('--artist', type='string', dest='artist', action='store', help='download doujinshi by artist')
parser.add_option('--character', type='string', dest='character', action='store',
help='download doujinshi by character')
parser.add_option('--parody', type='string', dest='parody', action='store', help='download doujinshi by parody')
parser.add_option('--group', type='string', dest='group', action='store', help='download doujinshi by group')
parser.add_option('--language', type='string', dest='language', action='store',
help='download doujinshi by language')
parser.add_option('--favorites', '-F', action='store_true', dest='favorites',
help='list or download your favorites.')
@ -58,6 +66,10 @@ def cmd_parser():
help='page number of search results')
parser.add_option('--max-page', type='int', dest='max_page', action='store', default=1,
help='The max page when recursive download tagged doujinshi')
parser.add_option('--page-range', type='string', dest='page_range', action='store',
help='page range of favorites. e.g. 1,2-5,14')
parser.add_option('--sorting', dest='sorting', action='store', default='date',
help='sorting of doujinshi (date / popular)', choices=['date', 'popular'])
# download options
parser.add_option('--output', '-o', type='string', dest='output_dir', action='store', default='',
@ -89,9 +101,14 @@ def cmd_parser():
# nhentai options
parser.add_option('--cookie', type='str', dest='cookie', action='store',
help='set cookie of nhentai to bypass Google recaptcha')
parser.add_option('--save-download-history', dest='is_save_download_history', action='store_true',
default=False, help='save downloaded doujinshis, whose will be skipped if you re-download them')
parser.add_option('--clean-download-history', action='store_true', default=False, dest='clean_download_history',
help='clean download history')
try:
sys.argv = list(map(lambda x: unicode(x.decode(sys.stdin.encoding)), sys.argv))
sys.argv = [unicode(i.decode(sys.stdin.encoding)) for i in sys.argv]
print()
except (NameError, TypeError):
pass
except UnicodeDecodeError:
@ -104,12 +121,20 @@ def cmd_parser():
exit(0)
if args.main_viewer and not args.id and not args.keyword and \
not args.tag and not args.favorites:
not args.tag and not args.artist and not args.character and \
not args.parody and not args.group and not args.language and not args.favorites:
generate_main_html()
exit(0)
if os.path.exists(os.path.join(constant.NHENTAI_HOME, 'cookie')):
with open(os.path.join(constant.NHENTAI_HOME, 'cookie'), 'r') as f:
if args.clean_download_history:
with DB() as db:
db.clean_all()
logger.info('Download history cleaned.')
exit(0)
if os.path.exists(constant.NHENTAI_COOKIE):
with open(constant.NHENTAI_COOKIE, 'r') as f:
constant.COOKIE = f.read()
if args.cookie:
@ -117,7 +142,7 @@ def cmd_parser():
if not os.path.exists(constant.NHENTAI_HOME):
os.mkdir(constant.NHENTAI_HOME)
with open(os.path.join(constant.NHENTAI_HOME, 'cookie'), 'w') as f:
with open(constant.NHENTAI_COOKIE, 'w') as f:
f.write(args.cookie)
except Exception as e:
logger.error('Cannot create NHENTAI_HOME: {}'.format(str(e)))
@ -126,8 +151,8 @@ def cmd_parser():
logger.info('Cookie saved.')
exit(0)
if os.path.exists(os.path.join(constant.NHENTAI_HOME, 'proxy')):
with open(os.path.join(constant.NHENTAI_HOME, 'proxy'), 'r') as f:
if os.path.exists(constant.NHENTAI_PROXY):
with open(constant.NHENTAI_PROXY, 'r') as f:
link = f.read()
constant.PROXY = {'http': link, 'https': link}
@ -140,8 +165,9 @@ def cmd_parser():
if proxy_url.scheme not in ('http', 'https'):
logger.error('Invalid protocol \'{0}\' of proxy, ignored'.format(proxy_url.scheme))
else:
with open(os.path.join(constant.NHENTAI_HOME, 'proxy'), 'w') as f:
with open(constant.NHENTAI_PROXY, 'w') as f:
f.write(args.proxy)
except Exception as e:
logger.error('Cannot create NHENTAI_HOME: {}'.format(str(e)))
exit(1)
@ -155,21 +181,23 @@ def cmd_parser():
exit(1)
if args.id:
_ = map(lambda id_: id_.strip(), args.id.split(','))
args.id = set(map(int, filter(lambda id_: id_.isdigit(), _)))
_ = [i.strip() for i in args.id.split(',')]
args.id = set(int(i) for i in _ if i.isdigit())
if args.file:
with open(args.file, 'r') as f:
_ = map(lambda id: id.strip(), f.readlines())
args.id = set(map(int, filter(lambda id_: id_.isdigit(), _)))
_ = [i.strip() for i in f.readlines()]
args.id = set(int(i) for i in _ if i.isdigit())
if (args.is_download or args.is_show) and not args.id and not args.keyword and \
not args.tag and not args.favorites:
not args.tag and not args.artist and not args.character and \
not args.parody and not args.group and not args.language and not args.favorites:
logger.critical('Doujinshi id(s) are required for downloading')
parser.print_help()
exit(1)
if not args.keyword and not args.id and not args.tag and not args.favorites:
if not args.keyword and not args.id and not args.tag and not args.artist and \
not args.character and not args.parody and not args.group and not args.language and not args.favorites:
parser.print_help()
exit(1)
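
As a quick sanity check of the new listcomp-based id parsing above (the raw string here is made up):

raw = '152503, 146134, abc'
stripped = [i.strip() for i in raw.split(',')]
ids = set(int(i) for i in stripped if i.isdigit())
# ids == {146134, 152503}; non-numeric entries are silently dropped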

nhentai/command.py

@ -6,12 +6,12 @@ import platform
import time
from nhentai.cmdline import cmd_parser, banner
from nhentai.parser import doujinshi_parser, search_parser, print_doujinshi, favorites_parser, tag_parser, login
from nhentai.parser import doujinshi_parser, search_parser, print_doujinshi, favorites_parser, tag_parser
from nhentai.doujinshi import Doujinshi
from nhentai.downloader import Downloader
from nhentai.logger import logger
from nhentai.constant import BASE_URL
from nhentai.utils import generate_html, generate_cbz, generate_main_html, check_cookie
from nhentai.utils import generate_html, generate_cbz, generate_main_html, check_cookie, signal_handler, DB
def main():
@ -19,7 +19,7 @@ def main():
options = cmd_parser()
logger.info('Using mirror: {0}'.format(BASE_URL))
from nhentai.constant import PROXY
from nhentai.constant import PROXY
# constant.PROXY will be changed after cmd_parser()
if PROXY != {}:
logger.info('Using proxy: {0}'.format(PROXY))
@ -27,6 +27,7 @@ def main():
# check your cookie
check_cookie()
doujinshis = []
doujinshi_ids = []
doujinshi_list = []
@ -34,46 +35,75 @@ def main():
if not options.is_download:
logger.warning('You do not specify --download option')
doujinshis = favorites_parser()
print_doujinshi(doujinshis)
if options.is_download and doujinshis:
doujinshi_ids = map(lambda d: d['id'], doujinshis)
doujinshis = favorites_parser(options.page_range)
elif options.tag:
doujinshis = tag_parser(options.tag, max_page=options.max_page)
print_doujinshi(doujinshis)
if options.is_download and doujinshis:
doujinshi_ids = map(lambda d: d['id'], doujinshis)
doujinshis = tag_parser(options.tag, sorting=options.sorting, max_page=options.max_page)
elif options.artist:
doujinshis = tag_parser(options.artist, max_page=options.max_page, index=1)
elif options.character:
doujinshis = tag_parser(options.character, max_page=options.max_page, index=2)
elif options.parody:
doujinshis = tag_parser(options.parody, max_page=options.max_page, index=3)
elif options.group:
doujinshis = tag_parser(options.group, max_page=options.max_page, index=4)
elif options.language:
doujinshis = tag_parser(options.language, max_page=options.max_page, index=5)
elif options.keyword:
doujinshis = search_parser(options.keyword, options.page)
print_doujinshi(doujinshis)
if options.is_download:
doujinshi_ids = map(lambda d: d['id'], doujinshis)
doujinshis = search_parser(options.keyword, sorting=options.sorting, page=options.page)
elif not doujinshi_ids:
doujinshi_ids = options.id
print_doujinshi(doujinshis)
if options.is_download and doujinshis:
doujinshi_ids = [i['id'] for i in doujinshis]
if options.is_save_download_history:
with DB() as db:
data = set(db.get_all())
doujinshi_ids = list(set(doujinshi_ids) - data)
if doujinshi_ids:
for id_ in doujinshi_ids:
for i, id_ in enumerate(doujinshi_ids):
if options.delay:
time.sleep(options.delay)
doujinshi_info = doujinshi_parser(id_)
doujinshi_list.append(Doujinshi(name_format=options.name_format, **doujinshi_info))
if doujinshi_info:
doujinshi_list.append(Doujinshi(name_format=options.name_format, **doujinshi_info))
if (i + 1) % 10 == 0:
logger.info('Progress: %d / %d' % (i + 1, len(doujinshi_ids)))
if not options.is_show:
downloader = Downloader(path=options.output_dir,
thread=options.threads, timeout=options.timeout, delay=options.delay)
downloader = Downloader(path=options.output_dir, size=options.threads,
timeout=options.timeout, delay=options.delay)
for doujinshi in doujinshi_list:
doujinshi.downloader = downloader
doujinshi.download()
if options.is_save_download_history:
with DB() as db:
db.add_one(doujinshi.id)
if not options.is_nohtml and not options.is_cbz:
generate_html(options.output_dir, doujinshi)
elif options.is_cbz:
generate_cbz(options.output_dir, doujinshi, options.rm_origin_dir)
if options.main_viewer:
generate_main_html(options.output_dir)
if not platform.system() == 'Windows':
logger.log(15, '🍻 All done.')
else:
@ -83,12 +113,8 @@ def main():
[doujinshi.show() for doujinshi in doujinshi_list]
def signal_handler(signal, frame):
logger.error('Ctrl-C signal received. Stopping...')
exit(1)
signal.signal(signal.SIGINT, signal_handler)
if __name__ == '__main__':
main()
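
Condensed, the new download-history handling in main() boils down to this sketch (the ids are illustrative; DB is the sqlite3 helper added in nhentai/utils.py below):

from nhentai.utils import DB

doujinshi_ids = ['152503', '146134']           # collected by the parsers above
with DB() as db:
    downloaded = set(db.get_all())             # history rows come back as text
doujinshi_ids = list(set(doujinshi_ids) - downloaded)  # skip what's already saved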

nhentai/constant.py

@ -17,7 +17,13 @@ __api_suspended_SEARCH_URL = '%s/api/galleries/search' % BASE_URL
DETAIL_URL = '%s/g' % BASE_URL
SEARCH_URL = '%s/search/' % BASE_URL
TAG_URL = '%s/tag' % BASE_URL
TAG_URL = ['%s/tag' % BASE_URL,
'%s/artist' % BASE_URL,
'%s/character' % BASE_URL,
'%s/parody' % BASE_URL,
'%s/group' % BASE_URL,
'%s/language' % BASE_URL]
TAG_API_URL = '%s/api/galleries/tagged' % BASE_URL
LOGIN_URL = '%s/login/' % BASE_URL
CHALLENGE_URL = '%s/challenge' % BASE_URL
@ -27,6 +33,9 @@ u = urlparse(BASE_URL)
IMAGE_URL = '%s://i.%s/galleries' % (u.scheme, u.hostname)
NHENTAI_HOME = os.path.join(os.getenv('HOME', tempfile.gettempdir()), '.nhentai')
NHENTAI_PROXY = os.path.join(NHENTAI_HOME, 'proxy')
NHENTAI_COOKIE = os.path.join(NHENTAI_HOME, 'cookie')
NHENTAI_HISTORY = os.path.join(NHENTAI_HOME, 'history.sqlite3')
PROXY = {}
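
TAG_URL is now a list so the new index argument of tag_parser (parser.py below) can pick the endpoint: 0 = tag, 1 = artist, 2 = character, 3 = parody, 4 = group, 5 = language. A quick illustration, assuming the default mirror:

from nhentai import constant

url = '%s/%s/%s?page=%d' % (constant.TAG_URL[1], 'henreader', '', 1)
# -> 'https://nhentai.net/artist/henreader/?page=1'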

nhentai/downloader.py

@ -1,10 +1,15 @@
# coding: utf-
from __future__ import unicode_literals, print_function
import multiprocessing
import signal
from future.builtins import str as text
import os
import requests
import threadpool
import time
import multiprocessing as mp
try:
from urllib.parse import urlparse
@ -13,29 +18,25 @@ except ImportError:
from nhentai.logger import logger
from nhentai.parser import request
from nhentai.utils import Singleton
from nhentai.utils import Singleton, signal_handler
requests.packages.urllib3.disable_warnings()
semaphore = mp.Semaphore()
class NhentaiImageNotExistException(Exception):
class NHentaiImageNotExistException(Exception):
pass
class Downloader(Singleton):
def __init__(self, path='', thread=1, timeout=30, delay=0):
if not isinstance(thread, (int, )) or thread < 1 or thread > 15:
raise ValueError('Invalid threads count')
def __init__(self, path='', size=5, timeout=30, delay=0):
self.size = size
self.path = str(path)
self.thread_count = thread
self.threads = []
self.thread_pool = None
self.timeout = timeout
self.delay = delay
def _download(self, url, folder='', filename='', retried=0):
def download_(self, url, folder='', filename='', retried=0):
if self.delay:
time.sleep(self.delay)
logger.info('Starting to download {0} ...'.format(url))
@ -54,9 +55,9 @@ class Downloader(Singleton):
try:
response = request('get', url, stream=True, timeout=self.timeout)
if response.status_code != 200:
raise NhentaiImageNotExistException
raise NHentaiImageNotExistException
except NhentaiImageNotExistException as e:
except NHentaiImageNotExistException as e:
raise e
except Exception as e:
@ -78,27 +79,37 @@ class Downloader(Singleton):
except (requests.HTTPError, requests.Timeout) as e:
if retried < 3:
logger.warning('Warning: {0}, retrying({1}) ...'.format(str(e), retried))
return 0, self._download(url=url, folder=folder, filename=filename, retried=retried+1)
return 0, self.download_(url=url, folder=folder, filename=filename, retried=retried+1)
else:
return 0, None
except NhentaiImageNotExistException as e:
except NHentaiImageNotExistException as e:
os.remove(os.path.join(folder, base_filename.zfill(3) + extension))
return -1, url
except Exception as e:
import traceback
traceback.print_stack()
logger.critical(str(e))
return 0, None
except KeyboardInterrupt:
return -3, None
return 1, url
def _download_callback(self, request, result):
def _download_callback(self, result):
result, data = result
if result == 0:
logger.warning('fatal errors occurred, ignored')
# exit(1)
elif result == -1:
logger.warning('url {} return status code 404'.format(data))
elif result == -2:
logger.warning('Ctrl-C pressed, exiting sub processes ...')
elif result == -3:
# workers wont be run, just pass
pass
else:
logger.log(15, '{0} downloaded successfully'.format(data))
@ -115,14 +126,34 @@ class Downloader(Singleton):
os.makedirs(folder)
except EnvironmentError as e:
logger.critical('{0}'.format(str(e)))
exit(1)
else:
logger.warn('Path \'{0}\' already exist.'.format(folder))
queue = [([url], {'folder': folder}) for url in queue]
queue = [(self, url, folder) for url in queue]
self.thread_pool = threadpool.ThreadPool(self.thread_count)
requests_ = threadpool.makeRequests(self._download, queue, self._download_callback)
[self.thread_pool.putRequest(req) for req in requests_]
pool = multiprocessing.Pool(self.size, init_worker)
self.thread_pool.wait()
for item in queue:
pool.apply_async(download_wrapper, args=item, callback=self._download_callback)
pool.close()
pool.join()
def download_wrapper(obj, url, folder=''):
if semaphore.get_value():
return Downloader.download_(obj, url=url, folder=folder)
else:
return -3, None
def init_worker():
signal.signal(signal.SIGINT, subprocess_signal)
def subprocess_signal(signal, frame):
if semaphore.acquire(timeout=1):
logger.warning('Ctrl-C pressed, exiting sub processes ...')
raise KeyboardInterrupt
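
The threadpool dependency is replaced by a process pool whose workers install a SIGINT handler at start-up. A stripped-down sketch of that Pool/initializer pattern (the worker here is a stand-in for download_wrapper, and it simply ignores SIGINT instead of using the semaphore):

import multiprocessing
import signal

def init_worker():
    signal.signal(signal.SIGINT, signal.SIG_IGN)  # let the parent handle Ctrl-C

def work(x):  # stand-in for download_wrapper
    return x * x

if __name__ == '__main__':
    pool = multiprocessing.Pool(4, init_worker)
    jobs = [pool.apply_async(work, (i,)) for i in range(8)]
    pool.close()
    pool.join()
    print(sorted(j.get() for j in jobs))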

nhentai/parser.py

@ -1,10 +1,9 @@
# coding: utf-8
from __future__ import unicode_literals, print_function
import sys
import os
import re
import threadpool
import requests
import time
from bs4 import BeautifulSoup
from tabulate import tabulate
@ -65,7 +64,7 @@ def _get_title_and_id(response):
return result
def favorites_parser():
def favorites_parser(page_range=''):
result = []
html = BeautifulSoup(request('get', constant.FAV_URL).content, 'html.parser')
count = html.find('span', attrs={'class': 'count'})
@ -89,7 +88,12 @@ def favorites_parser():
if os.getenv('DEBUG'):
pages = 1
for page in range(1, pages + 1):
page_range_list = range(1, pages + 1)
if page_range:
logger.info('page range is {0}'.format(page_range))
page_range_list = page_range_parser(page_range, pages)
for page in page_range_list:
try:
logger.info('Getting doujinshi ids of page %d' % page)
resp = request('get', constant.FAV_URL + '?page=%d' % page).content
@ -101,6 +105,32 @@ def favorites_parser():
return result
def page_range_parser(page_range, max_page_num):
pages = set()
ranges = str.split(page_range, ',')
for range_str in ranges:
idx = range_str.find('-')
if idx == -1:
try:
page = int(range_str)
if page <= max_page_num:
pages.add(page)
except ValueError:
logger.error('page range({0}) is not valid'.format(page_range))
else:
try:
left = int(range_str[:idx])
right = int(range_str[idx+1:])
if right > max_page_num:
right = max_page_num
for page in range(left, right+1):
pages.add(page)
except ValueError:
logger.error('page range({0}) is not valid'.format(page_range))
return list(pages)
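
Expected behaviour of page_range_parser, assuming 20 pages of favorites (the list order is unspecified because it comes from a set):

sorted(page_range_parser('1,2-5,14', 20))  # [1, 2, 3, 4, 5, 14]
sorted(page_range_parser('18-25', 20))     # [18, 19, 20], right edge clamped
page_range_parser('abc', 20)               # [], logs "page range(abc) is not valid"
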
def doujinshi_parser(id_):
if not isinstance(id_, (int,)) and (isinstance(id_, (str,)) and not id_.isdigit()):
raise Exception('Doujinshi id({0}) is not valid'.format(id_))
@ -121,8 +151,8 @@ def doujinshi_parser(id_):
return doujinshi_parser(str(id_))
except Exception as e:
logger.critical(str(e))
raise SystemExit
logger.warn('Error: {}, ignored'.format(str(e)))
return None
html = BeautifulSoup(response, 'html.parser')
doujinshi_info = html.find('div', attrs={'id': 'info'})
@ -134,7 +164,7 @@ def doujinshi_parser(id_):
doujinshi['subtitle'] = subtitle.text if subtitle else ''
doujinshi_cover = html.find('div', attrs={'id': 'cover'})
img_id = re.search('/galleries/([\d]+)/cover\.(jpg|png)$', doujinshi_cover.a.img.attrs['data-src'])
img_id = re.search('/galleries/([\d]+)/cover\.(jpg|png|gif)$', doujinshi_cover.a.img.attrs['data-src'])
ext = []
for i in html.find_all('div', attrs={'class': 'thumb-container'}):
@ -158,7 +188,7 @@ def doujinshi_parser(id_):
# gain information of the doujinshi
information_fields = doujinshi_info.find_all('div', attrs={'class': 'field-name'})
needed_fields = ['Characters', 'Artists', 'Languages', 'Tags']
needed_fields = ['Characters', 'Artists', 'Languages', 'Tags', 'Parodies', 'Groups', 'Categories']
for field in information_fields:
field_name = field.contents[0].strip().strip(':')
if field_name in needed_fields:
@ -166,17 +196,15 @@ def doujinshi_parser(id_):
field.find_all('a', attrs={'class': 'tag'})]
doujinshi[field_name.lower()] = ', '.join(data)
time_field = doujinshi_info.find('time')
if time_field.has_attr('datetime'):
doujinshi['date'] = time_field['datetime']
return doujinshi
def search_parser(keyword, page):
def search_parser(keyword, sorting='date', page=1):
logger.debug('Searching doujinshis of keyword {0}'.format(keyword))
try:
response = request('get', url=constant.SEARCH_URL, params={'q': keyword, 'page': page}).content
except requests.ConnectionError as e:
logger.critical(e)
logger.warn('If you are in China, please configure the proxy to fu*k GFW.')
raise SystemExit
response = request('get', url=constant.SEARCH_URL, params={'q': keyword, 'page': page, 'sort': sorting}).content
result = _get_title_and_id(response)
if not result:
@ -194,16 +222,33 @@ def print_doujinshi(doujinshi_list):
tabulate(tabular_data=doujinshi_list, headers=headers, tablefmt='rst'))
def tag_parser(tag_name, max_page=1):
def tag_parser(tag_name, sorting='date', max_page=1, index=0):
result = []
tag_name = tag_name.lower()
tag_name = tag_name.replace(' ', '-')
if ',' in tag_name:
tag_name = [i.strip().replace(' ', '-') for i in tag_name.split(',')]
else:
tag_name = tag_name.strip().replace(' ', '-')
if sorting == 'date':
sorting = ''
for p in range(1, max_page + 1):
logger.debug('Fetching page {0} for doujinshi with tag \'{1}\''.format(p, tag_name))
response = request('get', url='%s/%s/?page=%d' % (constant.TAG_URL, tag_name, p)).content
if sys.version_info >= (3, 0, 0):
unicode_ = str
else:
unicode_ = unicode
if isinstance(tag_name, (str, unicode_)):
logger.debug('Fetching page {0} for doujinshi with tag \'{1}\''.format(p, tag_name))
response = request('get', url='%s/%s/%s?page=%d' % (constant.TAG_URL[index], tag_name, sorting, p)).content
result += _get_title_and_id(response)
else:
for i in tag_name:
logger.debug('Fetching page {0} for doujinshi with tag \'{1}\''.format(p, i))
response = request('get',
url='%s/%s/%s?page=%d' % (constant.TAG_URL[index], i, sorting, p)).content
result += _get_title_and_id(response)
result += _get_title_and_id(response)
if not result:
logger.error('Cannot find doujinshi id of tag \'{0}\''.format(tag_name))
return
@ -214,13 +259,13 @@ def tag_parser(tag_name, max_page=1):
return result
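
In other words, a comma-separated value now fans out into one request per tag, roughly:

tag_parser('lolicon, teasing', sorting='popular', max_page=1, index=0)
# fetches /tag/lolicon/popular?page=1 and /tag/teasing/popular?page=1
tag_parser('full color', max_page=1, index=0)
# a single tag: spaces become hyphens -> /tag/full-color/?page=1
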
def __api_suspended_search_parser(keyword, page):
def __api_suspended_search_parser(keyword, sorting, page):
logger.debug('Searching doujinshis using keywords {0}'.format(keyword))
result = []
i = 0
while i < 5:
try:
response = request('get', url=constant.SEARCH_URL, params={'query': keyword, 'page': page}).json()
response = request('get', url=constant.SEARCH_URL, params={'query': keyword, 'page': page, 'sort': sorting}).json()
except Exception as e:
i += 1
if not i < 5:
@ -244,10 +289,10 @@ def __api_suspended_search_parser(keyword, page):
return result
def __api_suspended_tag_parser(tag_id, max_page=1):
def __api_suspended_tag_parser(tag_id, sorting, max_page=1):
logger.info('Searching for doujinshi with tag id {0}'.format(tag_id))
result = []
response = request('get', url=constant.TAG_API_URL, params={'sort': 'popular', 'tag_id': tag_id}).json()
response = request('get', url=constant.TAG_API_URL, params={'sort': sorting, 'tag_id': tag_id}).json()
page = max_page if max_page <= response['num_pages'] else int(response['num_pages'])
for i in range(1, page + 1):
@ -255,7 +300,7 @@ def __api_suspended_tag_parser(tag_id, max_page=1):
if page != 1:
response = request('get', url=constant.TAG_API_URL,
params={'sort': 'popular', 'tag_id': tag_id}).json()
params={'sort': sorting, 'tag_id': tag_id}).json()
for row in response['result']:
title = row['title']['english']
title = title[:85] + '..' if len(title) > 85 else title
@ -291,11 +336,11 @@ def __api_suspended_doujinshi_parser(id_):
doujinshi['name'] = response['title']['english']
doujinshi['subtitle'] = response['title']['japanese']
doujinshi['img_id'] = response['media_id']
doujinshi['ext'] = ''.join(map(lambda s: s['t'], response['images']['pages']))
doujinshi['ext'] = ''.join([i['t'] for i in response['images']['pages']])
doujinshi['pages'] = len(response['images']['pages'])
# gain information of the doujinshi
needed_fields = ['character', 'artist', 'language', 'tag']
needed_fields = ['character', 'artist', 'language', 'tag', 'parody', 'group', 'category']
for tag in response['tags']:
tag_type = tag['type']
if tag_type in needed_fields:

nhentai/serializer.py Normal file (+80 lines)

@ -0,0 +1,80 @@
# coding: utf-8
import json
import os
def serialize(doujinshi, dir):
metadata = {'title': doujinshi.name,
'subtitle': doujinshi.info.subtitle}
if doujinshi.info.date:
metadata['upload_date'] = doujinshi.info.date
if doujinshi.info.parodies:
metadata['parody'] = [i.strip() for i in doujinshi.info.parodies.split(',')]
if doujinshi.info.characters:
metadata['character'] = [i.strip() for i in doujinshi.info.characters.split(',')]
if doujinshi.info.tags:
metadata['tag'] = [i.strip() for i in doujinshi.info.tags.split(',')]
if doujinshi.info.artists:
metadata['artist'] = [i.strip() for i in doujinshi.info.artists.split(',')]
if doujinshi.info.groups:
metadata['group'] = [i.strip() for i in doujinshi.info.groups.split(',')]
if doujinshi.info.languages:
metadata['language'] = [i.strip() for i in doujinshi.info.languages.split(',')]
metadata['category'] = doujinshi.info.categories
metadata['URL'] = doujinshi.url
metadata['Pages'] = doujinshi.pages
with open(os.path.join(dir, 'metadata.json'), 'w') as f:
json.dump(metadata, f, separators=','':')
def merge_json():
lst = []
output_dir = "./"
os.chdir(output_dir)
doujinshi_dirs = next(os.walk('.'))[1]
for folder in doujinshi_dirs:
files = os.listdir(folder)
if 'metadata.json' not in files:
continue
data_folder = output_dir + folder + '/' + 'metadata.json'
json_file = open(data_folder, 'r')
json_dict = json.load(json_file)
json_dict['Folder'] = folder
lst.append(json_dict)
return lst
def serialize_unique(lst):
dictionary = {}
parody = []
character = []
tag = []
artist = []
group = []
for dic in lst:
if 'parody' in dic:
parody.extend([i for i in dic['parody']])
if 'character' in dic:
character.extend([i for i in dic['character']])
if 'tag' in dic:
tag.extend([i for i in dic['tag']])
if 'artist' in dic:
artist.extend([i for i in dic['artist']])
if 'group' in dic:
group.extend([i for i in dic['group']])
dictionary['parody'] = list(set(parody))
dictionary['character'] = list(set(character))
dictionary['tag'] = list(set(tag))
dictionary['artist'] = list(set(artist))
dictionary['group'] = list(set(group))
return dictionary
def set_js_database():
with open('data.js', 'w') as f:
indexed_json = merge_json()
unique_json = json.dumps(serialize_unique(indexed_json), separators=','':')
indexed_json = json.dumps(indexed_json, separators=','':')
f.write('var data = ' + indexed_json)
f.write(';\nvar tags = ' + unique_json)
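
serialize_unique() just deduplicates each tag family across folders. With made-up records shaped like the metadata.json files serialize() writes:

lst = [{'tag': ['loli', 'teasing'], 'artist': ['tamano kedama'], 'Folder': 'dir1'},
       {'tag': ['loli'], 'group': ['clesta'], 'Folder': 'dir2'}]
serialize_unique(lst)
# {'parody': [], 'character': [], 'tag': ['loli', 'teasing'],
#  'artist': ['tamano kedama'], 'group': ['clesta']}  (list order may vary)

set_js_database() then dumps the merged records and this summary into data.js as the data and tags globals that the new main.js reads.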

nhentai/utils.py

@ -8,9 +8,11 @@ import string
import zipfile
import shutil
import requests
import sqlite3
from nhentai import constant
from nhentai.logger import logger
from nhentai.serializer import serialize, set_js_database
def request(method, url, **kwargs):
@ -79,19 +81,19 @@ def generate_html(output_dir='.', doujinshi_obj=None):
image_html += '<img src="{0}" class="image-item"/>\n'\
.format(image)
html = readfile('viewer/index.html')
css = readfile('viewer/styles.css')
js = readfile('viewer/scripts.js')
if doujinshi_obj is not None:
title = doujinshi_obj.name
serialize(doujinshi_obj, doujinshi_dir)
name = doujinshi_obj.name
if sys.version_info < (3, 0):
title = title.encode('utf-8')
name = doujinshi_obj.name.encode('utf-8')
else:
title = 'nHentai HTML Viewer'
name = {'title': 'nHentai HTML Viewer'}
data = html.format(TITLE=title, IMAGES=image_html, SCRIPTS=js, STYLES=css)
data = html.format(TITLE=name, IMAGES=image_html, SCRIPTS=js, STYLES=css)
try:
if sys.version_info < (3, 0):
with open(os.path.join(doujinshi_dir, 'index.html'), 'w') as f:
@ -112,10 +114,12 @@ def generate_main_html(output_dir='./'):
Default output folder will be the CLI path.
"""
count = 0
image_html = ''
main = readfile('viewer/main.html')
css = readfile('viewer/main.css')
js = readfile('viewer/main.js')
element = '\n\
<div class="gallery-favorite">\n\
<div class="gallery">\n\
@ -130,12 +134,10 @@ def generate_main_html(output_dir='./'):
doujinshi_dirs = next(os.walk('.'))[1]
for folder in doujinshi_dirs:
files = os.listdir(folder)
files.sort()
if 'index.html' in files:
count += 1
logger.info('Add doujinshi \'{}\''.format(folder))
else:
continue
@ -147,18 +149,19 @@ def generate_main_html(output_dir='./'):
title = 'nHentai HTML Viewer'
image_html += element.format(FOLDER=folder, IMAGE=image, TITLE=title)
if image_html == '':
logger.warning('None index.html found, --gen-main paused.')
return
try:
data = main.format(STYLES=css, COUNT=count, PICTURE=image_html)
data = main.format(STYLES=css, SCRIPTS=js, PICTURE=image_html)
if sys.version_info < (3, 0):
with open('./main.html', 'w') as f:
f.write(data)
else:
with open('./main.html', 'wb') as f:
f.write(data.encode('utf-8'))
shutil.copy(os.path.dirname(__file__)+'/viewer/logo.png', './')
set_js_database()
logger.log(
15, 'Main Viewer has been write to \'{0}main.html\''.format(output_dir))
except Exception as e:
@ -205,5 +208,37 @@ an invalid filename.
filename = filename[:100] + '...]'
# Remove [] from filename
filename = filename.replace('[]', '')
filename = filename.replace('[]', '').strip()
return filename
def signal_handler(signal, frame):
logger.error('Ctrl-C signal received. Stopping...')
exit(1)
class DB(object):
conn = None
cur = None
def __enter__(self):
self.conn = sqlite3.connect(constant.NHENTAI_HISTORY)
self.cur = self.conn.cursor()
self.cur.execute('CREATE TABLE IF NOT EXISTS download_history (id text)')
self.conn.commit()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.conn.close()
def clean_all(self):
self.cur.execute('DELETE FROM download_history WHERE 1')
self.conn.commit()
def add_one(self, data):
self.cur.execute('INSERT INTO download_history VALUES (?)', [data])
self.conn.commit()
def get_all(self):
data = self.cur.execute('SELECT id FROM download_history')
return [i[0] for i in data]
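
Used in isolation (cmdline.py and command.py above show the real call sites; the history file lives at constant.NHENTAI_HISTORY):

with DB() as db:
    db.add_one('152503')   # recorded after each successful download
    print(db.get_all())    # ['152503']
    db.clean_all()         # what --clean-download-history runs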

nhentai/viewer/index.html

@ -2,6 +2,7 @@
<html>
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, viewport-fit=cover" />
<title>{TITLE}</title>
<style>
{STYLES}

nhentai/viewer/logo.png Normal file (binary, 10 KiB; not shown)

nhentai/viewer/main.css

@ -1,79 +1,14 @@
/*! normalize.css v5.0.0 | MIT License | github.com/necolas/normalize.css */
/* Original from https://static.nhentai.net/css/main_style.9bb9b703e601.css */
html {
font-family: sans-serif;
line-height: 1.15;
-ms-text-size-adjust: 100%;
-webkit-text-size-adjust: 100%
}
body {
margin: 0
}
h1 {
font-size: 2em;
margin: .67em 0
}
a {
background-color: transparent;
-webkit-text-decoration-skip: objects
}
a:active,a:hover {
outline-width: 0
}
abbr[title] {
border-bottom: none;
text-decoration: underline;
text-decoration: underline dotted
}
b,strong {
font-weight: inherit
}
b,strong {
font-weight: bolder
}
code,kbd,samp {
font-family: monospace,monospace;
font-size: 1em
}
small {
font-size: 80%
}
sub,sup {
font-size: 75%;
line-height: 0;
position: relative;
vertical-align: baseline
}
sub {
bottom: -.25em
}
sup {
top: -.5em
}
img {
border-style: none
}
label {
display: block;
font-weight: 700;
text-align: justify;
white-space: nowrap
}
html {
box-sizing: border-box
}
@ -82,10 +17,6 @@ html {
box-sizing: inherit
}
h1,h2,h3,h4,h5,h6 {
font-weight: 700
}
body,html {
font-family: 'Noto Sans',sans-serif;
font-size: 14px;
@ -104,25 +35,6 @@ a {
color: #34495e
}
a:hover {
text-decoration: none;
color: #ed2553
}
a.count {
color: #999
}
a.bold {
font-weight: 700
}
code {
color: #ed2553;
border: 1px solid #fbd3dd;
background-color: #fef0f3
}
blockquote {
border: 0
}
@ -130,15 +42,15 @@ blockquote {
.container {
display: block;
clear: both;
margin-left: auto;
margin-right: auto;
margin-bottom: 10px;
margin-top: 10px;
padding: 10px;
margin-left: 15rem;
margin-right: 0.5rem;
margin-bottom: 5px;
margin-top: 5px;
padding: 4px;
border-radius: 9px;
background-color: #ecf0f1;
width: 100%;
max-width: 1200px
width: 100% - 15rem;
max-width: 1500px
}
.gallery,.gallery-favorite,.thumb-container {
@ -156,7 +68,6 @@ blockquote {
.gallery,.gallery-favorite,.thumb-container {
width:19%;
margin: 3px;
margin-bottom: 8px
}
}
@ -222,6 +133,186 @@ blockquote {
width: 100%
}
.sidenav {
height: 100%;
width: 15rem;
position: fixed;
z-index: 1;
top: 0;
left: 0;
background-color: #0d0d0d;
overflow: hidden;
padding-top: 20px;
-webkit-touch-callout: none; /* iOS Safari */
-webkit-user-select: none; /* Safari */
-khtml-user-select: none; /* Konqueror HTML */
-moz-user-select: none; /* Old versions of Firefox */
ms-user-select: none; /* Internet Explorer/Edge */
user-select: none;
}
.sidenav a {
background-color: #eee;
padding: 5px 0px 5px 15px;
text-decoration: none;
font-size: 15px;
color: #0d0d0d9;
display: block;
text-align: left;
}
.sidenav img {
width:100%;
padding: 0px 5px 0px 5px;
}
.sidenav h1 {
font-size: 1.5em;
margin: 0px 0px 10px;
}
.sidenav a:hover {
color: white;
background-color: #EC2754;
}
.accordion {
font-weight: bold;
background-color: #eee;
color: #444;
padding: 10px 0px 5px 8px;
width: 100%;
border: none;
text-align: left;
outline: none;
font-size: 15px;
transition: 0.4s;
cursor:pointer;
}
.accordion:hover {
background-color: #ddd;
}
.accordion.active{
background-color:#ddd;
}
.nav-btn {
font-weight: bold;
background-color: #eee;
color: #444;
padding: 8px 8px 5px 9px;
width: 100%;
border: none;
text-align: left;
outline: none;
font-size: 15px;
}
.hidden {
display:none;
}
.nav-btn a{
font-weight: normal;
padding-right: 10px;
border-radius: 15px;
cursor: crosshair
}
.options {
display:block;
padding: 0px 0px 0px 0px;
background-color: #eee;
max-height: 0;
overflow: hidden;
transition: max-height 0.2s ease-out;
cursor:pointer;
}
.search{
background-color: #eee;
padding-right:40px;
white-space: nowrap;
padding-top: 5px;
height:43px;
}
.search input{
border-top-right-radius:10px;
padding-top:0;
padding-bottom:0;
font-size:1em;
width:100%;
height:38px;
vertical-align:top;
}
.btn{
border-top-left-radius:10px;
color:#fff;
font-size:100%;
padding: 8px;
width:38px;
background-color:#ed2553;
}
#tags{
text-align:left;
display: flex;
width:15rem;
justify-content: start;
margin: 2px 2px 2px 0px;
flex-wrap: wrap;
}
.btn-2{
font-weight:700;
padding-right:0.5rem;
padding-left:0.5rem;
color:#fff;
border:0;
font-size:100%;
height:1.25rem;
outline: 0;
border-radius: 0.3rem;
cursor: pointer;
margin:0.15rem;
transition: all 1s linear;
}
.btn-2#parody{
background-color: red;
}
.btn-2#character{
background-color: blue;
}
.btn-2#tag{
background-color: green;
}
.btn-2#artist{
background-color: fuchsia;
}
.btn-2#group{
background-color: teal;
}
.btn-2.hover{
filter: saturate(20%)
}
input,input:focus{
border:none;
outline:0;
}
html.theme-black,html.theme-black body {
color: #d9d9d9;
background-color: #0d0d0d
@ -231,14 +322,6 @@ html.theme-black #thumbnail-container,html.theme-black .container {
background-color: #1f1f1f
}
html.theme-black #thumbnail-container .lazyload,html.theme-black .lazyload {
background-color: #262626
}
html.theme-black #thumbnail-container .lazyload-loading,html.theme-black .lazyload-loading {
background-color: #2e2e2e
}
html.theme-black .gallery:hover .caption {
box-shadow: 0 10px 20px rgba(0,0,0,.5)
}
@ -246,10 +329,4 @@ html.theme-black .gallery:hover .caption {
html.theme-black .caption {
background-color: #404040;
color: #d9d9d9
}
html.theme-black code {
color: #ed2553;
border: none;
background-color: #292929
}

nhentai/viewer/main.html

@ -5,7 +5,8 @@
<meta charset="utf-8" />
<meta name="theme-color" content="#1f1f1f" />
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, viewport-fit=cover" />
<title>nHentai &raquo; Viewer</title>
<title>nHentai Viewer</title>
<script type="text/javascript" src="data.js"></script>
<!-- <link rel="stylesheet" href="./main.css"> -->
<style>
{STYLES}
@ -14,9 +15,27 @@
<body>
<div id="content">
<h1>Main Folder({COUNT})</h1>
<nav class="sidenav">
<img src="logo.png">
<h1>nHentai Viewer</h1>
<button class="accordion">Language</button>
<div class="options" id="language">
<a>English</a>
<a>Japanese</a>
<a>Chinese</a>
</div>
<button class="accordion">Category</button>
<div class="options" id ="category">
<a>Doujinshi</a>
<a>Manga</a>
</div>
<button class="nav-btn hidden">Filters</button>
<div class="search">
<input autocomplete="off" type="search" id="tagfilter" name="q" value="" autocapitalize="none" required="">
<svg class="btn" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="white" d="M505 442.7L405.3 343c-4.5-4.5-10.6-7-17-7H372c27.6-35.3 44-79.7 44-128C416 93.1 322.9 0 208 0S0 93.1 0 208s93.1 208 208 208c48.3 0 92.7-16.4 128-44v16.3c0 6.4 2.5 12.5 7 17l99.7 99.7c9.4 9.4 24.6 9.4 33.9 0l28.3-28.3c9.4-9.4 9.4-24.6.1-34zM208 336c-70.7 0-128-57.2-128-128 0-70.7 57.2-128 128-128 70.7 0 128 57.2 128 128 0 70.7-57.2 128-128 128z"/></svg>
<div id="tags">
</div>
</nav>
<div class="container" id="favcontainer">
{PICTURE}
@ -24,7 +43,9 @@
</div> <!-- container -->
</div>
<script>
{SCRIPTS}
</script>
</body>
</html>

nhentai/viewer/main.js Normal file (+177 lines)

@ -0,0 +1,177 @@
//------------------------------------navbar script------------------------------------
var menu = document.getElementsByClassName("accordion");
for (var i = 0; i < menu.length; i++) {
menu[i].addEventListener("click", function() {
var panel = this.nextElementSibling;
if (panel.style.maxHeight) {
this.classList.toggle("active");
panel.style.maxHeight = null;
} else {
panel.style.maxHeight = panel.scrollHeight + "px";
this.classList.toggle("active");
}
});
}
var language = document.getElementById("language").children;
for (var i = 0; i < language.length; i++){
language[i].addEventListener("click", function() {
toggler = document.getElementById("language")
toggler.style.maxHeight = null;
document.getElementsByClassName("accordion")[0].classList.toggle("active");
filter_maker(this.innerText, "language");
});
}
var category = document.getElementById("category").children;
for (var i = 0; i < category.length; i++){
category[i].addEventListener("click", function() {
document.getElementById("category").style.maxHeight = null;
document.getElementsByClassName("accordion")[1].classList.toggle("active");
filter_maker(this.innerText, "category");
});
}
//-----------------------------------------------------------------------------------
//----------------------------------Tags Script--------------------------------------
tag_maker(tags);
var tag = document.getElementsByClassName("btn-2");
for (var i = 0; i < tag.length; i++){
tag[i].addEventListener("click", function() {
filter_maker(this.innerText, this.id);
});
}
var input = document.getElementById("tagfilter");
input.addEventListener("input", function() {
var tags = document.querySelectorAll(".btn-2");
if (this.value.length > 0) {
for (var i = 0; i < tags.length; i++) {
var tag = tags[i];
var nome = tag.innerText;
var exp = new RegExp(this.value, "i");;
if (exp.test(nome)) {
tag.classList.remove("hidden");
}
else {
tag.classList.add("hidden");
}
}
} else {
for (var i = 0; i < tags.length; i++) {
var tag = tags[i];
tag.classList.add('hidden');
}
}
});
input.addEventListener('keypress', function (e) {
enter_search(e, this.value);
});
//-----------------------------------------------------------------------------------
//------------------------------------Functions--------------------------------------
function enter_search(e, input){
var count = 0;
var key = e.which || e.keyCode;
if (key === 13 && input.length > 0) {
var all_tags = document.getElementById("tags").children;
for(i = 0; i < all_tags.length; i++){
if (!all_tags[i].classList.contains("hidden")){
count++;
var tag_name = all_tags[i].innerText;
var tag_id = all_tags[i].id;
if (count>1){break}
}
}
if (count == 1){
filter_maker(tag_name, tag_id);
}
}
}
function filter_maker(text, class_value){
var check = filter_checker(text);
var nav_btn = document.getElementsByClassName("nav-btn")[0];
if (nav_btn.classList.contains("hidden")){
nav_btn.classList.toggle("hidden");
}
if (check == true){
var node = document.createElement("a");
var textnode = document.createTextNode(text);
node.appendChild(textnode);
node.classList.add(class_value);
nav_btn.appendChild(node);
filter_searcher();
}
}
function filter_searcher(){
var verifier = null;
var tags_filter = [];
var doujinshi_id = [];
var filter_tag = document.getElementsByClassName("nav-btn")[0].children;
filter_tag[filter_tag.length-1].addEventListener("click", function() {
this.remove();
try{
filter_searcher();
}
catch{
var gallery = document.getElementsByClassName("gallery-favorite");
for (var i = 0; i < gallery.length; i++){
gallery[i].classList.remove("hidden");
}
}
});
for (var i=0; i < filter_tag.length; i++){
var fclass = filter_tag[i].className;
var fname = filter_tag[i].innerText.toLowerCase();
tags_filter.push([fclass, fname])
}
for (var i=0; i < data.length; i++){
for (var j=0; j < tags_filter.length; j++){
try{
if(data[i][tags_filter[j][0]].includes(tags_filter[j][1])){
verifier = true;
}
else{
verifier = false;
break
}
}
catch{
verifier = false;
break
}
}
if (verifier){doujinshi_id.push(data[i].Folder);}
}
var gallery = document.getElementsByClassName("gallery-favorite");
for (var i = 0; i < gallery.length; i++){
gtext = gallery [i].children[0].children[0].children[1].innerText;
if(doujinshi_id.includes(gtext)){
gallery[i].classList.remove("hidden");
}
else{
gallery[i].classList.add("hidden");
}
}
}
function filter_checker(text){
var filter_tags = document.getElementsByClassName("nav-btn")[0].children;
if (filter_tags == null){return true;}
for (var i=0; i < filter_tags.length; i++){
if (filter_tags[i].innerText == text){return false;}
}
return true;
}
function tag_maker(data){
for (i in data){
for (j in data[i]){
var node = document.createElement("button");
var textnode = document.createTextNode(data[i][j]);
node.appendChild(textnode);
node.classList.add("btn-2");
node.setAttribute('id', i);
node.classList.add("hidden");
document.getElementById("tags").appendChild(node);
}
}
}

nhentai/viewer/scripts.js

@ -1,81 +1,85 @@
const pages = Array.from(document.querySelectorAll('img.image-item'));
let currentPage = 0;
function changePage(pageNum) {
const previous = pages[currentPage];
const current = pages[pageNum];
if (current == null) {
return;
}
previous.classList.remove('current');
current.classList.add('current');
currentPage = pageNum;
const display = document.getElementById('dest');
display.style.backgroundImage = `url("${current.src}")`;
document.getElementById('page-num')
.innerText = [
(pageNum + 1).toLocaleString(),
pages.length.toLocaleString()
].join('\u200a/\u200a');
}
changePage(0);
document.getElementById('list').onclick = event => {
if (pages.includes(event.target)) {
changePage(pages.indexOf(event.target));
}
};
document.getElementById('image-container').onclick = event => {
const width = document.getElementById('image-container').clientWidth;
const clickPos = event.clientX / width;
if (clickPos < 0.5) {
changePage(currentPage - 1);
} else {
changePage(currentPage + 1);
}
};
document.onkeypress = event => {
switch (event.key.toLowerCase()) {
// Previous Image
case 'w':
case 'a':
changePage(currentPage - 1);
break;
// Return to previous page
case 'q':
window.history.go(-1);
break;
// Next Image
case ' ':
case 's':
case 'd':
changePage(currentPage + 1);
break;
}// remove arrow cause it won't work
};
document.onkeydown = event =>{
switch (event.keyCode) {
case 37: //left
changePage(currentPage - 1);
break;
case 38: //up
changePage(currentPage - 1);
break;
case 39: //right
changePage(currentPage + 1);
break;
case 40: //down
changePage(currentPage + 1);
break;
}
const pages = Array.from(document.querySelectorAll('img.image-item'));
let currentPage = 0;
function changePage(pageNum) {
const previous = pages[currentPage];
const current = pages[pageNum];
if (current == null) {
return;
}
previous.classList.remove('current');
current.classList.add('current');
currentPage = pageNum;
const display = document.getElementById('dest');
display.style.backgroundImage = `url("${current.src}")`;
scroll(0,0)
document.getElementById('page-num')
.innerText = [
(pageNum + 1).toLocaleString(),
pages.length.toLocaleString()
].join('\u200a/\u200a');
}
changePage(0);
document.getElementById('list').onclick = event => {
if (pages.includes(event.target)) {
changePage(pages.indexOf(event.target));
}
};
document.getElementById('image-container').onclick = event => {
const width = document.getElementById('image-container').clientWidth;
const clickPos = event.clientX / width;
if (clickPos < 0.5) {
changePage(currentPage - 1);
} else {
changePage(currentPage + 1);
}
};
document.onkeypress = event => {
switch (event.key.toLowerCase()) {
// Previous Image
case 'w':
scrollBy(0, -40);
break;
case 'a':
changePage(currentPage - 1);
break;
// Return to previous page
case 'q':
window.history.go(-1);
break;
// Next Image
case ' ':
case 's':
scrollBy(0, 40);
break;
case 'd':
changePage(currentPage + 1);
break;
}// remove arrow cause it won't work
};
document.onkeydown = event =>{
switch (event.keyCode) {
case 37: //left
changePage(currentPage - 1);
break;
case 38: //up
break;
case 39: //right
changePage(currentPage + 1);
break;
case 40: //down
break;
}
};

nhentai/viewer/styles.css

@ -1,69 +1,70 @@
*, *::after, *::before {
box-sizing: border-box;
}
img {
vertical-align: middle;
}
html, body {
display: flex;
background-color: #e8e6e6;
height: 100%;
width: 100%;
padding: 0;
margin: 0;
font-family: sans-serif;
}
#list {
height: 100%;
overflow: auto;
width: 260px;
text-align: center;
}
#list img {
width: 200px;
padding: 10px;
border-radius: 10px;
margin: 15px 0;
cursor: pointer;
}
#list img.current {
background: #0003;
}
#image-container {
flex: auto;
height: 100vh;
background: #222;
color: #fff;
text-align: center;
cursor: pointer;
-webkit-user-select: none;
user-select: none;
position: relative;
}
#image-container #dest {
height: 100%;
width: 100%;
background-size: contain;
background-repeat: no-repeat;
background-position: center;
}
#image-container #page-num {
position: absolute;
font-size: 18pt;
left: 10px;
bottom: 5px;
font-weight: bold;
opacity: 0.75;
text-shadow: /* Duplicate the same shadow to make it very strong */
0 0 2px #222,
0 0 2px #222,
0 0 2px #222;
*, *::after, *::before {
box-sizing: border-box;
}
img {
vertical-align: middle;
}
html, body {
display: flex;
background-color: #e8e6e6;
height: 100%;
width: 100%;
padding: 0;
margin: 0;
font-family: sans-serif;
}
#list {
height: 2000px;
overflow: scroll;
width: 260px;
text-align: center;
}
#list img {
width: 200px;
padding: 10px;
border-radius: 10px;
margin: 15px 0;
cursor: pointer;
}
#list img.current {
background: #0003;
}
#image-container {
flex: auto;
height: 2000px;
background: #222;
color: #fff;
text-align: center;
cursor: pointer;
-webkit-user-select: none;
user-select: none;
position: relative;
}
#image-container #dest {
height: 2000px;
width: 100%;
background-size: contain;
background-repeat: no-repeat;
background-position: top;
}
#image-container #page-num {
position: static;
font-size: 14pt;
left: 10px;
bottom: 5px;
font-weight: bold;
opacity: 0.75;
text-shadow: /* Duplicate the same shadow to make it very strong */
0 0 2px #222,
0 0 2px #222,
0 0 2px #222;
}