Compare commits


300 Commits
0.2.7 ... 0.4.6

SHA1 Message Date
9c7354be32 0.4.6 2020-11-07 12:04:42 +08:00
7f48b3edd1 Merge pull request #175 from RicterZ/dev
add default value of output dir
2020-10-15 02:10:06 +08:00
d84b827241 add default value of output dir 2020-10-15 02:09:09 +08:00
4ac161a38c Merge pull request #174 from Nontre12/fix-gen-main
Fix change directory output_dir option on gen-main
2020-10-15 01:47:51 +08:00
648b6f87bf Added logo.png to the installation 2020-10-14 12:09:39 +02:00
2ec1283ba8 Fix change directory output_dir option on gen-main 2020-10-14 12:02:57 +02:00
a9bd46b426 Merge pull request #173 from Nontre12/db-ignored
Fix db ignored
2020-10-14 02:44:03 +08:00
c52bc271fc Fix db ignored 2020-10-13 13:39:24 +02:00
f2d22f8e7d Merge pull request #169 from Nontre12/master
Fix running without parameters
2020-10-11 03:48:39 +08:00
ea6089ff31 Fix 2020-10-10 21:15:20 +02:00
670d14c3f3 Merge pull request #4 from RicterZ/master
Update master branch
2020-10-10 20:50:01 +02:00
b46106a5bc Merge pull request #167 from RicterZ/0.4.5
0.4.5
2020-10-11 02:00:02 +08:00
f04359e486 0.4.5 2020-10-11 01:57:37 +08:00
6861cbcbc1 Merge pull request #166 from RicterZ/dev
0.4.4
2020-10-11 01:45:53 +08:00
e0938c5a0e Merge pull request #165 from RicterZ/dev
0.4.4
2020-10-11 01:43:41 +08:00
641f8e4c51 0.4.4 2020-10-11 01:42:02 +08:00
b2fae226f9 use config.json 2020-10-11 01:38:08 +08:00
4aa34c668a Merge pull request #3 from RicterZ/master
Update master branch from origin
2020-10-10 19:11:56 +02:00
f157ac3246 merge to functions 2020-10-11 01:09:13 +08:00
139e01d3ca Merge pull request #163 from Nontre12/dev-page-range
Added --page-all option to download all search results
2020-10-11 00:58:57 +08:00
4d870e36a1 Merge branch 'master' into dev-page-range 2020-10-11 00:53:27 +08:00
74b0df26a9 Merge pull request #164 from RicterZ/fix-page-range
fix page range issue #158
2020-10-11 00:51:58 +08:00
1746e731ec fix page range issue #158 2020-10-11 00:48:36 +08:00
8ad60d9838 Merge pull request #1 from RicterZ/master
Merge pull request #162 from Nontre12/master
2020-10-10 18:31:47 +02:00
be05b9c0eb Added --page-all option to download all search results 2020-10-10 18:29:00 +02:00
9054b98934 Merge pull request #162 from Nontre12/master
Added 'Parodies' output and Updated package version
2020-10-11 00:10:27 +08:00
b82201ff27 Added to -S --show option the "Parodies" output 2020-10-10 12:33:14 +02:00
532c74e075 Update __version__ 2020-10-10 12:31:54 +02:00
5a50a5b1ba Merge pull request #159 from Nontre12/dev
Added --clean-language option
2020-10-10 04:56:51 +08:00
b5fe48746e Added --clean-language option 2020-10-09 17:34:03 +02:00
94d8da655a Fix misspelling 2020-10-09 17:30:11 +02:00
6ff2816d95 Merge pull request #157 from RicterZ/dev
0.4.3
2020-10-02 01:59:50 +08:00
4d89b80e67 Merge branch 'dev' of github.com:RicterZ/nhentai into dev 2020-10-02 01:56:31 +08:00
0a94ef9cf1 Merge pull request #156 from RicterZ/dev
0.4.2
2020-10-02 01:56:04 +08:00
4cc4f35a0d fix bug in search 2020-10-02 01:55:03 +08:00
ad86c49de9 Merge branch 'master' into dev 2020-10-02 01:47:35 +08:00
5a538fe82f add tests and new python version 2020-10-02 01:43:44 +08:00
eb35ba9848 0.4.2 2020-10-02 01:41:02 +08:00
14a53a0953 fix 2020-10-02 01:39:42 +08:00
c5e4b5ffa8 update 2020-10-02 01:39:14 +08:00
b3f25875d0 fix bug on mac #126 2020-10-02 01:32:18 +08:00
91053b98af 0.4.1 2020-10-02 01:02:41 +08:00
7570b6ae7d remove img2pdf in requirements 2020-10-02 00:55:26 +08:00
d2e68c6c45 fix #146 #142 #146 2020-10-02 00:51:37 +08:00
b0902c2d58 Merge pull request #147 from fuchs2711/fix-win32-filename
Fix invalid filenames on Windows
2020-07-19 11:12:25 +08:00
320f36c264 Fix invalid filenames on Windows 2020-07-18 15:19:41 +02:00
1dae63be39 Merge pull request #141 from RicterZ/dev
update tests
2020-06-26 13:32:35 +08:00
78429423d9 fix bug 2020-06-26 13:29:44 +08:00
38ff69d99d add sort options 2020-06-26 13:28:10 +08:00
2ce36204fe update tests 2020-06-26 13:18:08 +08:00
8ed1b89277 Merge pull request #140 from RicterZ/dev
0.4.0
2020-06-26 13:16:55 +08:00
e9864d158f update tests 2020-06-26 13:15:57 +08:00
43013badd4 update .gitignore 2020-06-26 13:12:49 +08:00
7508a2010d 0.4.0 2020-06-26 13:12:37 +08:00
946761477d Merge pull request #139 from RicterZ/master
Merge into dev branch
2020-06-26 12:48:51 +08:00
db80408024 Merge pull request #138 from RicterZ/revert-134-master
Revert "Fix fatal error and keep index of id which from file"
2020-06-26 12:47:25 +08:00
4c85cebb78 Revert "Fix fatal error and keep index of id which from file" 2020-06-26 12:47:10 +08:00
e982a8170c Merge pull request #134 from ODtian/master
Fix fatal error and keep index of id which from file
2020-06-26 12:46:08 +08:00
0b62f0ebd9 Merge pull request #137 from jwfiredragon/patch-1
Fixing typos
2020-06-26 12:45:55 +08:00
37b4ee7d00 Fixing typos
ms-user-select should be -ms-user-select. #0d0d0d9 isn't a valid hex code - I assume it's supposed to be #0d0d0d?
2020-06-23 23:04:09 -07:00
84cad0d475 Update cmdline.py 2020-06-24 12:00:17 +08:00
bf03881ed6 Fix fatal error and keep index of id which from file 2020-06-23 20:39:41 +08:00
f97b814b45 Merge pull request #131 from myzWILLmake/dev
remove args.tag since no tag option in parser
2020-06-22 18:11:18 +08:00
7323eae99b remove args.tag since no tag option in parser 2020-06-15 10:00:23 +08:00
6e07f0426b Merge pull request #130 from jwfiredragon/patch-1
Fixing parser for nhentai site update
2020-06-12 10:32:34 +08:00
44c424a321 Fixing parser for nhentai site update
nhentai's recent site update broke the parser, this fixes it. Based off the work on [my fork here](8c4a4f02bc).
2020-06-10 22:39:35 -07:00
3db77e0ce3 Merge pull request #127 from Tsuribori/dev
Add PDF support
2020-06-08 11:11:42 +08:00
22dbb4dd0d Add PDF support 2020-06-07 19:07:40 +03:00
2be4bd71ce Merge pull request #123 from Alocks/dev
--search fix, removed --tag commands
2020-05-06 19:16:27 +08:00
fc39aeb49e stupid fix 2020-05-02 14:52:24 -03:00
be2ec3f452 updated documentation 2020-05-02 14:35:22 -03:00
0c23f64356 removed all --tag commands since --search API is working again, now --language is a setting, cleaned some code 2020-05-02 14:23:31 -03:00
7e4dff8fec move import statement to function 2020-05-01 22:20:55 +08:00
e2a1d79b1b fix #117 2020-05-01 22:18:03 +08:00
8183f3a7a9 Merge pull request #119 from BachoSeven/master
Updated README
2020-04-26 09:57:39 +08:00
80713d2e00 updated README.rst 2020-04-25 18:19:44 +02:00
a2cd025027 updated README.rst 2020-04-25 18:18:48 +02:00
2f7bb59e58 Update README.rst 2020-04-25 18:04:50 +02:00
e94685d9c5 Merge pull request #116 from AnhNhan/master
write ComicInfo.xml for CBZ files
2020-04-22 12:52:17 +08:00
07d804b047 move ComicInfo.xml behind the --comic-info flag 2020-04-22 06:19:12 +02:00
5552d39337 fix --artist, --character, --parody, --group 2020-04-21 14:54:04 +02:00
d35190f9d0 write ComicInfo.xml for CBZ files 2020-04-21 13:23:50 +02:00
c8bca4240a Merge pull request #115 from RicterZ/dev
fix bug #114
2020-04-20 20:17:09 +08:00
130386054f 0.3.9 2020-04-20 20:16:48 +08:00
df16109788 fix install script on python2 2020-04-20 20:15:06 +08:00
c18cd2aaa5 Merge pull request #112 from RicterZ/dev
0.3.8
2020-04-20 20:07:02 +08:00
197b5e4923 update 2020-04-09 22:04:45 +08:00
9f747dad7e 0.3.8 2020-04-09 21:12:24 +08:00
ca713197cc add sqlite3 db to save download history 2020-04-09 21:07:20 +08:00
49f07de95d remove repeat code 2020-04-09 20:37:13 +08:00
5c7bdae0d7 add a new option #111 2020-04-09 20:32:20 +08:00
d5f41bf37c fix bug of --tag in python2.7 2020-03-15 00:41:40 +08:00
56153015b1 update cookie 2020-03-15 00:25:02 +08:00
140249217a fix 2020-03-15 00:24:12 +08:00
9e537e60f2 reformat file 2020-03-15 00:03:48 +08:00
4df8e1bae0 update tests 2020-03-14 23:59:18 +08:00
c250d9c787 fix #106 2020-03-14 23:56:22 +08:00
a5547696eb Merge pull request #108 from RicterZ/dev
Merge dev to master
2020-03-14 23:35:02 +08:00
49ac1d035d Merge branch 'master' into dev 2020-03-14 23:34:49 +08:00
f234b7234e Merge pull request #104 from myzWILLmake/master
add page_range option for favorites
2020-02-08 16:12:25 +08:00
43a9b981dd add page_range option for favorites 2020-02-07 01:32:51 +08:00
bc29869a8b Merge pull request #101 from reynog/patch-1
Suggested change to viewer
2020-01-18 19:50:04 +08:00
53e1923e67 Changed keyboard nav
In conjunction with styles.css change, changed W, and S keys to scroll image vertically and removed page change from Up and Down, leaving A, D, Left, and Right as keys for changing page. Page returns to the top when changing page. W and S scroll behavior is not smooth. Up and Down scroll relies on browser's in-built keyboard scrolling functionality.
2020-01-16 20:20:42 +01:00
ba6d4047e2 Larger image display
Bodged file edit. Changed image to extend off the screen, and be scrollable. Easier to read speech and other text on smaller displays. Moved page counter to top center. Not quite as nice looking.
2020-01-16 20:12:27 +01:00
dcf22b30a5 Merge pull request #96 from symant233/dev
Add @media to html_viewer (mobile friendly)
2019-12-16 10:41:53 +08:00
0208d9b9e6 remove... 2019-12-13 11:57:42 +08:00
0115285e10 trying to fix conflict 2019-12-13 11:56:36 +08:00
ea8a576f7e remove webkit tap color and outline 2019-12-11 18:52:27 +08:00
05eaa9eebc fix 'max-width' not working 2019-12-11 18:35:53 +08:00
ab2dff4859 Merge remote-tracking branch 'upstream/master' into dev 2019-12-11 11:02:43 +08:00
9592870d85 add html viewer @media 2019-12-11 10:55:50 +08:00
c1a82635bd Merge pull request #94 from Alocks/dev
added filter for main.html and #95 fix
2019-12-09 11:27:32 +08:00
1974644513 download gif images 2019-12-08 20:59:37 -03:00
fe4fb46e30 fixed language tag 2019-12-07 17:50:23 -03:00
6156cf5914 added zoom in index.html and some increments in main.html 2019-12-07 14:36:19 -03:00
75b00fc523 Merge remote-tracking branch 'origin/dev' into dev 2019-12-07 12:58:59 -03:00
ff8af8279f fixed html and removed unused .css properties 2019-12-07 12:58:19 -03:00
e1556b09cc fixed unicode issues with japanese characters 2019-12-07 11:19:49 -03:00
110a2acb7c main page filter fixes 2019-12-06 13:08:16 -03:00
c60f1f34d5 main page filter(2/2) 2019-12-05 18:02:03 -03:00
4f2db83a13 almost gave up 2019-12-04 18:54:40 -03:00
bd8bb42ecd main page filter(1/2) 2019-12-04 00:45:14 -03:00
0abcb048b4 filter for main page(1/2) 2019-12-02 16:46:22 -03:00
411d6c2f30 Merge pull request #93 from Alocks/dev
Added language option and metadata serializer
2019-12-02 11:38:09 +08:00
88c0c1e021 Added language option and metadata serializer 2019-12-01 21:23:41 -03:00
86c43e5d8c Merge pull request #92 from RicterZ/dev
merge & update
2019-11-22 10:49:13 +08:00
39f8729d51 Merge pull request #91 from jwfiredragon/patch-1
Documenting --gen-main
2019-11-22 10:48:14 +08:00
d6461335f8 Adding --gen-main to documentation
--gen-main exists as an option in cmdline.py but is not documented in README
2019-11-21 08:40:57 -08:00
c0c7b33909 Merge pull request #88 from Alocks/dev
changed all map(lambda) to listcomp
2019-11-12 14:47:49 +08:00
893a8c194e removed list(). stupid mistake 2019-11-05 10:41:20 -03:00
e6d2eb554d Merge remote-tracking branch 'Alocks/dev' into dev 2019-11-04 16:17:20 -03:00
25e5acf671 changed every map(lambda) to listcomp 2019-11-04 16:14:52 -03:00
4f33228cec Merge pull request #86 from Alocks/dev
Fixed parser to work with new options, and updated readme
2019-10-23 10:16:09 +08:00
f227c9e897 Update README.rst 2019-10-22 14:18:38 -03:00
9f2f57248b Added commands in README and fixer parser 2019-10-22 14:14:50 -03:00
024f08ca97 Merge pull request #84 from Alocks/master
new options added [--artist, --character, --parody, --group]
2019-10-10 12:42:33 +08:00
3017fff823 Merge branch 'dev' into master 2019-10-08 15:42:35 -03:00
070e8917f4 Fixed whitespaces when using comma² 2019-10-05 15:07:49 -03:00
01caa8d4e5 Fixed if user add white spaces 2019-10-05 15:00:33 -03:00
35e724e206 xablau
Signed-off-by: Alocks <alocksmasao@gmail.com>
2019-10-03 18:26:28 -03:00
d045adfd6a 0.3.6 2019-08-04 22:39:31 +08:00
62e3552c84 update cookiewq 2019-08-04 22:39:31 +08:00
6e2a25cf55 fix bug in tag parser #70 2019-08-04 22:39:31 +08:00
44178a8cfb remove comment 2019-08-04 22:39:31 +08:00
4ca582c104 fix #74 2019-08-04 22:39:31 +08:00
97857b8dc6 "" :) 2019-08-04 22:39:31 +08:00
23774d9526 fix bugs 2019-08-01 21:06:40 +08:00
8dc7a1f40b singleton pool 2019-08-01 18:52:30 +08:00
349e21193b remove print 2019-07-31 19:04:25 +08:00
7e826c5255 use multiprocess instead of threadpool #78 2019-07-31 01:22:54 +08:00
bc70a2071b add test for sorting 2019-07-30 23:04:23 +08:00
1b49911166 code style 2019-07-30 23:03:29 +08:00
7eeed17ea5 Merge pull request #79 from Waiifu/added-sorting
sorting option
2019-07-30 22:53:40 +08:00
f4afcd549e Added sorting option 2019-07-29 09:11:45 +02:00
4fc6303db2 Merge pull request #76 from RicterZ/dev
0.3.6
2019-07-28 12:00:54 +08:00
f2aa65b64b 0.3.6 2019-07-28 11:58:00 +08:00
0a343935ab update cookiewq 2019-07-28 11:55:12 +08:00
03f1aeada7 fix bug in tag parser #70 2019-07-28 11:48:47 +08:00
94395d9165 remove comment 2019-07-28 11:46:48 +08:00
bacaa096e0 fix #74 2019-07-28 11:46:06 +08:00
3e420f05fa "" :) 2019-07-28 11:40:19 +08:00
158b15bda8 Merge pull request #66 from RicterZ/dev
0.3.5
2019-06-12 23:04:08 +08:00
92640d9767 0.3.5 2019-06-12 22:54:22 +08:00
6b97777b7d fix bug 2019-06-12 22:48:41 +08:00
1af195d727 add cookie check 2019-06-12 22:45:44 +08:00
58b2b644c1 fix #64 2019-06-12 22:37:25 +08:00
0cfec34e9e modify cookie 2019-06-12 22:08:32 +08:00
1172282362 fix #50 2019-06-04 08:38:42 +08:00
a909ad6d92 fix --gen-main bugs 2019-06-04 08:35:13 +08:00
440bb0dc38 Merge pull request #58 from symant233/master
fix show info
2019-06-03 17:53:27 +08:00
f5b7d89fb0 fix show info 2019-06-01 11:31:53 +08:00
535b804ef6 Merge pull request #53 from symant233/master
Create a main viewer contains all the sub index.html and thumb pic
2019-05-30 20:10:22 +08:00
9b65544942 add travis-ci test 2019-05-30 20:05:46 +08:00
0935d609c3 fix --gen-main action 2019-05-29 13:43:47 +08:00
f10ae3cf58 store proxy config 2019-05-28 19:47:48 +08:00
86b3a092c7 ignore other folders 2019-05-26 15:57:50 +08:00
710cc86eaf fix codec error for py2 2019-05-21 17:06:42 +08:00
2d327359aa small fix 2019-05-21 16:16:58 +08:00
f78b8bc2cd fix conflict 2019-05-21 15:53:43 +08:00
a95396033b Update README.rst 2019-05-18 22:36:03 +08:00
01c0e73849 fix bug while installing on windows / python3 2019-05-18 22:30:20 +08:00
57e9305849 0.3.3 2019-05-18 22:15:42 +08:00
6bd37f384c fix 2019-05-18 22:14:08 +08:00
2c61fd3a3f add doujinshi folder formatter 2019-05-18 22:13:23 +08:00
cf4291d3c2 new line 2019-05-18 22:01:29 +08:00
450e3689a0 fix 2019-05-18 22:00:33 +08:00
b5deca2704 fix 2019-05-18 21:57:43 +08:00
57dc4a58b9 remove Options block 2019-05-18 21:56:59 +08:00
1e1d03064b readme 2019-05-18 21:56:35 +08:00
40a98881c6 add some shortcut options 2019-05-18 21:53:40 +08:00
a7848c3cd0 fix bug 2019-05-18 21:52:36 +08:00
5df58780d9 add delay #55 2019-05-18 21:51:38 +08:00
56dace81f1 remove readme.md 2019-05-18 20:31:18 +08:00
086e469275 Update README.rst 2019-05-18 20:27:08 +08:00
1f76a8a70e Update README.rst 2019-05-18 20:24:49 +08:00
5d294212e6 Update README.rst 2019-05-18 20:24:15 +08:00
ef274a672b Update README.rst 2019-05-18 20:23:19 +08:00
795f80752f Update README.rst 2019-05-18 20:22:55 +08:00
53c23bb6dc Update README.rst 2019-05-18 20:07:45 +08:00
8d5f12292c update rst 2019-05-18 20:06:10 +08:00
f3141d5726 add rst 2019-05-18 20:04:16 +08:00
475e4db9af 0.3.2 #54 2019-05-18 19:47:04 +08:00
263dba51f3 modify tests #54 2019-05-18 19:40:09 +08:00
049ab4d9ad using cookie rather than login #54 2019-05-18 19:34:54 +08:00
a5eba94064 clean unused style for main.css 2019-05-06 15:41:26 +08:00
6053e302ee fix output_dir make gen-main error 2019-05-05 22:02:24 +08:00
c32b516575 js return to prev page press 'q' 2019-05-05 21:47:23 +08:00
0150e79c49 Add main viewer sources 2019-05-05 21:10:24 +08:00
0cda30385b Main viewer generator 2019-05-05 21:01:49 +08:00
18bdab1962 add main viewer 2019-05-05 21:01:49 +08:00
8e8f935a9b set alias for local:1080 proxy 2019-05-05 21:01:49 +08:00
b173a6c28f slow down #50 2019-05-04 12:12:57 +08:00
b64b718c88 remove eval 2019-05-04 11:31:41 +08:00
8317662664 fix #50 2019-05-04 11:29:01 +08:00
13e60a69e9 Merge pull request #51 from symant233/master
Add viewer arrow support, add README license badge.
2019-05-04 11:11:34 +08:00
b5acbc76fd Update README license badage 2019-05-04 11:07:15 +08:00
1eb1b5c04c Add viewer arrow support & Readme license badage 2019-05-04 11:04:43 +08:00
2acb6a1249 Update README.md 2019-04-25 03:36:31 +08:00
0660cb0fed update user-agent 2019-04-11 22:48:18 +08:00
680b004c24 update README 2019-04-11 22:47:49 +08:00
6709af2a20 0.3.1 - add login session 2019-04-11 22:44:26 +08:00
a3fead2852 pep-8 2019-04-11 22:43:42 +08:00
0728dd8c6d use text rather than content 2019-04-11 22:41:37 +08:00
9160b38c3f bypass the challenge 2019-04-11 22:39:20 +08:00
f74be0c665 add new tests 2019-04-11 22:10:16 +08:00
c30f562a83 Merge pull request #48 from onlymyflower/master
download ids from file
2019-04-11 22:09:30 +08:00
37547cc97f global login session #49 #46 2019-04-11 22:08:19 +08:00
f6fb90aab5 download ids from file 2019-03-06 16:46:47 +08:00
50be89db44 fix extension issue #44 2019-01-27 10:06:12 +08:00
fc0be35b2c 0.3.0 #40 2019-01-15 21:16:14 +08:00
5c3dace937 tag page download #40 2019-01-15 21:12:20 +08:00
b2d622f11a fix tag download issue #40 2019-01-15 21:09:24 +08:00
0c8264bcc6 fix download issues 2019-01-15 20:43:00 +08:00
a6074242fb nhentai suspended api #40 2019-01-15 20:29:10 +08:00
eb6df28fba 0.2.19 2018-12-30 14:13:27 +08:00
1091ea3e0a remove debug 2018-12-30 14:12:38 +08:00
0df51c83e5 change output filename 2018-12-30 14:06:15 +08:00
c5fa98ebd1 Update .travis.yml 2018-11-04 21:44:59 +08:00
3154a94c3d 0.2.18 2018-10-24 22:21:29 +08:00
c47018251f fix #27 2018-10-24 22:20:33 +08:00
74d0499092 add test 2018-10-24 22:07:43 +08:00
7e56d9b901 fix #33 2018-10-24 22:06:49 +08:00
8cbb334d36 fix #31 2018-10-24 21:56:21 +08:00
db6d45efe0 fix bug #34 2018-10-19 10:55:21 +08:00
d412794bce Merge pull request #32 from violetdarkness/patch-1
requirement.txt missing new line
2018-10-08 23:36:38 +08:00
8eedbf077b requirement.txt missing new line
I got error when installing and find this requirement.txt missing newline
2018-10-08 21:13:52 +07:00
c95ecdded4 remove gdb 2018-10-01 15:04:32 +08:00
489e8bf0f4 fix #29 0.2.16 2018-10-01 15:02:04 +08:00
86c31f9b5e Merge pull request #28 from tbinavsl/master
Max retries + misc. language fixes
2018-09-28 13:28:44 +08:00
6f20405f47 adding gif support and fixing yet another english typo 2018-09-09 23:38:30 +02:00
c0143548d1 reverted partially by mistake the max_page commit; also added retries on other features 2018-09-09 22:24:34 +02:00
114c364f03 oops 2018-09-09 21:42:03 +02:00
af26482b6d Max retries + misc. language fixes 2018-09-09 21:33:50 +02:00
b8ea917db2 max page #26 2018-08-24 23:55:34 +08:00
963f4d9ddf fix 2018-08-12 23:22:30 +08:00
ef36e012ce fix unicode error on windows / python2 2018-08-12 23:11:01 +08:00
16e8ce6f45 0.2.15 2018-08-12 22:48:26 +08:00
0632826827 download by tagname #15 2018-08-12 22:43:36 +08:00
8d2cd1974b fix unicodeerror on python3 2018-08-12 18:04:36 +08:00
8c176cd2ad Update README.md 2018-08-11 09:47:32 +08:00
f2c88e8ade Update README.md 2018-08-11 09:46:46 +08:00
2300744c5c Update README.md 2018-08-11 09:46:04 +08:00
7f30c84eff Update README.md 2018-08-11 09:45:04 +08:00
dda849b770 remove python3.7 2018-08-11 09:32:35 +08:00
14b3c82248 remove \r 2018-08-11 09:28:39 +08:00
4577e9df9a fix 2018-08-11 09:24:16 +08:00
de157ccb7f Merge branch 'master' of github.com:RicterZ/nhentai 2018-08-11 09:19:31 +08:00
126bbe8d49 add a test 2018-08-11 09:18:00 +08:00
8546b9e759 fix bug #24 2018-08-11 09:17:05 +08:00
6ff9751c30 fix 2018-07-01 12:50:37 +08:00
ddc4a20251 0.2.12 2018-07-01 12:48:30 +08:00
206aa3710a fix bug 2018-07-01 12:48:05 +08:00
b5b201f61c 🍻 2018-07-01 02:15:26 +08:00
eb8b41cd1d Merge pull request #22 from Pizzacus/master
Rework the HTML Viewer
2018-06-03 22:53:00 +08:00
98bf88d638 Actually use MANIFEST.ini to specify the package data
*considers suicide*
2018-06-03 11:32:06 +02:00
0bc83982e4 Add the viewer to the package_data entry 2018-06-03 11:09:46 +02:00
99edcef9ac Rework the HTML Viewer
* More modern and efficient code, particularily for the JS
 * Also the layout is better, with flexboxes and all
 * The CSS and JS have their own files
 * The sidebar has proper margins around the images
 * You can use A + D and the arrow keys to navigate images, like on nhentai
 * Images with a lot of width are  properly sized
 * There is a page counter on the bottom left
2018-06-02 23:22:37 +02:00
3ddd474aab Merge pull request #21 from mentaterasmus/master
fixing issue 16 and adding functionalities
2018-05-15 23:17:10 +08:00
f2573d5f10 fixing identation 2018-05-14 01:52:38 -03:00
147eec57cf fixing issue 16 and adding functionalities 2018-05-09 15:42:12 -03:00
f316c3243b 0.2.12 2018-04-19 17:29:23 +08:00
967e0b4ff5 fix #18 #19 use nhentai api 2018-04-19 17:21:43 +08:00
22cf2592dd 0.2.11 2018-03-16 23:48:58 +08:00
caa0753adb fix bug #13 2018-03-16 23:45:05 +08:00
0e14dd62d5 fix bug #13 2018-03-16 23:42:24 +08:00
7c9693785e fix #14 2018-03-16 23:39:04 +08:00
08ad73b683 fix bug #13 2018-03-16 23:33:16 +08:00
a56d3ca18c fix bug #13 2018-03-16 23:23:25 +08:00
c1975897d2 save downloaded doujinshi as doujinshi name #13 2018-03-16 23:16:26 +08:00
4ed596ff98 download user fav 2018-03-05 21:47:27 +08:00
debf287fb0 download user fav 2018-03-05 21:45:56 +08:00
308c5277b8 Merge pull request #12 from RomaniukVadim/master
Add install for Gentoo
2018-03-03 19:33:23 +08:00
b425c883c7 Add install for Gentoo 2018-03-02 17:18:22 +02:00
7bf9507bd2 0.2.10 2018-01-09 16:05:52 +08:00
5f5245f70f fix bug 2018-01-09 16:02:16 +08:00
45fb35b950 fix bug and add --html 2018-01-01 17:44:55 +08:00
2271b83d93 0.2.8 2017-08-19 00:50:38 +08:00
0ee000edeb sort #10 2017-08-19 00:48:53 +08:00
a47359f411 tiny bug 2017-07-06 15:41:33 +08:00
48c6fadc98 add viewer image 2017-06-18 16:48:54 +08:00
28 changed files with 1965 additions and 346 deletions

.gitignore (2 changed lines)

@@ -5,3 +5,5 @@ dist/
 *.egg-info
 .python-version
 .DS_Store
+output/
+venv/

.travis.yml

@@ -3,15 +3,17 @@ os:
 language: python
 python:
-  - 2.7
-  - 2.6
-  - 3.3
-  - 3.4
-  - 3.5.2
+  - 3.7
+  - 3.8

 install:
   - python setup.py install

 script:
-  - NHENTAI=https://nhentai.net nhentai --search umaru
-  - NHENTAI=https://nhentai.net nhentai --id=152503,146134 -t 10 --output=/tmp/
+  - echo 268642 > /tmp/test.txt
+  - nhentai --cookie "_ga=GA1.2.1651446371.1545407218; __cfduid=d0ed34dfb81167d2a51a1d6392c1768a81601380350; csrftoken=KRN0GR1ft86m3HTefpQA99pp6R1Bo7hUs5QxNGOAIuwB5g4EcJj04fwMB8QKgLaB; sessionid=7hzoowox78c90wi5ud5ibphm4axcck7c"
+  - nhentai --search umaru
+  - nhentai --id=152503,146134 -t 10 --output=/tmp/ --cbz
+  - nhentai -F
+  - nhentai --file /tmp/test.txt
+  - nhentai --id=152503,146134 --gen-main --output=/tmp/

MANIFEST.in

@@ -1,3 +1,9 @@
 include README.md
 include requirements.txt
-include nhentai/doujinshi.html
+include nhentai/viewer/index.html
+include nhentai/viewer/styles.css
+include nhentai/viewer/scripts.js
+include nhentai/viewer/main.html
+include nhentai/viewer/main.css
+include nhentai/viewer/main.js
+include nhentai/viewer/logo.png

README.md (deleted)

@@ -1,55 +0,0 @@
nhentai
=======
_ _ _ _
_ __ | | | | ___ _ __ | |_ __ _(_)
| '_ \| |_| |/ _ \ '_ \| __/ _` | |
| | | | _ | __/ | | | || (_| | |
|_| |_|_| |_|\___|_| |_|\__\__,_|_|
あなたも変態。 いいね?
[![Build Status](https://travis-ci.org/RicterZ/nhentai.svg?branch=master)](https://travis-ci.org/RicterZ/nhentai)
🎉🎉 nhentai 现在支持 Windows 啦!
由于 [http://nhentai.net](http://nhentai.net) 下载下来的种子速度很慢,而且官方也提供在线观看本子的功能,所以可以利用本脚本下载本子。
### 安装
git clone https://github.com/RicterZ/nhentai
cd nhentai
python setup.py install
### 用法
+ 下载指定 id 列表的本子:
nhentai --id=123855,123866
+ 下载某关键词第一页的本子(不推荐):
nhentai --search="tomori" --page=1 --download
`-t, --thread`:指定下载的线程数,最多为 10 线程。
`--path`:指定下载文件的输出路径,默认为当前目录。
`--timeout`:指定下载图片的超时时间,默认为 30 秒。
`--proxy`:指定下载的代理,例如: http://127.0.0.1:8080/
### 自建 nhentai 镜像
如果想用自建镜像下载 nhentai 的本子,需要搭建 nhentai.net 和 i.nhentai.net 的反向代理。
例如用 h.loli.club 来做反向代理的话,需要 h.loli.club 反代 nhentai.net,i.h.loli.club 反代 i.nhentai.net。
然后利用环境变量来下载:
NHENTAI=http://h.loli.club nhentai --id 123456
![](./images/search.png)
![](./images/download.png)
### License
MIT
### あなたも変態
![](./images/image.jpg)

README.rst (new file, 209 added lines)

@@ -0,0 +1,209 @@
nhentai
=======
.. code-block::
_ _ _ _
_ __ | | | | ___ _ __ | |_ __ _(_)
| '_ \| |_| |/ _ \ '_ \| __/ _` | |
| | | | _ | __/ | | | || (_| | |
|_| |_|_| |_|\___|_| |_|\__\__,_|_|
あなたも変態。 いいね?
|travis|
|pypi|
|license|
nHentai is a CLI tool for downloading doujinshi from <http://nhentai.net>
===================
Manual Installation
===================
.. code-block::
git clone https://github.com/RicterZ/nhentai
cd nhentai
python setup.py install
==================
Installation (pip)
==================
Alternatively, install from PyPI with pip:
.. code-block::
pip install nhentai
For a self-contained installation, use `Pipx <https://github.com/pipxproject/pipx/>`_:
.. code-block::
pipx install nhentai
=====================
Installation (Gentoo)
=====================
.. code-block::
layman -fa glicOne
sudo emerge net-misc/nhentai
=====
Usage
=====
**IMPORTANT**: To bypass the nhentai frequency limit, you should use the `--cookie` option to store your cookie.
*The default download folder will be the path where you run the command (CLI path).*
Set your nhentai cookie to get past the captcha:
.. code-block:: bash
nhentai --cookie "YOUR COOKIE FROM nhentai.net"
**NOTE**: The format of the cookie is `"csrftoken=TOKEN; sessionid=ID"`
Download specified doujinshi:
.. code-block:: bash
nhentai --id=123855,123866
Download doujinshi with ids specified in a file (one doujinshi id per line):
.. code-block:: bash
nhentai --file=doujinshi.txt
Set the default search language:
.. code-block:: bash
nhentai --language=english
Search by keyword and download the first page of results:
.. code-block:: bash
nhentai --search="tomori" --page=1 --download
# you can also search by tags and multiple keywords
nhentai --search="tag:lolicon, artist:henreader, tag:full color"
nhentai --search="lolicon, henreader, full color"
Download your favorites with a delay:
.. code-block:: bash
nhentai --favorites --download --delay 1
Format the output doujinshi folder name:
.. code-block:: bash
nhentai --id 261100 --format '[%i]%s'
Supported folder name formatters:
- %i: Doujinshi id
- %t: Doujinshi name
- %s: Doujinshi subtitle (translated name)
- %a: Doujinshi authors' name
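
For example, the following (metadata invented purely for illustration) would save doujinshi 261100 into a folder named `[261100]EXAMPLE SUBTITLE` under the output dir:

.. code-block:: bash

    # %i expands to the id and %s to the subtitle; the actual
    # values come from the gallery's metadata at download time
    nhentai --id 261100 --format '[%i]%s'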
Other options:
.. code-block::
Options:
# Operation options
-h, --help show this help message and exit
-D, --download download doujinshi (for search results)
-S, --show just show the doujinshi information
# Doujinshi options
--id=ID doujinshi ids set, e.g. 1,2,3
-s KEYWORD, --search=KEYWORD
search doujinshi by keyword
--tag=TAG download doujinshi by tag
-F, --favorites list or download your favorites.
# Multi-page options
--page=PAGE page number of search results
--max-page=MAX_PAGE The max page when recursive download tagged doujinshi
# Download options
-o OUTPUT_DIR, --output=OUTPUT_DIR
output dir
-t THREADS, --threads=THREADS
thread count for downloading doujinshi
-T TIMEOUT, --timeout=TIMEOUT
timeout for downloading doujinshi
-d DELAY, --delay=DELAY
slow down between downloading every doujinshi
-p PROXY, --proxy=PROXY
uses a proxy, for example: http://127.0.0.1:1080
-f FILE, --file=FILE read gallery IDs from file.
--format=NAME_FORMAT format the saved folder name
# Generating options
--html generate a html viewer at current directory
--no-html don't generate HTML after downloading
--gen-main generate a main viewer contain all the doujin in the folder
-C, --cbz generate Comic Book CBZ File
-P, --pdf generate PDF file
--rm-origin-dir remove downloaded doujinshi dir when generated CBZ
or PDF file.
# nHentai options
--cookie=COOKIE set cookie of nhentai to bypass Google recaptcha
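
An illustrative combination of the options above (the keyword and paths are arbitrary):

.. code-block:: bash

    # search, download the first page of results as CBZ files,
    # and drop the raw image folders once each CBZ is generated
    nhentai --search "full color" --page=1 --download --output=/tmp/ --cbz --rm-origin-dir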
==============
nHentai Mirror
==============
If you want to use a mirror, you should set up a reverse proxy of `nhentai.net` and `i.nhentai.net`.
For example:
.. code-block::
i.h.loli.club -> i.nhentai.net
h.loli.club -> nhentai.net
Set `NHENTAI` env var to your nhentai mirror.
.. code-block:: bash
NHENTAI=http://h.loli.club nhentai --id 123456
.. image:: ./images/search.png?raw=true
:alt: nhentai
:align: center
.. image:: ./images/download.png?raw=true
:alt: nhentai
:align: center
.. image:: ./images/viewer.png?raw=true
:alt: nhentai
:align: center
============
あなたも変態
============
.. image:: ./images/image.jpg?raw=true
:alt: nhentai
:align: center
.. |travis| image:: https://travis-ci.org/RicterZ/nhentai.svg?branch=master
:target: https://travis-ci.org/RicterZ/nhentai
.. |pypi| image:: https://img.shields.io/pypi/dm/nhentai.svg
:target: https://pypi.org/project/nhentai/
.. |license| image:: https://img.shields.io/github/license/ricterz/nhentai.svg
:target: https://github.com/RicterZ/nhentai/blob/master/LICENSE

doujinshi.txt (new file, 5 added lines)

@@ -0,0 +1,5 @@
184212
204944
222460
244502
261909
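
This fixture pairs with the --file option documented in README.rst above; a plausible invocation (output path assumed) would be:

    nhentai --file=doujinshi.txt --output=/tmp/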

images/image.jpg (changed from executable to normal file; 34 KiB, content unchanged)

images/viewer.png (new binary file, 311 KiB; binary content not shown)

nhentai/__init__.py

@@ -1,3 +1,3 @@
-__version__ = '0.2.7'
-__author__ = 'Ricter'
+__version__ = '0.4.6'
+__author__ = 'RicterZ'
 __email__ = 'ricterzheng@gmail.com'

nhentai/cmdline.py

@@ -1,57 +1,135 @@
 # coding: utf-8
 from __future__ import print_function
+import os
 import sys
+import json
 from optparse import OptionParser
 try:
     from itertools import ifilter as filter
 except ImportError:
     pass

 import nhentai.constant as constant
-from nhentai.utils import urlparse
+from nhentai import __version__
+from nhentai.utils import urlparse, generate_html, generate_main_html, DB
 from nhentai.logger import logger

 try:
-    reload(sys)
-    sys.setdefaultencoding(sys.stdin.encoding)
+    if sys.version_info < (3, 0, 0):
+        import codecs
+        import locale
+        sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout)
+        sys.stderr = codecs.getwriter(locale.getpreferredencoding())(sys.stderr)
 except NameError:
     # python3
     pass


 def banner():
-    logger.info(u'''nHentai: あなたも変態。 いいね?
+    logger.info(u'''nHentai ver %s: あなたも変態。 いいね?
   _   _            _        _
 _ __ | | | | ___ _ __ | |_ __ _(_)
 | '_ \| |_| |/ _ \ '_ \| __/ _` | |
 | | | |  _  |  __/ | | | || (_| | |
 |_| |_|_| |_|\___|_| |_|\__\__,_|_|
-''')
+''' % __version__)


+def load_config():
+    if not os.path.exists(constant.NHENTAI_CONFIG_FILE):
+        return
+
+    try:
+        with open(constant.NHENTAI_CONFIG_FILE, 'r') as f:
+            constant.CONFIG = json.load(f)
+    except json.JSONDecodeError:
+        logger.error('Failed to load config file.')
+        write_config()
+
+
+def write_config():
+    if not os.path.exists(constant.NHENTAI_HOME):
+        os.mkdir(constant.NHENTAI_HOME)
+
+    with open(constant.NHENTAI_CONFIG_FILE, 'w') as f:
+        f.write(json.dumps(constant.CONFIG))
+
+
 def cmd_parser():
+    load_config()
+
     parser = OptionParser('\n  nhentai --search [keyword] --download'
                           '\n  NHENTAI=http://h.loli.club nhentai --id [ID ...]'
+                          '\n  nhentai --file [filename]'
                           '\n\nEnvironment Variable:\n'
                           '  NHENTAI                 nhentai mirror url')
-    parser.add_option('--download', dest='is_download', action='store_true', help='download doujinshi (for search result)')
-    parser.add_option('--show-info', dest='is_show', action='store_true', help='just show the doujinshi information')
+    # operation options
+    parser.add_option('--download', '-D', dest='is_download', action='store_true',
+                      help='download doujinshi (for search results)')
+    parser.add_option('--show', '-S', dest='is_show', action='store_true', help='just show the doujinshi information')
+
+    # doujinshi options
     parser.add_option('--id', type='string', dest='id', action='store', help='doujinshi ids set, e.g. 1,2,3')
-    parser.add_option('--search', type='string', dest='keyword', action='store', help='search doujinshi by keyword')
-    parser.add_option('--page', type='int', dest='page', action='store', default=1,
-                      help='page number of search result')
-    parser.add_option('--tags', type='string', dest='tags', action='store', help='download doujinshi by tags')
-    parser.add_option('--output', type='string', dest='output_dir', action='store', default='',
+    parser.add_option('--search', '-s', type='string', dest='keyword', action='store',
+                      help='search doujinshi by keyword')
+    parser.add_option('--favorites', '-F', action='store_true', dest='favorites',
+                      help='list or download your favorites.')
+
+    # page options
+    parser.add_option('--page-all', dest='page_all', action='store_true', default=False,
+                      help='all search results')
+    parser.add_option('--page', '--page-range', type='string', dest='page', action='store', default='',
+                      help='page number of search results. e.g. 1,2-5,14')
+    parser.add_option('--sorting', dest='sorting', action='store', default='recent',
+                      help='sorting of doujinshi (recent / popular / popular-[today|week])',
+                      choices=['recent', 'popular', 'popular-today', 'popular-week'])
+
+    # download options
+    parser.add_option('--output', '-o', type='string', dest='output_dir', action='store', default='./',
                       help='output dir')
     parser.add_option('--threads', '-t', type='int', dest='threads', action='store', default=5,
-                      help='thread count of download doujinshi')
-    parser.add_option('--timeout', type='int', dest='timeout', action='store', default=30,
-                      help='timeout of download doujinshi')
+                      help='thread count for downloading doujinshi')
+    parser.add_option('--timeout', '-T', type='int', dest='timeout', action='store', default=30,
+                      help='timeout for downloading doujinshi')
+    parser.add_option('--delay', '-d', type='int', dest='delay', action='store', default=0,
+                      help='slow down between downloading every doujinshi')
     parser.add_option('--proxy', type='string', dest='proxy', action='store', default='',
-                      help='use proxy, example: http://127.0.0.1:1080')
+                      help='store a proxy, for example: -p \'http://127.0.0.1:1080\'')
+    parser.add_option('--file', '-f', type='string', dest='file', action='store', help='read gallery IDs from file.')
+    parser.add_option('--format', type='string', dest='name_format', action='store',
+                      help='format the saved folder name', default='[%i][%a][%t]')
+
+    # generate options
+    parser.add_option('--html', dest='html_viewer', action='store_true',
+                      help='generate a html viewer at current directory')
+    parser.add_option('--no-html', dest='is_nohtml', action='store_true',
+                      help='don\'t generate HTML after downloading')
+    parser.add_option('--gen-main', dest='main_viewer', action='store_true',
+                      help='generate a main viewer contain all the doujin in the folder')
+    parser.add_option('--cbz', '-C', dest='is_cbz', action='store_true',
+                      help='generate Comic Book CBZ File')
+    parser.add_option('--pdf', '-P', dest='is_pdf', action='store_true',
+                      help='generate PDF file')
+    parser.add_option('--rm-origin-dir', dest='rm_origin_dir', action='store_true', default=False,
+                      help='remove downloaded doujinshi dir when generated CBZ or PDF file.')
+
+    # nhentai options
+    parser.add_option('--cookie', type='str', dest='cookie', action='store',
+                      help='set cookie of nhentai to bypass Google recaptcha')
+    parser.add_option('--language', type='str', dest='language', action='store',
+                      help='set default language to parse doujinshis')
+    parser.add_option('--clean-language', dest='clean_language', action='store_true', default=False,
+                      help='set DEFAULT as language to parse doujinshis')
+    parser.add_option('--save-download-history', dest='is_save_download_history', action='store_true',
+                      default=False, help='save downloaded doujinshis, whose will be skipped if you re-download them')
+    parser.add_option('--clean-download-history', action='store_true', default=False, dest='clean_download_history',
+                      help='clean download history')

     try:
-        sys.argv = list(map(lambda x: unicode(x.decode(sys.stdin.encoding)), sys.argv))
+        sys.argv = [unicode(i.decode(sys.stdin.encoding)) for i in sys.argv]
+        print()
     except (NameError, TypeError):
         pass
     except UnicodeDecodeError:

@@ -59,35 +137,78 @@ def cmd_parser():
     args, _ = parser.parse_args(sys.argv[1:])

-    if args.tags:
-        logger.warning('`--tags` is under construction')
+    if args.html_viewer:
+        generate_html()
         exit(0)

+    if args.main_viewer and not args.id and not args.keyword and not args.favorites:
+        generate_main_html()
+        exit(0)
+
+    if args.clean_download_history:
+        with DB() as db:
+            db.clean_all()
+
+        logger.info('Download history cleaned.')
+        exit(0)
+
+    # --- set config ---
+    if args.cookie is not None:
+        constant.CONFIG['cookie'] = args.cookie
+        logger.info('Cookie saved.')
+        write_config()
+        exit(0)
+
+    if args.language is not None:
+        constant.CONFIG['language'] = args.language
+        logger.info('Default language now set to \'{0}\''.format(args.language))
+        write_config()
+        exit(0)
+        # TODO: search without language
+
+    if args.proxy:
+        proxy_url = urlparse(args.proxy)
+        if not args.proxy == '' and proxy_url.scheme not in ('http', 'https'):
+            logger.error('Invalid protocol \'{0}\' of proxy, ignored'.format(proxy_url.scheme))
+            exit(0)
+        else:
+            constant.CONFIG['proxy'] = {
+                'http': args.proxy,
+                'https': args.proxy,
+            }
+            logger.info('Proxy now set to \'{0}\'.'.format(args.proxy))
+            write_config()
+            exit(0)
+    # --- end set config ---
+
+    if args.favorites:
+        if not constant.CONFIG['cookie']:
+            logger.warning('Cookie has not been set, please use `nhentai --cookie \'COOKIE\'` to set it.')
+            exit(1)
+
     if args.id:
-        _ = map(lambda id: id.strip(), args.id.split(','))
-        args.id = set(map(int, filter(lambda id: id.isdigit(), _)))
+        _ = [i.strip() for i in args.id.split(',')]
+        args.id = set(int(i) for i in _ if i.isdigit())

-    if (args.is_download or args.is_show) and not args.id and not args.keyword:
+    if args.file:
+        with open(args.file, 'r') as f:
+            _ = [i.strip() for i in f.readlines()]
+            args.id = set(int(i) for i in _ if i.isdigit())
+
+    if (args.is_download or args.is_show) and not args.id and not args.keyword and not args.favorites:
         logger.critical('Doujinshi id(s) are required for downloading')
         parser.print_help()
-        exit(0)
+        exit(1)

-    if not args.keyword and not args.id:
+    if not args.keyword and not args.id and not args.favorites:
         parser.print_help()
-        exit(0)
+        exit(1)

     if args.threads <= 0:
         args.threads = 1
     elif args.threads > 15:
         logger.critical('Maximum number of used threads is 15')
-        exit(0)
-
-    if args.proxy:
-        proxy_url = urlparse(args.proxy)
-        if proxy_url.scheme not in ('http', 'https'):
-            logger.error('Invalid protocol \'{0}\' of proxy, ignored'.format(proxy_url.scheme))
-        else:
-            constant.PROXY = {proxy_url.scheme: args.proxy}
+        exit(1)

     return args
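
One practical consequence of the hunk above: --cookie, --language and --proxy now each write the config file and exit immediately, so initial setup is a sequence of one-shot commands (values are placeholders):

    nhentai --cookie 'csrftoken=TOKEN; sessionid=ID'
    nhentai --language english
    nhentai --proxy 'http://127.0.0.1:1080'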

nhentai/command.py

@@ -1,75 +1,104 @@
 #!/usr/bin/env python2.7
 # coding: utf-8
 from __future__ import unicode_literals, print_function
+import json
 import os
 import signal
 import platform
+import time

+from nhentai import constant
 from nhentai.cmdline import cmd_parser, banner
-from nhentai.parser import doujinshi_parser, search_parser, print_doujinshi
+from nhentai.parser import doujinshi_parser, search_parser, print_doujinshi, favorites_parser
 from nhentai.doujinshi import Doujinshi
 from nhentai.downloader import Downloader
 from nhentai.logger import logger
-from nhentai.constant import BASE_URL
+from nhentai.constant import NHENTAI_CONFIG_FILE, BASE_URL
+from nhentai.utils import generate_html, generate_cbz, generate_main_html, generate_pdf, \
+    paging, check_cookie, signal_handler, DB


 def main():
     banner()
-    logger.info('Using mirror: {0}'.format(BASE_URL))
     options = cmd_parser()
+    logger.info('Using mirror: {0}'.format(BASE_URL))
+
+    # CONFIG['proxy'] will be changed after cmd_parser()
+    if constant.CONFIG['proxy']:
+        logger.info('Using proxy: {0}'.format(constant.CONFIG['proxy']))
+
+    # check your cookie
+    check_cookie()
+
+    doujinshis = []
     doujinshi_ids = []
     doujinshi_list = []

-    if options.keyword:
-        doujinshis = search_parser(options.keyword, options.page)
-        print_doujinshi(doujinshis)
-        if options.is_download:
-            doujinshi_ids = map(lambda d: d['id'], doujinshis)
-    else:
+    page_list = paging(options.page)
+
+    if options.favorites:
+        if not options.is_download:
+            logger.warning('You do not specify --download option')
+
+        doujinshis = favorites_parser(page=page_list)
+
+    elif options.keyword:
+        if constant.CONFIG['language']:
+            logger.info('Using default language: {0}'.format(constant.CONFIG['language']))
+            options.keyword += ' language:{}'.format(constant.CONFIG['language'])
+        doujinshis = search_parser(options.keyword, sorting=options.sorting, page=page_list,
+                                   is_page_all=options.page_all)
+
+    elif not doujinshi_ids:
         doujinshi_ids = options.id

+    print_doujinshi(doujinshis)
+    if options.is_download and doujinshis:
+        doujinshi_ids = [i['id'] for i in doujinshis]
+
+    if options.is_save_download_history:
+        with DB() as db:
+            data = map(int, db.get_all())
+
+        doujinshi_ids = list(set(doujinshi_ids) - set(data))
+
     if doujinshi_ids:
-        for id in doujinshi_ids:
-            doujinshi_info = doujinshi_parser(id)
-            doujinshi_list.append(Doujinshi(**doujinshi_info))
-    else:
-        exit(0)
+        for i, id_ in enumerate(doujinshi_ids):
+            if options.delay:
+                time.sleep(options.delay)
+
+            doujinshi_info = doujinshi_parser(id_)
+
+            if doujinshi_info:
+                doujinshi_list.append(Doujinshi(name_format=options.name_format, **doujinshi_info))
+
+            if (i + 1) % 10 == 0:
+                logger.info('Progress: %d / %d' % (i + 1, len(doujinshi_ids)))

     if not options.is_show:
-        downloader = Downloader(path=options.output_dir,
-                                thread=options.threads, timeout=options.timeout)
+        downloader = Downloader(path=options.output_dir, size=options.threads,
+                                timeout=options.timeout, delay=options.delay)

         for doujinshi in doujinshi_list:
             doujinshi.downloader = downloader
             doujinshi.download()
+            if options.is_save_download_history:
+                with DB() as db:
+                    db.add_one(doujinshi.id)

-        image_html = ''
-        previous = ''
-
-        doujinshi_dir = os.path.join(options.output_dir, str(doujinshi.id))
-        file_list = os.listdir(doujinshi_dir)
-
-        for index, image in enumerate(file_list):
-            try:
-                next_ = file_list[file_list.index(image) + 1]
-            except IndexError:
-                next_ = ''
-
-            image_html += '<img src="{0}" class="image-item {1}" attr-prev="{2}" attr-next="{3}">\n'\
-                .format(image, 'current' if index == 0 else '', previous, next_)
-            previous = image
-
-        with open(os.path.join(os.path.dirname(__file__), 'doujinshi.html'), 'r') as template:
-            html = template.read()
-
-        data = html.format(TITLE=doujinshi.name, IMAGES=image_html)
-        with open(os.path.join(doujinshi_dir, 'index.html'), 'w') as f:
-            f.write(data)
-
-        logger.log(15, 'HTML Viewer has been write to \'{0}\''.format(os.path.join(doujinshi_dir, 'index.html')))
+            if not options.is_nohtml and not options.is_cbz and not options.is_pdf:
+                generate_html(options.output_dir, doujinshi)
+            elif options.is_cbz:
+                generate_cbz(options.output_dir, doujinshi, options.rm_origin_dir)
+            elif options.is_pdf:
+                generate_pdf(options.output_dir, doujinshi, options.rm_origin_dir)
+
+        if options.main_viewer:
+            generate_main_html(options.output_dir)

         if not platform.system() == 'Windows':
-            logger.log(15, '🍺 All done.')
+            logger.log(15, '🍻 All done.')
         else:
             logger.log(15, 'All done.')

@@ -77,11 +106,6 @@ def main():
         [doujinshi.show() for doujinshi in doujinshi_list]

-def signal_handler(signal, frame):
-    logger.error('Ctrl-C signal received. Quit.')
-    exit(1)
-
-
 signal.signal(signal.SIGINT, signal_handler)

 if __name__ == '__main__':

nhentai/constant.py

@@ -1,14 +1,37 @@
 # coding: utf-8
 from __future__ import unicode_literals, print_function
 import os
-from nhentai.utils import urlparse
+import tempfile
+
+try:
+    from urlparse import urlparse
+except ImportError:
+    from urllib.parse import urlparse

 BASE_URL = os.getenv('NHENTAI', 'https://nhentai.net')

+__api_suspended_DETAIL_URL = '%s/api/gallery' % BASE_URL
+
 DETAIL_URL = '%s/g' % BASE_URL
-SEARCH_URL = '%s/search/' % BASE_URL
+SEARCH_URL = '%s/api/galleries/search' % BASE_URL
+
+TAG_API_URL = '%s/api/galleries/tagged' % BASE_URL
+LOGIN_URL = '%s/login/' % BASE_URL
+CHALLENGE_URL = '%s/challenge' % BASE_URL
+FAV_URL = '%s/favorites/' % BASE_URL

 u = urlparse(BASE_URL)
 IMAGE_URL = '%s://i.%s/galleries' % (u.scheme, u.hostname)

-PROXY = {}
+NHENTAI_HOME = os.path.join(os.getenv('HOME', tempfile.gettempdir()), '.nhentai')
+NHENTAI_HISTORY = os.path.join(NHENTAI_HOME, 'history.sqlite3')
+NHENTAI_CONFIG_FILE = os.path.join(NHENTAI_HOME, 'config.json')
+
+CONFIG = {
+    'proxy': {},
+    'cookie': '',
+    'language': '',
+}
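
Given these paths and defaults, the file written by write_config() in cmdline.py would plausibly look like this after setting a cookie and a language (values are placeholders):

    $ cat ~/.nhentai/config.json
    {"proxy": {}, "cookie": "csrftoken=TOKEN; sessionid=ID", "language": "english"}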

nhentai/doujinshi.html (deleted)

@@ -1,126 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>{TITLE}</title>
<style>
html, body {{
background-color: #e8e6e6;
height: 100%;
padding: 0;
margin: 0;
overflow: hidden;
}}
.container img {{
display: block;
width: 100%;
margin: 30px 0;
padding: 10px;
cursor: pointer;
}}
.container {{
height: 100%;
overflow: scroll;
background: #e8e6e6;
width: 200px;
padding: 30px;
float: left;
}}
.image {{
margin-left: 260px;
height: 100%;
background: #222;
text-align: center;
}}
.image img {{
height: 100%;
}}
.i a {{
display: block;
position: absolute;
top: 0;
width: 50%;
height: 100%;
}}
.i {{
position: relative;
height: 100%;
}}
.current {{
background: #BBB;
border-radius: 10px;
}}
</style>
<script>
function cursorfocus(elem) {{
var container = document.getElementsByClassName('container')[0];
container.scrollTop = elem.offsetTop - 500;
}}
function getImage(type) {{
var current = document.getElementsByClassName("current")[0];
current.className = "image-item";
var img_src = type == 1 ? current.getAttribute('attr-next') : current.getAttribute('attr-prev');
if (img_src === "") {{
img_src = current.src;
}}
var img_list = document.getElementsByClassName("image-item");
for (i=0; i<img_list.length; i++) {{
if (img_list[i].src.endsWith(img_src)) {{
img_list[i].className = "image-item current";
cursorfocus(img_list[i]);
break;
}}
}}
var display = document.getElementById("dest");
display.src = img_src;
display.focus();
}}
</script>
</head>
<body>
<div class="container">
{IMAGES}</div>
<div class="image">
<div class="i">
<img src="" id="dest">
<a href="javascript:getImage(-1)" style="left: 0;"></a>
<a href="javascript:getImage(1)" style="left: 50%;"></a>
</div>
</div>
</body>
<script>
var img_list = document.getElementsByClassName("image-item");
var display = document.getElementById("dest");
display.src = img_list[0].src;
for (var i = 0; i < img_list.length; i++) {{
img_list[i].addEventListener('click', function() {{
var current = document.getElementsByClassName("current")[0];
current.className = "image-item";
this.className = "image-item current";
var display = document.getElementById("dest");
display.src = this.src;
display.focus();
}}, false);
}}
document.onkeypress = function(e) {{
if (e.keyCode == 32) {{
getImage(1);
}}
}}
</script>
</html>

nhentai/doujinshi.py

@@ -5,6 +5,14 @@ from future.builtins import range
 from nhentai.constant import DETAIL_URL, IMAGE_URL
 from nhentai.logger import logger
+from nhentai.utils import format_filename
+
+
+EXT_MAP = {
+    'j': 'jpg',
+    'p': 'png',
+    'g': 'gif',
+}


 class DoujinshiInfo(dict):
@@ -19,7 +27,7 @@ class DoujinshiInfo(dict):

 class Doujinshi(object):
-    def __init__(self, name=None, id=None, img_id=None, ext='jpg', pages=0, **kwargs):
+    def __init__(self, name=None, id=None, img_id=None, ext='', pages=0, name_format='[%i][%a][%t]', **kwargs):
         self.name = name
         self.id = id
         self.img_id = img_id
@@ -29,16 +37,23 @@ class Doujinshi(object):
         self.url = '%s/%d' % (DETAIL_URL, self.id)
         self.info = DoujinshiInfo(**kwargs)

+        name_format = name_format.replace('%i', str(self.id))
+        name_format = name_format.replace('%a', self.info.artists)
+        name_format = name_format.replace('%t', self.name)
+        name_format = name_format.replace('%s', self.info.subtitle)
+        self.filename = format_filename(name_format)
+
     def __repr__(self):
         return '<Doujinshi: {0}>'.format(self.name)

     def show(self):
         table = [
+            ["Parodies", self.info.parodies],
             ["Doujinshi", self.name],
             ["Subtitle", self.info.subtitle],
             ["Characters", self.info.characters],
             ["Authors", self.info.artists],
-            ["Language", self.info.language],
+            ["Languages", self.info.languages],
             ["Tags", self.info.tags],
             ["URL", self.url],
             ["Pages", self.pages],
@@ -46,14 +61,25 @@ class Doujinshi(object):
         logger.info(u'Print doujinshi information of {0}\n{1}'.format(self.id, tabulate(table)))

     def download(self):
-        logger.info('Start download doujinshi: %s' % self.name)
+        logger.info('Starting to download doujinshi: %s' % self.name)
         if self.downloader:
             download_queue = []
-            for i in range(1, self.pages + 1):
-                download_queue.append('%s/%d/%d.%s' % (IMAGE_URL, int(self.img_id), i, self.ext))
-            self.downloader.download(download_queue, self.id)
+            if len(self.ext) != self.pages:
+                logger.warning('Page count and ext count do not equal')
+
+            for i in range(1, min(self.pages, len(self.ext)) + 1):
+                download_queue.append('%s/%d/%d.%s' % (IMAGE_URL, int(self.img_id), i, self.ext[i-1]))
+
+            self.downloader.download(download_queue, self.filename)
+
+            '''
+            for i in range(len(self.ext)):
+                download_queue.append('%s/%d/%d.%s' % (IMAGE_URL, int(self.img_id), i+1, EXT_MAP[self.ext[i]]))
+            '''
+
         else:
-            logger.critical('Downloader has not be loaded')
+            logger.critical('Downloader has not been loaded')

 if __name__ == '__main__':

nhentai/downloader.py

@ -1,9 +1,15 @@
# coding: utf- # coding: utf-
from __future__ import unicode_literals, print_function from __future__ import unicode_literals, print_function
import multiprocessing
import signal
from future.builtins import str as text from future.builtins import str as text
import sys
import os import os
import requests import requests
import threadpool import time
try: try:
from urllib.parse import urlparse from urllib.parse import urlparse
except ImportError: except ImportError:
@ -13,33 +19,55 @@ from nhentai.logger import logger
from nhentai.parser import request from nhentai.parser import request
from nhentai.utils import Singleton from nhentai.utils import Singleton
requests.packages.urllib3.disable_warnings() requests.packages.urllib3.disable_warnings()
semaphore = multiprocessing.Semaphore(1)
class NhentaiImageNotExistException(Exception): class NHentaiImageNotExistException(Exception):
pass pass
class Downloader(Singleton): class Downloader(Singleton):
def __init__(self, path='', thread=1, timeout=30): def __init__(self, path='', size=5, timeout=30, delay=0):
if not isinstance(thread, (int, )) or thread < 1 or thread > 15: self.size = size
raise ValueError('Invalid threads count')
self.path = str(path) self.path = str(path)
self.thread_count = thread
self.threads = []
self.timeout = timeout self.timeout = timeout
self.delay = delay
def _download(self, url, folder='', filename='', retried=0): def download_(self, url, folder='', filename='', retried=0):
logger.info('Start downloading: {0} ...'.format(url)) if self.delay:
time.sleep(self.delay)
logger.info('Starting to download {0} ...'.format(url))
filename = filename if filename else os.path.basename(urlparse(url).path) filename = filename if filename else os.path.basename(urlparse(url).path)
base_filename, extension = os.path.splitext(filename) base_filename, extension = os.path.splitext(filename)
try: try:
if os.path.exists(os.path.join(folder, base_filename.zfill(3) + extension)):
logger.warning('File: {0} exists, ignoring'.format(os.path.join(folder, base_filename.zfill(3) +
extension)))
return 1, url
response = None
with open(os.path.join(folder, base_filename.zfill(3) + extension), "wb") as f: with open(os.path.join(folder, base_filename.zfill(3) + extension), "wb") as f:
i = 0
while i < 10:
try:
response = request('get', url, stream=True, timeout=self.timeout) response = request('get', url, stream=True, timeout=self.timeout)
if response.status_code != 200: if response.status_code != 200:
raise NhentaiImageNotExistException raise NHentaiImageNotExistException
except NHentaiImageNotExistException as e:
raise e
except Exception as e:
i += 1
if not i < 10:
logger.critical(str(e))
return 0, None
continue
break
length = response.headers.get('content-length') length = response.headers.get('content-length')
if length is None: if length is None:
f.write(response.content) f.write(response.content)
@ -50,51 +78,79 @@ class Downloader(Singleton):
except (requests.HTTPError, requests.Timeout) as e: except (requests.HTTPError, requests.Timeout) as e:
if retried < 3: if retried < 3:
logger.warning('Warning: {0}, retrying({1}) ...'.format(str(e), retried)) logger.warning('Warning: {0}, retrying({1}) ...'.format(str(e), retried))
return 0, self._download(url=url, folder=folder, filename=filename, retried=retried+1) return 0, self.download_(url=url, folder=folder, filename=filename, retried=retried+1)
else: else:
return 0, None return 0, None
except NhentaiImageNotExistException as e: except NHentaiImageNotExistException as e:
os.remove(os.path.join(folder, base_filename.zfill(3) + extension)) os.remove(os.path.join(folder, base_filename.zfill(3) + extension))
return -1, url return -1, url
except Exception as e: except Exception as e:
import traceback
traceback.print_stack()
logger.critical(str(e)) logger.critical(str(e))
return 0, None return 0, None
except KeyboardInterrupt:
return -3, None
return 1, url return 1, url
def _download_callback(self, request, result): def _download_callback(self, result):
result, data = result result, data = result
if result == 0: if result == 0:
logger.critical('fatal errors occurred, quit.') logger.warning('fatal errors occurred, ignored')
exit(1) # exit(1)
elif result == -1: elif result == -1:
logger.warning('url {} return status code 404'.format(data)) logger.warning('url {} return status code 404'.format(data))
elif result == -2:
logger.warning('Ctrl-C pressed, exiting sub processes ...')
elif result == -3:
# workers wont be run, just pass
pass
else: else:
logger.log(15, '{0} download successfully'.format(data)) logger.log(15, '{0} downloaded successfully'.format(data))
def download(self, queue, folder=''):
    if not isinstance(folder, text):
        folder = str(folder)

    if self.path:
        folder = os.path.join(self.path, folder)

    if not os.path.exists(folder):
        logger.warn('Path \'{0}\' does not exist, creating.'.format(folder))
        try:
            os.makedirs(folder)
        except EnvironmentError as e:
            logger.critical('{0}'.format(str(e)))
            exit(1)
    else:
        logger.warn('Path \'{0}\' already exists.'.format(folder))

    queue = [(self, url, folder) for url in queue]

    pool = multiprocessing.Pool(self.size, init_worker)
    [pool.apply_async(download_wrapper, args=item) for item in queue]

    pool.close()
    pool.join()
def download_wrapper(obj, url, folder=''):
    if sys.platform == 'darwin' or semaphore.get_value():
        return Downloader.download_(obj, url=url, folder=folder)
    else:
        return -3, None


def init_worker():
    signal.signal(signal.SIGINT, subprocess_signal)


def subprocess_signal(signal, frame):
    if semaphore.acquire(timeout=1):
        logger.warning('Ctrl-C pressed, exiting sub processes ...')
    raise KeyboardInterrupt
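The pool/semaphore pattern above is easy to miss in diff form, so here is a minimal, self-contained sketch of the same idea. The job function `work` and the pool size are illustrative, not part of nhentai, and a fork-based start method is assumed; note that `Semaphore.get_value()` raises NotImplementedError on macOS, which is exactly why the code above special-cases `darwin`.

import multiprocessing
import signal
import time

semaphore = multiprocessing.Semaphore(1)


def init_worker():
    # Each worker routes SIGINT to a handler instead of dying mid-write.
    signal.signal(signal.SIGINT, worker_signal)


def worker_signal(signum, frame):
    # Only the first worker to grab the semaphore logs; all of them stop.
    if semaphore.acquire(timeout=1):
        print('Ctrl-C pressed, exiting sub processes ...')
    raise KeyboardInterrupt


def work(n):
    if not semaphore.get_value():
        return -3, None  # shutdown was flagged, skip remaining jobs
    time.sleep(0.1)
    return 1, n


if __name__ == '__main__':
    pool = multiprocessing.Pool(4, init_worker)
    jobs = [pool.apply_async(work, (i,)) for i in range(20)]
    pool.close()
    pool.join()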

nhentai/logger.py

@@ -104,6 +104,9 @@ class ColorizingStreamHandler(logging.StreamHandler):
text = parts.pop(0)

if text:
    if sys.version_info < (3, 0, 0):
        write(text.encode('utf-8'))
    else:
        write(text)

if parts:

nhentai/parser.py

@@ -1,20 +1,107 @@
# coding: utf-8
from __future__ import unicode_literals, print_function

import os
import re
import time
from bs4 import BeautifulSoup
from tabulate import tabulate

import nhentai.constant as constant
from nhentai.utils import request
from nhentai.logger import logger
def _get_csrf_token(content):
    html = BeautifulSoup(content, 'html.parser')
    csrf_token_elem = html.find('input', attrs={'name': 'csrfmiddlewaretoken'})
    if not csrf_token_elem:
        raise Exception('Cannot find csrf token to login')
    return csrf_token_elem.attrs['value']
def login(username, password):
    logger.warning('This feature is deprecated, please use --cookie to set your cookie.')
    csrf_token = _get_csrf_token(request('get', url=constant.LOGIN_URL).text)
    if os.getenv('DEBUG'):
        logger.info('Getting CSRF token ...')

    if os.getenv('DEBUG'):
        logger.info('CSRF token is {}'.format(csrf_token))

    login_dict = {
        'csrfmiddlewaretoken': csrf_token,
        'username_or_email': username,
        'password': password,
    }
    resp = request('post', url=constant.LOGIN_URL, data=login_dict)

    if 'You\'re loading pages way too quickly.' in resp.text or 'Really, slow down' in resp.text:
        csrf_token = _get_csrf_token(resp.text)
        resp = request('post', url=resp.url, data={'csrfmiddlewaretoken': csrf_token, 'next': '/'})

    if 'Invalid username/email or password' in resp.text:
        logger.error('Login failed, please check your username and password')
        exit(1)

    if 'You\'re loading pages way too quickly.' in resp.text or 'Really, slow down' in resp.text:
        logger.error('Using nhentai --cookie \'YOUR_COOKIE_HERE\' to save your Cookie.')
        exit(2)
def _get_title_and_id(response):
    result = []
    html = BeautifulSoup(response, 'html.parser')
    doujinshi_search_result = html.find_all('div', attrs={'class': 'gallery'})
    for doujinshi in doujinshi_search_result:
        doujinshi_container = doujinshi.find('div', attrs={'class': 'caption'})
        title = doujinshi_container.text.strip()
        title = title if len(title) < 85 else title[:82] + '...'
        id_ = re.search('/g/(\d+)/', doujinshi.a['href']).group(1)
        result.append({'id': id_, 'title': title})

    return result
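A toy check of _get_title_and_id against a minimal gallery snippet; the markup is a hand-written stand-in for nhentai's real search page, and the id is arbitrary:

snippet = '''
<div class="gallery">
  <a href="/g/12345/"><div class="caption">Example Doujinshi Title</div></a>
</div>
'''
print(_get_title_and_id(snippet))
# [{'id': '12345', 'title': 'Example Doujinshi Title'}]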
def favorites_parser(page=None):
    result = []
    html = BeautifulSoup(request('get', constant.FAV_URL).content, 'html.parser')
    count = html.find('span', attrs={'class': 'count'})
    if not count:
        logger.error("Can't get your number of favorited doujins. Did the login fail?")
        return []

    count = int(count.text.strip('(').strip(')').replace(',', ''))
    if count == 0:
        logger.warning('No favorites found')
        return []
    pages = int(count / 25)

    if page:
        page_range_list = page
    else:
        if pages:
            pages += 1 if count % (25 * pages) else 0
        else:
            pages = 1

        logger.info('You have %d favorites in %d pages.' % (count, pages))

        if os.getenv('DEBUG'):
            pages = 1

        page_range_list = range(1, pages + 1)

    for page in page_range_list:
        try:
            logger.info('Getting doujinshi ids of page %d' % page)
            resp = request('get', constant.FAV_URL + '?page=%d' % page).content
            result.extend(_get_title_and_id(resp))
        except Exception as e:
            logger.error('Error: %s, continue', str(e))

    return result
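The page-count arithmetic above is a manual ceiling division over nhentai's 25-favorites-per-page layout; a quick worked example (the count is chosen arbitrarily):

import math

count = 77                                   # favorited doujinshi
pages = int(count / 25)                      # 3 full pages
pages += 1 if count % (25 * pages) else 0    # 77 % 75 == 2, so add a partial 4th page
assert pages == math.ceil(count / 25) == 4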
def doujinshi_parser(id_):

@@ -28,10 +115,17 @@ def doujinshi_parser(id_):
    url = '{0}/{1}/'.format(constant.DETAIL_URL, id_)

    try:
        response = request('get', url)
        if response.status_code in (200,):
            response = response.content
        else:
            logger.debug('Slow down and retry ({}) ...'.format(id_))
            time.sleep(1)
            return doujinshi_parser(str(id_))
    except Exception as e:
        logger.warn('Error: {}, ignored'.format(str(e)))
        return None

    html = BeautifulSoup(response, 'html.parser')
    doujinshi_info = html.find('div', attrs={'id': 'info'})
@@ -43,53 +137,46 @@ def doujinshi_parser(id_):
    doujinshi['subtitle'] = subtitle.text if subtitle else ''

    doujinshi_cover = html.find('div', attrs={'id': 'cover'})
    img_id = re.search('/galleries/([\d]+)/cover\.(jpg|png|gif)$', doujinshi_cover.a.img.attrs['data-src'])

    ext = []
    for i in html.find_all('div', attrs={'class': 'thumb-container'}):
        _, ext_name = os.path.basename(i.img.attrs['data-src']).rsplit('.', 1)
        ext.append(ext_name)

    if not img_id:
        logger.critical('Tried to get image id but failed')
        exit(1)

    doujinshi['img_id'] = img_id.group(1)
    doujinshi['ext'] = ext
    for _ in doujinshi_info.find_all('div', class_='tag-container field-name'):
        if re.search('Pages:', _.text):
            pages = _.find('span', class_='name').string
    doujinshi['pages'] = int(pages)

    # gain information of the doujinshi
    information_fields = doujinshi_info.find_all('div', attrs={'class': 'field-name'})
    needed_fields = ['Characters', 'Artists', 'Languages', 'Tags', 'Parodies', 'Groups', 'Categories']
    for field in information_fields:
        field_name = field.contents[0].strip().strip(':')
        if field_name in needed_fields:
            data = [sub_field.find('span', attrs={'class': 'name'}).contents[0].strip() for sub_field in
                    field.find_all('a', attrs={'class': 'tag'})]
            doujinshi[field_name.lower()] = ', '.join(data)

    time_field = doujinshi_info.find('time')
    if time_field.has_attr('datetime'):
        doujinshi['date'] = time_field['datetime']

    return doujinshi
def old_search_parser(keyword, sorting='date', page=1):
    logger.debug('Searching doujinshis of keyword {0}'.format(keyword))
    response = request('get', url=constant.SEARCH_URL, params={'q': keyword, 'page': page, 'sort': sorting}).content

    result = _get_title_and_id(response)
    if not result:
        logger.warn('No results found for keyword {}'.format(keyword))
@@ -101,8 +188,97 @@ def print_doujinshi(doujinshi_list):
        return

    doujinshi_list = [(i['id'], i['title']) for i in doujinshi_list]
    headers = ['id', 'doujinshi']
    logger.info('Search Result || Found %i doujinshis \n' % len(doujinshi_list) +
                tabulate(tabular_data=doujinshi_list, headers=headers, tablefmt='rst'))
def search_parser(keyword, sorting, page, is_page_all=False):
    # keyword = '+'.join([i.strip().replace(' ', '-').lower() for i in keyword.split(',')])
    result = []
    if not page:
        page = [1]

    if is_page_all:
        url = request('get', url=constant.SEARCH_URL, params={'query': keyword}).url
        init_response = request('get', url.replace('%2B', '+')).json()
        page = range(1, init_response['num_pages'] + 1)

    total = '/{0}'.format(page[-1]) if is_page_all else ''
    for p in page:
        i = 0
        logger.info('Searching doujinshis using keywords "{0}" on page {1}{2}'.format(keyword, p, total))
        while i < 3:
            try:
                url = request('get', url=constant.SEARCH_URL, params={'query': keyword,
                                                                      'page': p, 'sort': sorting}).url
                response = request('get', url.replace('%2B', '+')).json()
            except Exception as e:
                logger.critical(str(e))
            break

        if 'result' not in response:
            logger.warn('No result in response in page {}'.format(p))
            break

        for row in response['result']:
            title = row['title']['english']
            title = title[:85] + '..' if len(title) > 85 else title
            result.append({'id': row['id'], 'title': title})

    if not result:
        logger.warn('No results for keywords {}'.format(keyword))

    return result
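Called from the CLI layer, the new search parser takes an explicit page list; a sketch of a typical invocation (the keyword and pages here are arbitrary, and the real call site passes parsed option values):

result = search_parser('full color', sorting='popular', page=[1, 2], is_page_all=False)
print_doujinshi(result)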
def __api_suspended_doujinshi_parser(id_):
    if not isinstance(id_, (int,)) and (isinstance(id_, (str,)) and not id_.isdigit()):
        raise Exception('Doujinshi id({0}) is not valid'.format(id_))

    id_ = int(id_)
    logger.log(15, 'Fetching information of doujinshi id {0}'.format(id_))
    doujinshi = dict()
    doujinshi['id'] = id_
    url = '{0}/{1}'.format(constant.DETAIL_URL, id_)
    i = 0
    while i < 5:
        try:
            response = request('get', url).json()
        except Exception as e:
            i += 1
            if not i < 5:
                logger.critical(str(e))
                exit(1)
            continue
        break

    doujinshi['name'] = response['title']['english']
    doujinshi['subtitle'] = response['title']['japanese']
    doujinshi['img_id'] = response['media_id']
    doujinshi['ext'] = ''.join([i['t'] for i in response['images']['pages']])
    doujinshi['pages'] = len(response['images']['pages'])

    # gain information of the doujinshi
    needed_fields = ['character', 'artist', 'language', 'tag', 'parody', 'group', 'category']
    for tag in response['tags']:
        tag_type = tag['type']
        if tag_type in needed_fields:
            if tag_type == 'tag':
                if tag_type not in doujinshi:
                    doujinshi[tag_type] = {}

                tag['name'] = tag['name'].replace(' ', '-')
                tag['name'] = tag['name'].lower()
                doujinshi[tag_type][tag['name']] = tag['id']
            elif tag_type not in doujinshi:
                doujinshi[tag_type] = tag['name']
            else:
                doujinshi[tag_type] += ', ' + tag['name']

    return doujinshi
if __name__ == '__main__':
    print(doujinshi_parser("32271"))

nhentai/serializer.py (new file, 126 lines)

@@ -0,0 +1,126 @@
# coding: utf-8
import json
import os
from xml.sax.saxutils import escape


def serialize_json(doujinshi, dir):
    metadata = {'title': doujinshi.name,
                'subtitle': doujinshi.info.subtitle}
    if doujinshi.info.date:
        metadata['upload_date'] = doujinshi.info.date
    if doujinshi.info.parodies:
        metadata['parody'] = [i.strip() for i in doujinshi.info.parodies.split(',')]
    if doujinshi.info.characters:
        metadata['character'] = [i.strip() for i in doujinshi.info.characters.split(',')]
    if doujinshi.info.tags:
        metadata['tag'] = [i.strip() for i in doujinshi.info.tags.split(',')]
    if doujinshi.info.artists:
        metadata['artist'] = [i.strip() for i in doujinshi.info.artists.split(',')]
    if doujinshi.info.groups:
        metadata['group'] = [i.strip() for i in doujinshi.info.groups.split(',')]
    if doujinshi.info.languages:
        metadata['language'] = [i.strip() for i in doujinshi.info.languages.split(',')]
    metadata['category'] = doujinshi.info.categories
    metadata['URL'] = doujinshi.url
    metadata['Pages'] = doujinshi.pages

    with open(os.path.join(dir, 'metadata.json'), 'w') as f:
        json.dump(metadata, f, separators=(',', ':'))
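serialize_json only reads plain attributes off the doujinshi object, so it can be exercised with a stub. A sketch — the SimpleNamespace stand-ins mirror the attribute names used above and are not nhentai's real Doujinshi class:

from types import SimpleNamespace

info = SimpleNamespace(subtitle='', date='2020-10-11', parodies='original',
                       characters='', tags='glasses, full color', artists='someone',
                       groups='', languages='japanese, translated',
                       categories='doujinshi')
doujinshi = SimpleNamespace(name='Example Title', info=info,
                            url='https://nhentai.net/g/12345/', pages=24)

serialize_json(doujinshi, '.')  # writes ./metadata.json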
def serialize_comicxml(doujinshi, dir):
    from iso8601 import parse_date
    with open(os.path.join(dir, 'ComicInfo.xml'), 'w') as f:
        f.write('<?xml version="1.0" encoding="utf-8"?>\n')
        f.write('<ComicInfo xmlns:xsd="http://www.w3.org/2001/XMLSchema" '
                'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">\n')

        xml_write_simple_tag(f, 'Manga', 'Yes')

        xml_write_simple_tag(f, 'Title', doujinshi.name)
        xml_write_simple_tag(f, 'Summary', doujinshi.info.subtitle)
        xml_write_simple_tag(f, 'PageCount', doujinshi.pages)
        xml_write_simple_tag(f, 'URL', doujinshi.url)
        xml_write_simple_tag(f, 'NhentaiId', doujinshi.id)
        xml_write_simple_tag(f, 'Genre', doujinshi.info.categories)

        xml_write_simple_tag(f, 'BlackAndWhite', 'No' if doujinshi.info.tags and 'full color' in doujinshi.info.tags else 'Yes')

        if doujinshi.info.date:
            dt = parse_date(doujinshi.info.date)
            xml_write_simple_tag(f, 'Year', dt.year)
            xml_write_simple_tag(f, 'Month', dt.month)
            xml_write_simple_tag(f, 'Day', dt.day)
        if doujinshi.info.parodies:
            xml_write_simple_tag(f, 'Series', doujinshi.info.parodies)
        if doujinshi.info.characters:
            xml_write_simple_tag(f, 'Characters', doujinshi.info.characters)
        if doujinshi.info.tags:
            xml_write_simple_tag(f, 'Tags', doujinshi.info.tags)
        if doujinshi.info.artists:
            xml_write_simple_tag(f, 'Writer', ' & '.join([i.strip() for i in doujinshi.info.artists.split(',')]))
        # if doujinshi.info.groups:
        #     metadata['group'] = [i.strip() for i in doujinshi.info.groups.split(',')]
        if doujinshi.info.languages:
            languages = [i.strip() for i in doujinshi.info.languages.split(',')]
            xml_write_simple_tag(f, 'Translated', 'Yes' if 'translated' in languages else 'No')
            [xml_write_simple_tag(f, 'Language', i) for i in languages if i != 'translated']

        f.write('</ComicInfo>')


def xml_write_simple_tag(f, name, val, indent=1):
    f.write('{}<{}>{}</{}>\n'.format(' ' * indent, name, escape(str(val)), name))
def merge_json():
    lst = []
    output_dir = "./"
    os.chdir(output_dir)
    doujinshi_dirs = next(os.walk('.'))[1]
    for folder in doujinshi_dirs:
        files = os.listdir(folder)
        if 'metadata.json' not in files:
            continue
        data_folder = output_dir + folder + '/' + 'metadata.json'
        json_file = open(data_folder, 'r')
        json_dict = json.load(json_file)
        json_dict['Folder'] = folder
        lst.append(json_dict)
    return lst
def serialize_unique(lst):
    dictionary = {}
    parody = []
    character = []
    tag = []
    artist = []
    group = []
    for dic in lst:
        if 'parody' in dic:
            parody.extend([i for i in dic['parody']])
        if 'character' in dic:
            character.extend([i for i in dic['character']])
        if 'tag' in dic:
            tag.extend([i for i in dic['tag']])
        if 'artist' in dic:
            artist.extend([i for i in dic['artist']])
        if 'group' in dic:
            group.extend([i for i in dic['group']])
    dictionary['parody'] = list(set(parody))
    dictionary['character'] = list(set(character))
    dictionary['tag'] = list(set(tag))
    dictionary['artist'] = list(set(artist))
    dictionary['group'] = list(set(group))
    return dictionary
def set_js_database():
    with open('data.js', 'w') as f:
        indexed_json = merge_json()
        unique_json = json.dumps(serialize_unique(indexed_json), separators=(',', ':'))
        indexed_json = json.dumps(indexed_json, separators=(',', ':'))
        f.write('var data = ' + indexed_json)
        f.write(';\nvar tags = ' + unique_json)

nhentai/utils.py

@@ -1,6 +1,38 @@
# coding: utf-8
from __future__ import unicode_literals, print_function
import sys
import re
import os
import string
import zipfile
import shutil
import requests
import sqlite3
from nhentai import constant
from nhentai.logger import logger
from nhentai.serializer import serialize_json, serialize_comicxml, set_js_database
def request(method, url, **kwargs):
    session = requests.Session()
    session.headers.update({
        'Referer': constant.LOGIN_URL,
        'User-Agent': 'nhentai command line client (https://github.com/RicterZ/nhentai)',
        'Cookie': constant.CONFIG['cookie']
    })
    return getattr(session, method)(url, proxies=constant.CONFIG['proxy'], verify=False, **kwargs)


def check_cookie():
    response = request('get', constant.BASE_URL).text
    username = re.findall('"/users/\d+/(.*?)"', response)
    if not username:
        logger.error('Cannot get your username, please check your cookie or use `nhentai --cookie` to set your cookie')
    else:
        logger.info('Logged in successfully! Your username: {}'.format(username[0]))
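Together these two helpers are the whole cookie-based auth story; roughly, as a sketch — the cookie string is a placeholder you would copy from your browser:

import nhentai.constant as constant
from nhentai.utils import request, check_cookie

constant.CONFIG['cookie'] = 'csrftoken=...; sessionid=...'  # placeholder
constant.CONFIG['proxy'] = {}
check_cookie()                 # logs your username if the cookie works
resp = request('get', constant.BASE_URL)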
class _Singleton(type):
    """ A metaclass that creates a Singleton base class when called. """

@@ -23,3 +55,246 @@ def urlparse(url):
    from urllib.parse import urlparse
    return urlparse(url)
def readfile(path):
    loc = os.path.dirname(__file__)

    with open(os.path.join(loc, path), 'r') as file:
        return file.read()
def generate_html(output_dir='.', doujinshi_obj=None):
    image_html = ''

    if doujinshi_obj is not None:
        doujinshi_dir = os.path.join(output_dir, doujinshi_obj.filename)
    else:
        doujinshi_dir = '.'

    file_list = os.listdir(doujinshi_dir)
    file_list.sort()

    for image in file_list:
        if not os.path.splitext(image)[1] in ('.jpg', '.png'):
            continue

        image_html += '<img src="{0}" class="image-item"/>\n'.format(image)
    html = readfile('viewer/index.html')
    css = readfile('viewer/styles.css')
    js = readfile('viewer/scripts.js')

    if doujinshi_obj is not None:
        serialize_json(doujinshi_obj, doujinshi_dir)
        name = doujinshi_obj.name
        if sys.version_info < (3, 0):
            name = doujinshi_obj.name.encode('utf-8')
    else:
        name = {'title': 'nHentai HTML Viewer'}

    data = html.format(TITLE=name, IMAGES=image_html, SCRIPTS=js, STYLES=css)
    try:
        if sys.version_info < (3, 0):
            with open(os.path.join(doujinshi_dir, 'index.html'), 'w') as f:
                f.write(data)
        else:
            with open(os.path.join(doujinshi_dir, 'index.html'), 'wb') as f:
                f.write(data.encode('utf-8'))

        logger.log(15, 'HTML Viewer has been written to \'{0}\''.format(os.path.join(doujinshi_dir, 'index.html')))
    except Exception as e:
        logger.warning('Writing HTML Viewer failed ({})'.format(str(e)))
def generate_main_html(output_dir='./'):
    """
    Generate a main html page that lists all the downloaded doujinshi,
    with a link to each one's `index.html`.
    The default output folder is the CLI path.
    """
    image_html = ''
    main = readfile('viewer/main.html')
    css = readfile('viewer/main.css')
    js = readfile('viewer/main.js')

    element = '\n\
            <div class="gallery-favorite">\n\
                <div class="gallery">\n\
                    <a href="./{FOLDER}/index.html" class="cover" style="padding:0 0 141.6% 0"><img\n\
                            src="./{FOLDER}/{IMAGE}" />\n\
                        <div class="caption">{TITLE}</div>\n\
                    </a>\n\
                </div>\n\
            </div>\n'

    os.chdir(output_dir)
    doujinshi_dirs = next(os.walk('.'))[1]

    for folder in doujinshi_dirs:
        files = os.listdir(folder)
        files.sort()

        if 'index.html' in files:
            logger.info('Adding doujinshi \'{}\''.format(folder))
        else:
            continue

        image = files[0]  # 001.jpg or 001.png
        if folder is not None:
            title = folder.replace('_', ' ')
        else:
            title = 'nHentai HTML Viewer'

        image_html += element.format(FOLDER=folder, IMAGE=image, TITLE=title)
    if image_html == '':
        logger.warning('No index.html found, --gen-main paused.')
        return
    try:
        data = main.format(STYLES=css, SCRIPTS=js, PICTURE=image_html)
        if sys.version_info < (3, 0):
            with open('./main.html', 'w') as f:
                f.write(data)
        else:
            with open('./main.html', 'wb') as f:
                f.write(data.encode('utf-8'))
        shutil.copy(os.path.dirname(__file__) + '/viewer/logo.png', './')
        set_js_database()
        logger.log(15, 'Main Viewer has been written to \'{0}main.html\''.format(output_dir))
    except Exception as e:
        logger.warning('Writing Main Viewer failed ({})'.format(str(e)))
def generate_cbz(output_dir='.', doujinshi_obj=None, rm_origin_dir=False, write_comic_info=False):
    if doujinshi_obj is not None:
        doujinshi_dir = os.path.join(output_dir, doujinshi_obj.filename)
        if write_comic_info:
            serialize_comicxml(doujinshi_obj, doujinshi_dir)
        cbz_filename = os.path.join(os.path.join(doujinshi_dir, '..'), '{}.cbz'.format(doujinshi_obj.filename))
    else:
        cbz_filename = './doujinshi.cbz'
        doujinshi_dir = '.'

    file_list = os.listdir(doujinshi_dir)
    file_list.sort()

    logger.info('Writing CBZ file to path: {}'.format(cbz_filename))
    with zipfile.ZipFile(cbz_filename, 'w') as cbz_pf:
        for image in file_list:
            image_path = os.path.join(doujinshi_dir, image)
            cbz_pf.write(image_path, image)

    if rm_origin_dir:
        shutil.rmtree(doujinshi_dir, ignore_errors=True)

    logger.log(15, 'Comic Book CBZ file has been written to \'{0}\''.format(doujinshi_dir))
def generate_pdf(output_dir='.', doujinshi_obj=None, rm_origin_dir=False):
    """Write images to a PDF file using img2pdf."""
    try:
        import img2pdf
    except ImportError:
        logger.error("Please install the img2pdf package by using pip.")
        return

    if doujinshi_obj is not None:
        doujinshi_dir = os.path.join(output_dir, doujinshi_obj.filename)
        pdf_filename = os.path.join(
            os.path.join(doujinshi_dir, '..'),
            '{}.pdf'.format(doujinshi_obj.filename)
        )
    else:
        pdf_filename = './doujinshi.pdf'
        doujinshi_dir = '.'

    file_list = os.listdir(doujinshi_dir)
    file_list.sort()

    logger.info('Writing PDF file to path: {}'.format(pdf_filename))
    with open(pdf_filename, 'wb') as pdf_f:
        full_path_list = (
            [os.path.join(doujinshi_dir, image) for image in file_list]
        )
        pdf_f.write(img2pdf.convert(full_path_list))

    if rm_origin_dir:
        shutil.rmtree(doujinshi_dir, ignore_errors=True)

    logger.log(15, 'PDF file has been written to \'{0}\''.format(doujinshi_dir))
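Both archive generators fall back to the current directory when no doujinshi object is passed, which makes them easy to try standalone (a sketch; run it inside a folder containing only downloaded images):

from nhentai.utils import generate_cbz, generate_pdf

generate_cbz('.')   # zips the images in the current folder into ./doujinshi.cbz
generate_pdf('.')   # needs img2pdf installed; writes ./doujinshi.pdf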
def format_filename(s):
    """Take a string and return a valid filename constructed from the string.
    Uses a whitelist approach: any characters not present in valid_chars are
    removed (spaces are kept as-is).

    Note: this method may produce invalid filenames such as ``, `.` or `..`
    When I use this method I prepend a date string like '2009_01_15_19_46_32_'
    and append a file extension like '.txt', so I avoid the potential of using
    an invalid filename.
    """
    # maybe you can use `--format` to select a suitable filename
    valid_chars = "-_.()[] %s%s" % (string.ascii_letters, string.digits)
    filename = ''.join(c for c in s if c in valid_chars)
    if len(filename) > 100:
        filename = filename[:100] + '...]'

    # Remove [] from filename
    filename = filename.replace('[]', '').strip()
    return filename
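For example (an illustrative input):

print(format_filename('[Artist] Weird/Name: Chapter 1?'))
# -> '[Artist] WeirdName Chapter 1'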
def signal_handler(signal, frame):
    logger.error('Ctrl-C signal received. Stopping...')
    exit(1)
def paging(page_string):
    # 1,3-5,14 -> [1, 3, 4, 5, 14]
    if not page_string:
        return []

    page_list = []
    for i in page_string.split(','):
        if '-' in i:
            start, end = i.split('-')
            if not (start.isdigit() and end.isdigit()):
                raise Exception('Invalid page number')
            page_list.extend(list(range(int(start), int(end) + 1)))
        else:
            if not i.isdigit():
                raise Exception('Invalid page number')
            page_list.append(int(i))

    return page_list
class DB(object):
    conn = None
    cur = None

    def __enter__(self):
        self.conn = sqlite3.connect(constant.NHENTAI_HISTORY)
        self.cur = self.conn.cursor()
        self.cur.execute('CREATE TABLE IF NOT EXISTS download_history (id text)')
        self.conn.commit()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.conn.close()

    def clean_all(self):
        self.cur.execute('DELETE FROM download_history WHERE 1')
        self.conn.commit()

    def add_one(self, data):
        self.cur.execute('INSERT INTO download_history VALUES (?)', [data])
        self.conn.commit()

    def get_all(self):
        data = self.cur.execute('SELECT id FROM download_history')
        return [i[0] for i in data]
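The download-history DB is a tiny context manager around sqlite3; typical use, as a sketch (constant.NHENTAI_HISTORY is the database path nhentai already configures):

from nhentai.utils import DB

with DB() as db:
    db.add_one('12345')
    if '12345' in db.get_all():
        print('already downloaded, skipping')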

nhentai/viewer/index.html (new file, 25 lines)

@@ -0,0 +1,25 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, viewport-fit=cover" />
<title>{TITLE}</title>
<style>
{STYLES}
</style>
</head>
<body>
<nav id="list">
{IMAGES}</nav>
<div id="image-container">
<span id="page-num"></span>
<div id="dest"></div>
</div>
<script>
{SCRIPTS}
</script>
</body>
</html>

nhentai/viewer/logo.png (new binary file, 10 KiB; not shown)

nhentai/viewer/main.css (new file, 332 lines)

@@ -0,0 +1,332 @@
/*! normalize.css v5.0.0 | MIT License | github.com/necolas/normalize.css */
/* Original from https://static.nhentai.net/css/main_style.9bb9b703e601.css */
a {
background-color: transparent;
-webkit-text-decoration-skip: objects
}
img {
border-style: none
}
html {
box-sizing: border-box
}
*,:after,:before {
box-sizing: inherit
}
body,html {
font-family: 'Noto Sans',sans-serif;
font-size: 14px;
line-height: 1.42857143;
height: 100%;
margin: 0;
text-align: center;
color: #34495e;
background-color: #fff;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale
}
a {
text-decoration: none;
color: #34495e
}
blockquote {
border: 0
}
.container {
display: block;
clear: both;
margin-left: 15rem;
margin-right: 0.5rem;
margin-bottom: 5px;
margin-top: 5px;
padding: 4px;
border-radius: 9px;
background-color: #ecf0f1;
width: calc(100% - 15rem);
max-width: 1500px
}
.gallery,.gallery-favorite,.thumb-container {
display: inline-block;
vertical-align: top
}
.gallery img,.gallery-favorite img,.thumb-container img {
display: block;
max-width: 100%;
height: auto
}
@media screen and (min-width: 980px) {
.gallery,.gallery-favorite,.thumb-container {
width:19%;
margin: 3px;
}
}
@media screen and (max-width: 979px) {
.gallery,.gallery-favorite,.thumb-container {
width:24%;
margin: 2px
}
}
@media screen and (max-width: 772px) {
.gallery,.gallery-favorite,.thumb-container {
width:32%;
margin: 1.5px
}
}
@media screen and (max-width: 500px) {
.gallery,.gallery-favorite,.thumb-container {
width:49%;
margin: .5px
}
}
.gallery a,.gallery-favorite a {
display: block
}
.gallery a img,.gallery-favorite a img {
position: absolute
}
.caption {
line-height: 15px;
left: 0;
right: 0;
top: 100%;
position: absolute;
z-index: 10;
overflow: hidden;
width: 100%;
max-height: 34px;
padding: 3px;
background-color: #fff;
font-weight: 700;
display: block;
text-align: center;
text-decoration: none;
color: #34495e
}
.gallery {
position: relative;
margin-bottom: 3em
}
.gallery:hover .caption {
max-height: 100%;
box-shadow: 0 10px 20px rgba(100,100,100,.5)
}
.gallery-favorite .gallery {
width: 100%
}
.sidenav {
height: 100%;
width: 15rem;
position: fixed;
z-index: 1;
top: 0;
left: 0;
background-color: #0d0d0d;
overflow: hidden;
padding-top: 20px;
-webkit-touch-callout: none; /* iOS Safari */
-webkit-user-select: none; /* Safari */
-khtml-user-select: none; /* Konqueror HTML */
-moz-user-select: none; /* Old versions of Firefox */
-ms-user-select: none; /* Internet Explorer/Edge */
user-select: none;
}
.sidenav a {
background-color: #eee;
padding: 5px 0px 5px 15px;
text-decoration: none;
font-size: 15px;
color: #0d0d0d;
display: block;
text-align: left;
}
.sidenav img {
width:100%;
padding: 0px 5px 0px 5px;
}
.sidenav h1 {
font-size: 1.5em;
margin: 0px 0px 10px;
}
.sidenav a:hover {
color: white;
background-color: #EC2754;
}
.accordion {
font-weight: bold;
background-color: #eee;
color: #444;
padding: 10px 0px 5px 8px;
width: 100%;
border: none;
text-align: left;
outline: none;
font-size: 15px;
transition: 0.4s;
cursor:pointer;
}
.accordion:hover {
background-color: #ddd;
}
.accordion.active{
background-color:#ddd;
}
.nav-btn {
font-weight: bold;
background-color: #eee;
color: #444;
padding: 8px 8px 5px 9px;
width: 100%;
border: none;
text-align: left;
outline: none;
font-size: 15px;
}
.hidden {
display:none;
}
.nav-btn a{
font-weight: normal;
padding-right: 10px;
border-radius: 15px;
cursor: crosshair
}
.options {
display:block;
padding: 0px 0px 0px 0px;
background-color: #eee;
max-height: 0;
overflow: hidden;
transition: max-height 0.2s ease-out;
cursor:pointer;
}
.search{
background-color: #eee;
padding-right:40px;
white-space: nowrap;
padding-top: 5px;
height:43px;
}
.search input{
border-top-right-radius:10px;
padding-top:0;
padding-bottom:0;
font-size:1em;
width:100%;
height:38px;
vertical-align:top;
}
.btn{
border-top-left-radius:10px;
color:#fff;
font-size:100%;
padding: 8px;
width:38px;
background-color:#ed2553;
}
#tags{
text-align:left;
display: flex;
width:15rem;
justify-content: start;
margin: 2px 2px 2px 0px;
flex-wrap: wrap;
}
.btn-2{
font-weight:700;
padding-right:0.5rem;
padding-left:0.5rem;
color:#fff;
border:0;
font-size:100%;
height:1.25rem;
outline: 0;
border-radius: 0.3rem;
cursor: pointer;
margin:0.15rem;
transition: all 1s linear;
}
.btn-2#parody{
background-color: red;
}
.btn-2#character{
background-color: blue;
}
.btn-2#tag{
background-color: green;
}
.btn-2#artist{
background-color: fuchsia;
}
.btn-2#group{
background-color: teal;
}
.btn-2.hover{
filter: saturate(20%)
}
input,input:focus{
border:none;
outline:0;
}
html.theme-black,html.theme-black body {
color: #d9d9d9;
background-color: #0d0d0d
}
html.theme-black #thumbnail-container,html.theme-black .container {
background-color: #1f1f1f
}
html.theme-black .gallery:hover .caption {
box-shadow: 0 10px 20px rgba(0,0,0,.5)
}
html.theme-black .caption {
background-color: #404040;
color: #d9d9d9
}

nhentai/viewer/main.html (new file, 51 lines)

@@ -0,0 +1,51 @@
<!doctype html>
<html lang="en" class=" theme-black">
<head>
<meta charset="utf-8" />
<meta name="theme-color" content="#1f1f1f" />
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, viewport-fit=cover" />
<title>nHentai Viewer</title>
<script type="text/javascript" src="data.js"></script>
<!-- <link rel="stylesheet" href="./main.css"> -->
<style>
{STYLES}
</style>
</head>
<body>
<div id="content">
<nav class="sidenav">
<img src="logo.png">
<h1>nHentai Viewer</h1>
<button class="accordion">Language</button>
<div class="options" id="language">
<a>English</a>
<a>Japanese</a>
<a>Chinese</a>
</div>
<button class="accordion">Category</button>
<div class="options" id ="category">
<a>Doujinshi</a>
<a>Manga</a>
</div>
<button class="nav-btn hidden">Filters</button>
<div class="search">
<input autocomplete="off" type="search" id="tagfilter" name="q" value="" autocapitalize="none" required="">
<svg class="btn" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="white" d="M505 442.7L405.3 343c-4.5-4.5-10.6-7-17-7H372c27.6-35.3 44-79.7 44-128C416 93.1 322.9 0 208 0S0 93.1 0 208s93.1 208 208 208c48.3 0 92.7-16.4 128-44v16.3c0 6.4 2.5 12.5 7 17l99.7 99.7c9.4 9.4 24.6 9.4 33.9 0l28.3-28.3c9.4-9.4 9.4-24.6.1-34zM208 336c-70.7 0-128-57.2-128-128 0-70.7 57.2-128 128-128 70.7 0 128 57.2 128 128 0 70.7-57.2 128-128 128z"/></svg>
<div id="tags">
</div>
</nav>
<div class="container" id="favcontainer">
{PICTURE}
</div> <!-- container -->
</div>
<script>
{SCRIPTS}
</script>
</body>
</html>

nhentai/viewer/main.js (new file, 177 lines)

@@ -0,0 +1,177 @@
//------------------------------------navbar script------------------------------------
var menu = document.getElementsByClassName("accordion");
for (var i = 0; i < menu.length; i++) {
menu[i].addEventListener("click", function() {
var panel = this.nextElementSibling;
if (panel.style.maxHeight) {
this.classList.toggle("active");
panel.style.maxHeight = null;
} else {
panel.style.maxHeight = panel.scrollHeight + "px";
this.classList.toggle("active");
}
});
}
var language = document.getElementById("language").children;
for (var i = 0; i < language.length; i++){
language[i].addEventListener("click", function() {
toggler = document.getElementById("language")
toggler.style.maxHeight = null;
document.getElementsByClassName("accordion")[0].classList.toggle("active");
filter_maker(this.innerText, "language");
});
}
var category = document.getElementById("category").children;
for (var i = 0; i < category.length; i++){
category[i].addEventListener("click", function() {
document.getElementById("category").style.maxHeight = null;
document.getElementsByClassName("accordion")[1].classList.toggle("active");
filter_maker(this.innerText, "category");
});
}
//-----------------------------------------------------------------------------------
//----------------------------------Tags Script--------------------------------------
tag_maker(tags);
var tag = document.getElementsByClassName("btn-2");
for (var i = 0; i < tag.length; i++){
tag[i].addEventListener("click", function() {
filter_maker(this.innerText, this.id);
});
}
var input = document.getElementById("tagfilter");
input.addEventListener("input", function() {
var tags = document.querySelectorAll(".btn-2");
if (this.value.length > 0) {
for (var i = 0; i < tags.length; i++) {
var tag = tags[i];
var nome = tag.innerText;
var exp = new RegExp(this.value, "i");
if (exp.test(nome)) {
tag.classList.remove("hidden");
}
else {
tag.classList.add("hidden");
}
}
} else {
for (var i = 0; i < tags.length; i++) {
var tag = tags[i];
tag.classList.add('hidden');
}
}
});
input.addEventListener('keypress', function (e) {
enter_search(e, this.value);
});
//-----------------------------------------------------------------------------------
//------------------------------------Functions--------------------------------------
function enter_search(e, input){
var count = 0;
var key = e.which || e.keyCode;
if (key === 13 && input.length > 0) {
var all_tags = document.getElementById("tags").children;
for(i = 0; i < all_tags.length; i++){
if (!all_tags[i].classList.contains("hidden")){
count++;
var tag_name = all_tags[i].innerText;
var tag_id = all_tags[i].id;
if (count>1){break}
}
}
if (count == 1){
filter_maker(tag_name, tag_id);
}
}
}
function filter_maker(text, class_value){
var check = filter_checker(text);
var nav_btn = document.getElementsByClassName("nav-btn")[0];
if (nav_btn.classList.contains("hidden")){
nav_btn.classList.toggle("hidden");
}
if (check == true){
var node = document.createElement("a");
var textnode = document.createTextNode(text);
node.appendChild(textnode);
node.classList.add(class_value);
nav_btn.appendChild(node);
filter_searcher();
}
}
function filter_searcher(){
var verifier = null;
var tags_filter = [];
var doujinshi_id = [];
var filter_tag = document.getElementsByClassName("nav-btn")[0].children;
filter_tag[filter_tag.length-1].addEventListener("click", function() {
this.remove();
try{
filter_searcher();
}
catch{
var gallery = document.getElementsByClassName("gallery-favorite");
for (var i = 0; i < gallery.length; i++){
gallery[i].classList.remove("hidden");
}
}
});
for (var i=0; i < filter_tag.length; i++){
var fclass = filter_tag[i].className;
var fname = filter_tag[i].innerText.toLowerCase();
tags_filter.push([fclass, fname])
}
for (var i=0; i < data.length; i++){
for (var j=0; j < tags_filter.length; j++){
try{
if(data[i][tags_filter[j][0]].includes(tags_filter[j][1])){
verifier = true;
}
else{
verifier = false;
break
}
}
catch{
verifier = false;
break
}
}
if (verifier){doujinshi_id.push(data[i].Folder);}
}
var gallery = document.getElementsByClassName("gallery-favorite");
for (var i = 0; i < gallery.length; i++){
gtext = gallery[i].children[0].children[0].children[1].innerText;
if(doujinshi_id.includes(gtext)){
gallery[i].classList.remove("hidden");
}
else{
gallery[i].classList.add("hidden");
}
}
}
function filter_checker(text){
var filter_tags = document.getElementsByClassName("nav-btn")[0].children;
if (filter_tags == null){return true;}
for (var i=0; i < filter_tags.length; i++){
if (filter_tags[i].innerText == text){return false;}
}
return true;
}
function tag_maker(data){
for (i in data){
for (j in data[i]){
var node = document.createElement("button");
var textnode = document.createTextNode(data[i][j]);
node.appendChild(textnode);
node.classList.add("btn-2");
node.setAttribute('id', i);
node.classList.add("hidden");
document.getElementById("tags").appendChild(node);
}
}
}

nhentai/viewer/scripts.js (new file, 85 lines)

@@ -0,0 +1,85 @@
const pages = Array.from(document.querySelectorAll('img.image-item'));
let currentPage = 0;
function changePage(pageNum) {
const previous = pages[currentPage];
const current = pages[pageNum];
if (current == null) {
return;
}
previous.classList.remove('current');
current.classList.add('current');
currentPage = pageNum;
const display = document.getElementById('dest');
display.style.backgroundImage = `url("${current.src}")`;
scroll(0,0)
document.getElementById('page-num')
.innerText = [
(pageNum + 1).toLocaleString(),
pages.length.toLocaleString()
].join('\u200a/\u200a');
}
changePage(0);
document.getElementById('list').onclick = event => {
if (pages.includes(event.target)) {
changePage(pages.indexOf(event.target));
}
};
document.getElementById('image-container').onclick = event => {
const width = document.getElementById('image-container').clientWidth;
const clickPos = event.clientX / width;
if (clickPos < 0.5) {
changePage(currentPage - 1);
} else {
changePage(currentPage + 1);
}
};
document.onkeypress = event => {
switch (event.key.toLowerCase()) {
// Previous Image
case 'w':
scrollBy(0, -40);
break;
case 'a':
changePage(currentPage - 1);
break;
// Return to previous page
case 'q':
window.history.go(-1);
break;
// Next Image
case ' ':
case 's':
scrollBy(0, 40);
break;
case 'd':
changePage(currentPage + 1);
break;
}  // arrow keys do not fire keypress; they are handled in onkeydown below
};
document.onkeydown = event =>{
switch (event.keyCode) {
case 37: //left
changePage(currentPage - 1);
break;
case 38: //up
break;
case 39: //right
changePage(currentPage + 1);
break;
case 40: //down
break;
}
};

nhentai/viewer/styles.css (new file, 70 lines)

@@ -0,0 +1,70 @@
*, *::after, *::before {
box-sizing: border-box;
}
img {
vertical-align: middle;
}
html, body {
display: flex;
background-color: #e8e6e6;
height: 100%;
width: 100%;
padding: 0;
margin: 0;
font-family: sans-serif;
}
#list {
height: 2000px;
overflow: scroll;
width: 260px;
text-align: center;
}
#list img {
width: 200px;
padding: 10px;
border-radius: 10px;
margin: 15px 0;
cursor: pointer;
}
#list img.current {
background: #0003;
}
#image-container {
flex: auto;
height: 2000px;
background: #222;
color: #fff;
text-align: center;
cursor: pointer;
-webkit-user-select: none;
user-select: none;
position: relative;
}
#image-container #dest {
height: 2000px;
width: 100%;
background-size: contain;
background-repeat: no-repeat;
background-position: top;
}
#image-container #page-num {
position: static;
font-size: 14pt;
left: 10px;
bottom: 5px;
font-weight: bold;
opacity: 0.75;
text-shadow: /* Duplicate the same shadow to make it very strong */
0 0 2px #222,
0 0 2px #222,
0 0 2px #222;
}

requirements.txt

@@ -1,5 +1,7 @@
requests>=2.5.0
soupsieve<2.0
BeautifulSoup4>=4.0.0
threadpool>=1.2.7
tabulate>=0.7.5
future>=0.15.2
iso8601 >= 0.1

setup.py

@@ -1,16 +1,20 @@
# coding: utf-8
from __future__ import print_function, unicode_literals

import sys
import codecs
from setuptools import setup, find_packages
from nhentai import __version__, __author__, __email__


with open('requirements.txt') as f:
    requirements = [l for l in f.read().splitlines() if l]


def long_description():
    with codecs.open('README.rst', 'rb') as readme:
        if not sys.version_info < (3, 0, 0):
            return readme.read().decode('utf-8')


setup(
    name='nhentai',
@@ -19,7 +23,7 @@ setup(
    author=__author__,
    author_email=__email__,
    keywords=['nhentai', 'doujinshi', 'downloader'],
    description='nhentai.net doujinshis downloader',
    long_description=long_description(),
    url='https://github.com/RicterZ/nhentai',