#!/usr/bin/python3
# jpinfo.py
# -*- coding: utf-8 -*-
# japanese web/irc/maillist/bbs/rss etc spider
# indexes to order kanjis, words and phrases by popularity/usefulness
# integrates with entrophilia quizzes to:
# a) find articles/paragraphs/pages consisting of learnt words/kanjis
# b) order dictionary to drive learning of words
# c) fine-tune order to encompass "all common usages of a kanji" so we can pass tests
# d) provide a clean, friendly reader interface.
# i) pop-up furigana
# ii) colour-coding for learnt/learning/new/unknown/kana
# iii) options to override learnt/order
# iv) faster interface to entrophilia quizzes via REST
# hopefully uses jmdict/edict from stardict-dic-ja package
# hmm. stardict appears to be able to add the furigana and english pop-up facility to anything anyway!
# well, stardict is a great find and dead useful. probably going to be most useful here for testing/comparison.
# sdcv appears to be a c++ command line client. so perhaps there is an API in there somewhere! guess what - doesn't compile.
# maybe come back to this as an alternative. i suspect reading the XML will be simpler for now.
# use wget/curl for spider function?
# wget has recursive retrieval. if i run a parallel task that watches ....
# i don't really want to search randomly. i want to scan wikipedia and then add feeds and new headline pages as i find good ones.
# if i'm not getting useful stats from the index ... add more feeds or find another big one like wikipedia.
# python+mysql+unicode seems a bit nightmarish. for now try to make a dictionary index in memory and use the XML dict and urlencoding for the urls.
# should we bother with frames and iframes? shouldn't be too tricky.
# first assumption failed - looks like ja.wikipedia.org doesn't have _any_ external links! still, it's got lots of text (which is what we're actually after) and external references.
# 2ch very quickly generates a massive index of orz.2ch.io
# j-wave has a lot of 404s but still finds plenty
# amazon.co.jp seems to have a lot of pages that appear to be mostly english linked to from very japanese pages.
import urllib.request, urllib.error, urllib.parse
import sys, traceback
import os
import time
import keyring
import psycopg2 as py_db
import socket
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse, urlunparse
from optparse import OptionParser
from datetime import datetime, timedelta
import threading
import subprocess
import psycopg2.errors
#from jmdict import jmdict
sys.path.append("..")
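# Illustrative sketch (not called anywhere): the ending tables in jaHTMLParser
# drive conjugation by plain stem + suffix concatenation, the same idea that
# doending() walks recursively. The mini table and the verb used in the example
# comment are hypothetical illustrations, not taken from JMdict.
def _demo_conjugate(stem, wtype, table):
    """Return {tense: surface form} for a stem of verb class `wtype`."""
    return {tense: stem + suffix for tense, suffix in table.get(wtype, {}).items()}
# e.g. _demo_conjugate("書", "v5k", {"v5k": {"past": "いた", "te": "いて"}})
# gives {"past": "書いた", "te": "書いて"}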
#from cmdmsg import cmdmsg
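# Illustrative sketch (not called anywhere) of the greedy longest-match
# segmentation that getamatch4() performs against the dictionary table: at each
# position try the longest substring first, then back off one character at a
# time until something matches. The word set here is a hypothetical in-memory
# stand-in for the database lookups.
def _demo_longest_match(text, words):
    """Split `text` left to right into the longest entries found in `words`."""
    tokens = []
    pos = 0
    while pos < len(text):
        for strlen in range(len(text) - pos, 0, -1):
            candidate = text[pos:pos + strlen]
            if candidate in words:
                tokens.append(candidate)
                pos += strlen
                break
        else:
            tokens.append(text[pos])  # nothing matched: emit one char and move on
            pos += 1
    return tokens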
class vctHTMLParser(HTMLParser):
readings = {}
intd = False
inth = False
row = 0
column = 0
atype = ""
btype = [['past', 'neg_past', 'nonpast', 'neg', 'imp', 'te', 'vol'],
['pol_past', 'pol_negpast', 'pol_nonpast', 'pol_neg', 'pol_imp', 'pol_te', 'pol_vol'],
['tari', 'i', 'condra', 'condeba', 'pass', 'cause', 'pot']]
def handle_starttag(self, tag, attrs):
if tag == "td":
self.intd = True
if self.row > 0: self.column += 1
if tag == "th" and len(attrs) == 1 and attrs[0][0] == "colspan" and attrs[0][1] == "7":
self.inth = True
self.row = 0
if tag == "tr":
self.row += 1
self.column = 0
def handle_endtag(self, tag):
if tag == "td":
self.intd = False
if tag == "th":
if self.atype != "" and self.atype not in self.readings: self.readings[self.atype] = {}
if self.atype == "vs-i" and "vs" not in self.readings: self.readings["vs"] = {}
self.inth = False
def handle_data(self, data):
udata = data
if self.intd and self.row > 0 and self.column > 1 and self.atype != "" and data != "" and data != "-":
if data == "| [root]": udata = u""
if udata.find(u"(") >= 0 and udata.find(u")") >= 0:
udata = udata[:udata.find(u"(")] + udata[udata.find(u")") + 1:] + u" + " + udata.replace(u"(", u"").replace(u")", u"")
num = 1
for udatum in udata.split(u" + "):
udatum = udatum.strip().strip(u" /")
datum = udatum.encode("utf-8")
thistype = self.btype[self.row - 1][self.column - 2]
if num > 1:
if self.atype == "vd" and self.row == 3: thistype = "pol_" + thistype
else: thistype += str(num)
if self.atype == "vs-i": self.readings["vs"][thistype] = datum
                if self.atype == "vk":
                    # "vk_k" is never pre-created in handle_endtag, so create it on first use
                    self.readings.setdefault(self.atype + "_k", {})[thistype] = datum
                    self.readings[self.atype][thistype] = (u"く" + udatum[1:]).encode("utf-8")
else: self.readings[self.atype][thistype] = datum
num += 1
if self.inth:
if data.find("Weak verbs") >= 0: self.atype = "v1"
elif data.find("Strong verbs") >= 0:
if udata.find(u"ぶ") >= 0: self.atype = "v5b"
elif udata.find(u"ぐ") >= 0: self.atype = "v5g"
elif udata.find(u"く") >= 0: self.atype = "v5k"
elif udata.find(u"む") >= 0: self.atype = "v5m"
elif udata.find(u"ぬ") >= 0: self.atype = "v5n"
elif udata.find(u"す") >= 0: self.atype = "v5s"
elif udata.find(u"つ") >= 0: self.atype = "v5t"
elif udata.find(u"う") >= 0: self.atype = "v5u"
elif udata.find(u"る") >= 0: self.atype = "v5r"
elif data.find("する") >= 0: self.atype = "vs-i"
elif data.find("くる") >= 0: self.atype = "vk"
elif data.find("だ") >= 0: self.atype = "vd"
else: self.atype = ""
class jaHTMLParser(HTMLParser):
currenttext=u""
# TODO should find the following in the HTML specs
# try http://dev.w3.org/html5/spec/Overview.html#attributes-1
# 13oct2011 ignore noscript - it's usually just "you need script".
# and noframes for much the same reason
blocktag = ['p','div', 'table', 'tbody', 'tr', 'td', 'body', 'title', 'h1', 'h2', 'h3', 'form', 'input', 'ul', 'li', 'ol', 'label', 'hr', 'br', 'iframe', 'script', 'blockquote', 'button', 'area', 'map', 'object', 'h4', 'dl', 'dt', 'dd', 'h5', 'h6', 'select', 'option', 'center', 'spacer', 'address', 'frameset', 'frame', 'textarea', 'th', 'code', 'pre', 'thead', 'tfoot', 'caption', 'layer', 'fieldset', 'xhtml', 'o:p', 'optgroup', 'applet', 'rdf:description', 'rdf:rdf', 'fn:distribution', 'fn:type', 'x-claris-window', 'x-claris-tagview', 'headline', 'nolayer', 'bpdy', 'xml', 'q', 'samp', 'kbd', 'quote', 'tabletable', 'noembed', 'rb', 'rp', 'rt', 'ruby', 'left', 'sub', 'del']
ignoretag = ['head', 'html', 'meta', 'style', 'link', 'colgroup', 'comment', 'ajj_comment', 'lastmod', 'dis.n', 'noscript', 'noframes']
unknowntag = ['param', 'embed', 'sup', 'nobr', 'tt', 'content', 'col', 'ins', 'marquee', 'im_bodey', 'im_page', 'wbr', 'base', 'basefont', 'ssinfo', 'fragmentinstance', 'region', 'element', 'classes', 'class', 'legend', 'metal:pan', 'csaction', 'csscriptdict', 'csactiondict', 'csobj', 'kanhanbypass', 'csactions']
inlinetag = ['span', 'a', 'img', 'b', 'i', 'strong', 'em', 'style', 'font', 'small', 'u', 'big', 'abbr', 'number', 'cite', 'blink', 'acronym', 'kdb', 's', 'picture', 'mars:language', 'nowrap', 'insert', 'copy', 'strike', 'hyperlink', 'ilayer', 'en', 'var', 'defanged_link', 'dfn', 'o:smarttagtype', 'st1:place', 'st1:placename', 'st1:placetype', 'st1:state', 'm', 'l', 'fx']
# linktags complete from html5 spec (plus 'frame') as of 13oct2011
# probably want to ignore some of these .. not sure which or how/where yet
linktag = {'href': ['a', 'area', 'link', 'base'], 'src': ['video', 'audio', 'frame', 'iframe', 'embed', 'img', 'input', 'script', 'source', 'track'], 'action': ['form'], 'cite': ['blockquote', 'del', 'ins', 'q'], 'data': ['object'], 'formaction': ['button', 'input'], 'icon': ['command'], 'manifest': ['html'], 'poster': ['video']}
# 'block' tags used often without necessarily breaking a sentence.
jointag=['br']
unentities = {}
jaentities = {
'MA': "martial arts term",
'X': "rude or X-rated term (not displayed in educational software)",
'abbr': "abbreviation",
'adj-i': "adjective (keiyoushi)",
'adj-na': "adjectival nouns or quasi-adjectives (keiyodoshi)",
'adj-no': "nouns which may take the genitive case particle `no'",
'adj-pn': "pre-noun adjectival (rentaishi)",
'adj-t': "`taru' adjective",
'adj-f': "noun or verb acting prenominally",
'adj': "former adjective classification (being removed)",
'adv': "adverb (fukushi)",
'adv-to': "adverb taking the `to' particle",
'arch': "archaism",
'ateji': "ateji (phonetic) reading",
'aux': "auxiliary",
'aux-v': "auxiliary verb",
'aux-adj': "auxiliary adjective",
'Buddh': "Buddhist term",
'chn': "children's language",
'col': "colloquialism",
'comp': "computer terminology",
'conj': "conjunction",
'ctr': "counter",
'derog': "derogatory",
'eK': "exclusively kanji",
'ek': "exclusively kana",
'exp': "Expressions (phrases, clauses, etc.)",
'fam': "familiar language",
'fem': "female term or language",
'food': "food term",
'geom': "geometry term",
'gikun': "gikun (meaning) reading",
'hon': "honorific or respectful (sonkeigo) language",
'hum': "humble (kenjougo) language",
'iK': "word containing irregular kanji usage",
'id': "idiomatic expression",
'ik': "word containing irregular kana usage",
'int': "interjection (kandoushi)",
'io': "irregular okurigana usage",
'iv': "irregular verb",
'ling': "linguistics terminology",
'm-sl': "manga slang",
'male': "male term or language",
'male-sl': "male slang",
'math': "mathematics",
'mil': "military",
'n': "noun (common) (futsuumeishi)",
'n-adv': "adverbial noun (fukushitekimeishi)",
'n-suf': "noun, used as a suffix",
'n-pref': "noun, used as a prefix",
'n-t': "noun (temporal) (jisoumeishi)",
'num': "numeric",
'oK': "word containing out-dated kanji",
'obs': "obsolete term",
'obsc': "obscure term",
'ok': "out-dated or obsolete kana usage",
'poet': "poetical term",
'pol': "polite (teineigo) language",
'pref': "prefix",
'prt': "particle",
'physics': "physics terminology",
'rare': "rare",
'sens': "sensitive",
'sl': "slang",
'suf': "suffix",
'uK': "word usually written using kanji alone",
'uk': "word usually written using kana alone",
'v1': "Ichidan verb",
'v4r': "Yondan verb with `ru' ending (archaic)",
'v5': "Godan verb (not completely classified)",
'v5aru': "Godan verb - -aru special class",
'v5b': "Godan verb with `bu' ending",
'v5g': "Godan verb with `gu' ending",
'v5k': "Godan verb with `ku' ending",
'v5k-s': "Godan verb - Iku/Yuku special class",
'v5m': "Godan verb with `mu' ending",
'v5n': "Godan verb with `nu' ending",
'v5r': "Godan verb with `ru' ending",
'v5r-i': "Godan verb with `ru' ending (irregular verb)",
'v5s': "Godan verb with `su' ending",
'v5t': "Godan verb with `tsu' ending",
'v5u': "Godan verb with `u' ending",
'v5u-s': "Godan verb with `u' ending (special class)",
'v5uru': "Godan verb - Uru old class verb (old form of Eru)",
'v5z': "Godan verb with `zu' ending",
'vz': "Ichidan verb - zuru verb (alternative form of -jiru verbs)",
'vi': "intransitive verb",
'vk': "Kuru verb - special class",
'vn': "irregular nu verb",
'vs': "noun or participle which takes the aux. verb suru",
'vs-s': "suru verb - special class",
'vs-i': "suru verb - irregular",
'kyb': "Kyoto-ben",
'osb': "Osaka-ben",
'ksb': "Kansai-ben",
'ktb': "Kantou-ben",
'tsb': "Tosa-ben",
'thb': "Touhoku-ben",
'tsug': "Tsugaru-ben",
'kyu': "Kyuushuu-ben",
'rkb': "Ryuukyuu-ben",
'vt': "transitive verb",
'vulg': "vulgar expression or word"
}
# from http://en.wikipedia.org/wiki/Japanese_verb_conjugations, http://en.wikibooks.org/wiki/Japanese/Verb_conjugation_table and http://www.guidetojapanese.org
tenses = {'nonpast':'present/future', 'past':'past', 'condra':'if and when', 'condraba':'if and when - formal',
'neg':'not', 'i':'for/formal', 'te':'and/command', 'pot':'able to do', 'cause':'cause or enable to do', 'condeba':'if able to',
'imp':'an instruction', 'pass':'indirectly/regrettably', 'vol':'possibility', 'pos': "possesive", 'to': "to adverb",
'negte': "don't", 'polv': "polite verb", 'tai': "expressing a wish", 'washinai': "strong negative intention",
'nasai': "a command", 'na': "a command", 'yasui': "easy", 'nikui': "difficult", 'sugiru': "excessive",
'yagaru': "disrespectful", 'ni': "purpose", 'n': "polite", 'kure': "request", 'kudasai': "request",
'kudasaik': "request", 'teiru': "currently", 'teoku': "completed for later", 'tearu': "completed object",
'teshimau': "unexpected", 'temiru': "attempt", 'teiku': "continuous or changing",
'tekuru': "continuous or changed", 'waikenai': "must not", 'wadame': "must not", 'moii': "permitted to",
'mokamawanai': "allowed/invited", 'hoshii': "requested", 'sumimasen': "sorry for doing",
'neg_past': "negative past", 'pol_past': "past (polite)", 'pol_negpast': "negative past (polite)", 'pol_nonpast': "present/future (polite)",
'pol_neg': "negative (polite)", 'pol_imp': "an instruction (polite)", 'pol_te': "(polite)", 'pol_vol': "possibility (polite)",
'tari': "tari"}
endings = {
'vs': {'nonpast':'する', 'past':'した', 'neg':'しない', 'i':'し', 'te':'して', 'pot':'できる', 'pot2':'せる',
'cause':'させる', 'condeba':'すれば', 'imp':'しろ', 'imp2':'せよ', 'pass':'される', 'vol':'しよう', 'vol2':'せよう'},
'v5u': {'past':'った', 'neg':'わない', 'i':'い', 'te':'って', 'pot':'える', 'cause':'わせる', 'condeba':'えば', 'imp':'え', 'pass':'われる', 'vol':'おう'},
'v5k': {'past':'いた', 'neg':'かない', 'i':'き', 'te':'いて', 'pot':'ける', 'cause':'かせる', 'condeba':'けば', 'imp':'け', 'pass':'かれる', 'vol':'こう'},
'v5g': {'past':'いだ', 'neg':'がない', 'i':'ぎ', 'te':'いで', 'pot':'げる', 'cause':'がせる', 'condeba':'げば', 'imp':'げ', 'pass':'がれる', 'vol':'ごう'},
'v5s': {'past':'した', 'neg':'さない', 'i':'し', 'te':'して', 'pot':'せる', 'cause':'させる', 'condeba':'せば', 'imp':'せ', 'pass':'される', 'vol':'そう'},
'v5t': {'past':'った', 'neg':'たない', 'i':'ち', 'te':'って', 'pot':'てる', 'cause':'たせる', 'condeba':'てば', 'imp':'て', 'pass':'たれる', 'vol':'とう'},
'v5n': {'past':'んだ', 'neg':'なない', 'i':'に', 'te':'んで', 'pot':'ねる', 'cause':'なせる', 'condeba':'ねば', 'imp':'ね', 'pass':'なれる', 'vol':'のう'},
'v5b': {'past':'んだ', 'neg':'ばない', 'i':'び', 'te':'んで', 'pot':'べる', 'cause':'ばせる', 'condeba':'べば', 'imp':'べ', 'pass':'ばれる', 'vol':'ぼう'},
'v5m': {'past':'んだ', 'neg':'まない', 'i':'み', 'te':'んで', 'pot':'める', 'cause':'ませる', 'condeba':'めば', 'imp':'め', 'pass':'まれる', 'vol':'もう'},
'v5r': {'past':'った', 'neg':'らない', 'i':'り', 'te':'って', 'pot':'れる', 'cause':'らせる', 'condeba':'れば', 'imp':'れ', 'pass':'られる', 'vol':'ろう'},
'adj-i': {'past':'かった', 'neg':'くない', 'te':'くて', 'cause':'くさせる', 'condeba':'ければ'},
'adj-na': {'past':'だった', 'neg':'ではない', 'neg2':'じゃない', 'te':'で', 'cause':'にさせる', 'condeba':'であれば'},
'adj-no': {'pos': "の"},
'adv-to': {'to': "と"},
'eK': {},
'ek': {},
'n-suf': {},
'n-pref': {},
'pref': {},
'suf': {},
'uK': {},
'uk': {},
'v1': {'past':'た', 'neg':'ない', 'i':'', 'te':'て', 'pot':'られる', 'pot2':'れる',
'cause':'させる', 'condeba':'れば', 'imp':'ろ', 'imp2':'よ', 'pass':'られる', 'vol':'よう'},
'v5k-s': {'nonpast':'いく', 'past':'いった', 'te':'いって'},
'v5r-i': {'neg':'ない'},
'v5aru': {},
'vn': {},
'vs-s': {},
'vs-i': {'nonpast':'する', 'past':'した', 'neg':'しない', 'i':'し', 'te':'して', 'pot':'できる', 'pot2':'せる',
'cause':'させる', 'condeba':'すれば', 'imp':'しろ', 'imp2':'せよ', 'pass':'される', 'vol':'しよう', 'vol2':'せよう'},
'n': {'nonpast': "ます", 'past': "ました", 'neg': "ません", 'te': "まして", 'imp': "ませ", 'vol': "ましょう"}, # -masu endings
'condra': {'condraba': "ば"},
'past': {'condra': 'ら'},
'nai': {'negte': "いで", 'condeba': "ければ"}, # special for neg te - "don't do this"
'i': {'n': "", 'tai': "たい", 'washinai': "はしない", 'nasai': "なさい", 'na': "な", 'yasui': "やすい", 'nikui': "にくい", 'sugiru': "すぎる", 'yagaru': "やがる", 'ni': "に"},
'te': {'kure': "くれ", 'kudasai': "ください", 'kudasai_k': "下さい", 'teiru': "いる", 'teoku': "おく", 'tearu': "ある",
'teshimau': "しまう", 'temiru': "みる", 'teiku': "いく", 'tekuru': "くる", 'waikenai': "はいけない", 'wadame': "はだめ",
'moii': "もいい", 'mokamawanai': "もかまわない", 'hoshii': "ほしい", 'hoshii_k': "欲しい", 'sumimasen': "すみません"},
        # copula forms, supplied here because getamatch4() special-cases self.endings['vd'];
        # tense keys follow the plain/polite names used in `tenses` above
        'vd': {'nonpast': "だ", 'past': "だった", 'neg': "じゃない", 'neg2': "ではない", 'te': "で", 'vol': "だろう",
               'pol_nonpast': "です", 'pol_past': "でした", 'pol_neg': "じゃありません", 'pol_vol': "でしょう"},
        'nul': {'dict': ""}
}
def __init__(self, db_cursor):
HTMLParser.__init__(self)
# TODO: read in entities from specified DTD. maybe do that outside and pass as an option
for k,v in self.jaentities.items(): self.unentities[v] = k
self.db_cursor = db_cursor
self.dicttable = None
self.biggest = None
self.ja = None
self.nonja = None
self.cursor = None
self.indextable = None
self.thisid = None
self.block = None
self.missedtag = None
self.tags = None
self.error = None
    def iscjk(self, string):
        # count characters in CJK-related codepoint ranges
        # (byte-wise UTF-8 comparison is equivalent, since UTF-8 preserves codepoint order)
        paraja = 0
        for c in string:
            if c == u"\u00d7": paraja += 1                      # multiplication sign
            elif u"\u0370" <= c <= u"\u03ff": paraja += 1       # Greek
            elif u"\u2000" <= c <= u"\u206f": paraja += 1       # general punctuation
            elif u"\u2190" <= c <= u"\u22ff": paraja += 1       # arrows, math operators
            elif u"\u25a0" <= c <= u"\u25ff": paraja += 1       # geometric shapes
            elif u"\u3000" <= c <= u"\u30ff": paraja += 1       # CJK punctuation, kana
            elif u"\u4e00" <= c <= u"\u9fff": paraja += 1       # CJK unified ideographs
            elif u"\uff00" <= c <= u"\uffef": paraja += 1       # full/half-width forms
        return paraja
    def isreading(self, string):
        # count characters that are kana, CJK punctuation, or width variants
        paraja = 0
        for c in string:
            if u"\u2200" <= c <= u"\u22ff": paraja += 1         # math operators
            elif u"\u3000" <= c <= u"\u303f": paraja += 1       # CJK punctuation
            elif u"\u3040" <= c <= u"\u309f": paraja += 1       # hiragana
            elif u"\u30a0" <= c <= u"\u30ff": paraja += 1       # katakana
            elif u"\uff00" <= c <= u"\uffef": paraja += 1       # full/half-width forms
        return paraja
# strlen is the length of string which matches kanji/reading or skanji/sreading
# word is the current form of the string - not necessarily matching string[:strlen]
def doending(self, wtype, string, conjs, word, reading):
#print "TRACE1:", wtype, string.encode("utf-8"), conjs, word.encode("utf-8"), reading.encode("utf-8")
composite = {"pass": ["v1"], "cause": ["v1"], "pot": ["v1"], "neg": ["adj-i", "nai"],
"i": ["i"], "n":["n"], "tai": ["adj-i"], "washinai": ["adj-i"], "yasui": ["adj-i"], "nikui": ["adj-i"], "sugiru": ["v5r"], "yagaru": ["v5r"],
"te": ["te"], 'teiru': ["v1"], 'teoku': ["v5k"], 'tearu': ["v5r-i"], 'teshimau': ["v5u"], 'temiru': ["v1"], 'teiku': ["v5k"], 'tekuru': ["vk"],
"pol_te": ["te"]}
appenders = ["condra", "past", "i", "adj-na","adj-no","adv-to","n","vz","vs","vs-s", "te"]
doublers = ["vs-i", "vk"]
if wtype in appenders:
wordroot = word
readroot = reading
elif wtype in doublers:
wordroot = word[:-2]
readroot = reading[:-2]
else:
wordroot = word[:-1]
readroot = reading[:-1]
rconjs = []
bestword = ""
bestreading = ""
besttenses = []
if wtype not in self.endings: return bestword, bestreading, besttenses
for v in conjs:
if v[-1:] == '2': v = v[:-1]
            elif v[-2:] == "_k": v = v[:-2]
elif v[:4] == "pol_": v = v[4:]
if v not in rconjs: rconjs.append(v)
        for tense, ending in self.endings[wtype].items():
if tense[-1:] == '2': rtense = tense[:-1]
            elif tense[-2:] == "_k": rtense = tense[:-2]
elif tense[:4] == "pol_": rtense = tense[4:]
else: rtense = tense
if rtense in rconjs: continue
thistenses = conjs+[tense]
newword = wordroot+ending
if len(newword) > len(string) + 1: continue
            if tense[-2:] == "_k": newreading = readroot + self.endings[wtype][tense[:-2]]
else: newreading = readroot+ending
if string[:len(newword) - 1] != newword[:len(newword) - 1]: continue
if string[:len(newword)] == newword and len(newword) > len(bestword):
bestword = newword
bestreading = newreading
besttenses = thistenses[:]
# see if it can be extended further
# following are rules for making one ending out of another, rather than a composite tense out of both.
if rtense in ["condra", "past"]:
thisword, thisreading, thistenses = self.doending(rtense, string, conjs, newword, newreading)
if string[:len(thisword)] == thisword and len(thisword) > len(bestword):
bestword = thisword
bestreading = thisreading
besttenses = thistenses[:]
# whereas these are composite tenses
if rtense in composite:
for newtense in composite[rtense]:
thisword, thisreading, thistenses = self.doending(newtense, string, conjs+[tense], newword, newreading)
if string[:len(thisword)] == thisword and len(thisword) > len(bestword):
bestword = thisword
bestreading = thisreading
besttenses = thistenses[:]
if len(bestword) and len(besttenses) == 0:
print("lack of conjugations weirdness!")
sys.exit(1)
if len(bestword) and bestword != string[:len(bestword)]:
print("assert thingy - unmatching weirdness!", bestword, "doesn't match", string[:len(bestword)], "in", string, repr(besttenses), wtype)
sys.exit(1)
#print "TRACE3:", bestword.encode("utf-8"), bestreading.encode("utf-8"), besttenses
return bestword, bestreading, besttenses
def lookup(self, string, suffix, kr):
#if suffix: where = "substr(%s, 1, char_length(%s) - %d)" % (kr, kr, suffix)
if suffix == 2: where = "ss" + kr
elif suffix == 1: where = "s" + kr
else: where = kr
        # column/table names can't be bound parameters, so interpolate them;
        # only the looked-up value is passed as a query parameter
        self.db_cursor.execute(
            "select jmdictID, senseID, tense, type, {0}, reading from USERDATA.`{1}` where {2} = %s order by score".format(
                kr, self.dicttable, where),
            (string,))
return self.db_cursor
def getamatch4(self, string):
atype = None
row = None
if not self.iscjk(string[0]): return 0, None, None
# TODO: when multi-threaded we should requery self.biggest?
# or possibly just miss it out.
debug = False
#if string[:2] == u"です": debug = True
strlen = min(
self.biggest, len(string),
len(string.partition(u'、')[0]), len(string.partition(u'。')[0]), len(string.partition(u'・')[0]),
len(string.partition(u')')[0]), len(string.partition(u'(')[0]), len(string.partition(u'」')[0]), len(string.partition(u'「')[0]),
len(string.partition(u'(')[0]), len(string.partition(u'『')[0]), len(string.partition(u'』')[0]), len(string.partition(u')')[0]),
len(string.partition(u':')[0]), len(string.partition(u':')[0]), len(string.partition(u' ')[0]), len(string.partition(u'“')[0]),
len(string.partition(u',')[0]), len(string.partition(u']')[0]), len(string.partition(u'[')[0]), len(string.partition(u'”')[0]),
len(string.partition(u'、')[0]), len(string.partition(u'!')[0]), len(string.partition(u'?')[0]))
beststrlen = 0
bestatype = None
bestestrow = None
bestesttenses = None
bestestlen = None
bestinsert = None
bestinsert_params = None
# deal with the most irregular verb as a one-off special case - "to be"
        for tobetype, tobeending in self.endings['vd'].items():
            utobe = tobeending
if string[:strlen] == utobe:
if string[:2] == u"じゃ": bestestrow = [["1005900", "5", "vd-"+tobetype, "auxiliary"]]
elif string[:1] == u"で": bestestrow = [["1628500", "0", "vd-"+tobetype, "auxiliary"]]
elif string[:1] == u"だ": bestestrow = [["2089020", "0", "vd-"+tobetype, "auxiliary"]]
else: continue
# nara has a dict entry of its own. .. or that might be something else
beststrlen = strlen
bestatype = "reading"
bestesttenses = ["vd", tobetype]
bestinsert = "reading='%s', sreading='%s', ssreading='%s', type='%s', kanji='%s', skanji='%s', sskanji='%s'"
bestinsert_params = (utobe, utobe[:-1], utobe[:-2], "auxiliary", utobe, utobe[:-1], utobe[:-2])
bestestlen = len(tobeending)
# if it's not "to be"
if not bestestrow:
# for each decreasing substring
while strlen:
bestword = ""
bestreading = ""
bestrow = []
besttenses = None
bestsuffix = 0
if self.isreading(string[:strlen]) < strlen: atype = "kanji"
else: atype = "reading"
allrows = []
exact = False
# find an exact match
res = self.lookup(string[:strlen], 0, atype)
row = res.fetchone()
while row:
exact = True
allrows.append(row[:])
#if debug: print "exact", row[0][2], row[0][3], row[0][4], row[0][5]
row = res.fetchone()
if exact:
bestrow = allrows[-1][:]
bestword = bestrow[0][4].decode('utf-8')
# if there's enough text then look for a partial match
if strlen > 1:
res = self.lookup(string[:strlen - 1], 1, atype)
row = res.fetchone()
while row:
allrows.append(row[:])
#if debug: print "approx", row[0][2], row[0][3], row[0][4], row[0][5]
row = res.fetchone()
if strlen > 2:
res = self.lookup(string[:strlen - 2], 2, atype)
row = res.fetchone()
while row:
allrows.append(row[:])
#if debug: print "approx", row[0][2], row[0][3], row[0][4], row[0][5]
row = res.fetchone()
# attempt to conjugate each one
for row in allrows:
thesetenses = row[0][2].split("-")
conjlen = 0
if row[0][2] == "dict":
wtypes = []
for btype in row[0][3].split(';'): wtypes.append(self.unentities[btype.strip()])
else: wtypes = thesetenses[-1:]
for wtype in wtypes:
if row[0][2] == "dict": thesetenses = [wtype.replace("-", "_")]
thisword, thisreading, thistenses = self.doending(wtype, string, thesetenses, row[0][4].decode('utf-8'), row[0][5].decode('utf-8'))
# remember the best conjugated result
if len(thisword) > len(bestword):
bestword = thisword
bestreading = thisreading
besttenses = thistenses
bestrow = row[:]
row = bestrow[:]
# if there was an exact match or a better conjugation and it was the best so far
# if len(bestword) and (len(bestword) > beststrlen or (len(bestword) == beststrlen and not exact)):
if debug and len(bestword): print(beststrlen, strlen, bestword.encode('utf-8'))
if len(bestword) and (len(bestword) >= beststrlen):
# if conjugated
if besttenses:
if debug: print("dealing with conjugation")
# figure out the kanji and reading versions from the dict form
self.cursor.execute("select reading, kanji, meaning from USERDATA.`" + self.dicttable + "` where jmdictID=%s and senseID=%s and tense='%s'" % (row[0][0], row[0][1], row[0][2]))
row2 = self.cursor.fetchone()[0]
if bestword == bestreading and row2[0] != row2[1]:
# the original is non-kanji but there's a kanji version in the dictionary
# so derive a new bestword
rword = row2[0].decode('utf-8')
kword = row2[1].decode('utf-8')
suffixpos = 0
while suffixpos < len(rword) and suffixpos < len(bestword) and rword[suffixpos] == bestword[suffixpos]: suffixpos += 1
suffixlen = suffixpos - len(rword)
if suffixlen == 0: prefix = kword
else: prefix = kword[:suffixlen]
newsuffixlen = suffixpos - len(bestword)
if newsuffixlen == 0: suffix = ""
else: suffix = bestword[newsuffixlen:]
bestword = prefix+suffix
# we might have changed bestword to the kanji by now so check reading too
if bestword != string[:len(bestword)] and bestreading != string[:len(bestreading)]:
print(string + " (" + bestword + " from " + row[0][4] + ") doesn't match " + bestword + " or " + bestreading + " : " + row2[2])
sys.exit(1)
# we'll always pick up the non-exact conjugation before the existing exact match
#elif len(bestword) > beststrlen:
# print "already got", bestword.encode('utf-8'), "or", bestreading.encode('utf-8'), "from", string.encode('utf-8')
# sys.exit(1)
row = [[row[0][0], row[0][1], "-".join(besttenses), row[0][3]]]
if self.isreading(string[:len(bestword)]) < len(string[:len(bestword)]): atype = "kanji"
else: atype = "reading"
if len(bestword) >= beststrlen:
beststrlen = len(bestword)
bestatype = atype
bestestrow = row[:]
if debug: print("conjugated", row[0][2], row[0][3])
bestesttenses = besttenses[:]
bestinsert = "reading='%s', sreading='%s', ssreading='%s', type='%s', kanji='%s', skanji='%s', sskanji='%s'"
bestinsert_params = (bestreading, bestreading[:-1], bestreading[:-2], row[0][3], bestword, bestword[:-1], bestword[:-2])
bestestlen = max(len(bestreading), len(bestword))
# if it wasn't conjugated then it must be an exact match
else:
beststrlen = strlen
bestatype = atype
bestestrow = row[:]
if debug: print("exact", row[0][2], row[0][3])
bestinsert = None
strlen -= 1 # try a shorter string
#if strlen < 3 and len(string) >= 3 and string[1:3] == u"され" and not self.isreading(string): sys.exit(1)
if beststrlen and bestinsert:
row = bestestrow[:]
# TODO: lock tables
self.cursor.execute("select count(*) from USERDATA.`%s` where jmdictID=%s and senseID=%s and tense='%s'" % (self.dicttable, row[0][0], row[0][1], row[0][2]))
if int(self.cursor.fetchone()[0][0]) == 0:
#cursor.execute("select max(ID/100000000000) from USERDATA.`" + self.dicttable + "` where jmdictID=%s and senseID=%s" % (row[0][0], row[0][1]))
#maxtense = int(float(cursor.fetchone()[0][0])) + 1
self.cursor.execute("select meaning from USERDATA.`" + self.dicttable + "` where jmdictID=%s and senseID=%s and tense='dict'" % (row[0][0], row[0][1]))
dictmeaning = self.cursor.fetchone()[0][0]
longtense = []
numtenses = len(bestesttenses)
tenseno = 1
for tense in bestesttenses[1:]:
if tense[-1:] == "2": rtense = tense[:-1]
elif tense[-2:] == "_k": rtense = tense[:-2]
else: rtense = tense
tenseno += 1
if rtense in ["te", "i"] and tenseno < numtenses: continue
longtense.append(self.tenses[rtense])
# FIXME: if we haven't narrowed it down then add for each dict with same atype
# the table name can't be bound as a parameter, so format it in directly;
# all the values go through the driver's escaping via %s placeholders
query = "insert into USERDATA.`" + self.dicttable + "` set jmdictID=%s, " + bestinsert + ", meaning=%s, senseID=%s, ID=conv(substring(sha1(%s), 1, 15), 16, 10), tense=%s"
params = [row[0][0]] + list(bestinsert_params) + [dictmeaning + " (" + ", ".join(longtense) + ")", row[0][1], row[0][0] + row[0][1] + row[0][2], row[0][2]]
self.cursor.execute(query, params)
if bestestlen > self.biggest: self.biggest = bestestlen
if string[:2] == u"です" and beststrlen == 1:
print("desu error!!!")
sys.exit(1)
return beststrlen, bestatype, bestestrow
def process_text(self):
paraja = 0
paranonja = 0
order = 0
self.currenttext = self.currenttext.strip()
self.currenttext = self.currenttext.replace(u"\n", u" ")
self.currenttext = self.currenttext.replace(u"\t", u" ")
while self.currenttext.find(u" ") > -1: self.currenttext = self.currenttext.replace(u" ", u" ")
janonja = ""
# check symbols
# uses ranges scanned from jmdict using checkdict.py
for c in self.currenttext:
# compare the characters directly - no need to round-trip through utf-8,
# since utf-8 byte strings sort in code point order anyway
if c == u"\u00d7": paraja += 1
elif u"\u0370" <= c <= u"\u03ff": paraja += 1
elif u"\u2000" <= c <= u"\u206f": paraja += 1
elif u"\u2190" <= c <= u"\u22ff": paraja += 1
elif u"\u25a0" <= c <= u"\u25ff": paraja += 1
elif u"\u3000" <= c <= u"\u30ff": paraja += 1
elif u"\u4e00" <= c <= u"\u9fff": paraja += 1
elif u"\uff00" <= c <= u"\uffef": paraja += 1
else: paranonja += 1
if paraja:
#if True or paranonja == 0 or (float(paraja) / float(paranonja)) > 3.0:
#print self.currenttext
strpos = 0
unknown = u""
english = u""
japanese = u""
lastquery = ""
lastvals = tuple()
while strpos < len(self.currenttext): # there's something to match
strlen, kr, row = self.getamatch4(self.currenttext[strpos:])
if not strlen: # didn't find any match
unknown += self.currenttext[strpos:strpos+1]
strpos += 1
else: # found something!
#print self.thisid, self.block, order, row[0][0], row[0][1], unknown, self.currenttext[strpos:strpos+strlen]
# FIXME stick all the senses together as one string?
# or apply some rules to select one
self.ja += strlen
self.nonja += len(unknown)
# if kanji: kr = "kanji"
# else: kr="reading"
# all placeholders are unquoted %s - the DB-API 'format' paramstyle has no %d,
# and the driver adds quoting when it escapes the values
if order > 0:
if unknown != u"" and unknown != u" ":
lastquery += ", postunknown=%s"
lastvals += (unknown,)
#print lastquery % lastvals
self.cursor.execute(lastquery, lastvals)
if order == 0 and unknown != u"" and unknown != u" ":
lastquery = "insert into `" + self.indextable + "` set parsestamp=now(), urlID=%s, block=%s, `order`=%s, jmdictID=%s, jmsenseID=%s, tense=%s, preunknown=%s, kr=%s"
lastvals = (self.thisid, self.block, order, row[0][0], row[0][1], row[0][2], unknown, kr)
else:
lastquery = "insert into `" + self.indextable + "` set parsestamp=now(), urlID=%s, block=%s, `order`=%s, jmdictID=%s, jmsenseID=%s, tense=%s, kr=%s"
lastvals = (self.thisid, self.block, order, row[0][0], row[0][1], row[0][2], kr)
order += 1
unknown = u""
strpos += strlen
if order:
if unknown != u"" and unknown != u" ":
lastquery += ", postunknown=%s" # unquoted - the driver adds quoting
lastvals += (unknown,)
#print lastquery % lastvals
self.cursor.execute(lastquery, lastvals)
else: self.nonja += len(self.currenttext)
self.currenttext = u""
self.block += 1
def handle_endtag(self, tag):
if tag.lower() in self.blocktag and tag.lower() not in self.jointag:
# process current text string
if self.currenttext != "": self.process_text()
def _fixencoding(self, encoding):
# complete list is here: http://www.iana.org/assignments/character-sets
# python list is here: http://docs.python.org/lib/standard-encodings.html
# fix common (and uncommon) mistakes, typos and things missing from python
# looks like 'ibm_cp943c' is a superset of sjis and cp932 - http://www.bugbearr.jp/?文字化け
encoding = encoding.lower().replace("-", "_")
if encoding in ("", "none", "x_sjis", "sjis_jp", "shift_sjis", "x_sjis_jp", "sihft_jis", "windows_31j", "sift_jis", "cp943c") : encoding = "shift_jis"
elif encoding in ("euc", "x_euc", "x_euc_jp") : encoding = "euc_jp"
elif encoding in ("iso_8859_8_i",) : encoding = "iso8859_8" # note the comma - ("x") is a string, not a tuple, and 'in' on a string is a substring test
elif encoding in ("windows_874",) : encoding = "cp874"
elif encoding in ("big5_8859_1",) : encoding = "big5"
return encoding
def handle_starttag(self, tag, attrs):
if tag.lower() not in self.blocktag and tag.lower() not in self.ignoretag and tag.lower() not in self.inlinetag and tag.lower() not in self.unknowntag and tag.lower() not in self.missedtag:
self.missedtag.append(tag.lower())
if tag.lower() in self.blocktag and tag.lower() not in self.jointag:
# process current text string
if self.currenttext != "": self.process_text()
isrobots = False
contents = ""
for attr in attrs:
if attr[0] in self.linktag and attr[1] and not attr[1].startswith("javascript:") and tag.lower() in self.linktag[attr[0]]:
self.tags.append(attr[1].replace("\n", "").strip())
elif tag.lower() == 'meta' and len(attr) >= 2 and attr[0] == "content" and attr[1] is not None and attr[1].find("charset=") != -1 :
self.encoding = self._fixencoding(attr[1].split("charset=")[1])
elif tag.lower() == 'meta' and len(attr) == 2 and attr[0] == "name" and attr[1] is not None and attr[1]== "robots" : isrobots = True
elif tag.lower() == 'meta' and len(attr) == 2 and attr[0] == "content" and attr[1] is not None: contents = attr[1]
elif tag.lower() == 'html' and len(attr) == 2 and attr[0] == "lang" and attr[1] is not None and attr[1].lower() != "ja":
self.error("not japanese - language '%s'" % attr[1])
if isrobots and contents.find("noindex") != -1: self.error("meta 'noindex' found")
def handle_data(self, data):
if (data in ("/*", "*/")): return
if (data.isspace()): return
self.currenttext += data
def unknown_decl(self, data): # no, i really don't care about these!
pass
def unquoteutf8(url, encoding = None):
# RFC3986 says "use UTF-8" .. so we normally ignore encoding
# 'url' is unicode on input and output - the DB driver handles the encoding
# however, links in pages _may_ be using the page encoding
try:
return urllib.parse.unquote(url, encoding=encoding or "utf-8")
except (LookupError, UnicodeDecodeError): # no encoding or wrong encoding specified - fall back to utf-8 .. even if that's what failed!
return urllib.parse.unquote(url)
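# A standalone sketch of the RFC 3986 behaviour unquoteutf8() relies on:
# percent-escapes normally decode as UTF-8, but a legacy page may percent-encode
# its links in the page encoding instead. (Illustrative only - not called above.)

```python
import urllib.parse

# "日本" percent-encoded as UTF-8, the RFC 3986 default...
utf8_link = "%E6%97%A5%E6%9C%AC"
# ...and the same word percent-encoded as EUC-JP, as a legacy page might emit it
eucjp_link = urllib.parse.quote("日本".encode("euc_jp"))

# decoding succeeds either way once the right encoding is supplied
assert urllib.parse.unquote(utf8_link) == "日本"
assert urllib.parse.unquote(eucjp_link, encoding="euc_jp") == "日本"
```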
# see if there's already an entry which should stop us from checking (again)
# implicitly initialise the entry if it's not already there.
def crawlcheck(db_cursor, url, referrer = None, recent = 0, encoding = None):
url = unquoteutf8(url, encoding = encoding)
if globals()['options'].verbose: print ("checking for previous scan of", url)
refid = None
if referrer:
referrer = unquoteutf8(referrer, encoding = encoding)
db_cursor.execute("select ID from webcrawl where url=%s", (referrer, )) # unquoted placeholder - the driver adds quoting
row = db_cursor.fetchone()
if row : refid = row[0][0]
# when called with referrer set we never look at the returned value anyway
else : return None
# attempt to insert that url -
insert = "insert into webcrawl set url=%s, stamp=now()"
params = [url]
if referrer:
insert += ", referrer=%s"
params.append(refid)
insert += " returning id"
try:
db_cursor.execute(insert, params)
thisis = db_cursor.fetchone()[0]
except psycopg2.errors.UniqueViolation:
# hopefully that just means it's already there
thisis=-1
if recent:
db_cursor.execute("select ID from webcrawl where url=%s and timestampdiff(hour, now(), stamp)<164 and (ja is not null or redirect is not null)", (url, ))
if db_cursor.fetchone():
return None
if thisis == -1:
db_cursor.execute("select ID from webcrawl where url=%s", (url, ))
row = db_cursor.fetchone()
if row: return int(row[0][0])
else:
print("couldn't find or add url", url, referrer, recent)
return None
return thisis
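# The insert-then-catch pattern crawlcheck() uses can be sketched against sqlite3
# as a stand-in for the real webcrawl table (the table and column names here are
# illustrative, not the production schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table webcrawl (id integer primary key, url text unique)")

def check(cur, url):
    # optimistically insert; a unique-constraint violation just means
    # the url is already known, so fall back to looking it up
    try:
        cur.execute("insert into webcrawl (url) values (?)", (url,))
        return cur.lastrowid
    except sqlite3.IntegrityError:
        return cur.execute("select id from webcrawl where url=?", (url,)).fetchone()[0]

cur = db.cursor()
first = check(cur, "http://example.jp/")
again = check(cur, "http://example.jp/")
assert first == again  # the second call finds the existing row instead of inserting
```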
def crawlupdate(db_cursor, url, ja=None, redirect = None, status = None, line=None, char=None, details=None, encoding = None):
url = unquoteutf8(url, encoding = encoding)
if redirect:
reid = crawlcheck(db_cursor, redirect)
redirect = unquoteutf8(redirect, encoding = encoding)
if reid is None: return None
else:
reid = None
if isinstance(status, bytes): status = status.decode("utf-8") # the DB wants text, not bytes
if isinstance(details, bytes): details = details.decode("utf-8")
update = "update webcrawl set stamp=now()"
params = []
if redirect:
update += ", redirect=%s" # don't forget the placeholder for the appended parameter
params.append(reid)
if ja is not None:
update += ", ja=%s"
params.append(ja)
if status is not None:
update += ", status=%s"
params.append(status)
if line is not None:
update += ", errline=%s"
params.append(line)
if char is not None:
update += ", errchar=%s"
params.append(char)
if details is not None:
update += ", errdetail=%s"
params.append(details)
update += " where url=%s"
params.append(url)
db_cursor.execute(update, params)
if db_cursor.rowcount != 1: sys.stderr.write("warning: couldn't update crawl status of url '%s' (%s)\n" % (url, update))
return None
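# The dynamic-UPDATE idiom in crawlupdate(), in miniature: append one "col=%s"
# fragment and one parameter per optional field so placeholders and values stay
# aligned. (The function and field names below are illustrative only.)

```python
def build_update(table, key, keyval, **fields):
    sets, params = ["stamp=now()"], []
    for col, val in fields.items():
        if val is not None:  # skip fields the caller didn't supply
            sets.append(col + "=%s")
            params.append(val)
    sql = "update " + table + " set " + ", ".join(sets) + " where " + key + "=%s"
    params.append(keyval)
    return sql, params

sql, params = build_update("webcrawl", "url", "http://example.jp/", ja=80, status="OK")
assert sql == "update webcrawl set stamp=now(), ja=%s, status=%s where url=%s"
assert params == [80, "OK", "http://example.jp/"]
```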
def scan2(cursor, dicttable, indextable, url, thisid):
# db connection for the thread
scan_cursor=py_db.connect(dbname="jmdict").cursor()
scan_cursor.execute("SET NAMES utf8")
scan_cursor.execute("SET CHARACTER SET utf8")
scan_cursor.execute("SET character_set_connection=utf8")
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')] # wikipedia doesn't work without this - gives 403 forbidden
reader = jaHTMLParser(scan_cursor)
# grrr. "instance attributes"??!
for atype, treads in globals()['vct'].readings.items(): # dict.iteritems() is python 2 only
if atype not in reader.endings:
if globals()['options'].verbose: print("adding readings for", atype)
reader.endings[atype] = {}
elif globals()['options'].verbose: print("updating", atype)
for btype, reading in treads.items():
if btype not in reader.endings[atype]:
if globals()['options'].verbose: print("added ending", btype)
reader.endings[atype][btype] = reading
elif globals()['options'].verbose: print("checked", btype)
if reading != reader.endings[atype][btype]:
print("please change reading for", atype, btype, "from", reading, "to", reader.endings[atype][btype])
# looks like the wikipedia table had errors
# reader.endings[atype][btype] = reading
reader.tags = []
reader.ja = 0
reader.nonja = 0
reader.block = 0
reader.cursor = cursor
reader.dicttable = dicttable
reader.indextable = indextable
reader.encoding = "Shift_JIS" # the most common encoding for jp pages which don't specify any encoding
reader.CDATA_CONTENT_ELEMENTS = tuple() # don't treat CDATA in <style> and <script> tags as 'stuff' - they're decls which i'd like to skip.
reader.missedtag = []
reader.biggest = globals()['biggest']
realurl = url
status = "OK"
lineno = None
offset = None
details = None
parsetime = timedelta(seconds = 0)
try:
urlparts = urlparse(url)
req = urllib.request.Request(str(urlunparse((urlparts[0], urlparts[1].encode("idna").decode("ascii")) + urlparts[2:])))
page = opener.open(req) # keep the response object - headers, geturl() and read() are all needed below
charset = ["utf-8", "Shift_JIS", "euc_jp", "iso8859_8", "cp874", "big5"]
if "charset=" in page.headers.get('content-type', ''):
charset = [reader._fixencoding(page.headers['content-type'].split('charset=')[-1])] + charset
# geturl() returns str in python 3
# wikipedia returns urlencoded page-encoding (i assume) uri
realurl = page.geturl()
if realurl != url:
print("REDIRECT?", url, "=>", realurl)
crawlupdate(cursor, url, redirect=realurl, encoding = charset[0])
thisid = crawlcheck(cursor, realurl, recent=1)
if not globals()['options'].force and thisid is None: return parsetime
if page.info().get("Content-Type") and page.info().get("Content-Type").split(";")[0] != "text/html": # python 3 message objects use get(), not getheader()
crawlupdate(scan_cursor, url, ja=0, status="Content type: " + page.info().get("Content-Type").split(";")[0], encoding = charset[0])
home = "/mnt/stuff/media/web/" + page.info().get("Content-Type").split(";")[0]
cantcreate = makedir(home)
if not cantcreate:
#print home
pieces = urlparse(url)
try:
os.stat(home + "/"+pieces[1]+pieces[2])
except OSError: # hopefully "file doesn't already exist"
subprocess.check_call(['sh', '/usr/local/bin/get_file', home, url])
return parsetime
reader.thisid = thisid
#print datetime.now(), "scanning", urllib.unquote(realurl)
parsestart=datetime.now()
# read sometimes produces weird ValueError: invalid literal for int() with base 16: '' @ httplib.py:548
# 8oct2010 now feed sometimes produces unicode error in re._compile
content = page.read()
# let's guess it's utf-8 and convert to unicode just in case
# 8oct2011 actually, let's assume urllib2 is doing the right thing and decoding for us
# or try decoding using trick from stackoverflow 1020892
# third time lucky - assume web server lied to us - let the reader sort it out
# problem is html parser can't scan encoded content - tends to fall over on attributes
decode_errors = {}
leastworstenc = ""
leastworst = 0
while len(charset):
try:
content = content.decode(charset[0]) # bytes from the wire -> unicode for the parser
reader.encoding = charset[0]
break
except LookupError:
print("unknown encoding in", page.headers['content-type'])
except UnicodeDecodeError as decstatus:
decode_errors[charset[0]] = decstatus
if " in position " in str(decstatus):
posn = str(decstatus).split(" in position ")[1].split("-")[0].split(":")[0]
if posn.isdigit() and int(posn) > leastworst:
leastworst = int(posn)
leastworstenc = charset[0]
#else: print "decode status no good", decstatus, "best is", leastworst
else: print("weird decode status", decstatus)
if len(charset) == 1:
if leastworst:
print("decoding from", leastworstenc, "with replacements")
content = content.decode(leastworstenc, "replace")
reader.encoding = leastworstenc
else:
print("failed character decoding:")
for k in decode_errors:
print(k, decode_errors[k])
raise
charset = charset[1:]
reader.feed(content)
reader.close()
parsetime=datetime.now() - parsestart
except socket.timeout:
status = "Connection: no response from server - timed out"
except socket.gaierror as e:
status = str(e)
except urllib.error.HTTPError as e:
if hasattr(e, "reason"):
status = "HTTP: " + e.reason
else:
status = "HTTP %d" % e.code
except urllib.error.URLError as e:
# URLError.reason appears to be another object - timeout, gaierror etc
# those are the socket exceptions: error, herror, gaierror and timeout
# if hasattr(e, "reason"): status = "URL error: " + e.reason
# ah - but it doesn't have a 'code' either!
# status = "URL %d" % e.code
status = "URL: " + str(type(e.reason))
except socket.error as e:
status = str(e)
except ValueError as e: # don't reuse 'status' as the exception name - python 3 deletes that name after the except block
print(traceback.format_exc())
status = str(e)
globals()['biggest'] = reader.biggest
for a in reader.missedtag:
print("unhandled element:", a)
if reader.nonja + reader.ja == 0: japness = 0
else : japness = float(100 * reader.ja) / float(reader.nonja + reader.ja)
# log url to DB with timestamp and ja score
if globals()['options'].showpages: print(unquoteutf8(realurl, encoding = reader.encoding), status, "ja = %.1f" % japness)
thisID = crawlupdate(cursor, realurl, ja=japness, status=status, line=lineno, char=offset, details=details, encoding = reader.encoding)
#if status != "OK": print urllib.unquote(realurl), "is how japanese (CJK)? .. %d%%" % japness, status
# if it's japanese enough and we've not gone too deep and we've not seen it before (recently) then follow the link
if japness > 25:
#print reader.encoding
for child in reader.tags:
# if child is http://something where something contains no / then add a trailing /
if child[0:7] == "http://" and child[7:].find("/") == -1: child += "/"
if child[0:8] == "https://" and child[8:].find("/") == -1: child += "/"
child = str(urljoin(realurl, child).replace("/..", ""))
if child[0:7] == "http://" and child[7:].find("/") == -1: child += "/"
if child[0:8] == "https://" and child[8:].find("/") == -1: child += "/"
try:
parts = urlparse(child)
newchild = parts[0] + "://" + parts[1]
if not parts[2].startswith("/"): newchild += "/" # startswith also copes with an empty path
newchild += parts[2]
if parts[3]: newchild += ";" + parts[3]
if parts[4]: newchild += "?" + parts[4]
if parts[0] in ("http", "https"):
crawlcheck(cursor, url = newchild, referrer = realurl, encoding = reader.encoding)
if globals()['options'].links: print(urllib.parse.unquote(newchild))
except IndexError: # error in urlparse that tries to 'find' on a None value
print(urllib.parse.unquote(realurl), " - problem parsing child: ", child)
crawlcheck(cursor, url = child, referrer = realurl, encoding = reader.encoding)
# if the child isn't japanese enough then modify rules accordingly
# otherwise make a note of the urls you've not followed with a timestamp and ref to the page that wasn't japanese enough
# then if you get to one of those pages by another route and find it's good you can modify the rules accordingly
# we like self-tuning algorithms :-)
return parsetime
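# The charset-fallback decode inside scan2(), reduced to its core: try each candidate
# encoding in order, remember which one got furthest before failing, and as a last
# resort decode with that "least worst" candidate using replacement characters:

```python
def decode_with_fallback(raw, candidates):
    best_enc, best_pos = candidates[0], -1
    for enc in candidates:
        try:
            return raw.decode(enc), enc
        except UnicodeDecodeError as err:
            if err.start > best_pos:  # this encoding decoded further than the others
                best_enc, best_pos = enc, err.start
    return raw.decode(best_enc, "replace"), best_enc

# ascii fails immediately on multibyte text, so the utf-8 candidate wins
text, enc = decode_with_fallback("日本語".encode("utf-8"), ["ascii", "utf-8"])
assert (text, enc) == ("日本語", "utf-8")

# when every candidate fails, undecodable bytes become U+FFFD replacements
text2, enc2 = decode_with_fallback(b"\xff\xfe", ["ascii"])
assert text2 == "\ufffd\ufffd" and enc2 == "ascii"
```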
def makedir(home):
cantcreate = False
try:
os.makedirs(home, exist_ok=True) # no error if the directory already exists
except OSError as err:
sys.stderr.write(str(err) + "\n")
cantcreate = True