{"id":139,"date":"2006-09-14T03:13:09","date_gmt":"2006-09-13T18:13:09","guid":{"rendered":"https:\/\/fugutabetai.com\/blog\/2006\/09\/14\/ipsj-in-shinjuku-day-two\/"},"modified":"2006-09-14T03:13:09","modified_gmt":"2006-09-13T18:13:09","slug":"ipsj-in-shinjuku-day-two","status":"publish","type":"post","link":"https:\/\/fugutabetai.com\/blog\/2006\/09\/14\/ipsj-in-shinjuku-day-two\/","title":{"rendered":"IPSJ in Shinjuku Day two"},"content":{"rendered":"<p>Wednesday was the final day of the IPSJ meeting.  I&#8217;ve got more comments on the papers that I saw that day below.<\/p>\n<p><!-- readmore --><\/p>\n<p>Yoshihisa Shinozawa &#8211; &#8220;Extended simple recurrent networks by using<br \/>\nbigram&#8221; &#8211; Keio University<\/p>\n<p> I had a tough time understanding this paper: I don&#8217;t know much about<br \/>\n word nets using perceptrons.  I also had a tough time following his<br \/>\n Japanese.  <\/p>\n<p>&#8212;<\/p>\n<p>Takashi Kawakami, Hisashi Suzuki &#8211; &#8220;A calculation of Word Similarity<br \/>\nusing Decision Lists&#8221; &#8211; Chuo University.  <\/p>\n<p>Given two &#8220;words&#8221; (a kana word and a kanji word, two kanji words, or<br \/>\nprobably two kana words), the system returns a number from 0 to 1,<br \/>\nwhere 0 means not similar and 1 means similar.  Can we use this for<br \/>\ndisambiguation?  They are using decision lists.  They look at pairs<br \/>\nthat have already been categorized, e.g. \u590f\u306f\u5bd2\u3044\u3067\u3059 (&#8220;summer is<br \/>\ncold&#8221;) and \u51ac\u306f\u5bd2\u3044\u3067\u3059 (&#8220;winter is cold&#8221;), and check how similar<br \/>\nthe two are to each other.  They use 12 novels available for free on<br \/>\nthe internet.  In the paper they present a table with similar terms.<br \/>\nThere are lots of numbers (1 and 2 come out as similar), and some<br \/>\nantonyms as well, such as mother and father.  <\/p>\n<p>There were lots of questions on this paper as well.  In the table he<br \/>\npresents, &#8220;\u6708&#8221; (month) and &#8220;\u5e74&#8221; (year) also came out as similar.  
In some ways<br \/>\nthey are only similar in certain contexts.  <\/p>\n<p>&#8212;<\/p>\n<p>Akiko Aizawa &#8211; &#8220;On the Effect of Corpus Size in Word Similarity<br \/>\nCalculation&#8221; &#8211; National Institute of Informatics<\/p>\n<p>The main focus is on how the quality of the corpus used for extracting<br \/>\nsynonyms changes the output.  There are two ways to do the extraction:<br \/>\npattern-based (&#8220;A such as B&#8221;) and co-occurrence vectors.  When the<br \/>\ncorpora are made larger, does that help?  (Particularly in the case of<br \/>\nthe vector-based approaches?)  The conclusion is that larger corpora<br \/>\nhelp, but you need to use a simple filter to avoid bias that emerges<br \/>\nfrom high-frequency words. <\/p>\n<p>&#8212;<\/p>\n<p>Takahiro Ono, Akira Suganuma, Rin-ichiro Taniguchi &#8211; &#8220;Extraction of<br \/>\nthe sentences whose modification relation is misunderstood for a<br \/>\nwriting tool&#8221; &#8211; Kyuushyuu University<\/p>\n<p>They do work on automatic text revision.  Their focus here is<br \/>\nindicating sentences which are difficult to understand and might<br \/>\neasily be misinterpreted.  They focus on nouns with multiple<br \/>\nmodifiers and the dependency structure between clauses.  It was very<br \/>\ninteresting for me, but of course I had trouble following some of the<br \/>\nJapanese grammar vocabulary.  <\/p>\n<p>They did an experiment with humans.  <\/p>\n<p>&#8212;<\/p>\n<p>Yu Akiyama, Masahiro Fukaya, Hajime Ohiwa, Masakazu Tateno &#8211;<br \/>\n&#8220;Extending Kwic Concordance by Standardization of sentence pattern&#8221; &#8211;<br \/>\nKeio University \/ Fuji Xerox<\/p>\n<p>This one was skipped &#8211; cancelled at the last minute.<\/p>\n<p>&#8212;<\/p>\n<p>Yoshinobu Kano, Yusuke Miyao, Junichi Tsujii &#8211; &#8220;Candidate Reduction in<br \/>\nSyntactic Structure Analysis with Pure Incremental Processing&#8221; &#8211;<br \/>\nUniversity of Tokyo<\/p>\n<p>I didn&#8217;t really follow this talk.  
I&#8217;m not strong on parsing, and<br \/>\ncertainly not strong on parsing when it is discussed in Japanese.<\/p>\n<p>&#8212;<\/p>\n<p>Manuel Medina Gonzalez and Hirosato Nomura &#8211; &#8220;A Cross-Lingual Grammar<br \/>\nModel and its Application to Japanese-Spanish Machine Translation&#8221; &#8211;<br \/>\nKyuushyuu University<\/p>\n<p>The first talk in English.  Their translation model is based on the<br \/>\nALT-J\/E model.  They try to predict certain Spanish features that do<br \/>\nnot exist in Japanese, such as gender and number, based on the<br \/>\nJapanese sentence.  <\/p>\n<p>&#8212;<\/p>\n<p>Yohei Seki, Koji Eguchi, Noriko Kando, Masaki Aono &#8211; &#8220;An Analysis of<br \/>\nOpinion-focused Summarization using Opinion Annotation&#8221; &#8211; Toyohashi<br \/>\nUniversity of Technology \/ National Institute of Informatics<\/p>\n<p>&#8212;<\/p>\n<p>Shunpei Tatebayashi, Makoto Haraguchi &#8211; &#8220;A Coherent Text Summarization<br \/>\nMethod based on Semantic Correlations between Sentences&#8221; &#8211; Hokkaidou<br \/>\nUniversity <\/p>\n<p>They want to summarize the important parts of long stories,<br \/>\nabstracting out the events and common themes in the stories.  I<br \/>\nthink.  They are looking at summarization that preserves and reflects<br \/>\nthe structure of the input &#8211; so they analyze the segments in the text,<br \/>\nand when summarizing only extract segments that are related.  They<br \/>\ncompare to some graph-based summarization methods too, so it might be<br \/>\ninteresting to read in further detail later.  <\/p>\n<p>&#8212;<\/p>\n<p>Hu Bai, Ueda Yoshihiro, Oka Mamiko &#8211; &#8220;Phrase-Representation<br \/>\nSummarization Method for Chinese&#8221; &#8211; Fuji Xerox<\/p>\n<p>The second (and final) talk in English.<br \/>\nPhrase-based summarization for Chinese for IR support.  
The Japanese<br \/>\nversion that this is based on does something like sentence<br \/>\nsimplification for summarization based on a dependency structure<br \/>\nparse.  They use LFG to parse the Chinese, and generate the summary<br \/>\nsentence using a syntactical pattern.  <\/p>\n","protected":false},"excerpt":{"rendered":"<p>Wednesday was the final day of the IPSJ meeting. I&#8217;ve got more comments on the papers that I saw that day below. Yoshihisa Shinozawa &#8211; &#8220;Extended simple recurrent networks by using bigram&#8221; &#8211; Keio University I had a tough time understanding this paper: I don&#8217;t know much about word nets using perceptrons. I also had [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[10],"tags":[],"_links":{"self":[{"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/posts\/139"}],"collection":[{"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/comments?post=139"}],"version-history":[{"count":0,"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/posts\/139\/revisions"}],"wp:attachment":[{"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/media?parent=139"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/categories?post=139"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/tags?post=139"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}