Notes from Thursday 2007-03-22 Natural Language Processing Meeting in Japan

テーマセッション1 (2): 教育を支援する言語学・言語処理

Theme Session 1 (2): Linguistics and Language Processing in Support of Education

  • S2-1 英語例文オーサリングのための可算性決定プロセスの可視化
    ○永田亮 (兵庫教育大), 河合敦夫 (三重大), 森広浩一郎 (兵庫教育大), 井須尚紀 (三重大)
  • S2-2 統計的自動翻訳に基づく日本人学習者の英文訳質分析
    ○鍔木元 (早大), 安田圭志, 匂坂芳典 (NICT/ATR)
  • S2-3 日本語読解支援のための語義毎の用例抽出機能について
    ○小林朋幸, 大山浩美, 坂田浩亮, 谷口雄作, 太田ふみ, Noah Evans, 浅原正幸, 松本裕治 (NAIST)
  • S2-4 外国人が作成した日本語文書に対する自動校正技術
    ○祖国威, 加納敏行 (東芝ソリューション)
  • S2-5 コーパスを用いた言語習得度の推定
    ○坂田浩亮, 新保仁, 松本裕治 (NAIST)
  • S2-6 日本語学習者作文支援のための機械学習による日本語格助詞の正誤判定
    ○大山浩美 (NAIST)
  • S2-7 Dynamic situation based sentence generation used in creating questions for students of Japanese
    ○Christopher Waple, Yasushi Tsubota, Masatake Dantsuji, 河原達也 (京大)
  • S2-8 漢字の読み誤りの自動生成における候補生成能力の評価
    ○Bora Savas, 林良彦 (阪大)

S2-1 英語例文オーサリングのための可算性決定プロセスの可視化

“A Process for Visualizing Countability for Authoring English Example Sentences”, ○永田亮 (兵庫教育大), 河合敦夫 (三重大), 森広浩一郎 (兵庫教育大), 井須尚紀 (三重大)

For learning about noun countability, it is very important to have examples of things that are and are not countable. They have a probabilistic model for determining countability given a sentence’s context. They create a training corpus for each word, taking the words before and after a given noun as context. Then they count up features that point toward countability or not (a, the, plural “s”, etc.). They did an evaluation over 25 x 2 (countable, uncountable) nouns using the BNC corpus. For countability they have an 80% success rate, treating probability > 0.5 as countable. They also did an evaluation of re-writing countable sentences.
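The general idea can be sketched as a cue-counting classifier. This is my own toy reconstruction under stated assumptions, not the authors’ probabilistic model: the cue lists here are illustrative, and the real system presumably weights features probabilistically rather than just counting them.

```python
# Toy sketch: estimate P(countable) for a noun from determiner cues in its
# context windows. Cue lists are illustrative, not the authors' feature set.
COUNTABLE_CUES = {"a", "an", "another", "each", "many", "few"}
UNCOUNTABLE_CUES = {"much", "less", "little"}

def countability_score(contexts):
    """contexts: list of (left_words, noun, right_words) tuples.
    Returns a score in [0, 1]; > 0.5 would be called countable."""
    countable = uncountable = 0
    for left, noun, right in contexts:
        window = [w.lower() for w in left + right]
        countable += sum(w in COUNTABLE_CUES for w in window)
        countable += noun.lower().endswith("s")  # plural "-s" is a countable cue
        uncountable += sum(w in UNCOUNTABLE_CUES for w in window)
    total = countable + uncountable
    return 0.5 if total == 0 else countable / total

contexts = [
    (["I", "ate", "an"], "apple", ["yesterday"]),
    (["she", "bought", "two"], "apples", ["today"]),
]
print(countability_score(contexts))  # 1.0: both contexts carry countable cues
```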

S2-2 統計的自動翻訳に基づく日本人学習者の英文訳質分析

“An Analysis of Japanese English Learners’ translation quality through statistical automatic machine translation”, ○鍔木元 (早大), 安田圭志, 匂坂芳典 (NICT/ATR)

They are looking at Japanese to English translation, using the ATR travel translation corpus (162,318 sentences). They used trigram and bigram models (though I am not sure exactly how).

S2-3 日本語読解支援のための語義毎の用例抽出機能について

“On a function for extracting usage examples per word sense to support Japanese reading comprehension”, ○小林朋幸, 大山浩美, 坂田浩亮, 谷口雄作, 太田ふみ, Noah Evans, 浅原正幸, 松本裕治 (NAIST)

I didn’t understand these slides: they were very text heavy and the mike was low so I couldn’t hear well.

S2-4 外国人が作成した日本語文書に対する自動校正技術

“Automatic proofreading of Japanese text written by foreigners”, ○祖国威, 加納敏行 (東芝ソリューション)

Lots of foreigners have started to read and write Japanese because of globalization. Companies want to decrease risk, so they want a way to automatically check and proofread sentences. Most offshoring from Japan goes to China (ASEAN, Taiwan, Korea, and India get very little), so they are targeting Chinese speakers. One particularity of the Japanese text sent to offshoring companies is a vagueness that can be difficult for foreigners to understand. Another problem is that foreigners use expressions that Japanese readers are not familiar with. They have a system that searches for vague Japanese expressions and tries to make them more understandable. Over an eight-month period they broke the problems down into six categories. The largest problem was grammar, and in particular particles.

They have a system that takes an input sentence and parses it. Then it checks whether the particle usage is correct, using what looks like a rule-based system (though the rules could be learned). They focus on “が” and “を”. Some particle decisions need semantic information, but some can be made using their rules. They plan to expand to other particles after this.
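A rule-based check of this kind might look like the following toy sketch. This is my own illustration, not the authors’ rules: the verb lexicon and the transitive/intransitive heuristic are hypothetical placeholders, and the real system works over a full parse.

```python
# Toy rule-based check for が/を choice. The verb lists are hypothetical
# illustrations; a real system would use a broad lexicon and parse context.
TRANSITIVE_TAKES_WO = {"読む", "食べる", "書く"}    # objects marked with を
INTRANSITIVE_TAKES_GA = {"ある", "いる", "降る"}    # subjects marked with が

def check_particle(noun, particle, verb):
    """Return True if the particle looks correct for this noun-verb pair."""
    if verb in TRANSITIVE_TAKES_WO:
        return particle == "を"
    if verb in INTRANSITIVE_TAKES_GA:
        return particle == "が"
    return True  # no rule fires: assume correct rather than flag

print(check_particle("本", "を", "読む"))  # True: correct usage
print(check_particle("本", "が", "読む"))  # False: flagged as an error
```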

S2-5 コーパスを用いた言語習得度の推定

“Estimating language proficiency using a corpus”, ○坂田浩亮, 新保仁, 松本裕治 (NAIST)

They have a Japanese English-learners’ corpus (NICT JLE) that has been graded from 1-9. They split the corpus into 9 groups by grade and build 1- to 5-gram vectors for each, comparing them with cosine similarity. Given new data, they compute its similarity to each of the manually labelled levels and assign the level it is most similar to.
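The nearest-level idea can be sketched as below. This is my reconstruction of the general approach under stated assumptions: the toy data, the 2-gram limit (the paper goes up to 5-grams), and the raw-count vectors are all simplifications.

```python
# Sketch: assign a learner text to the proficiency level whose reference
# corpus it is most cosine-similar to in n-gram space. Toy data only.
from collections import Counter
from math import sqrt

def ngrams(tokens, nmax=2):  # paper used up to 5-grams; 2 keeps the demo short
    feats = Counter()
    for n in range(1, nmax + 1):
        for i in range(len(tokens) - n + 1):
            feats[tuple(tokens[i:i + n])] += 1
    return feats

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# one reference "corpus" per proficiency level (hypothetical examples)
levels = {
    1: "i like dog . i like cat .".split(),
    9: "the defendant 's argument hinges on a subtle distinction .".split(),
}
profiles = {lv: ngrams(toks) for lv, toks in levels.items()}

def estimate_level(tokens):
    return max(profiles, key=lambda lv: cosine(ngrams(tokens), profiles[lv]))

print(estimate_level("i like bird . i like fish .".split()))  # 1
```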

S2-6 日本語学習者作文支援のための機械学習による日本語格助詞の正誤判定

“Deciding the correctness of multiple Japanese particles for Japanese language learners’ writing using machine learning techniques”, ○大山浩美 (NAIST)

Foreign students of Japanese are increasing a lot, and some people are studying without teachers using the internet. This paper is also about particle choice. They use SVMs to decide between ga, wo, ni, and de. They used Mainichi Shimbun data from 2003 (half a year) as training data, with a three-word window to the left and right. They ran experiments to see whether 3-, 4-, or 5-word windows were better, but once you go out to 200,000 training words they are all about the same (examined for each specific particle in their set).
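The windowed feature extraction that would feed such a classifier can be sketched like this. This is my own illustration of the general setup: the feature template (positioned left/right words) is an assumption, and the SVM training step itself is omitted.

```python
# Sketch: extract (context features, particle) training instances using a
# fixed left/right word window around each target particle. Illustrative only.
PARTICLES = {"が", "を", "に", "で"}

def particle_instances(tokens, window=3):
    """Yield (feature_set, particle) pairs for every target particle found."""
    for i, tok in enumerate(tokens):
        if tok in PARTICLES:
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            feats = {f"L{j}={w}" for j, w in enumerate(reversed(left), 1)}
            feats |= {f"R{j}={w}" for j, w in enumerate(right, 1)}
            yield feats, tok

tokens = "彼 が 本 を 読む".split()
for feats, particle in particle_instances(tokens):
    print(particle, sorted(feats))
```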

S2-7 Dynamic situation based sentence generation used in creating questions for students of Japanese

○Christopher Waple, Yasushi Tsubota, Masatake Dantsuji, 河原達也 (京大)

The first English presentation I’ve seen. They have a system, CallJ, that shows a diagram and students have to make a Japanese sentence explaining the concept in the diagram. The questions are dynamically generated: they generate a question from a concept-generation template, then generate a diagram for that concept, then make the question. The system can give hints in stages (first the grammatical POS of a word, then character-by-character). It identifies errors in the student input and gives explanations for them, though the entry is broken down into appropriate boxes (not free text entry). There is a scoring system for error types, with weights calculated from experimental data. They ran an experiment with multiple users, looking at whether usage of the system can predict a user’s level, using an SVM with system usage data as input.

S2-8 漢字の読み誤りの自動生成における候補生成能力の評価

“Evaluating candidate generation ability in the automatic generation of kanji misreadings”, ○Bora Savas, 林良彦 (阪大)


Presented by Bora in Japanese. They have a system for automatically creating incorrect readings for kanji, driven by a pattern base. They don’t just take potential readings for characters, but add wrong ones (like シ to ジ). They also do things like replace similar-looking characters with other characters, e.g., 自 with 目, or 北 with one of 南, 東, 西. The system takes the level of the user into account when generating candidate misreadings, and also adds plausible misreadings based on the possible on and kun readings.
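The substitution-pattern idea can be sketched as below. The pattern lists are just the examples mentioned in the talk, not the authors’ actual pattern base, and the candidate generation here is a deliberately minimal reconstruction.

```python
# Toy sketch: generate plausible kanji misreading candidates by applying
# substitution patterns (voicing confusions, look-alike characters).
READING_PATTERNS = [("シ", "ジ"), ("カ", "ガ")]   # voiced/unvoiced confusions
SHAPE_PATTERNS = {"自": ["目"], "北": ["南", "東", "西"]}  # look-alike characters

def misreading_candidates(kanji, reading):
    """Return a set of (character, reading) distractor candidates."""
    cands = set()
    for a, b in READING_PATTERNS:
        if a in reading:
            cands.add((kanji, reading.replace(a, b)))  # wrong voicing
    for other in SHAPE_PATTERNS.get(kanji, []):
        cands.add((other, reading))  # reading attached to a look-alike
    return cands

print(misreading_candidates("自", "シ"))
```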

大量情報からの価値創出

「情報爆発」プロジェクト

“Creating value out of immense amounts of information: the ‘Information Explosion’ project”, a special lecture by 喜連川優 (Kitsuregawa, 東大 / University of Tokyo).

An introduction to the project. There are also the “Grand Information Navigation” project and one on information security / trust. The information explosion project is aimed more at basic research than applied or commercial research. He gave a breakdown of the funding and research areas.

Session D5: Summarization

  • D5-1 登場人物の感情表現に着目した物語要約
    ○横野光 (岡山大)
  • D5-2 確率的な手法による日本語文簡約
    ○福冨諭, 高木一幸, 尾関和彦 (電通大)
  • D5-3 句単位の複数文要約に向けての基礎的検討
    ○渋木英潔 (北海学園大), 荒木健治 (北大), 桃内佳雄, 栃内香次 (北海学園大)
  • D5-4 機能語の補完を用いた濃縮還元型要約モデル
    ○池田諭史, 牧野恵, 山本和英 (長岡技科大)

  • D5-5 Multi-lingual Opinion Analysis Applied to World News: A Case Study
    ○Evans, David Kirk, 神門典子 (NII)

D5-1 登場人物の感情表現に着目した物語要約

“Story summarization focusing on the emotional expressions of the characters”, ○横野光 (岡山大)

Summarization has often focused on news text, which has a well-known structure. There hasn’t been as much work on story summarization: the structure is not as well-defined, and tools also don’t work as well over story text. Is there a reason to summarize stories? What sort of story summarization is possible? Information about the characters, information about the story. One story content model says that the important bits are plot-unit related (Lehnert 1981) and that those are reflected in the emotional responses of the characters in the story. So important text for the story should be found in the parts that are strongly related to the characters’ emotions.

They have a method for estimating whether something is a character or not, which addresses the problem that not all characters have human names. They extract important passages that show character emotion, or character entrances / exits and scene changes. They have a dictionary that lists emotive expressions. They also try to extract sentences that are the cause of the emotional display. They did an evaluation of whether the extracted sentences are important or not, and whether the summary is understandable or not.
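The dictionary-lookup extraction step can be sketched as follows. The emotion word list here is an illustrative stand-in for the paper’s emotive-expression dictionary, and the matching is deliberately naive (exact word overlap, English toy data).

```python
# Toy sketch: select sentences that contain words from an emotion lexicon.
# The lexicon below is illustrative, not the paper's actual dictionary.
EMOTION_WORDS = {"happy", "sad", "angry", "afraid", "cried", "laughed"}

def emotional_sentences(sentences):
    """Return the sentences containing at least one emotion-lexicon word."""
    return [s for s in sentences if EMOTION_WORDS & set(s.lower().split())]

story = [
    "The prince rode into the forest.",
    "He was afraid of the dark trees.",
    "At dawn he laughed with relief.",
]
print(emotional_sentences(story))  # the two emotion-bearing sentences
```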

D5-2 確率的な手法による日本語文簡約

“A probabilistic approach to Japanese Sentence Simplification”, ○福冨諭, 高木一幸, 尾関和彦 (電通大)

Nice example: 昨日、本屋に行き、本を買った。→ 昨日、本を買った。 (“Yesterday I went to the bookstore and bought a book.” → “Yesterday I bought a book.”)
Looks like they are using a Bayesian model. They used Mainichi from 2002/5 to 2003/3, about 28k documents, parsed with JUMAN and KNP. I’m not clear how they developed the training set, but it looks like they use patterns over the parsed data to extract short sentences. They did an evaluation over 50 articles with 11 evaluators, rating three features (naturalness, importance, overall) on a 1-5 scale at 70%, 50%, and 30% compression. I think they get around the major grammaticality problems by making their cuts based on the KNP parse.
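The shortening step can be sketched crudely over comma-separated clauses. This is only my illustration of the general idea, reproducing the example above; the actual system prunes the KNP dependency parse and scores candidates probabilistically rather than splitting on punctuation.

```python
# Toy sketch: generate shortened variants of a Japanese sentence by dropping
# one medial clause. A real system would prune a dependency parse instead.
def candidates(sentence):
    """Yield variants with one middle clause removed (first/last kept)."""
    clauses = sentence.rstrip("。").split("、")
    for i in range(1, len(clauses) - 1):
        yield "、".join(clauses[:i] + clauses[i + 1:]) + "。"

for c in candidates("昨日、本屋に行き、本を買った。"):
    print(c)  # 昨日、本を買った。
```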

I was reminded of James Clarke’s work on sentence compression.

D5-3 句単位の複数文要約に向けての基礎的検討

“A basic investigation toward phrase-level multi-sentence summarization”, ○渋木英潔 (北海学園大), 荒木健治 (北大), 桃内佳雄, 栃内香次 (北海学園大)

They parse text with CaboCha, build dependency chains, and then create some “virtual” nodes to connect sentences. Then they extract important keywords using web information: tf*idf and link counts, with the tf*idf computed from web term counts and the link counts likewise. It looks like they extract nodes whose number of links is over a threshold? They did experiments with keyword extraction over two data sets (maybe fiction? I don’t know), and an evaluation using ROUGE-1, but I’m not sure where they got the data or summaries from. This isn’t just full-sentence extraction because it works over sentence clauses.
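The tf*idf scoring at the heart of the keyword step can be sketched as below. This is my own minimal illustration: the paper estimates document frequencies from web counts, whereas this toy version uses a tiny local collection.

```python
# Minimal tf*idf sketch for keyword scoring. The paper derives document
# frequencies from the web; here a small local collection stands in.
from collections import Counter
from math import log

docs = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "stock prices rose sharply".split(),
]

def tfidf(doc, docs):
    """Score each word in `doc` by term frequency times inverse doc frequency."""
    tf = Counter(doc)
    n = len(docs)
    return {w: tf[w] * log(n / sum(w in d for d in docs)) for w in tf}

scores = tfidf(docs[0], docs)
print(scores["sat"] > scores["the"])  # True: content word outranks frequent "the"
```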

D5-4 機能語の補完を用いた濃縮還元型要約モデル

“A concentrate-and-restore summarization model using function-word completion”, ○池田諭史, 牧野恵, 山本和英 (長岡技科大)

People create summaries by taking important terms from a sentence, ordering them, and then creating a new sentence. They had a corpus of about 3,300 Nikkei Shimbun articles, with the summarized versions shown on the Shinkansen news display, which they evaluated over. They had 10 people evaluate 100 sentences on a 1-4 scale for readability and meaning.

D5-5 Multi-lingual Opinion Analysis Applied to World News: A Case Study

○Evans, David Kirk, 神門典子 (NII)

Not much to say about this one since I presented.

