{"id":191,"date":"2007-03-22T05:20:00","date_gmt":"2007-03-21T20:20:00","guid":{"rendered":"https:\/\/fugutabetai.com\/blog\/2007\/03\/22\/notes-from-thursday-2007-03-22-natural-language-processing-meeting-in-japan\/"},"modified":"2007-03-22T05:20:00","modified_gmt":"2007-03-21T20:20:00","slug":"notes-from-thursday-2007-03-22-natural-language-processing-meeting-in-japan","status":"publish","type":"post","link":"https:\/\/fugutabetai.com\/blog\/2007\/03\/22\/notes-from-thursday-2007-03-22-natural-language-processing-meeting-in-japan\/","title":{"rendered":"Notes from Thursday 2007-03-22 Natural Language Processing Meeting in Japan"},"content":{"rendered":"<h2>\u30c6\u30fc\u30de\u30bb\u30c3\u30b7\u30e7\u30f31 (2): \u6559\u80b2\u3092\u652f\u63f4\u3059\u308b\u8a00\u8a9e\u5b66\u30fb\u8a00\u8a9e\u51e6\u7406<\/h2>\n<p>Theme Session 1 (2): Linguistics and Language Processing in Support of Education<\/p>\n<ul>\n<li>S2-1  \t\u82f1\u8a9e\u4f8b\u6587\u30aa\u30fc\u30b5\u30ea\u30f3\u30b0\u306e\u305f\u3081\u306e\u53ef\u7b97\u6027\u6c7a\u5b9a\u30d7\u30ed\u30bb\u30b9\u306e\u53ef\u8996\u5316<br \/>\n\t\u25cb\u6c38\u7530\u4eae (\u5175\u5eab\u6559\u80b2\u5927), \u6cb3\u5408\u6566\u592b (\u4e09\u91cd\u5927), \u68ee\u5e83\u6d69\u4e00\u90ce (\u5175\u5eab\u6559\u80b2\u5927), \u4e95\u9808\u5c1a\u7d00 (\u4e09\u91cd\u5927)<\/li>\n<li>S2-2 \t\u7d71\u8a08\u7684\u81ea\u52d5\u7ffb\u8a33\u306b\u57fa\u3065\u304f\u65e5\u672c\u4eba\u5b66\u7fd2\u8005\u306e\u82f1\u6587\u8a33\u8cea\u5206\u6790<br \/>\n\t\u25cb\u9354\u6728\u5143 (\u65e9\u5927), \u5b89\u7530\u572d\u5fd7, \u5302\u5742\u82b3\u5178 (NICT\/ATR)<\/li>\n<li>S2-3 \t\u65e5\u672c\u8a9e\u8aad\u89e3\u652f\u63f4\u306e\u305f\u3081\u306e\u8a9e\u7fa9\u6bce\u306e\u7528\u4f8b\u62bd\u51fa\u6a5f\u80fd\u306b\u3064\u3044\u3066<br \/>\n\t\u25cb\u5c0f\u6797\u670b\u5e78, \u5927\u5c71\u6d69\u7f8e, \u5742\u7530\u6d69\u4eae, \u8c37\u53e3\u96c4\u4f5c, \u592a\u7530\u3075\u307f, Noah Evans, \u6d45\u539f\u6b63\u5e78, \u677e\u672c\u88d5\u6cbb (NAIST)<\/li>\n<li>S2-4 \t\u5916\u56fd\u4eba\u304c\u4f5c\u6210\u3057\u305f\u65e5\u672c\u8a9e\u6587\u66f8\u306b\u5bfe\u3059\u308b\u81ea\u52d5\u6821\u6b63\u6280\u8853<br \/>\n\t\u25cb\u7956\u56fd\u5a01, \u52a0\u7d0d\u654f\u884c (\u6771\u829d\u30bd\u30ea\u30e5\u30fc\u30b7\u30e7\u30f3)<\/li>\n<li>S2-5 \t\u30b3\u30fc\u30d1\u30b9\u3092\u7528\u3044\u305f\u8a00\u8a9e\u7fd2\u5f97\u5ea6\u306e\u63a8\u5b9a<br \/>\n\t\u25cb\u5742\u7530\u6d69\u4eae, \u65b0\u4fdd\u4ec1, \u677e\u672c\u88d5\u6cbb (NAIST)<\/li>\n<li>S2-6 \t\u65e5\u672c\u8a9e\u5b66\u7fd2\u8005\u4f5c\u6587\u652f\u63f4\u306e\u305f\u3081\u306e\u6a5f\u68b0\u5b66\u7fd2\u306b\u3088\u308b\u65e5\u672c\u8a9e\u683c\u52a9\u8a5e\u306e\u6b63\u8aa4\u5224\u5b9a<br \/>\n\t\u25cb\u5927\u5c71\u6d69\u7f8e (NAIST)<\/li>\n<li>S2-7 \tDynamic situation based sentence generation used in creating questions for students of Japanese<br \/>\n\t\u25cbChristopher Waple, Yasushi Tsubota, Masatake Dantsuji, \u6cb3\u539f\u9054\u4e5f (\u4eac\u5927)<\/li>\n<li>S2-8 \t\u6f22\u5b57\u306e\u8aad\u307f\u8aa4\u308a\u306e\u81ea\u52d5\u751f\u6210\u306b\u304a\u3051\u308b\u5019\u88dc\u751f\u6210\u80fd\u529b\u306e\u8a55\u4fa1<br \/>\n\t\u25cbBora Savas, \u6797\u826f\u5f66 (\u962a\u5927)<\/li>\n<\/ul>\n<p><!-- readmore --><\/p>\n<h3>S2-1  \t\u82f1\u8a9e\u4f8b\u6587\u30aa\u30fc\u30b5\u30ea\u30f3\u30b0\u306e\u305f\u3081\u306e\u53ef\u7b97\u6027\u6c7a\u5b9a\u30d7\u30ed\u30bb\u30b9\u306e\u53ef\u8996\u5316<\/h3>\n<p>&#8220;A process for Visualizing countability for authoring English Example Sentences&#8221;, \u25cb\u6c38\u7530\u4eae (\u5175\u5eab\u6559\u80b2\u5927), \u6cb3\u5408\u6566\u592b (\u4e09\u91cd\u5927), \u68ee\u5e83\u6d69\u4e00\u90ce (\u5175\u5eab\u6559\u80b2\u5927), \u4e95\u9808\u5c1a\u7d00 (\u4e09\u91cd\u5927)<P\/><\/p>\n<p>For learning about noun countability, it is very important to have examples of things that are and are not countable.  They have a probabilistic model for determining countability given a sentences&#8217; context.  They create a training corpus for each word and take the words before and after a given noun for context.  Then they could up features that would point to countability or not (a, the, &#8220;s&#8221;, etc.)  They did an evaluation over 25 x 2 (countable, uncountable) nouns using the BNC corpus.  For countability they have an 80% success rate using a limit of probability > 0.5 as countable.  They also did an evaluation of re-writing countable sentences.  <\/p>\n<h3>S2-2 \t\u7d71\u8a08\u7684\u81ea\u52d5\u7ffb\u8a33\u306b\u57fa\u3065\u304f\u65e5\u672c\u4eba\u5b66\u7fd2\u8005\u306e\u82f1\u6587\u8a33\u8cea\u5206\u6790<\/h3>\n<p>&#8220;An Analysis of Japanese English Learners&#8217; translation quality through statistical automatic machine translation&#8221;, \u25cb\u9354\u6728\u5143 (\u65e9\u5927), \u5b89\u7530\u572d\u5fd7, \u5302\u5742\u82b3\u5178 (NICT\/ATR)<P\/><\/p>\n<p>They are looking at Japanese to English translation, using the ATR travel translation corpus.  162,318 sentences.  They used trigram and bigram models (? but I am not sure how) <\/p>\n<h3>S2-3 \t\u65e5\u672c\u8a9e\u8aad\u89e3\u652f\u63f4\u306e\u305f\u3081\u306e\u8a9e\u7fa9\u6bce\u306e\u7528\u4f8b\u62bd\u51fa\u6a5f\u80fd\u306b\u3064\u3044\u3066<\/h3>\n<p>\u25cb\u5c0f\u6797\u670b\u5e78, \u5927\u5c71\u6d69\u7f8e, \u5742\u7530\u6d69\u4eae, \u8c37\u53e3\u96c4\u4f5c, \u592a\u7530\u3075\u307f, Noah Evans, \u6d45\u539f\u6b63\u5e78, \u677e\u672c\u88d5\u6cbb (NAIST) <P\/><\/p>\n<p>I didn&#8217;t understand these slides: they were very text heavy and the mike was low so I couldn&#8217;t hear well.  <\/p>\n<h3>S2-4 \t\u5916\u56fd\u4eba\u304c\u4f5c\u6210\u3057\u305f\u65e5\u672c\u8a9e\u6587\u66f8\u306b\u5bfe\u3059\u308b\u81ea\u52d5\u6821\u6b63\u6280\u8853<\/h3>\n<p>&#8220;Automatic proofreading of Japanese text written by foreigners&#8221;, \u25cb\u7956\u56fd\u5a01, \u52a0\u7d0d\u654f\u884c (\u6771\u829d\u30bd\u30ea\u30e5\u30fc\u30b7\u30e7\u30f3)<P\/><\/p>\n<p>Lots of foreigners have started to read and write Japanese because of globalization.  Companies want to decrease risk, etc., so we want a way to automatically check \/ proof-read sentences.  Most offshoring in Japan is going to China (ASEAN, Taiwan, Korea, India very low.)  So they are targeting Chinese.  In Japanese when they send text to offshoring companies, a particularity of Japanese is the vagueness that can be difficult for foreigners to understand.  Another problem is foreigners use expressions that Japanese are not familiar with.  They have a system that searches for vague Japanese expressions and tries to make them more understandable.  Over an eight month period they broke down problems into six categories.  The largest problem was grammar, and in particular particles.<br \/>\n<P\/>  They have a system that takes an input sentence and parses it.  Then it checks if the particle usage is correct, using what looks like a rule-based system (rules could be learned though.)  They focuses on &#8220;\u304c&#8221; and &#8220;\u3092&#8221;. Some of the particle decisions need to be made with semantic information, but some can be done using their rules.  They plan to expand to other particles after this. <\/p>\n<h3>S2-5 \t\u30b3\u30fc\u30d1\u30b9\u3092\u7528\u3044\u305f\u8a00\u8a9e\u7fd2\u5f97\u5ea6\u306e\u63a8\u5b9a<\/h3>\n<p>&#8220;Estimating the educational value of a corpus&#8221;\t\u25cb\u5742\u7530\u6d69\u4eae, \u65b0\u4fdd\u4ec1, \u677e\u672c\u88d5\u6cbb (NAIST)<P\/><\/p>\n<p>They have a Japanese English learner&#8217;s corpus (NICT JLE) that has been graded from 1-9.  So they take their corpus split into 9 types by grade and create 1-5 gram cosine metrics between them.  They use the plot of vector scores and associate manual labelling and take new data to find out which level it is most similar to.  <\/p>\n<h3>S2-6 \t\u65e5\u672c\u8a9e\u5b66\u7fd2\u8005\u4f5c\u6587\u652f\u63f4\u306e\u305f\u3081\u306e\u6a5f\u68b0\u5b66\u7fd2\u306b\u3088\u308b\u65e5\u672c\u8a9e\u683c\u52a9\u8a5e\u306e\u6b63\u8aa4\u5224\u5b9a<\/h3>\n<p>&#8220;Deciding the correctness of multiple Japanese particles for Japanese language learners&#8217; writing using machine learning techniques&#8221;,\t\u25cb\u5927\u5c71\u6d69\u7f8e (NAIST)<P\/><\/p>\n<p>Foreign students of Japanese are increasing a lot.  Some people are studying without teachers using the internet.  This paper is also about particle choice.  They use SVMs to check between ga, wo, ni, de.  They used Mainichi Shimbun data from 2003 (half a year) as the training data.  They use a there word left and right window.  They did some experiments to see if 3,4,5 window contexts were better, but when you go out to 200,000 words they are all about the same (looked at for each specific particle in their set.)  <\/p>\n<h3>S2-7 \tDynamic situation based sentence generation used in creating questions for students of Japanese<\/h3>\n<p>\t\u25cbChristopher Waple, Yasushi Tsubota, Masatake Dantsuji, \u6cb3\u539f\u9054\u4e5f (\u4eac\u5927)<P\/><\/p>\n<p>The first English presentation I&#8217;ve seen.  They have a system, CallJ, that shows a diagram and then students have to make a Japanese sentence explaining the concept in the diagram.  The system has dynamically generated systems.  Their system can give hints in stages (first grammatical POS of word, then character-by-character).  They generate a question using a concept generation template, then generate a diagram for that concept, then make the question.  The system identifies errors in the student input and gives explanations for them &#8211; but the entry is broken down into appropriate boxes (not free text entry.)  There is a scoring system for error types based on a weighting calculated by experimental data.  They ran an experiment with multiple users looking at whether their usage of the system can be used to predict their level using SVM with the system as input.  <\/p>\n<h3>S2-8 \t\u6f22\u5b57\u306e\u8aad\u307f\u8aa4\u308a\u306e\u81ea\u52d5\u751f\u6210\u306b\u304a\u3051\u308b\u5019\u88dc\u751f\u6210\u80fd\u529b\u306e\u8a55\u4fa1<\/h3>\n<p>\t\u25cbBora Savas, \u6797\u826f\u5f66 (\u962a\u5927)<P\/><br \/>\nPresented by Bora in Japanese.  They have a system for automatically creating incorrect readings for kanji.  They have a pattern base for doing this.  They don&#8217;t just take potential readings for characters, but add wrong ones (like \u30b7 to \u30b8).  They also do things like replace similar looking characters with other characters, e.g., \u81ea to \u76ee.  Or \u5317 to one of \u5357,\u6771,\u897f.  Their system takes into account the level of the user when generating candidate misreadings.  They also add in proper misreadings based on the possible on and kun readings.  <\/p>\n<h1>\u5927\u91cf\u60c5\u5831\u304b\u3089\u306e\u4fa1\u5024\u5275\u51fa<\/h1>\n<h2>\u300c\u60c5\u5831\u7206\u767a\u300d\u30d7\u30ed\u30b8\u30a7\u30af\u30c8<\/h2>\n<p>&#8220;Creating value out of immense amounts of information: the &#8216;Information Explosion&#8217; project&#8221;, Special Lecture by \u559c\u9023\u5ddd\u512a (\u6771\u5927)\u3000Kitsuregawa from Tokyo University. <\/p>\n<p>An introduction to the project.  There is also the &#8220;Grand Information Navigation&#8221; project and one on information security \/ trust.  The information explosion project is aimed more at basic research than applied or commercial research.  A breakdown of the funding and research areas.  <\/p>\n<h2>Session D5: Summarization<\/h2>\n<ul>\n<li>D5-1  \t\u767b\u5834\u4eba\u7269\u306e\u611f\u60c5\u8868\u73fe\u306b\u7740\u76ee\u3057\u305f\u7269\u8a9e\u8981\u7d04<br \/>\n\t\u25cb\u6a2a\u91ce\u5149 (\u5ca1\u5c71\u5927)<\/li>\n<li>D5-2 \t\u78ba\u7387\u7684\u306a\u624b\u6cd5\u306b\u3088\u308b\u65e5\u672c\u8a9e\u6587\u7c21\u7d04<br \/>\n\t\u25cb\u798f\u51a8\u8aed, \u9ad8\u6728\u4e00\u5e78, \u5c3e\u95a2\u548c\u5f66 (\u96fb\u901a\u5927)<\/li>\n<li>D5-3 \t\u53e5\u5358\u4f4d\u306e\u8907\u6570\u6587\u8981\u7d04\u306b\u5411\u3051\u3066\u306e\u57fa\u790e\u7684\u691c\u8a0e<br \/>\n\t\u25cb\u6e0b\u6728\u82f1\u6f54 (\u5317\u6d77\u5b66\u5712\u5927), \u8352\u6728\u5065\u6cbb (\u5317\u5927), \u6843\u5185\u4f73\u96c4, \u6803\u5185\u9999\u6b21 (\u5317\u6d77\u5b66\u5712\u5927)<\/li>\n<li>D5-4 \t\u6a5f\u80fd\u8a9e\u306e\u88dc\u5b8c\u3092\u7528\u3044\u305f\u6fc3\u7e2e\u9084\u5143\u578b\u8981\u7d04\u30e2\u30c7\u30eb<br \/>\n\t\u25cb\u6c60\u7530\u8aed\u53f2, \u7267\u91ce\u6075, \u5c71\u672c\u548c\u82f1 (\u9577\u5ca1\u6280\u79d1\u5927)<\/p>\n<li\/>\n<li>D5-5 \tMulti-lingual Opinion Analysis Applied to World News: A Case Study<br \/>\n\t\u25cbEvans, David Kirk, \u795e\u9580\u5178\u5b50 (NII)<\/li>\n<\/ul>\n<h3>D5-1  \t\u767b\u5834\u4eba\u7269\u306e\u611f\u60c5\u8868\u73fe\u306b\u7740\u76ee\u3057\u305f\u7269\u8a9e\u8981\u7d04<\/h3>\n<p>&#8220;Summarizing stories paying attention to the emotions shown by characters that appear&#8221;,\t\u25cb\u6a2a\u91ce\u5149 (\u5ca1\u5c71\u5927)<P\/><\/p>\n<p>Summarization has often focused on news text which has a well-known structure.  There hasn&#8217;t been as much work on story summarization.  The structure is not as well-defined.  Tools also don&#8217;t work as well over story text.  Is there a reason to summarize stories?  What sort of story summarization is possible?  Information about characters, information about the story.  One form on story content model says that important bits are plot unit related (Lehnert 1981) and that those are reflected by the emotional response to characters in the story.  So important text for the story should be reflected by parts that are strongly related to characters emotions.  <P\/><\/p>\n<p>They have a method for estimating whether something is a character or not &#8211; addresses the problem where not all characters have human names.  They extract important passages that show character emotion, or character entrance \/ exit and scene changes.  They have a dictionary that lists emotive expressions.  They also try to extract sentences that are the cause of the emotional display.  They did an evaluation over whether the extracted sentences are important or not, and whether the summary is understandable or not.  <\/p>\n<h3>D5-2 \t\u78ba\u7387\u7684\u306a\u624b\u6cd5\u306b\u3088\u308b\u65e5\u672c\u8a9e\u6587\u7c21\u7d04<\/h3>\n<p>&#8220;A probabilistic approach to Japanese Sentence Simplification&#8221;,\t\u25cb\u798f\u51a8\u8aed, \u9ad8\u6728\u4e00\u5e78, \u5c3e\u95a2\u548c\u5f66 (\u96fb\u901a\u5927)<P\/><\/p>\n<p>Nice example: \u6628\u65e5\u3001\u672c\u5c4b\u306b\u884c\u304d\u3001\u672c\u3092\u8cb7\u3063\u305f\u3002\u30fc\uff1e\u3000\u6628\u65e5\u3001\u672c\u3092\u8cb7\u3063\u305f\u3002<br \/>\nLooks like they are using a Bayesian model.  They used Mainichi from 2002\/5 to 2003\/3 with about 28k documents, parsed with JUMAN and KNP.  I&#8217;m not clear how they developed the training set, but it looks like they are using patterns over the parsed data to extract short sentences?  They did an evaluation over 50 articles with 11 evaluators from 1-5 scale over 3 features at 70%, 50%, and 30%.  Naturalness, Importance, Overall.  I think they get around the major grammaticality problems by doing their cuts based on the KNP parse.  <\/p>\n<p>I was reminded of <a href=\"http:\/\/homepages.inf.ed.ac.uk\/s0460084\/\">James Clarke&#8217;s<\/a> work on sentence compression.  <\/p>\n<h3>D5-3 \t\u53e5\u5358\u4f4d\u306e\u8907\u6570\u6587\u8981\u7d04\u306b\u5411\u3051\u3066\u306e\u57fa\u790e\u7684\u691c\u8a0e<\/h3>\n<p>&#8220;A basic research investigation into steps toward unit-based multi-document summarization&#8221;,\t\u25cb\u6e0b\u6728\u82f1\u6f54 (\u5317\u6d77\u5b66\u5712\u5927), \u8352\u6728\u5065\u6cbb (\u5317\u5927), \u6843\u5185\u4f73\u96c4, \u6803\u5185\u9999\u6b21 (\u5317\u6d77\u5b66\u5712\u5927)<P\/><\/p>\n<p>They parse text with Cabochya, build dependency chains, and then create some &#8220;virtual&#8221; nodes to connect sentences.  Then they extract important keywords using web information and tf*idf and link counts.  The tf*idf counts are computed based on term counts from the web, and the links likewise.  It looks like they do the node extraction based on the number of links a node has over a threshold?  They did experiments with keyword extraction over two data sets (maybe fiction?  I don&#8217;t know.)  They did an evaluation using ROUGE-1 but I&#8217;m not sure where they got the data or summaries from.  This isn&#8217;t just full sentence extraction because it works over sentence clauses.  <\/p>\n<h3>D5-4 \t\u6a5f\u80fd\u8a9e\u306e\u88dc\u5b8c\u3092\u7528\u3044\u305f\u6fc3\u7e2e\u9084\u5143\u578b\u8981\u7d04\u30e2\u30c7\u30eb<\/h3>\n<p>&#8220;A summarization model using keyword completion (?) for text compression&#8221;,\t\u25cb\u6c60\u7530\u8aed\u53f2, \u7267\u91ce\u6075, \u5c71\u672c\u548c\u82f1 (\u9577\u5ca1\u6280\u79d1\u5927)<P\/><\/p>\n<p>People create summaries by taking important terms from a sentence, ordering them, then creating a new sentence.  They had a corpus of Nikkei Shimbum stuff of about 3300 articles with the summarized version shown on the Shinkansen news thing that they did an evaluation over.  They had 10 people evaluate 100 sentences on a 1-4 scale for readability and meaning.  <\/p>\n<h3>D5-5 \tMulti-lingual Opinion Analysis Applied to World News: A Case Study<\/h3>\n<p>\t\u25cbEvans, David Kirk, \u795e\u9580\u5178\u5b50 (NII)<P\/><\/p>\n<p>Not much to say about this one since I presented.  <\/p>\n","protected":false},"excerpt":{"rendered":"<p>\u30c6\u30fc\u30de\u30bb\u30c3\u30b7\u30e7\u30f31 (2): \u6559\u80b2\u3092\u652f\u63f4\u3059\u308b\u8a00\u8a9e\u5b66\u30fb\u8a00\u8a9e\u51e6\u7406 Theme Session 1 (2): Linguistics and Language Processing in Support of Education S2-1 \u82f1\u8a9e\u4f8b\u6587\u30aa\u30fc\u30b5\u30ea\u30f3\u30b0\u306e\u305f\u3081\u306e\u53ef\u7b97\u6027\u6c7a\u5b9a\u30d7\u30ed\u30bb\u30b9\u306e\u53ef\u8996\u5316 \u25cb\u6c38\u7530\u4eae (\u5175\u5eab\u6559\u80b2\u5927), \u6cb3\u5408\u6566\u592b (\u4e09\u91cd\u5927), \u68ee\u5e83\u6d69\u4e00\u90ce (\u5175\u5eab\u6559\u80b2\u5927), \u4e95\u9808\u5c1a\u7d00 (\u4e09\u91cd\u5927) S2-2 \u7d71\u8a08\u7684\u81ea\u52d5\u7ffb\u8a33\u306b\u57fa\u3065\u304f\u65e5\u672c\u4eba\u5b66\u7fd2\u8005\u306e\u82f1\u6587\u8a33\u8cea\u5206\u6790 \u25cb\u9354\u6728\u5143 (\u65e9\u5927), \u5b89\u7530\u572d\u5fd7, \u5302\u5742\u82b3\u5178 (NICT\/ATR) S2-3 \u65e5\u672c\u8a9e\u8aad\u89e3\u652f\u63f4\u306e\u305f\u3081\u306e\u8a9e\u7fa9\u6bce\u306e\u7528\u4f8b\u62bd\u51fa\u6a5f\u80fd\u306b\u3064\u3044\u3066 \u25cb\u5c0f\u6797\u670b\u5e78, \u5927\u5c71\u6d69\u7f8e, \u5742\u7530\u6d69\u4eae, \u8c37\u53e3\u96c4\u4f5c, \u592a\u7530\u3075\u307f, Noah Evans, \u6d45\u539f\u6b63\u5e78, \u677e\u672c\u88d5\u6cbb (NAIST) S2-4 \u5916\u56fd\u4eba\u304c\u4f5c\u6210\u3057\u305f\u65e5\u672c\u8a9e\u6587\u66f8\u306b\u5bfe\u3059\u308b\u81ea\u52d5\u6821\u6b63\u6280\u8853 \u25cb\u7956\u56fd\u5a01, \u52a0\u7d0d\u654f\u884c (\u6771\u829d\u30bd\u30ea\u30e5\u30fc\u30b7\u30e7\u30f3) S2-5 \u30b3\u30fc\u30d1\u30b9\u3092\u7528\u3044\u305f\u8a00\u8a9e\u7fd2\u5f97\u5ea6\u306e\u63a8\u5b9a \u25cb\u5742\u7530\u6d69\u4eae, \u65b0\u4fdd\u4ec1, \u677e\u672c\u88d5\u6cbb (NAIST) [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[10],"tags":[],"_links":{"self":[{"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/posts\/191"}],"collection":[{"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/comments?post=191"}],"version-history":[{"count":0,"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/posts\/191\/revisions"}],"wp:attachment":[{"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/media?parent=191"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/categories?post=191"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fugutabetai.com\/blog\/wp-json\/wp\/v2\/tags?post=191"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}