
SIGIR 2017 - Day 1: Neural Networks for IR

By Mei Kobayashi

August 9, 2017



Hello and welcome!

The ACM SIGIR Conference began with a splash – literally – as Typhoon #5 hit Japan. Fortunately, Tokyo was spared a direct hit, so most attendees arrived without difficulty.

The first day consisted of full- and half-day tutorials in parallel sessions. I attended Neural Networks for Information Retrieval, which attracted such a large audience that it was held in an auditorium-size room. The opening speaker and session organizer, Tom Kenter, is a seasoned veteran who set the stage for a substantive educational atmosphere that was informal and fun. The other speakers (Alexey Borisov, Mostafa Dehghani, Maarten de Rijke, Bhaskar Mitra, Christophe Van Gysel) completed a nice mix of passionate researchers from academia and industry, from the old guard and the new. To accommodate people from diverse backgrounds and to ensure everyone would be using the same terminology, the tutorial began with basic definitions and concepts.

The remainder of the morning sessions touched briefly on IR based on exact matching (i.e., matching sequences of characters) before diving into IR based on semantic matching (i.e., identification of semantically related information that may or may not be an exact match). An interesting twist on exact matching is word hashing. In the era of big data sets, finding exact matches of a term in a document is expensive for English, which has ~500,000 words in active use. (Note: some languages, such as Swedish, have several times more. My homework is to check up on Japanese!) Word hashing using letter tri-grams significantly reduces the search space to 30,621 terms. Its practical value is validated by Baidu's search engine, which uses word hashing as one of many technologies to deal with the scalability of its search system. Another nice idea for academic studies with big data sets of long documents is visualization of results as a long, horizontal bar: the x-axis represents position in the document, and a horizontal mark indicates each occurrence of the term (with no mark where the term is absent). This visualization tool facilitates quick understanding and discovery of the importance of a term in a document. For example, frequent appearance in the title and the beginning of a document tends to be associated with higher importance.
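To make the word-hashing idea concrete, here is a minimal sketch of letter tri-gram decomposition as discussed in the tutorial. The function names and the `#` boundary marker are illustrative choices, not the speakers' code; the point is that a huge word vocabulary collapses onto a much smaller, shared vocabulary of letter tri-grams.

```python
def letter_trigrams(word):
    """Decompose a word into letter tri-grams, marking word boundaries with '#'."""
    marked = "#" + word + "#"
    return [marked[i:i + 3] for i in range(len(marked) - 2)]

def trigram_vocabulary(words):
    """Collect the set of distinct letter tri-grams covering a word vocabulary.

    For a large English vocabulary this set is far smaller than the word
    vocabulary itself (~30K tri-grams vs. ~500K words, per the tutorial).
    """
    vocab = set()
    for w in words:
        vocab.update(letter_trigrams(w))
    return vocab
```

For example, `letter_trigrams("good")` yields `["#go", "goo", "ood", "od#"]`; morphologically related words such as "good" and "goods" share most of their tri-grams, which is part of what makes the representation compact.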

Since it is impossible to cover all of the exciting new concepts introduced in the tutorial, interested readers can access the website where the speakers uploaded their slides. During the tutorial the speakers asked for and received feedback on websites with open source code, open data, and useful resources for work related to their tutorial. The speakers and I also welcome comments and information from readers of this blog.

A major reason for attending a tutorial is the opportunity for the audience and instructors to interact. The Q&A sessions of this tutorial were great. In the spirit of entertaining academic combat, one audience member unabashedly introduced himself as a member of the LSI-IR camp and challenged the speakers to explain the advantages and disadvantages of LSI & LDA vs. neural network (NN)-based methods. After friendly discussion, we came to the conclusion that some applications with small data sets may be better off with LSI, since NNs require massive training data sets and computations. LDA is used for a different purpose: to find topics in a document set. A related question by another audience member was on comparison studies: LSI vs. LSA vs. NN-based methods. After discussion, we concluded that massive open data sets for benchmarking studies do not exist (to the best of our knowledge – please speak up if we are wrong!). Academics are not associated with institutions that generate data at massive scales, and industry is reluctant to share sensitive data. Any donation of suitable benchmark data to a reputable open source site would be welcomed by many!

Another good reason for attending a tutorial in person is to learn about open research issues and future directions for research. The last few sessions examined evaluation of results based on user behavior, and response generation. The quality of search results can be evaluated, in part, through modeling of user behavior (e.g., click streams, pause times, abandoned searches, search & retrieval using session data vs. longer user histories). For my part, I wondered whether response-time measurements took data transmission times into account. In Tokyo, use of very crowded networks can be an aggravating experience. Downloading even small files or sending e-mail can take a very long time, sometimes ending in time-outs.

A relatively new area is generating responses for Q&A with chat bots. Unlike simple search engines, chat bots should not treat a session with a user as an isolated encounter. They need to remember previous Q&As, discussions, etc., to output consistent replies to individual users. Since each session must be treated as a follow-up, personal historical data must be archived and taken into account in future encounters. Evaluation of the user experience with chat bots is a new and important area for businesses that have begun to deploy them as part of their customer service. Response generation by chat bots and their robot cousins is one key dimension being targeted for evaluation and improvement of customer service and experience. The latter sessions concluded with questions for further research and speculation by Tom Kenter on hot topics at the 50th anniversary of SIGIR … in Amsterdam? (a venue suggested by Maarten de Rijke).

A third reason for attending a tutorial is to hear personal viewpoints on the state of a research area, approaches for future work, possible risks of failure, whether a current hot topic is approaching the peak of a hype cycle (a.k.a. a bubble), and various other material that would not ordinarily be found in an academic paper or formal presentation. Unfortunately, valuable material is often left unpublished for fear of being quoted out of context. A sketch of some of the topics (interesting seeds for debate) is in the final set of slides, 08_WrapUp.pdf. Perhaps readers can start debates of their own in their respective IR communities.

In closing, I should mention that Typhoon #5 moved slowly and lingered half-heartedly for two days in the Tokyo region. It seems the gods were supporting the organizers after all. Although the Tokyo City Tourism Organization set up a booth in the hotel lobby offering glamorous and exciting city tours in parallel with the academic sessions, there was little temptation to play hooky – at least at the outset. Stay tuned.

Coming up next: Women in SIGIR. See you again soon!

Additional References:

1. B. Mitra and N. Craswell. An introduction to neural information retrieval. Foundations and Trends in Information Retrieval, 2017, under review. Lecture notes: http://bit.ly/neuralir-intro

2. K.D. Onal et al. Neural information retrieval: At the end of the early years. Information Retrieval Journal, 2017, under review.

Mei Kobayashi is manager, Data Science/Text Analysis at NTT Communications.

