
Deep or Shallow, NLP Is Breaking Out

By Gregory Goth

Communications of the ACM, Vol. 59 No. 3, Pages 13-16


One of the featured speakers at the inaugural Text By The Bay conference, held in San Francisco in April 2015, drew laughter when describing a neural network question-answering model that could beat human players in a trivia game.

While such performance by computers is fairly well known to the general public thanks to IBM's Watson cognitive computer, the speaker, natural language processing (NLP) researcher Richard Socher, noted that the neural network model he described "was built by one grad student using deep learning," rather than by a large team with the resources of a global corporation behind it.


Sherman Foresythe

It's also interesting to note the role Lawrence Berkeley National Laboratory has played in vector-space approaches since 2005: https://www.kaggle.com/c/word2vec-nlp-tutorial/forums/t/12349/word2vec-is-based-on-an-approach-from-lawrence-berkeley-national-lab
