Google, Stanford build hybrid neural networks that can explain photos

Gigaom

Two separate groups of researchers at Google and Stanford have merged best-of-breed neural network models and created systems that can accurately explain what’s happening in images.

Although their approaches differ (full papers are available here for Stanford and here for Google), both groups essentially combined deep convolutional neural networks — the type of deep learning models responsible for the huge advances in computer vision accuracy over the past few years — with recurrent neural networks that excel at text analysis and natural language processing. Recurrent neural networks have been responsible for some of the significant improvements in language understanding recently, including the machine translation that powers Microsoft’s Skype Translate and Google’s word2vec libraries.
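The common pattern both groups describe is an encoder-decoder pairing: a convolutional network turns the image into a feature vector, and a recurrent network generates a caption word by word conditioned on that vector. The following is a minimal illustrative sketch of that idea in PyTorch, not either team's actual model; the class name, layer sizes, and vocabulary size are placeholder assumptions.

```python
import torch
import torch.nn as nn


class CaptionModel(nn.Module):
    """Toy CNN-encoder / RNN-decoder captioner (illustrative only)."""

    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Small convolutional encoder standing in for a large pretrained CNN.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Recurrent decoder: the image embedding initializes the hidden state,
        # and word embeddings are fed step by step to predict the next word.
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.init_hidden = nn.Linear(embed_dim, hidden_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        img_feat = self.encoder(images)                # (B, embed_dim)
        h0 = self.init_hidden(img_feat).unsqueeze(0)   # (1, B, hidden_dim)
        words = self.word_embed(captions)              # (B, T, embed_dim)
        out, _ = self.rnn(words, h0)                   # (B, T, hidden_dim)
        return self.output(out)                        # (B, T, vocab_size)


# Usage: next-word logits for a batch of images and partial captions.
model = CaptionModel()
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, 10000, (2, 12))
print(model(images, captions).shape)  # torch.Size([2, 12, 10000])
```

In practice both papers train models of this general shape on paired image-caption datasets, so the decoder learns to map visual features to fluent descriptions.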

In a comment on a Hacker News post pointing to a New York Times story about the research out of Google and Stanford, one of the authors of the Stanford paper points to similar research also coming out…
