Word embeddings are the new magic tool for natural language processing. Without cumbersome preprocessing and feature design, they are able to capture the semantics of language and texts simply by being fed lots of data. So they say.
We applied word embeddings (and, for that matter, also sentence embeddings) to various problem domains, such as chatbots, car reviews, news, and language learning, all on German domain-specific corpora. We will share our experiences and learnings: How much feature design was necessary? Which alternative approaches are available? And for which applications were we able to make use of word embeddings (recommendations, topic detection, error correction)?
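As a minimal illustration of the "just feed it data" promise (a sketch, not the talk's actual setup; the toy corpus and hyperparameters below are placeholders), training embeddings on raw tokenized German sentences with gensim's Word2Vec and querying nearest neighbours might look like this:

    # A minimal sketch, assuming gensim 4.x; the toy corpus and
    # hyperparameters are illustrative, not the setup from the talk.
    from gensim.models import Word2Vec

    # Toy German corpus: each "document" is a list of tokens.
    corpus = [
        ["das", "auto", "ist", "schnell"],
        ["der", "wagen", "ist", "schnell"],
        ["die", "nachricht", "ist", "neu"],
    ]

    # Train embeddings directly on raw tokens, with no feature design.
    model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, epochs=50)

    # Nearest neighbours hint at learned semantics, the building block
    # behind applications such as recommendations and topic detection.
    print(model.wv.most_similar("auto", topn=2))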