This short document is extracted from the report I wrote for a summer internship in 2018, during the first year of my master's degree at ENSAE ParisTech.
It provides a short introduction to basic concepts in deep learning applied to natural language processing. It recalls some deep learning basics, introduces word embeddings, describes popular NLP networks such as LSTMs, and finally presents a 2016 named entity recognition (NER) architecture that combines all the aforementioned concepts.
Please feel free to send a message if you notice any errors or typos.