"[Self-supervised learning] is one of the most promising ways to build such background knowledge and approximate a form of common sense in AI systems."
Yann LeCun (楊立昆) / Ishan Misra, 2021, "Self-supervised Learning: The Dark Matter of Intelligence"
https://ai.facebook.com/blog/self-supervised-learning-the-dark-matter-of-intelligence/
T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient Estimation of Word Representations in Vector Space. Proceedings of Workshop at ICLR, 2013.
Once trained, the embeddings exhibit many striking properties.
[Figure: word-analogy examples in embedding space: Paris : France :: Rome : Italy; king : man :: queen : woman]
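The famous analogy arithmetic (king - man + woman ≈ queen) can be reproduced directly with vector operations on the trained embeddings. A minimal sketch using gensim, assuming a pretrained word2vec binary is available locally (the file path and model choice are assumptions, not part of the original slides):

```python
# Minimal sketch: word-analogy arithmetic on pretrained word2vec vectors.
# Assumes gensim is installed and a pretrained binary exists locally;
# the path below is a placeholder, not from the original slides.
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# king - man + woman ~= queen
print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# Paris - France + Italy ~= Rome
print(kv.most_similar(positive=["Paris", "Italy"], negative=["France"], topn=1))
```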
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998-6008).
Uses self-attention to avoid the drawbacks of RNNs!
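The core operation of the paper is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, which, unlike an RNN, relates all positions in parallel rather than step by step. A minimal NumPy sketch (the names and shapes are illustrative; no masking or multi-head splitting):

```python
# Minimal sketch: scaled dot-product self-attention (Vaswani et al., 2017).
# Names and shapes are illustrative; single head, no masking.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (seq_len, seq_len) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # each output attends to every position

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                         # 5 tokens, d_model = 16
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (5, 8)
```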
F. Schroff, D. Kalenichenko, and J. Philbin (Google). FaceNet: A Unified Embedding for Face Recognition and Clustering. arXiv preprint arXiv:1503.03832.
[Figure: two weight-sharing CNNs map a positive sample and a negative sample to embeddings ŷ1, ŷ2, …, ŷn]
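FaceNet trains the embedding with a triplet loss: the anchor-positive distance must be smaller than the anchor-negative distance by at least a margin α. A minimal NumPy sketch of that loss (the sample vectors and margin value are illustrative, not from the paper):

```python
# Minimal sketch: FaceNet-style triplet loss on L2-normalized embeddings.
# L = max(0, ||f(a) - f(p)||^2 - ||f(a) - f(n)||^2 + alpha)
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    pos_dist = np.sum((anchor - positive) ** 2, axis=-1)  # squared L2, same identity
    neg_dist = np.sum((anchor - negative) ** 2, axis=-1)  # squared L2, different identity
    return np.maximum(0.0, pos_dist - neg_dist + alpha)   # hinge: zero once margin is met

rng = np.random.default_rng(0)
f = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)  # stand-in for the CNN
a, p, n = (f(rng.normal(size=(4, 128))) for _ in range(3))   # batch of 4 triplets
print(triplet_loss(a, p, n))  # per-triplet losses
```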