Large language models (LLMs) are a subcategory of deep learning models based on neural networks and natural language processing (NLP). Security and auditing are critical concerns for applications built on large language models such as GPT (Generative Pre-trained Transformer). This talk analyzes the security of these language models from the developer's point of view, examining the main vulnerabilities that can arise when building and deploying them. The main points to be discussed include:
- Introduction to LLMs.
- Introduction to the OWASP Top 10 for LLM Applications.
- Auditing tools for applications that use LLM models.
- Use case with the TextAttack tool (https://textattack.readthedocs.io/en/master/).
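To give a flavor of the TextAttack use case, the core idea behind its word-substitution attack recipes (such as TextFooler) can be sketched in plain Python. Everything below is a toy illustration, not TextAttack's actual API: the synonym table and the stand-in "victim model" are assumptions made for the example, and the real library searches word embeddings and queries genuine NLP models instead.

```python
# Toy sketch of a word-substitution adversarial attack, the idea that
# TextAttack recipes like TextFooler automate against real models.
# SYNONYMS and toy_sentiment_score are illustrative assumptions only.

# Assumed synonym table: semantically similar words the toy model
# happens not to recognize, which is exactly what such attacks exploit.
SYNONYMS = {
    "good": ["decent", "fine"],
    "movie": ["film", "picture"],
}

def toy_sentiment_score(text: str) -> float:
    """Stand-in victim model: naive keyword counting."""
    positives = {"good"}
    negatives = {"terrible"}
    words = text.lower().split()
    return sum(w in positives for w in words) - sum(w in negatives for w in words)

def perturb(text: str) -> str:
    """Greedily swap words for synonyms that lower the model's score
    while keeping the sentence readable to a human."""
    words = text.split()
    best = words[:]
    best_score = toy_sentiment_score(" ".join(best))
    for i, w in enumerate(words):
        for candidate in SYNONYMS.get(w.lower(), []):
            trial = best[:]
            trial[i] = candidate
            score = toy_sentiment_score(" ".join(trial))
            if score < best_score:
                best, best_score = trial, score
    return " ".join(best)

# e.g. perturb("a good movie") -> "a decent movie",
# which drops the toy model's score even though the meaning is preserved.
```

TextAttack packages this same search loop as ready-made attack recipes that run against real models and datasets, which is what the use case in the talk demonstrates.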