
Rediscovering Apollo 11: Using Kotlin, Spring AI + Redis OM Spring to explore the mission to the moon!

What happens when you combine the Apollo program’s historical data with modern AI tools? You get a way to interact with one of humanity’s greatest adventures like never before!

In this session, I’ll show you how I used Redis OM Spring and Spring AI to explore Apollo mission data—aligning transcripts, telemetry, and images to uncover hidden connections and insights. We’ll dive into how Semantic Search powered by vector embeddings makes sense of unstructured text, how Redis as a vector database enables lightning-fast retrieval, and why these tools unlock new ways to explore complex datasets.

Don’t know what embeddings or vector databases are? No worries—I’ll break it all down and show you how it works.

Come for the Moon missions, stay for the AI magic, and leave ready to build your own intelligent search experiences!

Raphael De Lio

November 26, 2025


Transcript

  1. OBJECTIVE: Search for "That's one small step for man, one giant leap for Mankind." and find "That's one small step for a human, one big jump for humanity."
  2. [Chart: planets plotted by Temp vs. Mass, both normalized from -500 to 500: Mercury (167, -500), Venus (465, -497), Jupiter (-110, 500), Saturn (-178, -200), Uranus (-195, -454). Displaying only Mercury, Venus, Jupiter, Saturn and Uranus for simplification.]
  3. [Same chart, comparing how similar Saturn is to the other planets in terms of mass and temperature by Euclidean distance; the shorter the distance, the closer the match: d(S, U) = 254, d(S, M) = 457, d(S, J) = 703, d(S, V) = 708.]
  4. [Same chart, now comparing by the angle between vectors; the smaller the angle, the closer the match: ∠(S, U) = 18°.]
  5. [Same chart: ∠(S, M) = 60°.]
  6. [Same chart: ∠(S, M) = 84°.]
  8. - A vector is a numerical representation of data.
     - Vectors that are closer together in a vector space are more similar to each other.
     - Their proximity is calculated using Euclidean distance or cosine similarity.
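The two proximity measures named on slide 8 can be sketched in a few lines of Kotlin, using the normalized (Temp, Mass) planet coordinates from the earlier slides. This is a minimal illustration, not code from the talk:

```kotlin
import kotlin.math.sqrt

// Euclidean distance: straight-line distance between two vectors.
// The shorter the distance, the more similar the vectors.
fun euclidean(a: DoubleArray, b: DoubleArray): Double =
    sqrt(a.indices.sumOf { i -> val d = a[i] - b[i]; d * d })

// Cosine similarity: cosine of the angle between two vectors.
// 1.0 means "same direction"; the smaller the angle, the more similar.
fun cosine(a: DoubleArray, b: DoubleArray): Double {
    val dot = a.indices.sumOf { i -> a[i] * b[i] }
    val normA = sqrt(a.sumOf { it * it })
    val normB = sqrt(b.sumOf { it * it })
    return dot / (normA * normB)
}

fun main() {
    val saturn = doubleArrayOf(-178.0, -200.0)
    val uranus = doubleArrayOf(-195.0, -454.0)
    val venus = doubleArrayOf(465.0, -497.0)
    println("d(S, U) = ${euclidean(saturn, uranus)}")  // ≈ 254.6, the closest, matching the slide
    println("d(S, V) = ${euclidean(saturn, venus)}")   // ≈ 708.3
}
```

Real embeddings have hundreds or thousands of dimensions rather than two, but the math is identical.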
  9. Embedding Model output: [0.54, 0.86, 0.23, 0.75, 0.92, 0.64, 0.47, 0.33, 0.89, 0.99, 0.67, 0.49, 0.94, 0.36, 0.71, 0.82, 0.29, 0.57, 0.63, 0.98, 0.64, 0.47, 0.33, 0.48, 0.95, 0.67, 0.81, 0.23, 0.21, 0.67, 0.49, 0.56, 0.57, 0.63, 0.89, 0.21, 0.67, 0.49, 0.94, 0.36, 0.71, 0.82, 0.29]
  10. VECTOR SIMILARITY SEARCH: How does it work?
      "That's one small step for man, one giant leap for Mankind." → [0.56, 0.76, 0.80, 0.54, 0.99, -0.87, …]
      "That's one small step for a human, one big jump for humanity." → [0.58, 0.75, 0.75, 0.63, 0.9, -0.7, …]
  11. VECTOR SIMILARITY SEARCH: Where to store our vectors?
      1 Billion Vectors: 90% precision, 200ms median latency
      1 Billion Vectors: 95% precision, 1.3s median latency
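The trade-off behind slide 11 is exact versus approximate nearest-neighbor search: an exact search compares the query against every stored vector, which is fully precise but slow at scale, while approximate indexes such as HNSW trade a little precision for far lower latency. As a baseline for intuition, here is a minimal exact k-NN scan in Kotlin (names and data are illustrative, not from the talk):

```kotlin
import kotlin.math.sqrt

data class Scored(val id: String, val distance: Double)

// Exact k-NN: scan every stored vector, sort by Euclidean distance, keep the k closest.
// O(n) distance computations per query — this linear cost is what drives latency
// at a billion vectors, and what approximate indexes like HNSW avoid.
fun knnExact(query: DoubleArray, store: Map<String, DoubleArray>, k: Int): List<Scored> =
    store.map { (id, v) ->
        Scored(id, sqrt(query.indices.sumOf { i -> val d = query[i] - v[i]; d * d }))
    }.sortedBy { it.distance }.take(k)

fun main() {
    // The normalized (Temp, Mass) planet coordinates from the earlier slides.
    val store = mapOf(
        "mercury" to doubleArrayOf(167.0, -500.0),
        "venus" to doubleArrayOf(465.0, -497.0),
        "jupiter" to doubleArrayOf(-110.0, 500.0),
        "uranus" to doubleArrayOf(-195.0, -454.0)
    )
    val saturn = doubleArrayOf(-178.0, -200.0)
    println(knnExact(saturn, store, 2).map { it.id })  // [uranus, mercury]
}
```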
  12. - Embedding models are algorithms that transform unstructured data into meaningful numerical data.
      - Vector databases are used for efficiently storing and retrieving these vectors based on their proximity in the vector space.
  13. VECTOR SIMILARITY SEARCH: How to interact with Redis? (Saturn V analogy)
      Stage 1: JVM · Stage 2: Kotlin · Stage 3: Spring Framework · Command & Service Module: Redis OM Spring & RedisVL · Lunar Module: Spring AI · Escape System: Caching
  14. The Command Module: How to interact with Redis? Redis OM Spring
      JSON · Vector · Query Engines · Probabilistic Data Structures Support · Performance Boosters · DS Enhancers
  15. ROCKET'S OBJECTIVE:
      • Collect our data
      • Load our data into Redis
      • Query our data
  16. LOADING THE DATA

      @RedisHash
      data class Utterance(
          @Id var timestamp: String,
          @Indexed var text: String,
          @Indexed @NonNull var speaker: String,
          @Indexed @NonNull var speakerId: String = ""
      )
  17. LOADING THE DATA: @RedisHash is the annotation for defining a HASH object.
  18. LOADING THE DATA: @Id is the annotation for automatically creating a ULID.
  19. LOADING THE DATA: @Indexed is the annotation for creating indexes within Redis (Query Engine) for efficient filtering.
  21. LOADING THE DATA: Creating the repository

      interface UtteranceRepository : RedisEnhancedRepository<Utterance, String>
  22. LOADING THE DATA: JSON instead of HASH

      @Document
      data class Utterance(
          @Id var timestamp: String,
          @Indexed var text: String,
          @Indexed @NonNull var speaker: String,
          @Indexed @NonNull var speakerId: String = ""
      )

      interface UtteranceRepository : RedisDocumentRepository<Utterance, String>
  23. VECTORIZING THE DATA

      @NonNull
      @Vectorize(
          destination = "embeddedText",                  // field to store the vector
          embeddingType = EmbeddingType.SENTENCE
      )
      var text: String,                                  // data to be vectorized

      @VectorIndexed(                                    // index configuration
          algorithm = VectorField.VectorAlgorithm.HNSW,  // Flat or HNSW
          dimension = 384,                               // number of dimensions created by the embedding model
          distanceMetric = DistanceMetric.COSINE         // Cosine or Euclidean
      )
      var embeddedText: ByteArray? = null,
  29. VECTORIZING THE DATA: with a different embedding model, the number of dimensions must match the number of dimensions of that model.
  30.
      @Vectorize(
          destination = "embeddedQuestion",
          embeddingType = EmbeddingType.SENTENCE,
          provider = EmbeddingProvider.OPENAI,
          openAiEmbeddingModel = OpenAiApi.EmbeddingModel.TEXT_EMBEDDING_3_LARGE
      )
      var question: String,

      @VectorIndexed(
          algorithm = VectorAlgorithm.HNSW,
          dimension = 3072,
          distanceMetric = DistanceMetric.COSINE,
      )
      var embeddedQuestion: ByteArray? = null
  31. QUERY THE DATA

      // Provided by Redis OM Spring
      private val entityStream: EntityStream
      private val embedder: Embedder

      // Vectorizing the user query
      val embedding = embedder.getTextEmbeddingsAsBytes(listOf(text), `Utterance$`.TEXT).first()

      val stream: SearchStream<Utterance> = entityStream.of(Utterance::class.java)
      val results = stream
          .filter(`Utterance$`.EMBEDDED_TEXT.knn(3, embedding))
          .sorted(`Utterance$`._EMBEDDED_TEXT_SCORE)
          .map(Fields.of(`Utterance$`._THIS, `Utterance$`._EMBEDDED_TEXT_SCORE))
          .collect(Collectors.toList())

      This generates the command to search and sort on Redis, executed efficiently by the Redis Query Engine.
  37. CHUNKING?
      • Short text: "Apollo" → doesn't tell if it's about a Greek god, a space mission, or a music album.
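One common remedy for slide 37's problem is grouping short utterances into overlapping windows before embedding, so each chunk carries enough context to disambiguate. A hypothetical sketch — the window sizes are arbitrary tuning knobs, not values from the talk:

```kotlin
// Group short utterances into overlapping chunks so each embedded unit carries context.
// windowSize and overlap are illustrative defaults, not values used in the project.
fun chunk(utterances: List<String>, windowSize: Int = 4, overlap: Int = 1): List<String> {
    require(overlap < windowSize) { "overlap must be smaller than windowSize" }
    val step = windowSize - overlap
    return (utterances.indices step step)
        .map { start -> utterances.subList(start, minOf(start + windowSize, utterances.size)) }
        .map { it.joinToString(" ") }
        .distinct()
}

fun main() {
    val lines = listOf("Houston,", "Tranquility Base", "here.", "The Eagle", "has landed.")
    chunk(lines, windowSize = 3, overlap = 1).forEach(::println)
}
```

Each resulting chunk is then vectorized and stored as its own searchable unit, which is the same idea the next slides apply at a coarser grain through summarization.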
  38. SUMMARIZATION

      @Bean
      fun summarizationChatClient(chatModel: ChatModel): ChatClient {
          return ChatClient.builder(chatModel)
              .defaultSystem(DEFAULT_PROMPT)
              .build()
      }

      companion object {
          private val DEFAULT_PROMPT = """
              You are a helpful assistant who summarizes utterances of the Apollo 11 mission.
              Make these summaries very dense with all curiosities included.
              Limit the summary to 512 words.
          """.trimIndent()
      }
  39. SUMMARIZATION

      val processed = tocList.map { toc ->
          async(Dispatchers.IO) {
              runCatching {
                  logger.info("Generating summary for TOC entry: {}", toc.startDate)
                  val response = summarizationChatClient
                      .prompt()
                      .user(toc.concatenatedUtterances.orEmpty())
                      .call()
                      .chatResponse()
                  toc.summary = response?.result?.output?.text
                  logger.info("Successfully generated summary for TOC entry: {}", toc.startDate)
                  toc
              }.onFailure { e ->
                  logger.error("Error generating summary for TOC entry: {}", toc.startDate, e)
              }.getOrNull()
          }
      }.mapNotNull { it.await() }
  45. QUESTION EXTRACTION: Related Utterances → LLM (extract questions) → Embedding Model (vectorize each question) → store each question.
  46. QUESTION EXTRACTION

      fun updateSummary(tocData: List<TOCData>) {
          tocData.forEach { toc -> tocDataRepository.updateField(toc, `TOCData$`.SUMMARY, toc.summary) }
      }

      fun updateQuestions(tocData: List<TOCData>) {
          tocData.forEach { toc -> tocDataRepository.updateField(toc, `TOCData$`.QUESTIONS, toc.questions) }
      }

  47. Running both workflows concurrently:

      runBlocking {
          coroutineScope {
              awaitAll(
                  async { summarizationWorkflow.run() },
                  async { questionGenerationWorkflow.run() }
              )
          }
      }
  48. QUESTION EXTRACTION

      @RedisHash
      data class Question(
          @Id var timestamp: String,
          var utterancesConcatenated: String,
          var utterances: List<Utterance>,
          @Vectorize(
              destination = "embeddedQuestion",
              embeddingType = EmbeddingType.SENTENCE,
              provider = EmbeddingProvider.OPENAI,
              openAiEmbeddingModel = OpenAiApi.EmbeddingModel.TEXT_EMBEDDING_3_LARGE
          )
          var question: String,
          @VectorIndexed(
              algorithm = VectorAlgorithm.HNSW,
              dimension = 3072,
              distanceMetric = DistanceMetric.COSINE,
          )
          var embeddedQuestion: ByteArray? = null
      )
  49. ENHANCING RESPONSES: User Prompt → Embedding Model (vectorize) → Semantic Search → hydrate the prompt → Final Prompt → trigger the LLM.
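The retrieval-augmented flow on slide 49 can be sketched end to end in plain Kotlin. Everything here is a stand-in: `embed`, `semanticSearch`, and `llm` are hypothetical stubs for the embedding model, the Redis vector search, and the chat client shown on the surrounding slides.

```kotlin
// Stand-ins for the real components; the real versions live on the adjacent slides.
typealias Embedding = DoubleArray

fun embed(text: String): Embedding =                 // stub for the embedding model
    DoubleArray(4) { text.length.toDouble() }

fun semanticSearch(query: Embedding, k: Int): List<String> =  // stub for Redis vector search
    listOf("Summary: the Eagle landed at Tranquility Base on July 20, 1969.")

fun llm(systemContext: String, userPrompt: String): String =  // stub for the chat client
    "Answered '$userPrompt' using: $systemContext"

// The pipeline itself: vectorize -> retrieve -> hydrate the prompt -> trigger the LLM.
fun answer(userPrompt: String): String {
    val queryVector = embed(userPrompt)               // 1. vectorize the user prompt
    val context = semanticSearch(queryVector, k = 3)  // 2. semantic search over stored vectors
    val hydrated = "Apollo mission data: ${context.joinToString("\n")}"  // 3. hydrate the prompt
    return llm(hydrated, userPrompt)                  // 4. trigger the LLM
}

fun main() {
    println(answer("Where did Apollo 11 land?"))
}
```

Swapping the stubs for `searchByQuestion` and `generateResponse` from the next slides yields the real pipeline.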
  50. RETRIEVE SIMILAR QUESTIONS OR SUMMARIES

      fun searchByQuestion(embedding: ByteArray): List<QuestionSearchResult> {
          val stream: SearchStream<Question> = entityStream.of(Question::class.java)
          return stream
              .filter(`Question$`.EMBEDDED_QUESTION.knn(3, embedding))
              .sorted(`Question$`._EMBEDDED_QUESTION_SCORE)
              .map(Fields.of(`Question$`._THIS, `Question$`._EMBEDDED_QUESTION_SCORE))
              .collect(Collectors.toList())
              .map { QuestionSearchResult(it.first, it.second) }
      }
  51. CONFIGURE A CHAT CLIENT

      @Bean
      fun ragChatClient(chatModel: ChatModel): ChatClient {
          return ChatClient.builder(chatModel)
              .defaultSystem(DEFAULT_PROMPT)
              .build()
      }

      companion object {
          private val DEFAULT_PROMPT = """
              You are an expert assistant specializing in the Apollo missions.
              Your goal is to provide accurate, detailed, and concise answers to user inquiries
              by utilizing the provided Apollo mission data.
              Rely solely on the information given below and avoid introducing external information.
          """.trimIndent()
      }
  52. GENERATE RESPONSE

      fun generateResponse(query: String, data: String): String {
          val response = ragChatClient
              .prompt()
              .system("Apollo mission data: $data")
              .user("User question: $query")
              .call()
              .chatResponse()
          val enhancedAnswer = response?.result?.output?.text ?: ""
          logger.info("AI response: {}", enhancedAnswer)
          return enhancedAnswer
      }
  53. WHY MULTIPLE CHAT CLIENTS? (Same ragChatClient configuration as above.)
  54. SEPARATION OF CONCERNS

      @Bean
      public ChatClient analysisChatClient(
              ChatModel chatModel, ChatMemory chatMemory, DateTimeTools dateTimeTools) {
          return ChatClient.builder(chatModel)
              .defaultAdvisors(MessageChatMemoryAdvisor.builder(chatMemory).build())
              .defaultTools(dateTimeTools)
              .defaultSystem(DEFAULT_PROMPT)
              .defaultOptions(
                  ToolCallingChatOptions.builder()
                      .internalToolExecutionEnabled(true)
                      .build())
              .build();
      }
  55. User Prompt → Embedding Model (vectorize) → check semantic cache. Cache Hit: return the previously generated response. Cache Miss: run the regular pipeline (LLM).
  56. REDIS VECTOR LIBRARY

      @Bean
      fun vectorizer(): SentenceTransformersVectorizer {
          return SentenceTransformersVectorizer("Xenova/all-MiniLM-L6-v2")
      }

      @Bean
      fun semanticCache(jedis: UnifiedJedis, vectorizer: BaseVectorizer): SemanticCache {
          return SemanticCache.Builder()
              .name("semantic-cache")
              .distanceThreshold(0.2F)
              .ttl(360)
              .redisClient(jedis)
              .vectorizer(vectorizer)
              .build()
      }
  57. Checking the cache before triggering the LLM:

      val cached = semanticCache.check(request.query)
      if (cached.isPresent) {
          val c = cached.get()
          return CachedResponse(
              query = request.query,
              answer = c.response,
          )
      }

  58. Storing new responses for future hits:

      semanticCache.store(query, answer)
  59. WHAT WE SAW TODAY:
      - How to enhance search capabilities (Vector Similarity Search)
      - How to improve vector search accuracy (Chunking, Grouping, Summarization, Extraction)
      - How to improve LLM responses (Retrieval-Augmented Generation)
      - How to save time when dealing with LLMs (Semantic Cache)
  60. Redis articles: Exact vs approximate nearest neighbors in vector databases · What is a vector database? · What is an embedding model?