
Problem: Too much data for one embedding? Solution: Chunk your data! ✂️
A core principle of RAG is searching through your own data to supply context to an LLM. You can embed the data into a vector so it is searchable, but a large video or an entire book is too much data for a single embedding.
Let's talk about some strategies for chunking your data for use with embedding models.
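One common strategy is fixed-size chunking with overlap: split the text into windows of a set length, repeating a bit of the previous window so that context spanning a boundary isn't lost. Here's a minimal sketch in Python; the `chunk_size` and `overlap` values are illustrative assumptions, not defaults from any particular library.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks (measured in characters).

    Each chunk starts (chunk_size - overlap) characters after the previous one,
    so consecutive chunks share `overlap` characters of context.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be greater than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Each resulting chunk is small enough to embed on its own; in practice you'd often split on sentence or paragraph boundaries instead of raw character counts so chunks stay semantically coherent.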
#AI #RAG #embedding #LLMs #VectorSearch #NLP #Firestore #VertexAI #Google #GoogleCloud #Tech #ComputerScience #Engineering
Google Cloud Tech