3-4 June, 2025 | Bengaluru, India
IMPORTANT NOTE: Timing of sessions and room locations are subject to change.
The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for OpenSearchCon India 2025 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.
This schedule is displayed in India Standard Time (IST | GMT+5:30).
Retrieval-Augmented Generation (RAG) has emerged as a powerful paradigm for grounding large language models (LLMs) in external knowledge, enabling more accurate and contextually relevant responses. However, traditional RAG pipelines often operate in a static, pre-defined manner, limiting their ability to adapt to complex queries and dynamically explore information spaces. This talk introduces "Multimodal Agentic RAG", an innovative approach that integrates autonomous agents into the RAG framework to enhance its reasoning and retrieval capabilities. We will delve into the architecture and implementation of multimodal RAG with agentic workflows, showcasing its potential to address the limitations of traditional RAG. The architecture is based on OpenSearch's multimodal capabilities, DeepSeek-R1's reasoning capabilities, and custom-tuned embedding models. Furthermore, we will discuss the challenges and future directions of Agentic RAG, including the development of robust planning mechanisms, efficient knowledge source management, and effective agent coordination.
Bobby is a Principal Product Manager at AWS focusing on GenAI and ML product initiatives. Previously, he worked on products across the full lifecycle of machine learning, including Analytics and ML features on the SageMaker platform, deep learning training and inference products at...
Sr Generative AI Data Scientist, Amazon Web Services
Surya Kari is a Senior Generative AI Data Scientist at AWS, specializing in developing solutions leveraging state-of-the-art foundation models. He has extensive experience working with advanced language models including DeepSeek-R1, the Llama family, and Qwen, focusing on their fine-tuning...
This session explores how Gaia’s decentralized AI platform integrates with OpenSearch to create scalable, privacy-preserving solutions for semantic search, generative AI applications, and real-time analytics.
In this session, I intend to demonstrate a use case where Gaia’s federated learning models enhance OpenSearch’s vector database capabilities, enabling dynamic personalization and efficient retrieval-augmented generation (RAG) workflows.
Gaia’s AI Framework: How Gaia’s decentralized architecture complements OpenSearch’s ecosystem, particularly in scenarios requiring data privacy and distributed model training (e.g., healthcare or financial analytics).
Demo a RAG Pipeline: Build a real-world demo/example using Gaia’s language models to generate embeddings stored in OpenSearch’s binary vector indexes (new in v2.19), coupled with OpenSearch Flow for automated pipeline configuration.
Community Impact: A look at how this integration supports OpenSearch’s goal of becoming the preferred backend for generative AI while adhering to open governance principles.
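To make the binary-vector idea above concrete, here is a minimal, hypothetical sketch of sign-based quantization in Python: one bit per embedding dimension, packed into bytes. The packing scheme is illustrative only and is not OpenSearch's exact on-disk format.

```python
# Sketch: sign-based binary quantization of a float embedding, the kind of
# compression binary vector indexes rely on. Illustrative, not OpenSearch's
# exact format.

def binary_quantize(embedding):
    """Map each float dimension to one bit (1 if >= 0, else 0),
    then pack bits into bytes, most significant bit first."""
    bits = [1 if x >= 0 else 0 for x in embedding]
    # Pad to a multiple of 8 so the last byte is complete.
    bits += [0] * (-len(bits) % 8)
    packed = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        packed.append(byte)
    return bytes(packed)

vec = [0.12, -0.4, 0.9, -0.05, 0.3, 0.7, -0.8, 0.01]
print(binary_quantize(vec).hex())  # eight dimensions fit in one byte
```

The appeal is the 32x size reduction versus float32 vectors, at the cost of coarser distances; Hamming distance over the packed bytes then stands in for cosine or Euclidean distance.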
I lead Developer Relations at Gaia 🌱 (https://gaianet.ai), helping developers and enterprises create, launch, and scale their own AI Agents. I was a Community Lead for AngelHack, a global company dedicated to Developer Relations and building diverse communities of change...
Struggling to tame the latency of your OpenSearch query? Atlassian was facing the exact problem with a query which had grown organically over a number of years of relevance improvements. Learn how they dove into the internals of OpenSearch to discover how their query worked, reducing their query latency by 70% in the process.
This session will cover:
* Techniques for identifying problem areas
* Shard topology
* Conjunctive and disjunctive boolean queries and how they impact latency
* track_total_hits, how it works and subtleties of when it doesn't
* Concurrent segment search
* Joins, when denormalisation is faster and when it isn't
* Index sorting
* Initial query scoring vs rescoring
* Cross-field match vs copy_to
* Parallel recall paths using hybrid queries or multi-search
* Query cache vs pre-computing
* LTR plugin, feature logging and ranking, and impact on latency
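As one hedged illustration of two of the levers listed above, the query body below caps track_total_hits so the engine can stop counting (and skip non-competitive hits), and pairs a cheap first-pass query with a more expensive rescore. Index and field names are hypothetical.

```python
# Illustrative OpenSearch query body: capped hit counting plus rescoring.
# Field names ("title") are placeholders.

query_body = {
    "size": 10,
    # Count hits exactly only up to 1000; beyond that the response reports
    # a lower bound, letting the engine skip blocks that cannot reach top-k.
    "track_total_hits": 1000,
    "query": {
        "match": {"title": "opensearch latency"}  # cheap first-pass scoring
    },
    "rescore": {
        "window_size": 50,  # only rescore the top 50 first-pass hits
        "query": {
            "rescore_query": {
                "match_phrase": {"title": "opensearch latency"}
            }
        }
    },
}
```

The general pattern: spend the expensive scoring only on a small window of candidates that the cheap query has already narrowed down.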
Jason has been working at Atlassian for the past 15 years across a number of products and platforms. Most recently he has been the lead engineer within Search Platform, upleveling it to deliver world-class relevance search to Atlassian's Cloud products.
In this session, Uber, a premier member of the OpenSearch Foundation, will share how the unique demands of their low-latency, high-throughput search platform have driven impactful contributions to the OpenSearch project.
The discussion will explore Uber’s collaboration with the OpenSearch community to integrate critical learnings and innovations from their own search platform. Key highlights include the development of a modular, cloud-native architecture with clear separation between readers and writers, a high-performance messaging protocol for inter-service communication, and a pull-based indexing model.
These advancements, along with contributions from the broader OpenSearch community, are shaping OpenSearch into a cutting-edge, serverless open-source search engine tailored for enterprise needs. Join this session to discover how pioneering work from Uber and OpenSearch is paving the way for the next generation of high performance and scalable search technology in open source.
Pallavi Priyadarshini is a senior engineering manager at Amazon Web Services, leading engineering for high-performing and scalable search and security technologies for the OpenSearch Project and Amazon OpenSearch Service. Previously, she held leadership positions at IBM focused on...
Shubham Gupta is a Senior Engineering Manager in the Platform Engineering organization at Uber. He leads a team of engineers to develop a high-performing and scalable search platform that powers several product surfaces. He is an OpenSearch TSC member.
OpenSearch is a powerful tool for log storage, analysis, and dashboard reporting. In the context of Retrieval-Augmented Generation (RAG) applications with Large Language Models (LLMs), OpenSearch serves as a vital component, acting as a vector database. This presentation explores an architecture and process flow to integrate graph-based context with OpenSearch, enhancing the correlatability of data and improving the context and performance of search results.
Through experimentation with synthetic data from a travel app, we’ll demonstrate how customer activity data—such as travel points, preferences, and interactions with destinations—can be integrated into OpenSearch. By introducing graph points and correlating them with other data (e.g., maps and historical data), we can provide personalized, more relevant search results for users. Additionally, we’ll compare metrics before and after the introduction of graph context to showcase how it improves search performance and the overall user experience in RAG-based systems.
Satej works as a Principal Data Engineer at Zalando SE with over 14 years of experience in the industry. He has worked with renowned organizations such as Boeing, Adidas, and Honeywell, specializing in architecture, big data, and machine learning use cases. With a strong track record of architecting...
As enterprises grapple with unstructured data, Retrieval-Augmented Generation (RAG) is transforming search from simple retrieval to context-aware AI-driven knowledge generation.
This session explores how DeepSeek-R1, a powerful reasoning model, integrates with Amazon OpenSearch Service’s vector database to enable scalable, intelligent information retrieval.
Attendees will learn:
- How RAG enhances traditional search using vector embeddings and semantic understanding.
- How to connect DeepSeek-R1 on SageMaker with OpenSearch for AI-powered responses.
- How OpenSearch's flexible connectors integrate with models like DeepSeek, Cohere, and OpenAI.
By the end of this session, you'll gain actionable insights on implementing RAG workflows in OpenSearch, improving search accuracy, and advancing the AI ecosystem with knowledge-augmented AI.
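As a hedged sketch of the connector step, the payload below shows the general shape of an ML Commons connector registration for a remotely hosted model. The endpoint URL, region, and request_body template are placeholders; the exact fields should come from the connector blueprint for your specific model and OpenSearch version.

```python
# Illustrative shape of an ML Commons connector registration for a model
# hosted on SageMaker. ENDPOINT_NAME, region, and the request_body template
# are placeholders, not a working configuration.

connector_payload = {
    "name": "deepseek-r1-sagemaker",
    "description": "Remote inference connector (sketch)",
    "version": 1,
    "protocol": "aws_sigv4",
    "parameters": {"region": "us-east-1", "service_name": "sagemaker"},
    "actions": [
        {
            "action_type": "predict",
            "method": "POST",
            "url": "https://runtime.sagemaker.us-east-1.amazonaws.com/endpoints/ENDPOINT_NAME/invocations",
            "request_body": '{"inputs": "${parameters.inputs}"}',
        }
    ],
}
# POST this to /_plugins/_ml/connectors/_create, then register and deploy
# the model before wiring it into a RAG search pipeline.
```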
Associate Delivery Consultant - Data Analytics, Amazon Web Services (AWS)
Shubham is an Associate Delivery Consultant at Amazon Web Services, based in Bangalore. He specializes in Big Data, Data Lakes, ETL Migrations, Search and Observability, as well as GenAI projects. With deep technical expertise, he consistently delivers impactful and scalable solutions...
Ramya is an AWS Delivery Consultant specializing in Search, Data Warehouse, and ETL solutions. With experience in data engineering and architecture, she has shared her expertise at public technology conferences including PostgreSQL Conference (PGConf), APJC Tech Summit, and Amazon wide...
Wednesday June 4, 2025 11:15am - 11:55am IST Jupiter
Aditya Soni is a DevOps/SRE tech professional. He has worked with product- and service-based companies including Red Hat and Searce, and is currently positioned at Forrester Research as a DevOps Engineer II. He holds AWS, GCP, Azure, Red Hat, and Kubernetes certifications. He is a CNCF Ambassador...
Site Reliability Engineer, CNCF Ambassador, Autodesk
Meet Seema, the tech whiz at Autodesk. She's not just about fixing things – she loves sharing what she knows! Whether speaking at cool events like KubeCon NA, KubeDay, GitLab Commit, and GitHub Universe or breaking down tech on her YouTube channel, Seema makes the complicated stuff...
Wednesday June 4, 2025 11:15am - 11:55am IST Ceres
Query Insights delivers deep visibility into search query execution, offering detailed metrics and actionable recommendations to help users optimize their search operations. This feature analyzes query characteristics and execution patterns, providing clear insights into system behavior at each processing stage. Users can proactively identify potential performance bottlenecks, implement recommended optimizations, and maintain optimal query performance. These capabilities contribute to improved system efficiency, better user experience, and increased operational reliability.
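A minimal sketch of turning the feature on, assuming the Query Insights plugin's documented dynamic cluster settings; verify the setting names and values against the OpenSearch version you run.

```python
# Dynamic cluster settings to enable top-N query latency tracking
# (Query Insights plugin). Setting names follow the plugin's documented
# pattern; window and top-N values here are arbitrary examples.

settings_body = {
    "persistent": {
        "search.insights.top_queries.latency.enabled": True,
        "search.insights.top_queries.latency.window_size": "60m",
        "search.insights.top_queries.latency.top_n_size": 10,
    }
}
# PUT _cluster/settings with this body, then read back the captured
# queries from GET /_insights/top_queries.
```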
Siddhant Gupta is a Senior Product Manager (Technical) at AWS, leading OpenSearch Service development with a focus on generative AI and search technologies. With 20+ years of experience across AWS, Microsoft, and global tech companies, he specializes in cloud computing, AI/ML, and... Read More →
Wednesday June 4, 2025 12:00pm - 12:20pm IST Ceres
In the era of vector search and semantic similarity, returning highly relevant results is only half the battle. When search results are too similar, users must wade through redundant information to find diverse perspectives. This talk introduces Maximum Marginal Relevance (MMR) implementation in OpenSearch, a powerful technique that optimally balances result relevance with diversity.
Key takeaways will include:
- Implementing MMR reranking with OpenSearch's vector search
- Optimizing performance for large-scale deployments
- Measuring and tuning diversity metrics
- Real-world applications and success patterns
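To make the technique concrete, here is a minimal pure-Python MMR sketch; no OpenSearch API is assumed. The lambda parameter trades relevance against diversity: at 1.0 it reduces to plain relevance ranking, and lower values penalize picking documents similar to ones already selected.

```python
# Minimal Maximum Marginal Relevance (MMR) reranking sketch: greedily pick
# the document maximizing lam * relevance - (1 - lam) * redundancy, where
# redundancy is the max similarity to anything already selected.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def mmr(query_vec, doc_vecs, k, lam=0.7):
    """Return indices of k documents balancing relevance and diversity."""
    relevance = [cosine(query_vec, d) for d in doc_vecs]
    selected, remaining = [], list(range(len(doc_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            redundancy = max(
                (cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                default=0.0,
            )
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

In practice this runs as a second pass over the top-N hits returned by a vector query, so its O(k*N) cost stays small.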
Joinal is an experienced Data Science professional with an interest in building solutions through quick prototypes, community engagement, and influencing technology adoption.
- Why use RAG (Retrieval-Augmented Generation)?
- How does OpenSearch contribute to building intelligent agents?
Barriers to Deploying RAG Systems in Production:
- Key concerns: Quality, Cost, Safety, Maintenance, and Knowledge/Time.
- The biggest challenge remains quality.
- What leads to quality degradation in RAG systems, and how does it happen?
Solving the Issues:
- Identifying challenges with different chunking techniques: Which one should you use, or should you develop a custom approach tailored to your needs using OpenSearch?
- Addressing problems with search/querying techniques to optimize results.
By understanding these challenges and adopting the suggested solutions, you can significantly enhance the accuracy of RAG-based applications.
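As a hypothetical baseline for the chunking discussion above, here is the kind of fixed-size splitter with overlap that most RAG pipelines start from before evaluating semantic or structure-aware alternatives. Sizes are in words for simplicity; production chunkers usually count tokens.

```python
# Fixed-size chunking with overlap: adjacent chunks share `overlap` words
# so context spanning a boundary is not lost. A baseline sketch, not a
# recommendation over semantic chunking.

def chunk_words(text, chunk_size=100, overlap=20):
    """Split text into word windows of chunk_size, sliding forward by
    chunk_size - overlap each step."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks
```

Quality degradation often traces back to this step: chunks that are too small lose context, while chunks that are too large dilute the embedding, which is why measuring retrieval quality per chunking strategy matters.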
If needed, we can provide a deeper dive, a demo, or even hands-on workshops on this topic.
Parminder is an entrepreneur with a strong background in building and scaling businesses. In 2016, he co-founded Hansel.io, which was acquired by Netcore in 2020. In 2022, he helped launch and grow Scaler's US operations. Currently, he is focused on building Tempera.ai, an AI company. Parminder...
Wednesday June 4, 2025 12:20pm - 12:40pm IST Ceres
I’ll share how we built a highly available, multi-tenant, multi-AZ search platform using OpenSearch, supporting modern capabilities like vector search for semantic search. I’ll cover key challenges and how we ensured fault tolerance and scalability across a distributed environment. Additionally, I look forward to learning from the community’s experience in scaling OpenSearch for advanced search use cases.
Attendees will understand how to design a fault-tolerant OpenSearch architecture and integrate vector search to enhance accuracy and search performance. I’ll provide insights into optimising distributed setups for high availability and resilience, ensuring OpenSearch meets enterprise-scale demands.
By sharing best practices and challenges, I hope to drive discussions on multi-region deployments and hybrid search solutions. Highlighting OpenSearch’s capabilities for scalable, AI-powered search can help expand its adoption and evolution for next-gen search applications.
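One hedged sketch of zone-aware placement for such a multi-AZ setup, assuming each node's configuration sets a node.attr.zone attribute: with forced awareness across three zones and two replicas, every zone holds a full copy of the data, so losing one AZ leaves the cluster fully searchable.

```python
# Zone-aware shard placement sketch. Zone names ("az-1" etc.) are
# placeholders; nodes are assumed to declare node.attr.zone in their
# configuration.

awareness_settings = {
    "persistent": {
        "cluster.routing.allocation.awareness.attributes": "zone",
        "cluster.routing.allocation.awareness.force.zone.values": "az-1,az-2,az-3",
    }
}

index_settings = {
    "settings": {
        "index.number_of_shards": 3,
        "index.number_of_replicas": 2,  # primary + 2 replicas: one copy per AZ
    }
}
# PUT _cluster/settings with awareness_settings, then create indexes
# with index_settings so each AZ can serve reads independently.
```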
I am a seasoned search and infrastructure engineer with extensive experience in maintaining and optimizing large-scale search ecosystems. Currently, I manage 20+ OpenSearch clusters for search and vector use cases across multiple regions, ensuring 95% of search queries execute under...
Senior Staff Engineer at Freshworks with 12 years of experience specialising in the Data Platform and Observability. Expert in building observability systems, search platforms, data pipelines, and vector stores, with a strong focus on integrating AI and ML solutions. Passionate about...
Vector search is a rapidly growing area within search and analytics, and the k-NN (k-Nearest Neighbors) plugin in OpenSearch offers multiple engines to support it. FAISS (Facebook AI Similarity Search), which provides efficient similarity search and clustering of dense vectors, is a preferred choice.
To speed up vector operations, the FAISS engine utilizes SIMD (Single Instruction, Multiple Data) processing, leveraging AVX (Advanced Vector Extensions) instructions on x86 architecture. With the latest AVX-512 (512-bit wide vector processing) support in FAISS, OpenSearch achieves further performance improvements in vector indexing, quantization, and search operations.
This presentation will delve into key technical advancements, including an in-depth look at AVX, the impact of AVX-512 on vector search performance, and real-world benefits for OpenSearch users. Additionally, we will discuss how these enhancements lead to better price-performance and reduced TCO (Total Cost of Ownership).
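For readers wanting to try the FAISS engine themselves, here is a sketch of an index definition that selects it for a k-NN field, so distance computations can use the SIMD code paths discussed above when the hardware and build support them. The field name and HNSW parameters are illustrative.

```python
# Index body selecting the FAISS engine with an HNSW method for a
# knn_vector field. Dimension and HNSW parameters are example values;
# tune them for your model and workload.

knn_index_body = {
    "settings": {"index.knn": True},
    "mappings": {
        "properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": 768,
                "method": {
                    "name": "hnsw",
                    "engine": "faiss",
                    "space_type": "l2",
                    "parameters": {"ef_construction": 128, "m": 16},
                },
            }
        }
    },
}
# PUT this body when creating the index; AVX2/AVX-512 usage is then a
# property of the FAISS build running on the data nodes.
```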
Sourav is an accomplished Engineering Manager with a strong background in Software Development, Performance Engineering, & Cloud Computing. Possessing a degree in Computer Science & Engineering, Sourav has amassed nearly two decades of professional experience, collaborating with professionals...
Abhijit Kulkarni is a proficient Software Engineer with over 14 years of experience in software design and development across diverse domains such as Payments, Telecom and Aviation. His extensive Java experience along with strong problem-solving and analytical skills have been instrumental...
Search is no longer just a feature—it’s a critical capability that drives innovation, efficiency, and decision-making across enterprises. However, scaling OpenSearch across an organization presents challenges, from infrastructure management to governance and adoption. How do you enable diverse teams—from engineering to business functions—to harness the power of OpenSearch without deep operational overhead?
This session explores how to elevate OpenSearch from a tool to an internal platform offering, allowing teams to seamlessly integrate search into their workflows. By treating OpenSearch as a self-service, scalable, and well-governed enterprise platform, organizations can unlock its potential for everything from complex knowledge engines to simpler, ad-hoc use cases.
We’ll discuss key architectural principles, governance models, automation strategies, and best practices for building an OpenSearch-powered internal platform. Whether you're dealing with enterprise-wide search initiatives, analytics, or operational insights, this talk will provide actionable strategies to ensure OpenSearch adoption is efficient, scalable, and accessible to all teams—without friction.
Explore how OpenSearch enhances AI applications through vector stores, similarity search, and retrieval-augmented generation (RAG). Learn to efficiently store and retrieve high-dimensional data, improve recommendation systems with similarity search, and integrate robust indexing for context-aware responses using RAG. This session provides practical insights and examples for developers, data scientists, and AI enthusiasts to optimize their AI solutions with OpenSearch. Join us to elevate your AI capabilities with advanced search technologies.