Longda Feng is currently Senior Director of OceanBase R&D and General Manager of the OceanBase Open Source Ecosystem at Ant Group. With over 20 years of experience in distributed computing, distributed storage, and Linux kernel development, he is a seasoned technology leader and open-source advocate.
He is a PMC member of Apache Storm, a Committer for Apache RocketMQ, and the founder of Alibaba JStorm. He also serves as an Executive Member of the Database Subcommittee of the China Computer Federation (CCF).
In his role, he leads OceanBase’s open-source strategy and ecosystem development, helping Ant Group and Alibaba build influential global open-source projects. His focus is on fostering innovation and collaboration across the infrastructure and open-source communities.
As AI workloads such as Retrieval-Augmented Generation (RAG), semantic search, and document Q&A systems become increasingly common, many organizations want to integrate these intelligent capabilities into their Kubernetes-native infrastructure. But managing large-scale vector data, ensuring low-latency responses, and maintaining a simple, resilient architecture remain challenging, especially when deploying across multiple clusters and cloud environments.

In this session, we’ll introduce how OceanBase, a distributed SQL-compatible database, now supports vector search natively within its engine, allowing developers to run hybrid queries that combine structured and unstructured data in a single SQL statement. Drawing on real-world experience, including the deployment strategies we recently shared at KubeCon, we’ll walk through:

• How OceanBase supports vector storage and search directly on Kubernetes
• A practical example of building a document Q&A system with OceanBase and large language models
• How we achieve high availability and stability in multi-cluster Kubernetes deployments
• Key architectural benefits: a simplified stack, no separate vector engine, and better data consistency and observability

Whether you’re building AI services in cloud-native environments, operating at the edge, or modernizing legacy infrastructure, this talk will show how OceanBase brings AI readiness to your Kubernetes-powered open infrastructure, without the complexity.
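To make the "hybrid query" idea concrete, here is a minimal Python sketch of the retrieval step in such a document Q&A flow: a structured predicate and a vector-similarity ordering combined in one SQL statement. The table name (`doc_chunks`), column names, and the `l2_distance()` function are illustrative assumptions, not the confirmed OceanBase syntax; consult the OceanBase vector search documentation for the exact SQL in your version.

```python
def build_hybrid_query(category: str, query_vec: list[float], top_k: int = 5) -> str:
    """Build one SQL statement that mixes a structured filter with a
    vector nearest-neighbour ordering (hypothetical schema and syntax)."""
    # Serialize the query embedding as a bracketed vector literal.
    vec_literal = "[" + ", ".join(f"{v:.6f}" for v in query_vec) + "]"
    return (
        "SELECT id, title, chunk_text "
        "FROM doc_chunks "
        f"WHERE category = '{category}' "                      # structured predicate
        f"ORDER BY l2_distance(embedding, '{vec_literal}') "   # vector similarity (assumed function name)
        f"LIMIT {top_k}"
    )

# The retrieved chunk_text rows would then be passed to a large language
# model as context for answer generation.
print(build_hybrid_query("manuals", [0.12, 0.34, 0.56], top_k=3))
```

In a real deployment the embedding would come from the same model used to index the documents, and the query would run against a replicated OceanBase cluster on Kubernetes, which is what lets the stack drop a separate vector engine.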