Peng Wang is the Global Technical Evangelist for OceanBase, with over a decade of experience in database R&D, including leading a database engineering team at Intel.
He is a contributor to several top-level Apache open source projects and an active advocate for open collaboration in the developer community.
At OceanBase, he leads global developer engagement across content creation, community building, and open source evangelism.
He recently spoke at KubeCon + CloudNativeCon, SREday London, APIdays London, and Conf42.com.
As AI workloads such as Retrieval-Augmented Generation (RAG), semantic search, and document Q&A become increasingly common, many organizations want to integrate these capabilities into their Kubernetes-native infrastructure. But managing large-scale vector data, ensuring low-latency responses, and keeping the architecture simple and resilient remain challenging, especially when deploying across multiple clusters and cloud environments.

In this session, we’ll introduce how OceanBase, a distributed SQL database, now supports vector search natively within its engine, allowing developers to run hybrid queries that combine structured and unstructured data in a single SQL statement (a brief sketch follows this abstract).

Drawing on real-world experience, including the deployment strategies we recently shared at KubeCon, we’ll walk through:

• How OceanBase supports vector storage and search directly on Kubernetes
• A practical example of building a document Q&A system with OceanBase and large language models
• How we achieve high availability and stability in multi-cluster Kubernetes deployments
• Key architectural benefits: a simplified stack, no separate vector engine, and better data consistency and observability

Whether you’re building AI services in cloud-native environments, operating at the edge, or modernizing legacy infrastructure, this talk will show how OceanBase brings AI readiness to your Kubernetes-powered open infrastructure without the operational complexity of a separate vector stack.
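To make the hybrid-query idea concrete, here is a minimal Python sketch of the retrieval step of a document Q&A flow, talking to OceanBase over its MySQL-compatible protocol. It assumes a docs table with a VECTOR embedding column and an l2_distance function along the lines of OceanBase's vector search feature; the embed_question helper, the schema, and the connection details are illustrative placeholders, and exact vector syntax may vary by OceanBase version.

```python
# Minimal sketch: hybrid structured + vector retrieval from OceanBase.
# Assumptions (not taken from the abstract): a `docs` table with a
# VECTOR `embedding` column, an `l2_distance` distance function, and
# local connection details -- adjust to your cluster and version.
import pymysql


def embed_question(question: str) -> list[float]:
    """Hypothetical placeholder: call your embedding model here."""
    raise NotImplementedError("plug in your embedding model of choice")


def retrieve_chunks(question: str, category: str, top_k: int = 5) -> list[str]:
    # Render the query embedding as a vector literal, e.g. "[0.1,0.2,...]".
    vec = "[" + ",".join(str(x) for x in embed_question(question)) + "]"
    conn = pymysql.connect(host="127.0.0.1", port=2881,
                           user="root", password="", database="demo")
    try:
        with conn.cursor() as cur:
            # One SQL statement mixing a structured filter (category)
            # with vector-similarity ordering over the embedding column.
            cur.execute(
                """
                SELECT chunk
                FROM docs
                WHERE category = %s
                ORDER BY l2_distance(embedding, %s)
                LIMIT %s
                """,
                (category, vec, top_k),
            )
            return [row[0] for row in cur.fetchall()]
    finally:
        conn.close()
```

In a document Q&A system, the chunks returned here would be passed to a large language model as context, which is the retrieval half of the RAG pattern the talk covers.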