Cision PR Newswire
Pinecone announces new features to lower the barrier to entry for vector search
News provided by Pinecone Systems Inc
Aug 17, 2022, 5:26 AM ET
Developers can now more easily start, experiment, and scale vector databases with Pinecone
SAN MATEO, Calif., Aug. 17, 2022 /PRNewswire/ -- Pinecone Systems Inc., a search infrastructure company, today announced the release of new features and enhancements that make it significantly easier for developers — regardless of AI or ML experience and background — to get started with vector search for applications such as semantic search and recommendation systems. New features include up to 10x faster indexes, flexible collections of vector data, and zero-downtime vertical scaling.
"Our vector database makes it easy for engineers to build capabilities like semantic search, AI recommendations, image search, and AI threat detection, but for teams who are new to vector search, some challenges remain," said Edo Liberty, founder and CEO of Pinecone. "Those challenges centered on the limited capacity of indexes, supporting high-throughput applications, and changing index size to support growing data volume. Our new release addresses these technical challenges, further simplifying and speeding up vector search."
With Pinecone's new vertical scaling, if a company's index grows beyond the available capacity, pods can be changed on a live index with zero downtime to accommodate more data. Pods are now available in different sizes — 1x, 2x, 4x, and 8x — so engineering teams can start with the exact capacity they need and easily scale their index. Hourly costs for pods scale with the new sizes, so teams still pay only for what they use.
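As a rough illustration of the sizes above, pod sizes map onto pod-type strings such as `p1.x2`. The sketch below builds such a string and shows, commented out, how it might be passed to the Pinecone Python client's `configure_index` call; the client usage, index name, and environment are assumptions for illustration, not details from this announcement.

```python
def scaled_pod_type(base: str, size: int) -> str:
    """Build a Pinecone pod-type string, e.g. 'p1.x2', from a base
    pod type and one of the announced size multipliers."""
    if size not in (1, 2, 4, 8):  # the sizes announced in this release
        raise ValueError("pod sizes come in 1x, 2x, 4x, and 8x")
    return f"{base}.x{size}"

# Hypothetical usage against a live index (requires an API key):
# import pinecone
# pinecone.init(api_key="YOUR_KEY", environment="YOUR_ENV")
# pinecone.configure_index("my-index", pod_type=scaled_pod_type("p1", 2))
```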
Pinecone's new Collections allow engineers to experiment with and store vector data in one place. Users can save data from an index and create new indexes from any collection. Whether using collections for backing up and restoring indexes, testing different index types with the same data, or moving data to a new index, users can now do it all within Pinecone.
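A minimal sketch of that snapshot-and-restore workflow, assuming the Pinecone Python client's `create_collection` and `create_index` calls; the wrapper function, index names, and dimension here are illustrative, not taken from the announcement.

```python
def snapshot_and_restore(index_name: str, collection_name: str,
                         new_index: str, dimension: int) -> None:
    """Save an index's vectors into a collection, then seed a
    brand-new index from that collection."""
    import pinecone  # assumes pinecone.init() was already called

    # Snapshot: copy the live index's data into a static collection.
    pinecone.create_collection(name=collection_name, source=index_name)

    # Restore: build a fresh index (possibly a different pod type)
    # populated from the saved collection.
    pinecone.create_index(name=new_index, dimension=dimension,
                          source_collection=collection_name)
```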
Pinecone is also launching p2 pods that are purpose-built for performance and high-throughput use cases. The new p2 pod type provides blazing fast search speeds of under 10 ms and throughput as high as 200 QPS per replica (throughput can be increased by adding more replicas). That's 10x better than what was previously available in Pinecone. This is achieved with a new graph-based index that trades ingestion speed and filter performance for lower latencies and higher throughput.
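Given the per-replica figure above, a back-of-the-envelope capacity estimate is simply throughput divided by 200 QPS, rounded up to a replica count. The sketch below illustrates that arithmetic, followed by a commented, hypothetical p2 index creation; the index name and dimension are assumptions.

```python
PER_REPLICA_QPS = 200  # p2 throughput figure from this announcement

def replicas_needed(target_qps: int) -> int:
    """Smallest replica count whose combined throughput meets target_qps."""
    return -(-target_qps // PER_REPLICA_QPS)  # ceiling division

# Hypothetical index creation with the Pinecone client (requires an API key):
# import pinecone
# pinecone.create_index("fast-index", dimension=768,
#                       pod_type="p2.x1", replicas=replicas_needed(500))
```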
"Our new features make it easier and more cost-effective than ever for engineers to start and scale a vector database in production, furthering our mission of democratizing vector search," added Liberty.
You can read the full announcement for more information about these features and performance improvements.
Pinecone has built the first vector database to enable the next generation of artificial intelligence (AI) applications in the cloud. Its engineers built ML platforms at AWS (Amazon SageMaker), Yahoo, Google, Databricks, and Splunk, and its scientists published more than 100 academic papers and patents on machine learning, data science, systems, and algorithms. Pinecone is backed by Wing Venture Capital and operates in Silicon Valley, New York and Tel Aviv. For more information, see http://www.pinecone.io.
Pinecone media contact:
SOURCE Pinecone Systems Inc