This post is cowritten by Ishan Gupta, Co-Founder and Chief Technology Officer, Juicebox.
Juicebox is an AI-powered talent sourcing search engine that uses advanced natural language models to help recruiters identify the best candidates from a massive dataset of over 800 million profiles. At the core of this functionality is Amazon OpenSearch Service, which provides the backbone for Juicebox's powerful search infrastructure, enabling a seamless combination of traditional full-text search techniques with modern, cutting-edge semantic search capabilities.
In this post, we share how Juicebox uses OpenSearch Service for improved search.
Challenges in recruiting search
Recruiting search engines have traditionally relied on simple Boolean or keyword-based searches. These methods aren't effective at capturing the nuance and intent behind complex queries, often leading to large volumes of irrelevant results. Recruiters spend unnecessary time filtering through these results, a process that is both time-consuming and inefficient.
In addition, recruiting search engines often struggle to scale with large datasets, creating latency issues and performance bottlenecks as more data is indexed. At Juicebox, with a database growing to more than 1 billion documents and millions of profiles being searched per minute, we needed a solution that could not only handle large-scale data ingestion and querying, but also support contextual understanding of complex queries.
Solution overview
The following diagram illustrates the solution architecture.
OpenSearch Service securely unlocks real-time search, monitoring, and analysis of business and operational data for use cases like application monitoring, log analytics, observability, and website search. You send documents to OpenSearch Service and retrieve them with search queries that match text and vector embeddings for fast, relevant results.
At Juicebox, we solved five challenges with Amazon OpenSearch Service, which we discuss in the following sections.
Problem 1: High latency in candidate search
Initially, we faced significant delays in returning search results due to the scale of our dataset, especially for complex semantic queries that require deep contextual understanding. Other full-text search engines couldn't meet our requirements for speed or relevance when it came to understanding the recruiter intent behind each search.
Solution: BM25 for fast, accurate full-text search
The OpenSearch Service BM25 algorithm quickly proved invaluable by allowing Juicebox to optimize full-text search performance while maintaining accuracy. Through keyword relevance scoring, BM25 helps rank profiles based on the likelihood that they match the recruiter's query. This optimization reduced our average query latency from around 700 milliseconds to 250 milliseconds, allowing recruiters to retrieve relevant profiles much faster than with our previous search implementation.
With BM25, we saw a nearly threefold reduction in latency for keyword-based searches, improving the overall search experience for our users.
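BM25 is the default relevance scoring for OpenSearch full-text queries, so a profile search like the one described above can be expressed as an ordinary `multi_match` query. The following is a minimal sketch; the field names (`title`, `skills`, `summary`) and result size are hypothetical, not Juicebox's actual schema:

```python
def build_profile_query(text, fields=("title", "skills", "summary"), size=25):
    """Build a multi_match query body; OpenSearch scores hits with BM25 by default.

    Field names here are illustrative, not the actual Juicebox schema.
    """
    return {
        "size": size,
        "query": {
            "multi_match": {
                "query": text,
                "fields": list(fields),
                # "best_fields" ranks by the single best-matching field's BM25 score
                "type": "best_fields",
            }
        },
    }


# The body would be sent to the cluster with a client such as opensearch-py:
#   client.search(index="profiles", body=build_profile_query("NLP engineer"))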
Problem 2: Matching intent, not just keywords
In recruiting, exact keyword matching can often lead to missing out on qualified candidates. A recruiter searching for "data scientists with NLP experience" might miss candidates with "machine learning" in their profiles, even though they have the right expertise.
Solution: k-NN-powered vector search for semantic understanding
To address this, Juicebox uses k-nearest neighbor (k-NN) vector search. Vector embeddings allow the system to understand the context behind recruiter queries and match candidates based on semantic meaning, not just keyword matches. We maintain a billion-scale vector search index capable of low-latency k-NN search, thanks to OpenSearch Service optimizations such as product quantization. The neural search capability allowed us to build a Retrieval Augmented Generation (RAG) pipeline that embeds natural language queries before searching. OpenSearch Service lets us tune Hierarchical Navigable Small World (HNSW) hyperparameters such as m, ef_search, and ef_construction, which enabled us to meet our latency, recall, and cost goals.
Semantic search, powered by k-NN, allowed us to surface 35% more relevant candidates compared to keyword-only searches for complex queries. These searches remained fast and accurate, with vectorized queries achieving 0.9+ recall.
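The HNSW hyperparameters mentioned above are set when the k-NN index is created. The sketch below shows a mapping body for a `knn_vector` field with the `m` and `ef_construction` parameters exposed; the field name, dimension, engine, and space type are illustrative assumptions, not Juicebox's actual configuration:

```python
def knn_index_body(dim=768, m=16, ef_construction=128):
    """Index body for an HNSW-backed knn_vector field.

    Field name, dimension, engine, and space_type are illustrative defaults.
    Higher m / ef_construction generally raise recall at the cost of index
    size and build time; ef_search is tuned separately at query time.
    """
    return {
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "profile_embedding": {
                    "type": "knn_vector",
                    "dimension": dim,
                    "method": {
                        "name": "hnsw",
                        "space_type": "cosinesimil",
                        "engine": "faiss",
                        "parameters": {"m": m, "ef_construction": ef_construction},
                    },
                }
            }
        },
    }


# At search time, the embedded recruiter query is passed in a "knn" query
# clause, e.g. {"knn": {"profile_embedding": {"vector": [...], "k": 50}}}.
```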
Problem 3: Difficulty in benchmarking machine learning models
There are several key performance indicators (KPIs) that measure the success of your search. When you use vector embeddings, you have numerous choices to make when selecting the model, fine-tuning it, and choosing the hyperparameters to use. You must benchmark your solution to make sure you're getting the right latency, cost, and especially accuracy. Benchmarking machine learning (ML) models for recall and performance is difficult due to the vast number of fast-evolving models available (such as those on the MTEB leaderboard on Hugging Face). We faced difficulties in selecting and measuring models accurately while making sure we performed well across large-scale datasets.
Solution: Exact k-NN with scoring script in OpenSearch Service
Juicebox used the exact k-NN with scoring script feature to address these challenges. This feature allows for precise benchmarking by executing brute-force nearest neighbor searches and applying filters to a subset of vectors, making sure that recall metrics are accurate. Model testing was streamlined using the wide range of pre-trained models and ML connectors (integrated with Amazon Bedrock and Amazon SageMaker) provided by OpenSearch Service. The flexibility of applying filtering and custom scoring scripts helped us evaluate multiple models across high-dimensional datasets with confidence.
Juicebox was able to measure model performance with fine-grained control, achieving 0.9+ recall. Using exact k-NN allowed Juicebox to benchmark faster and more reliably, even on billion-scale data, providing the confidence needed for model selection.
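Exact k-NN in OpenSearch is expressed as a `script_score` query with the built-in `knn_score` script, which scores every document that survives the inner filter by brute-force distance, giving the ground-truth ranking against which approximate HNSW results can be compared. A sketch, with the field name as an assumption:

```python
def exact_knn_query(vector, field="profile_embedding", k=10, filter_clause=None):
    """Brute-force k-NN via script_score with the built-in knn_score script.

    If filter_clause is given, only the filtered subset of vectors is scored,
    which is how recall can be measured on a restricted candidate pool.
    Field name is illustrative.
    """
    inner = {"match_all": {}} if filter_clause is None else {"bool": {"filter": filter_clause}}
    return {
        "size": k,
        "query": {
            "script_score": {
                "query": inner,
                "script": {
                    "source": "knn_score",
                    "lang": "knn",
                    "params": {
                        "field": field,
                        "query_value": vector,
                        "space_type": "cosinesimil",
                    },
                },
            }
        },
    }
```

Recall is then computed as the overlap between the top-k IDs returned by this exact query and those returned by the approximate HNSW query.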
Problem 4: Lack of data-driven insights
Recruiters need to not only find candidates, but also gain insights into broader talent industry trends. Analyzing hundreds of millions of profiles to identify trends in skills, geographies, and industries was computationally intensive. Most other search engines that support full-text search or k-NN search didn't support aggregations.
Solution: Advanced aggregations with OpenSearch Service
The powerful aggregation features of OpenSearch Service allowed us to build Talent Insights, a feature that provides recruiters with actionable insights from aggregated data. By performing large-scale aggregations across millions of profiles, we identified key skills and hiring trends, and helped clients adjust their sourcing strategies.
Aggregation queries now run on over 100 million profiles and return results in under 800 milliseconds, allowing recruiters to generate insights instantly.
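A trend report like the one described above maps naturally onto a nested `terms` aggregation, which buckets profiles by skill and then by location within each skill. A minimal sketch; the `skills.keyword` and `location.keyword` field names are hypothetical:

```python
def skills_trend_agg(top_n=10, per_skill_locations=5):
    """Nested terms aggregation: top skills, broken down by location.

    size=0 suppresses document hits so only bucket counts are returned,
    which keeps the response small on very large indexes.
    Field names are illustrative.
    """
    return {
        "size": 0,
        "aggs": {
            "top_skills": {
                "terms": {"field": "skills.keyword", "size": top_n},
                "aggs": {
                    "by_location": {
                        "terms": {"field": "location.keyword", "size": per_skill_locations}
                    }
                },
            }
        },
    }
```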
Problem 5: Streamlining data ingestion and indexing
Juicebox ingests data continuously from multiple sources across the web, reaching terabytes of new data per month. We needed a robust data pipeline to ingest, index, and query this data at scale without performance degradation.
Solution: Scalable data ingestion with Amazon OpenSearch Ingestion pipelines
Using Amazon OpenSearch Ingestion, we implemented scalable pipelines. This allowed us to efficiently process and index hundreds of millions of profiles each month without worrying about pipeline failures or system bottlenecks. We used AWS Glue to preprocess data from multiple sources, chunk it for optimal processing, and feed it into our indexing pipeline.
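Juicebox's production path uses OpenSearch Ingestion pipelines rather than client code, but the chunk-and-index step can be sketched with the standard bulk API format: preprocessed profiles are split into fixed-size batches, each batch becoming one bulk request. Index name and batch size below are illustrative assumptions:

```python
def to_bulk_actions(profiles, index="profiles"):
    """Yield one bulk-API action per preprocessed profile document.

    This is the format accepted by bulk helpers such as
    opensearchpy.helpers.bulk; index name is illustrative.
    """
    for p in profiles:
        yield {"_index": index, "_id": p["id"], "_source": p}


def chunked(iterable, size=500):
    """Split an iterable into lists of at most `size` items per bulk request."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch
```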
Conclusion
In this post, we shared how Juicebox uses OpenSearch Service for improved search. We can now index hundreds of millions of profiles per month, keeping our data fresh and up to date, while maintaining real-time availability for searches.
About the authors
Ishan Gupta is the Co-Founder and CTO of Juicebox, an AI-powered recruiting software startup backed by top Silicon Valley investors including Y Combinator, Nat Friedman, and Daniel Gross. He has built search products used by thousands of customers to recruit talent for their teams.
Jon Handler is the Director of Solutions Architecture for Search Services at Amazon Web Services, based in Palo Alto, CA. Jon works closely with OpenSearch and Amazon OpenSearch Service, providing help and guidance to a broad range of customers who have search and log analytics workloads on OpenSearch. Prior to joining AWS, Jon's career as a software developer included four years of coding a large-scale eCommerce search engine. Jon holds a Bachelor of the Arts from the University of Pennsylvania, and a Master of Science and a Ph.D. in Computer Science and Artificial Intelligence from Northwestern University.
