
GitHub Claims Source Code Search Engine Is a Game Changer

Wednesday, 8 February 2023, 11:00, by Slashdot
Thomas Claburn writes via The Register: GitHub has a lot of code to search -- more than 200 million repositories -- and says last November's beta version of a search engine optimized for source code has caused a 'flurry of innovation.' GitHub engineer Timothy Clem explained that the company has had problems getting existing technology to work well. 'The truth is from Solr to Elasticsearch, we haven't had a lot of luck using general text search products to power code search,' he said in a GitHub Universe video presentation. 'The user experience is poor. It's very, very expensive to host and it's slow to index.' In a blog post on Monday, Clem delved into the technology used to scour just a quarter of those repos: a code search engine built in Rust called Blackbird.

Blackbird currently provides access to almost 45 million GitHub repositories, which together amount to 115TB of code and 15.5 billion documents. Sifting through that many lines of code requires something more powerful than grep, a common command-line tool on Unix-like systems for searching through text data. Using ripgrep on an 8-core Intel CPU to run an exhaustive regular expression query on a 13GB file in memory, Clem explained, takes about 2.769 seconds, or 0.6GB/sec/core. At 0.01 queries per second, grep was not an option. So GitHub front-loaded much of the work into precomputed search indices. These are essentially maps of key-value pairs. This approach makes it less computationally demanding to search for document characteristics like the programming language or word sequences by using a numeric key rather than a text string. Even so, these indices are too large to fit in memory, so GitHub built iterators for each index it needed to access. According to Clem, these lazily return sorted document IDs that represent the rank of the associated document and meet the query criteria.
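
The post doesn't publish Blackbird's data structures, but the idea is easy to sketch. Below is a minimal, hypothetical Rust illustration (all names invented) of an inverted index mapping numeric keys -- say, interned trigrams or language IDs -- to sorted posting lists of document IDs, with lazy iterators that can be intersected without materializing intermediate results:

use std::collections::HashMap;

/// A toy inverted index: numeric key -> sorted, deduplicated doc IDs.
/// (Blackbird's real indices are sharded and disk-backed; this is a sketch.)
struct Index {
    postings: HashMap<u64, Vec<u32>>,
}

impl Index {
    /// Lazily yield the sorted doc IDs for one key, without copying the list.
    fn docs(&self, key: u64) -> impl Iterator<Item = u32> + '_ {
        self.postings.get(&key).into_iter().flatten().copied()
    }

    /// Intersect two sorted iterators: docs matching both query terms.
    /// Runs in O(n + m) thanks to the sort order.
    fn intersect(
        mut a: impl Iterator<Item = u32>,
        mut b: impl Iterator<Item = u32>,
    ) -> Vec<u32> {
        let (mut out, mut x, mut y) = (Vec::new(), a.next(), b.next());
        while let (Some(da), Some(db)) = (x, y) {
            match da.cmp(&db) {
                std::cmp::Ordering::Less => x = a.next(),
                std::cmp::Ordering::Greater => y = b.next(),
                std::cmp::Ordering::Equal => {
                    out.push(da);
                    x = a.next();
                    y = b.next();
                }
            }
        }
        out
    }
}

fn main() {
    let mut postings = HashMap::new();
    postings.insert(1, vec![2, 5, 9]);     // e.g. key 1 = some trigram
    postings.insert(2, vec![5, 7, 9, 12]); // e.g. key 2 = language: Rust
    let idx = Index { postings };
    let hits = Index::intersect(idx.docs(1), idx.docs(2));
    assert_eq!(hits, vec![5, 9]); // docs matching both terms
}

Because both posting lists are sorted, the intersection only ever advances whichever side is behind, which is what makes the lazy, iterator-based design cheap to compose across many indices.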

To keep the search index manageable, GitHub relies on sharding -- breaking the data up into multiple pieces using Git's content-addressable hashing scheme -- and on delta encoding -- storing only the differences (deltas) between documents to reduce the data and metadata to be crawled. This works well because GitHub hosts a lot of redundant data (e.g. forks): its 115TB of data can be boiled down to 25TB through deduplication. The resulting system works much faster than grep -- 640 queries per second compared to 0.01 queries per second. Indexing proceeds at about 120,000 documents per second, so processing 15.5 billion documents takes about 36 hours, or roughly 18 for re-indexing, since delta (change) indexing reduces the number of documents that must be crawled.
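
The article doesn't detail the shard routing itself, but content-addressable sharding is simple to illustrate: hash a blob's content (Git identifies objects by such a hash), derive the shard from the hash, and identical blobs -- such as unmodified files in forks -- always land on the same shard, where a dedup check indexes them once. A hypothetical Rust sketch, with DefaultHasher standing in for Git's real SHA-based object IDs and an invented shard count:

use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

const NUM_SHARDS: u64 = 32; // hypothetical shard count

/// Content-addressable routing: the blob's content hash (not its path
/// or repo) picks the shard, so duplicate blobs across forks co-locate.
fn shard_for(content: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    content.hash(&mut h);
    h.finish() % NUM_SHARDS
}

fn main() {
    // Two "forks" carrying the same file content...
    let upstream = b"fn main() { println!(\"hello\"); }".as_slice();
    let fork = upstream; // unmodified copy in a fork

    // ...route to the same shard, where a dedup set indexes the blob once.
    assert_eq!(shard_for(upstream), shard_for(fork));
    let mut seen: HashSet<u64> = HashSet::new();
    for blob in [upstream, fork] {
        let mut h = DefaultHasher::new();
        blob.hash(&mut h);
        if seen.insert(h.finish()) {
            println!("indexing new blob on shard {}", shard_for(blob));
        } // duplicates are skipped -- the 115TB-to-25TB effect in miniature
    }
}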

Read more of this story at Slashdot.
https://developers.slashdot.org/story/23/02/08/0741220/github-claims-source-code-search-engine-is-a-...