marginalia_nu 3 hours ago

The idea behind search itself is very simple, and it's a fun problem domain that I encourage anyone to explore[1].

The difficulties in search are almost entirely dealing with the large amounts of data, both logistically and in handling underspecified queries.

A DBMS-backed approach breaks down surprisingly fast. It's probably perfectly fine if you're indexing your own website, but it will likely choke on something the size of English Wikipedia.
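
To give a sense of how simple the core idea is, here is a toy inverted index in Python (just a sketch of the concept; a real engine adds compression, ranking, and a great deal of engineering for data volume):

    from collections import defaultdict

    # Toy inverted index: term -> set of document ids containing that term.
    index = defaultdict(set)

    def tokenize(text):
        return text.lower().split()

    def add_document(doc_id, text):
        for term in tokenize(text):
            index[term].add(doc_id)

    def search(query):
        # AND semantics: intersect the posting sets of all query terms.
        postings = [index.get(term, set()) for term in tokenize(query)]
        return set.intersection(*postings) if postings else set()

    add_document(1, "the quick brown fox")
    add_document(2, "the lazy dog")
    print(search("quick fox"))  # {1}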

[1] The SeIRP e-book is a good (free) starting point https://ciir.cs.umass.edu/irbook/

  • submeta 2 hours ago

    Thank you very much for the recommendation. I am in the process of building knowledge base bots, and am confronted with the task of creating various crawlers for the different sources the company has. And this book comes in very handy.

entropoem an hour ago

Searching in general is difficult. It is genuinely a hard problem.

If you haven't felt it yourself, look at companies like Apple, Microsoft, or OpenAI ("the most important AI research lab in the world"): their products have terrible search features even though their resources, money, and technology can be considered top-notch.

  • marginalia_nu an hour ago

    I think the reason most companies can't implement a working search box is that the sort of work needed to make it perform adequately clashes catastrophically with the software development culture that has emerged in the corporate world (anything to do with sprints, Jira, and daily standups).

    Getting search to work well requires a lot of fiddling with ranking parameters, work that is difficult bordering on impossible to plan or track. The work requires a degree of trust that developers are rarely afforded these days.

tombert 4 hours ago

About a decade ago, I was working with a guy who was getting a PhD in search engine design, which I knew/know nothing about.

It was actually a lot of fun to chat with him, because he was so enthusiastic about how searching works and how it can integrate with databases, and he was eager to explain this all to anyone who would listen. I learned a fair amount from him, though admittedly I still don't know much about the intricacies of how search engines work.

Some day, I am going to really go through the guts of Apache Solr and Lucene to understand the internals (like I did for Kafka a few years ago), and maybe I'll finally be competent with it.

  • DanielHB 3 hours ago

    People who work on really obscure things love to talk about their work; heck, if someone would listen to me, I could talk for hours about what I do.

    Unfortunately very few people care about the minutiae of making a behemoth system work.

    • Bengalilol an hour ago

      I would be more than interested to hear about you and what you do. Do not hesitate to share (blog post, Ask HN, Show HN, ...)

    • puilp0502 3 hours ago

      I would. Heck, I bet half of HN would be interested in what kind of insanity lies under those behemoths.

nmstoker 4 hours ago

Reminds me of reading Programming Collective Intelligence by Toby Segaran, which inspired me to explore a range of things, like building search, recommenders, classifiers, etc.

  • _s_a_m_ 3 hours ago

    I loved that book too, but a few years later I saw him say in some YouTube video "don't use that book" because, in his opinion, it had become obsolete.

  • cipherself 4 hours ago

    That was a great book, I wonder what the 2025 equivalent of it is...

    • RestartKernel 2 hours ago

      Prompting Inferred Intelligence: from ChatGPT to Claude

mobeigi 5 hours ago

Great read. It makes you wonder how heavily optimised the tokenizers used by popular search engines truly are.

precompute 4 hours ago

Incredible article. It does what it claims in the title, is well written, and follows a linear chain of reasoning with a minimum of surprises.

jillesvangurp 2 hours ago

Building a simple text search engine isn't that hard. People show them off on HN on a fairly regular basis, and most of those are fairly primitive. Unfortunately, building a good search engine isn't that straightforward. There's more to it than just implementing BM25 (the go-to ranking algorithm), which you can vibe code in a few minutes these days. The reason this part is easy is that it's nineties-era research, all well publicized and documented, and not all that hard once you figure it out.
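
For reference, the ranking formula itself really is small; a rough sketch of Okapi BM25 in Python (with the usual k1/b defaults and none of the indexing machinery around it) looks something like this:

    import math
    from collections import Counter

    K1, B = 1.2, 0.75  # standard default parameters

    def bm25_score(query_terms, doc_terms, doc_freqs, n_docs, avg_doc_len):
        # Score one document against a query with classic Okapi BM25.
        tf = Counter(doc_terms)
        score = 0.0
        for term in query_terms:
            df = doc_freqs.get(term, 0)  # number of documents containing the term
            if df == 0 or tf[term] == 0:
                continue
            idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
            freq = tf[term]
            length_norm = 1 - B + B * len(doc_terms) / avg_doc_len
            score += idf * freq * (K1 + 1) / (freq + K1 * length_norm)
        return score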

Building your own search engine is a nice exercise for understanding how search works. It gets you to the same level as a very long tail of "Elasticsearch alternatives" that really don't come close to implementing more than a tiny percentage of its feature set. That can be useful as long as you are aware of what you are missing out on.

I've been consulting with companies for a few years on moving from in-house coded solutions to something proper (typically Opensearch/Elasticsearch). Usually people back themselves into a corner where their in-house solution starts simple and then grows more complicated as they inevitably deal with the ranking problems their users encounter. Usual symptoms: "it's slow" (they are doing silly shit with multiple queries against Postgres or whatever), "it's returning the wrong things" (it turns out that trigrams aren't a one-size-fits-all solution and return false positives), etc. Add aggregations and other things to the mix and you basically have a perfect use case for Elasticsearch as it was about 10 years ago, before they started making it faster, smarter, and better.
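
To make the trigram point concrete, here is a rough sketch of padded-trigram similarity in the spirit of pg_trgm (simplified, not its actual implementation); a short query happily matches an unrelated word:

    def trigrams(word):
        # pg_trgm-style padding: two spaces in front, one behind (simplified).
        padded = "  " + word.lower() + " "
        return {padded[i:i + 3] for i in range(len(padded) - 2)}

    def similarity(a, b):
        ta, tb = trigrams(a), trigrams(b)
        return len(ta & tb) / len(ta | tb)

    # A search for "angle" matches "angel" above the usual 0.3 threshold,
    # even though it is an entirely different word.
    print(similarity("angle", "angel"))  # ~0.33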

The usual arguments against Elasticsearch & Opensearch:

"Elasticsearch/Opensearch are hard to run". Reality, there isn't a whole lot to configure these days. Yes you might want to take care of monitoring, backups, and a few other things. As you would with any server product. But it self configures mostly. Particularly, you shouldn't have to fiddle with heap settings, garbage collection, etc. The out of the box defaults work fine. Get a managed setup if all this scares you; those run with the same defaults typically. Honestly, running postgres is harder. There's way more to configure for that. Especially for high availability setups. The hardest part is sizing your vms correctly and making sure you don't blow through your limits by indexing too much data. Most of your optimizations are going to be at the index mapping level, not in the configuration.

"It's slow". That depends what you do and how you use it. Most of the simple alternatives have some hard limitations. If you under engineer your search (poor ranking, lots of false positives) it's probably going to be faster. That's what happens if you skip all the fancy algorithmic stuff that could make your search better. I've seen all the rookie mistakes that people make with Elasticsearch that impact performance. They are usually fairly easy to fix. (e.g. let's turn off dynamic mapping and not index all those text fields you never query on that fill up your disk and memory and bloat your indexing performance ...).

"I don't need all that fancy stuff". Yes you do. You just don't know it yet because you haven't figured out what's actually needed. Look, if your search isn't great and it doesn't matter, it's all fine. But if search quality matters and you lose user's interest when they fail to find stuff in your app/website it quickly can become an existential problem. Especially if you have competitors that do much better. That fancy stuff is what you would need to build to solve that.

Unless you employ some hard-core search ranking experts, your internally crafted thing is probably not going to be great. If you can afford to run at ~2005-era state of the art (Lucene existed, Solr & Elasticsearch did not, and Lucene was fairly limited in scope), then go for it. But it's going to be quite limited when you need those extra features after all.

There are some nice search products out there other than Elasticsearch & Opensearch that I would consider fit for purpose; especially if you want to do vector search. And in fairness, using a search engine properly still requires a bit of skill. But that isn't any different if you do things yourself. Except it involves a lot less wheel reinvention.

There just is a bit of necessary complexity to building a good search product.

  • internet_points 2 hours ago

    Seems like good advice; search has been built quite a few times now :-) I've defaulted to Elasticsearch myself.

    However, have you tried running any of the "up and coming" alternatives that keep showing up here? In particular, https://github.com/SeekStorm/SeekStorm seems very interesting, though I haven't heard from anyone using it in prod.

    • jillesvangurp an hour ago

      A red flag for me is that it lists stopword lists as a feature. Those went out of fashion in Lucene/Elasticsearch around version 5, because of some non-trivial but very effective caching and other optimizations.

      Stopwords are an old-school optimization to deal with the problem of high-frequency tokens when calculating rankings. Basically that means walking long lists of ids for documents that contain, e.g., the word "to". This potentially has a lot of overhead. The solution is to eliminate the need to do so by focusing on the ranking clauses with low-frequency terms first and caching the results for those terms. You can eliminate a lot of documents by doing that. This gets rid of most of the overhead of the high-frequency terms without resorting to simply filtering them out.
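
      A crude, conjunctive simplification of that idea in Python (real engines work with score upper bounds rather than a strict AND, but the shape is similar): visit the rarest term's postings first, so the huge lists for common words are only consulted for the few candidates that survive.

        def candidates(index, query_terms):
            # index: term -> set of document ids.
            # Start with the rarest term (shortest postings list) and
            # intersect outward, so the big lists for common terms are
            # only checked against an already-small candidate set.
            postings = sorted((index.get(t, set()) for t in query_terms), key=len)
            if not postings:
                return set()
            result = set(postings[0])
            for plist in postings[1:]:
                result &= plist
                if not result:
                    break
            return result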

      The key test here is queries that consist of stop words, like "to be or not to be" to find documents about Hamlet. If you filter out all the stop words, you are not going to get the right results on top.

      Just an example of where SeekStorm can probably do better. I have no direct experience with it though, so maybe they do have a solution for that.

      But you should treat the need for stop word lists as a red flag for a probably fairly immature take on this problem space. Elasticsearch doesn't need them anymore. Also, what do stop word lists look like if you need multilingual support? Who maintains these lists for the different languages? Do you have language experts on your search team doing that for all the languages you need to support? People always forget about operational overhead like that. Stop word lists are fairly ineffective if you don't have people curating them, and they create obvious issues with certain queries.

shevy-java 4 hours ago

Good. Now please someone replace Google's search engine.

I am always annoyed when using it by how bad it is these days. Then I try the alternatives, such as DuckDuckGo, and they manage to be even worse.

Qwant is semi-OK, but it also omits tons of things that Google Search finds (and it's also slower, for some weird reason).

Google's UI nerf is also annoying - so much useless stuff. In the past I could disable that via uBlock Origin, but Google killed that for Chrome.

We need to do something against this Evil that Google brought into this world.

  • MrAlex94 3 hours ago

    Not quite independent, as it's a meta-search, but I developed a subscription-based one at search.waterfox.net. It pays for the infrastructure costs and remains ad/tracking free.

    • SyneRyder an hour ago

      Nice! I couldn't see the list of search engines that are included in your meta-search; the FAQ currently seems to imply that it only serves Google results?

      If you give users the option to include / not include certain search engines in their results, so their money never goes to those particular engine companies, that could be of interest to some Kagi refugees.

      I ended up vibe coding my own meta-search engine (augmented with a local SQLite database of hand-picked sites) so that I could escape Kagi, but I'm excited if Waterfox Search is an alternative I can recommend to others!

  • renegat0x0 3 hours ago

    - How many people use only Google's search engine nowadays? More and more people use chatbots alongside Google search.

    - Google search also does not provide good results for finding stuff in walled gardens, so we also use niche search engines for individual platforms. I am not sure it finds good results for posts on Facebook and x.com.

    - I also use my own index of pages, YouTube channels, and GitHub pages. It contains tags, a page scoring system, related links, social information like number of followers, etc.

    https://github.com/rumca-js/Internet-Places-Database

    So in a way, it is being replaced. It just takes some time for people to switch.

  • ku1ik 4 hours ago

    Try kagi.com. I tried and stayed. It’s paid though.

    • dyml 4 hours ago

      I also used Kagi, but decided to cancel my subscription last year when it was revealed they pay Yandex, a Russian company that ultimately fuels the Russian war on Ukraine, for their search results.

      Once Kagi stops transferring money to Russia, I'd be happy to re-subscribe.

      • ramon156 3 hours ago

        Do you have a source on how funding Yandex funds the war? Yandex is a great search engine, so I would hate to find out that this is true.

        • b2ccb2 3 hours ago

            https://en.wikipedia.org/wiki/Yandex#Legal_issues_in_Ukraine
            https://www.zois-berlin.de/en/publications/zois-spotlight/the-sad-fate-of-yandex-from-independent-tech-startup-to-kremlin-propaganda-tool
        • ninalanyon 3 hours ago

          It's based in Russia so it presumably pays taxes and salaries in Russia.

eduction 6 hours ago

I completely agree with the insight that full text search has been complexified. People seem to want to jump straight to clustering or other enterprise level things.

I also appreciate the moxie of getting in there and building it yourself.

Myself, I reach for Lucene. Then you don't need to build all this yourself if you don't want to. It lives in a dir on disk. True, it's a separate database, but one optimized for this problem.

  • aorloff 5 hours ago

    This was the solution I was thinking about, but I thought, well that's the way someone would have done it 20 years ago

    • shevy-java 4 hours ago

      Alright but why do we not have more search engines that are actually good?

      I'd love to cut myself off from Google, including Google Search, but the alternatives manage to be even worse. Consistently so. It's as if Google won the war by being just permanently slightly better - while everyone else is actually really crap. That wasn't the case, say, 10 years ago or so.

      • jillesvangurp 2 hours ago

        Because it's not a simple problem space. Lucene has gone through about three decades of lots of optimization, feature development, and performance tuning. A lot of brain power goes into that.

        Google bootstrapped the AI revolution as a side effect of figuring out how to do search better. They started by hiring a lot of expert researchers that then got busy iterating on interesting search engine adjacent problems (like figuring out synonyms, translations, etc.). In the process they got into running neural networks at scale, figuring out how to leverage GPUs and eventually building their own TPUs.

        The Acquired podcast recently did a great job of outlining the history of Google & Alphabet.

        Doing search properly at scale mainly requires a lot of infrastructure. And that's Google's real moat. They get to pay for all that with an advertising money printing machine. Which, BTW, leverages a lot of search algorithms. Matching advertisements to content is a search problem, and Google just got really good at that. That's what finances all the innovation in this space, from deep learning to TPUs. Being able to throw a few hundred million at running some experiments is what makes the difference here.

      • 19l21 3 hours ago

        I use the '4get' proxy search engine, which lets you use pretty much every search engine under the sun, for both websites and images. It's really useful because it is faster than Google, and if you need to find some pages you can just change the search engine quickly.

        It is open source and there are many instances available, I use '4get.bloat.cat' or '4get.lunar.iu'

        It is a better alternative to SearX for sure

      • inferiorhuman 3 hours ago

        These days I default to DDG. Not because it's improved but because Google's results are just that bad. Even a couple years ago I was reaching for Google with a lot more frequency.