Would need human curation to select the best websites in each field.
Yahoo back in the day with its categories, and later Fazed.net with its curated links, made for a nice time for a while.
Pay to play was the problem there. I had the highest-ranking joke page on WebCrawler for a stint, but Yahoo wanted $500 to put me on top. My 15-year-old self was not interested.
That’s pretty much what all of the site aggregators were. I ran a couple of communities on Yahoo and some other sites. There were also services like Archie, Gopher, and WAIS, and I am pretty sure my Usenet client had some searching built in (it might have been Emacs - I can’t remember anymore). I remember when Google debuted on Stanford.edu/google and realized that everything was about to change.
It worked because the web was much smaller.
Or AI to rank and filter results down to the things you need, based on public indexing. Preferably there’d be several AI assistants to choose from. Things seem to be moving in that direction anyway.
The problem is that personalization of search results tends to create information bubbles. That is the reason why I prefer DDG over Google.
While this is true (and a problem with current engines like Google), I could see having a local LLM doing the filtering for you based on your own criteria. Then you could do a wide-open search as needed, or with minimal filtering, etc.
When I’m searching for technical stuff (Android ROMs, Linux commands and how they work), it would be really helpful to have some really capable filtering mechanisms that have learned my preferences.
When I want to find something from a headline, then it needs to be mostly open (well, maybe filtering out The Weekly World News).
But it really needs to be done by my own instance of an LLM/AI, not something controlled elsewhere.
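The idea above can be sketched in a few lines. This is a minimal, illustrative mock-up of local re-ranking: the `relevance_score` function is a hypothetical stand-in for a local LLM judging each result against the user's own criteria (a plain keyword count keeps the sketch self-contained and runnable), and all names and sample results here are made up.

```python
# Sketch: re-rank search results locally, under the user's own control.
# relevance_score is a placeholder for a local LLM's judgment; here it
# just counts how many of the user's interest terms appear in a result.

from dataclasses import dataclass


@dataclass
class Result:
    title: str
    snippet: str


def relevance_score(result: Result, interests: list[str]) -> int:
    """Count how many interest terms appear in the result's text."""
    text = f"{result.title} {result.snippet}".lower()
    return sum(term.lower() in text for term in interests)


def rerank(results: list[Result], interests: list[str]) -> list[Result]:
    """Re-order results by the local score, highest first (stable sort)."""
    return sorted(results, key=lambda r: relevance_score(r, interests),
                  reverse=True)


# Hypothetical results, as returned by any search engine:
results = [
    Result("Celebrity gossip roundup", "the latest headlines"),
    Result("Flashing a custom Android ROM", "unlock the bootloader, then flash"),
    Result("Linux ip command explained", "how routing tables work"),
]
ranked = rerank(results, interests=["android rom", "linux"])
print([r.title for r in ranked])
```

The point of the design is that the scoring step runs entirely on your machine, so nobody upstream decides what gets filtered; swapping the keyword count for a local model changes the quality of the judgment, not who controls it.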
AI won’t help, since it’ll be programmed to show only what its owners want us to see.
With your own customization, done locally.
Given that the indices are not available locally, it’d be difficult for your own algorithm of any sort, AI or otherwise, to rank items higher/lower than others.