A loosely moderated place to ask open-ended questions
If your post meets the following criteria, it’s welcome here!
- Open-ended question
- Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
- Not a question about using Lemmy or getting support for Lemmy: for that, see the dedicated support communities and the tools for finding communities
- Not ad nauseam inducing: please make sure it is a question that would be new to most members
- An actual topic of discussion
No, I’d rather just have a decent search function. Lemmy should be about human interaction, not getting answers from an LLM.
NO
Hard pass. Lemmy absolutely does not need anything AI. A decent search, yes, and a way for posts to show up in web search results.
I think this is quite a bad idea even if we totally set aside any ethical concerns with AI, solely because it increases the hardware requirements to run a Lemmy instance. I believe that a critical goal of federated services should be to reduce the barrier to entry for instance ownership as much as possible. The more instances the better. If there are only two or three big ones, the problems of centralization appear again, albeit diluted. The whole point of federation is to have multiple instances. Many instances already survive on donations or outright charity, but AI increases costs immensely.
I think it’s fine to add features that require more compute power if they deliver a large enough improvement to user experience for the compute required. But AI is one of the most computationally intensive features I can think of, and the value it adds relative to that cost is particularly low. There’s so little content on Lemmy that you can feasibly view the entire post history of most communities in under a day of browsing, so there’s no real need for improved searchability - it’s just not that big here yet. And even when it does get that big, I think a strong search algorithm would be just about as effective, much more transparent, and, most importantly, would not require instance owners to add GPUs to their servers.
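To illustrate the kind of thing I mean: even a naive CPU-only keyword index gets you usable search without any GPUs. A minimal sketch (made-up post data and function names, not Lemmy's actual search code):

```python
# Minimal CPU-only keyword search sketch. The posts, function names, and
# ranking are illustrative only - this is not Lemmy's actual search code.
from collections import defaultdict
import math
import re

def build_index(posts):
    """Map each term to the set of post ids that contain it."""
    index = defaultdict(set)
    for post_id, text in posts:
        for term in re.findall(r"\w+", text.lower()):
            index[term].add(post_id)
    return index

def search(index, query, total_posts):
    """Score posts by summed IDF of matched query terms (rough TF-IDF-style ranking)."""
    scores = defaultdict(float)
    for term in re.findall(r"\w+", query.lower()):
        matching = index.get(term, set())
        if not matching:
            continue
        idf = math.log(total_posts / len(matching))
        for post_id in matching:
            scores[post_id] += idf
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example data standing in for an instance's post table.
posts = [
    (1, "How do I find niche communities on Lemmy?"),
    (2, "Best self-hosted RSS readers?"),
    (3, "Tools for finding Lemmy communities across instances"),
]
index = build_index(posts)
print(search(index, "finding lemmy communities", total_posts=len(posts)))
```

Nothing here needs more than a CPU and some RAM, which is the whole point - real search engines add stemming and better ranking, but the resource profile stays in the same ballpark.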
I’m inclined to say no. It’s pretty much a useless feature and doesn’t solve the fundamental problems of searching a federated service like Lemmy.
Even if LLMs worked like the general public thinks they should, who would pay for the processing time? A one-off request isn’t too expensive, sure, but multiply that by however many users a server might have and it gets real expensive real quick. And that’s just assuming the models are hosted by the Lemmy server. It gets even more expensive if you’re using one of the public APIs to run the LLM queries.
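Just as a rough illustration with completely made-up numbers (not real API pricing or real usage figures), the multiplication adds up fast:

```python
# Back-of-envelope cost estimate. Every number below is hypothetical,
# chosen only to show how per-request costs scale with a user base.
cost_per_query = 0.002        # hypothetical USD per hosted-API request
queries_per_user_per_day = 5  # hypothetical usage per active user
active_users = 2_000          # hypothetical daily active users

daily_cost = cost_per_query * queries_per_user_per_day * active_users
print(f"${daily_cost:.2f} per day, ${daily_cost * 30:.2f} per month")
# -> $20.00 per day, $600.00 per month: a big bill for a donation-funded server
```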
Whenever I see an LLM chatbot integrated into another website or app, I always wonder what the point is. I only use LLMs when I really have to, but even if I were an enthusiastic user, why wouldn’t I just use my preferred model directly instead?
No.
Man, are you crazy?
You have to know that asking this is just begging for abuse.
Troll
deleted by creator