What is a self-hosted small LLM actually good for (<= 3B)
-
Have you tried RAG? I believe they're actually pretty good at searching and compiling content via RAG.
So in theory you could have it connect to all of your local documents and use it for quick questions. Or maybe connect it to your Signal/WhatsApp/SMS chat history to ask questions about past conversations.
No, what is it? How do I try it?
-
No, what is it? How do I try it?
RAG is basically like telling an LLM "look here for more info before you answer" so it can check out local documents to give an answer that is more relevant to you.
Just search "open web ui rag" and you'll find plenty of explanations and tutorials.
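To make the idea concrete, here's a toy sketch of the retrieve-then-prompt step. The bag-of-words "embedding", cosine ranking, and sample documents are all stand-ins I made up for illustration; a real setup (like Open WebUI's RAG) uses a proper embedding model and a vector store.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model (e.g. via a local inference server) instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank all documents against the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Backups run every night at 2am to the NAS.",
    "The router password is stored in the home wiki.",
    "Grandma's lasagna recipe uses fresh basil.",
]

# The retrieved snippets get stuffed into the prompt before the question,
# which is the "look here for more info before you answer" part.
context = retrieve("when do backups run?", docs)
prompt = "Answer using this context:\n" + "\n".join(context) + "\nQ: when do backups run?"
```

The small model never has to "know" anything about your documents; it only has to read the couple of snippets the retriever hands it, which is exactly the kind of narrow task small models handle well.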
-
I've tried coding, and every model I've tried fails at anything beyond really, really basic small functions, the kind you write as a newbie, compared to, say, 4o mini, which can spit out more sensible stuff that works.
I've tried asking for explanations, and they just regurgitate sentences that can be irrelevant, wrong, or stuck in a loop.
So, what can I actually use a small LLM for? Which ones? I ask because I have an old laptop whose GPU can't really handle anything above 4B in a timely manner. 8B is about 1 t/s!
I installed Llama. I've not found any use for it. I mean, I've asked it for a recipe because recipe websites suck, but that's about it.
-
I've integrated mine into Home Assistant, which makes it easier to use their voice commands.
I haven't done a ton with it yet besides set it up, though, since I'm still getting Proxmox configured on my gaming rig.
What are you using for voice integration? I really don't want to buy and assemble their solution if I don't have to
-
I've tried coding, and every model I've tried fails at anything beyond really, really basic small functions, the kind you write as a newbie, compared to, say, 4o mini, which can spit out more sensible stuff that works.
I've tried asking for explanations, and they just regurgitate sentences that can be irrelevant, wrong, or stuck in a loop.
So, what can I actually use a small LLM for? Which ones? I ask because I have an old laptop whose GPU can't really handle anything above 4B in a timely manner. 8B is about 1 t/s!
I've run a few models that I could on my GPU. I don't think the smaller models are really good enough. They can do stuff, sure, but to get anything out of it, I think you need the larger models.
They can be used for basic things, though. There are coder-specific models you can look at; DeepSeek and Qwen Coder are popular ones.
-
Hey, you're treating that data with the respect it demands, right? And you definitely collected consent from those chat participants before you hoovered up their [re-reads example] extremely Personal Identification Information AND Personal Health Information, right? Because if you didn't, you're in violation of a bunch of laws and the Twitch TOS.
If I say my name is Doo doo head in a public park and someone happens to overhear it, they can do with that information whatever they want. Same thing. If you wanna spew your personal life on Twitch, there are bots that listen to all of the channels everywhere on Twitch. They aren't violating any laws or the Twitch TOS. So, *buzzer* WRONG.
Right now, the same thing is being done to you on Lemmy. And Reddit. And Facebook. And everywhere else.
Look at a bot called "FrostyTools" for Twitch. It reads Twitch chat and uses an AI to provide summaries of the chat every 30 minutes or so. If that's not violating the TOS, then neither am I. And thousands upon thousands of people use FrostyTools.
I have the consent of the streamer, I have the consent of Twitch (through their developer API), and upon using Twitch, you grant them the right to collect, distribute, and use that data at their whim.
-
Surely none of that uses a small LLM <= 3B?
Yes. The small LLM isn't retrieving data; it's just understanding the context of the text well enough to know what "facts" need to be written to a file. I'm using the publicly released DeepSeek models from a couple of months ago.
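A hedged sketch of what that fact-extraction step might look like. The prompt wording, the JSON shape, and the file format here are my own assumptions, not the commenter's actual setup; the model reply would come from a local inference call (e.g. an Ollama chat request), which is stubbed out here.

```python
import json

def extraction_prompt(msg: str) -> str:
    # Ask the model for durable facts as JSON; small models do better
    # with a concrete example of the expected shape.
    return (
        "Read this chat message and list any durable facts about the speaker "
        'as JSON, e.g. {"facts": ["has a dog named Rex"]}. '
        'Reply {"facts": []} if there are none.\n\n'
        "Message: " + msg
    )

def parse_facts(model_reply: str) -> list:
    # Pull the facts list out of the model's reply, tolerating chatter
    # around the JSON (small models often add "Sure!" etc.).
    try:
        start = model_reply.index("{")
        end = model_reply.rindex("}") + 1
        return json.loads(model_reply[start:end]).get("facts", [])
    except ValueError:  # no braces found, or invalid JSON
        return []

def append_facts(facts: list, path: str = "facts.txt") -> None:
    # The "written to a file" part: one fact per line, append-only.
    with open(path, "a", encoding="utf-8") as f:
        for fact in facts:
            f.write(fact + "\n")
```

Because the model only ever classifies one message at a time into a tiny JSON structure, the context stays short, which is why a <= 3B model can keep up with a live chat.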
-
I installed Llama. I've not found any use for it. I mean, I've asked it for a recipe because recipe websites suck, but that's about it.
you can do a lot with it.
I heated my office with it this past winter.
-
I've run a few models that I could on my GPU. I don't think the smaller models are really good enough. They can do stuff, sure, but to get anything out of it, I think you need the larger models.
They can be used for basic things, though. There are coder-specific models you can look at; DeepSeek and Qwen Coder are popular ones.
I've been coming to similar conclusions in some local adventures. They're decent, but not as able to process larger contexts.
-
What are you using for voice integration? I really don't want to buy and assemble their solution if I don't have to
I just use the companion app for now. But I am designing a HAL9000 system for my home.
-
RAG is basically like telling an LLM "look here for more info before you answer" so it can check out local documents to give an answer that is more relevant to you.
Just search "open web ui rag" and you'll find plenty of explanations and tutorials.
I think RAG will be surpassed by LLMs in a loop with tool calling (aka agents), with search being one of the tools.
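The loop being described is simple enough to sketch. Everything below is illustrative: the `TOOL:`/`FINAL:` protocol is an assumption of mine (real agent frameworks use structured tool-call messages), and the model is a stub so the control flow is the point, not the inference.

```python
def search(query: str) -> str:
    # Stand-in for a real search tool (web search, local index, etc.).
    kb = {"capital of france": "Paris"}
    return kb.get(query.lower(), "no results")

def fake_model(transcript: str) -> str:
    # A real setup would send the transcript to a local model here.
    # This stub asks for a search first, then answers from the result.
    if "TOOL_RESULT" not in transcript:
        return "TOOL: search(capital of france)"
    return "FINAL: The capital of France is Paris."

def agent_loop(question: str, max_steps: int = 5) -> str:
    # The agent loop: model output either requests a tool or is final.
    # Tool results are appended to the transcript and the model re-runs.
    transcript = f"QUESTION: {question}"
    for _ in range(max_steps):
        reply = fake_model(transcript)
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        if reply.startswith("TOOL: search(") and reply.endswith(")"):
            query = reply[len("TOOL: search("):-1]
            transcript += f"\nTOOL_RESULT: {search(query)}"
    return "gave up"
```

With search as a tool, retrieval happens on demand instead of up front, which is the sense in which this subsumes classic RAG.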
-
I've tried coding, and every model I've tried fails at anything beyond really, really basic small functions, the kind you write as a newbie, compared to, say, 4o mini, which can spit out more sensible stuff that works.
I've tried asking for explanations, and they just regurgitate sentences that can be irrelevant, wrong, or stuck in a loop.
So, what can I actually use a small LLM for? Which ones? I ask because I have an old laptop whose GPU can't really handle anything above 4B in a timely manner. 8B is about 1 t/s!
It'll work for quick bash scripts and one-off things like that. But there's usually not enough context window unless you're using a 24 GB GPU or so.
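One way to see why context eats VRAM on top of the weights: the KV cache grows linearly with context length. A back-of-the-envelope estimate, using Llama-7B-style shape numbers (32 layers, 32 KV heads, head dim 128, fp16) that are assumptions for illustration rather than any particular model's specs:

```python
def kv_cache_gb(context_len, layers=32, kv_heads=32, head_dim=128, bytes_per=2):
    # 2 tensors (K and V) * layers * heads * head_dim * bytes, per token.
    return 2 * layers * kv_heads * head_dim * bytes_per * context_len / 1e9

# Cache alone for an 8k context, before counting the weights themselves.
print(f"{kv_cache_gb(8192):.1f} GB")
```

Grouped-query attention and cache quantization shrink this a lot in practice, but the linear-in-context scaling is why long prompts push you toward big-VRAM cards.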
-
I think RAG will be surpassed by LLMs in a loop with tool calling (aka agents), with search being one of the tools.
LLMs that train LoRAs on the fly, then query themselves with the LoRA applied.
-
Most US states are single-party consent. https://recordinglaw.com/united-states-recording-laws/one-party-consent-states/
There is no expectation of privacy in public spaces. Participants in these streams, which are open to all, are not prohibited from repeating what they have heard.
-
If I say my name is Doo doo head in a public park and someone happens to overhear it, they can do with that information whatever they want. Same thing. If you wanna spew your personal life on Twitch, there are bots that listen to all of the channels everywhere on Twitch. They aren't violating any laws or the Twitch TOS. So, *buzzer* WRONG.
Right now, the same thing is being done to you on Lemmy. And Reddit. And Facebook. And everywhere else.
Look at a bot called "FrostyTools" for Twitch. It reads Twitch chat and uses an AI to provide summaries of the chat every 30 minutes or so. If that's not violating the TOS, then neither am I. And thousands upon thousands of people use FrostyTools.
I have the consent of the streamer, I have the consent of Twitch (through their developer API), and upon using Twitch, you grant them the right to collect, distribute, and use that data at their whim.
So, buzzer WRONG.
Quite arrogant after you just constructed a faulty comparison.
If I say my name is Doo doo head in a public park and someone happens to overhear it, they can do with that information whatever they want. Same thing.
That's absolutely not the same thing. Overhearing something in the background is fundamentally different from actively recording everything going on in a public space. You film yourself or some performance in a park and someone happens to be in the background? No problem. You build a system to identify everyone in the park and collect recordings of their conversations? Absolutely a problem, depending on the jurisdiction. The intent of the recording(s) and the reasonable expectations of the people recorded are taken into account in many jurisdictions, and being in public doesn't automatically entail consent to being recorded.
See for example https://www.freedomforum.org/recording-in-public/
(And just to clarify: I am not arguing against your explanation of Twitch's TOS, only against the bad comparison you brought.)
-
It'll work for quick bash scripts and one-off things like that. But there's usually not enough context window unless you're using a 24 GB GPU or so.
Snippets are a great use.
I use StableCode on my phone as a programming tutor for learning Python. It is outstanding in both speed and accuracy for this task. I have it generate definitions, which I copy and paste into Anki, the flashcard app. Whenever I'm on a bus or airplane, I just start studying. I wish it could also quiz me interactively.
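The copy-and-paste step could be scripted: Anki's importer accepts plain tab-separated text (front, tab, back, one card per line). A small sketch, where the card contents are placeholders standing in for model-generated definitions and the filename is made up:

```python
import csv

# Term/definition pairs; in the real workflow these would come from
# the model's generated definitions instead of being hard-coded.
cards = [
    ("list comprehension", "A compact way to build a list: [x*x for x in xs]"),
    ("dict.get", "Returns a key's value, or a default if the key is missing"),
]

# Write a TSV file that Anki's "Import" dialog can read directly.
with open("python_cards.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerows(cards)
```

From there it's File > Import in Anki, mapping column 1 to the front and column 2 to the back of the card.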
-
I've tried coding, and every model I've tried fails at anything beyond really, really basic small functions, the kind you write as a newbie, compared to, say, 4o mini, which can spit out more sensible stuff that works.
I've tried asking for explanations, and they just regurgitate sentences that can be irrelevant, wrong, or stuck in a loop.
So, what can I actually use a small LLM for? Which ones? I ask because I have an old laptop whose GPU can't really handle anything above 4B in a timely manner. 8B is about 1 t/s!
7B is the smallest I've found useful. I'd try a smaller quant before going lower if I had super-small VRAM.
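The "smaller quant before a smaller model" tradeoff is easy to ballpark: weight memory is roughly parameter count times bits per weight divided by 8. A sketch (these figures ignore the KV cache and runtime overhead, so treat them as lower bounds):

```python
def weight_gb(params_billion, bits):
    # parameters * bits-per-weight / 8 bits-per-byte, expressed in GB.
    return params_billion * 1e9 * bits / 8 / 1e9

# A 4-bit quant halves the footprint of an 8-bit one, so a 7B at Q4
# can fit where a 7B at Q8 cannot, without dropping to a 3B model.
for label, params, bits in [("7B @ Q8", 7, 8), ("7B @ Q4", 7, 4), ("3B @ Q4", 3, 4)]:
    print(f"{label}: ~{weight_gb(params, bits):.1f} GB")
```

That's the intuition behind the advice: quantizing a 7B down to 4-ish bits usually loses less quality than switching to a natively smaller model.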
-
If I say my name is Doo doo head, in a public park, and someone happens to overhear it - they can do with that information whatever they want. Same thing. If you wanna spew your personal life on Twitch, there are bots that listen to all of the channels everywhere on twitch. They aren't violating any laws, or Twitch TOS. So, *buzzer* WRONG.
Right now, the same thing is being done to you on Lemmy. And Reddit. And Facebook. And everywhere else.
Look at a bot called "FrostyTools" for Twitch. It reads Twitch chat and uses an AI to provide summaries of the chat every 30 minutes or so. If that's not violating the TOS, then neither am I. And thousands upon thousands of people use FrostyTools.
I have the consent of the streamer, I have the consent of Twitch (through their developer API), and upon using Twitch, you grant them the right to collect, distribute, and use that data at their whim.
Doesn't Twitch own all the data that is written? Their TOS probably states something like you can't store the data locally yourself.
-
There is no expectation of privacy in public spaces. Participants in these streams, which are open to all, are not prohibited from repeating what they have heard.
Right, and what I was saying was that even if it wasn't "public", single-party consent means the person recording can be that single party, so it's still a non-issue.
-
Doesn't Twitch own all the data that is written? Their TOS probably states something like you can't store the data locally yourself.
I'm not storing their data. I'm feeding it to an LLM, which infers things, and storing that data. Other Twitch bots store Twitch data too, everything from birthdays to imaginary internet points.