Guide to Self Hosting LLMs Faster/Better than Ollama
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearPR
    projectmoon
    7d ago 100%

    Context was set to anywhere between 8k and 16k. It was responding in English properly, and then about halfway to three-quarters of the way through a response, it would start outputting tokens in either a foreign language (Russian or Chinese, in the case of Qwen 2.5) or things that didn't make sense (random code snippets, improperly formatted text). Sometimes the text was repeating as well. But I thought that might have been a template problem, because it seemed to be answering the question twice.

    Otherwise, all settings are the defaults.

    1
  • Guide to Self Hosting LLMs Faster/Better than Ollama
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearPR
    projectmoon
    7d ago 100%

    Super useful guide. However, after playing around with TabbyAPI, the responses from models quickly become gibberish, usually halfway through or towards the end. I'm using exl2 models from HuggingFace, with Q4, Q6, and FP16 cache. Any tips? Also, how do I control context length on a per-model basis? Is it max_seq_len in config.json?
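For context, this is roughly how I understand TabbyAPI's config.yml to work. A sketch, assuming max_seq_len there overrides whatever is in the model's own config.json; the model name and values are made up:

```yaml
# Illustrative fragment of TabbyAPI's config.yml (not the model's config.json).
model:
  model_dir: models
  model_name: my-exl2-model   # hypothetical folder under model_dir
  max_seq_len: 16384          # context length to allocate for this model
  cache_mode: Q6              # cache quantization: Q4 / Q6 / Q8 / FP16
```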

    1
  • Your Most Frustrating Configuration Experience?
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearPR
    projectmoon
    2w ago 100%

    Can you explain a bit more about this and how to configure it? When I use Firefox on GNOME, the save dialogue just looks like the other dialogues.

    1
  • Winamp source code is now on GitHub
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearPR
    projectmoon
    4w ago 100%

    Not necessarily. While in many cases open source is of course a volunteer effort, there's usually some implicit transaction going on: improving the software for yourself and passing that on to others, being a business improving a library or something else that helps your project generate revenue, or even a straight-up commercial transaction.

    But in all these cases, the open source project can be taken by you (or others) and you can do whatever you want with it. In the case of Winamp here, you cannot do any of that. It would be different if they were paying for contributions. But they're not, so.

    23
  • Hotel owner: “If you want to join this union that you used to be in I will let you go, just so that is very clear.”
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearPR
    projectmoon
    4w ago 100%

    A lot of times, immigrants to Iceland in low-paying jobs like this do not understand their rights. It wouldn't surprise me if this guy has gotten away with it before. Possibly more than once.

    Iceland isn't perfect. If a business wants to get rid of someone, they'll find a way to do it. But it is illegal to prevent someone from joining a union, or issue threats like this. Companies over a certain size (50+ I think?) are actually required to have a union representative.

    56
  • Hotel owner: “If you want to join this union that you used to be in I will let you go, just so that is very clear.”
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearPR
    projectmoon
    4w ago 100%

    Somebody is going to get steamrolled by Icelandic labor laws. And it's not going to be the employee.

    Edit: like this is seriously illegal in Iceland. Also, if you're going to be a corrupt and immoral business owner (evil really in this case), the number one thing you DON'T do is broadcast your nefarious intentions over a recordable medium.

    184
  • Has Google Search gotten so much worse in the last couple of weeks?
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearPR
    projectmoon
    1mo ago 100%

    You can right-click the URL bar on sites that support the OpenSearch XML standard, which I guess is what they wanted to replace the button with. But I don't really know why they demoted the button to an about:config setting. It could at least be a checkbox or something to enable.
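For anyone unfamiliar: a site advertises OpenSearch support with a small XML description document that the browser discovers via a link tag. A minimal illustrative example (the name and URL here are made up):

```xml
<!-- The page points at this file with something like:
     <link rel="search" type="application/opensearch+xml"
           href="/opensearch.xml" title="Example Search"> -->
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Example Search</ShortName>
  <Description>Search example.com</Description>
  <Url type="text/html"
       template="https://example.com/search?q={searchTerms}"/>
</OpenSearchDescription>
```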

    2
  • Firefox 130.0 Release Notes
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearPR
    projectmoon
    2mo ago 100%

    Is it possible to use ollama or an arbitrary OpenAI-compatible endpoint with the chatbot feature yet? Or only the cloud providers?

    4
  • Gatekeep ideas, not people
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearPR
    projectmoon
    2mo ago 100%

    That would probably be a task for regular machine learning. Plus proper encryption shouldn't have a discernible pattern in the encrypted bytes. Just blobs of garbage.
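To illustrate the point, a minimal sketch: well-encrypted bytes should look statistically uniform, so their per-byte entropy sits near the 8-bit maximum, while plaintext is heavily skewed. Since the stdlib has no cipher, os.urandom stands in for real ciphertext here.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Estimate bits of entropy per byte from byte frequencies."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# English-like plaintext: a small, skewed byte alphabet.
plaintext = b"the quick brown fox jumps over the lazy dog " * 2000

# Stand-in for good cipher output: uniformly random bytes.
ciphertext_like = os.urandom(len(plaintext))

print(f"plaintext:  {shannon_entropy(plaintext):.2f} bits/byte")
print(f"random:     {shannon_entropy(ciphertext_like):.2f} bits/byte")
```

The plaintext lands well under 5 bits/byte, while the random "ciphertext" is close to 8; that gap is the "discernible pattern" a classifier could latch onto with weak encryption, and its absence with proper encryption.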

    8
    Over the weekend (this past Saturday specifically), GPT-4o seems to have gone from capable and rather free for generating creative writing to not being able to generate basically anything, due to alleged content policy violations. It'll just say "can't assist with that" or "can't continue." But 80% of the time, if you regenerate the response, it'll happily continue on its way.

    It's like someone updated some policy configuration over the weekend and accidentally put an extra 0 in a field for censorship. GPT-4 and GPT-3.5 seem unaffected by this, which makes it even weirder. Switching to GPT-4 will have none of the issues that 4o is having. I noticed this happening literally in the middle of generating text.

    See also:
    https://old.reddit.com/r/ChatGPT/comments/1droujl/ladies_gentlemen_this_is_how_annoying_kiddie/
    https://old.reddit.com/r/ChatGPT/comments/1dr3axv/anyone_elses_ai_refusing_to_do_literally_anything/

    29
    10

    Current situation: I've got a desktop with 16 GB of DDR4 RAM, a 1st-gen Ryzen CPU from 2017, and an AMD RX 6800 XT GPU with 16 GB of VRAM. I can run 7-13B models extremely quickly using ollama with ROCm (19+ tokens/sec), and Beyonder 4x7b Q6 at around 3 tokens/second.

    I want to get to a point where I can run Mixtral 8x7b at Q4 quant at an acceptable token speed (5+/sec). I can run the Mixtral Q3 quant at about 2 to 3 tokens per second. Q4 takes an hour to load, and assuming I don't run out of memory, it also runs at about 2 tokens per second.

    What's the easiest/cheapest way to get my system to be able to run the higher quants of Mixtral effectively? I know that I need more RAM; another 16 GB should help. Should I upgrade the CPU? As an aside, I also have an older Nvidia GTX 970 lying around that I might be able to stick in the machine. Not sure if ollama can split across different-brand GPUs yet, but I know this capability is in llama.cpp now. Thanks for any pointers!

    16
    5

    Not sure if this has been asked before or not. I tried searching and couldn't find anything. I have an issue where any pictures from startrek.website do not show up on the homepage. It seems to only affect startrek.website. Going to the link directly loads the image just fine. Is this something wrong with lemm.ee?

    5
    2

    For the past few days, the Android app has been very slow. The app itself loads fine and is responsive, but it takes many seconds to load messages, sometimes up to 30 seconds. At first I thought it was a blip, but it's been going on for a few days now. Anyone else have this problem? Edit: clearing cache in the app settings (not system settings) fixed it.

    9
    10

    Are we learning Icelandic? Is there a plan for learning the language?

    1
    0

    It would be a good idea to have the new post button also available in the floating action button from the main feed screens, rather than just on specific communities. At least that's what it's like currently for me.

    1
    0

    This has probably already been asked before, but: The magazines of kbin federate as Lemmy communities, but is the microblog section of a kbin magazine accessible via Lemmy?

    16
    36

    Assuming you haven't already.

    9
    3