
  • Even if they had left that condition out, I’m sure there would be ways around it for gaming laptops, and they wouldn’t necessarily even have to be stupid ways. A stupid way of complying, for example, would be a charging cable that uses USB-C for the first 100W and a proprietary connector for the remaining 200W+.

    Just because a law says a device has to be technically able to charge from USB-C probably doesn’t mean USB-C has to be the only charging port and method, nor even the normal/recommended one. Even on a 200W+ gaming laptop it would sometimes be nice to charge from USB-C without pulling out the full charger. If mine supported USB-C charging, I could see using it like that when I travel: I might only be using it for half an hour or an hour a day, so 100W would significantly extend the battery runtime, and the rest of the time it could be sleeping or off, happily charging back to full from USB-C. I wouldn’t even need to bring the (literal) charging brick.
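    As an aside on where those wattage ceilings come from (my understanding of the spec, not a quote from it): classic USB Power Delivery tops out at 20V × 5A = 100W, and the newer Extended Power Range revision raises that to 48V × 5A = 240W, which is why “the first 100W over USB-C” is the natural split. The arithmetic, as a trivial sketch:

```python
# Rough USB Power Delivery ceilings: (volts, amps) -> watts.
# Labels are just descriptive, not official profile names.
pd_levels = {
    "USB PD 20V/5A (classic laptop ceiling)": (20, 5),
    "USB PD EPR 48V/5A (Extended Power Range ceiling)": (48, 5),
}

for name, (volts, amps) in pd_levels.items():
    print(f"{name}: {volts * amps} W")
```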




  • Waffling on moving to Codeberg because it’s not 100% perfect means supporting GitHub and drops the possibility of federated forges to 0%. Moving to Codeberg makes the future of federated forges go up to greater than 0%.

    Fair, but there’s a third path: spin up your own instance (a minimal sketch of what that involves is below). Yeah, it’s more administration work, and yeah, it’s significantly more painful for contributors to contribute to, for now. But the more stand-alone, non-federated, community-operated forges are out there, the more appealing the federation glue needed to connect them all together becomes.
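    For anyone wondering what “spin up your own instance” actually takes, here’s a minimal single-node Forgejo sketch with Docker Compose. This is an assumption-laden starting point, not a vetted production config: the image tag, ports, and paths are illustrative, and you’d still want TLS, backups, and Forgejo’s own docs on top.

```yaml
# Minimal self-hosted Forgejo instance (illustrative sketch, not hardened).
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:9   # pin whichever release you actually vet
    restart: unless-stopped
    environment:
      - USER_UID=1000                       # UID/GID that owns the data volume
      - USER_GID=1000
    volumes:
      - ./forgejo-data:/data                # repos, config, and database live here
    ports:
      - "3000:3000"                         # web UI
      - "2222:22"                           # SSH for git clone/push
```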


  • It’s an improvement, but I hate that we’re trading one centralized single point of failure for another. It’s fine for people to flee there temporarily as refugees from GitHub, but I’m afraid that if it becomes “the one and only destination” it’s going to get way too comfortable really fast.

    What we really need is a federated architecture, where it doesn’t matter who actually hosts the repository: as long as the host supports the proper standards and protocols and is part of the same federated universe as your own user or self-hosted instance, you can be granted access to other people’s repos, report issues, and send PRs, without needing dozens of accounts on dozens of self-hosted Forgejos for each individual project. They’re working on those features in Forgejo, and that’s awesome. But if everyone just moves straight to Codeberg, the incentive for that largely goes away; even if they do end up implementing it, the incentive to improve and maintain it goes away, and there’s no reason we can’t end up with Codeberg being the new GitHub people are trying to escape from in 5, 10, or 20 years.



  • It matters a bit, but not as much as you’d think; it also depends on what model and what runner you’re using, I think. There’s a lot of optimizing that can be done, but in practice you’re not going to notice a huge difference in speed. Again, remember that these are not speed-demon cards to begin with: the main feature is the large VRAM capacity, which lets them run very large, powerful models without the speed penalty of dropping to system RAM, and that, at least as I understand it, is where PCIe starts to matter a lot more. That said, maybe I’m wrong. I’m not an expert at this stuff, and my experience is limited to what little hardware, in what limited configurations, I have available personally. Best of luck in your adventures; anything we can do to bring machine learning technology to more people is worth pursuing, I think.
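    To put rough numbers on that: the link width mostly shows up at model-load time (and whenever layers spill into system RAM), not during in-VRAM inference. A back-of-the-envelope sketch, assuming roughly 0.985 GB/s of usable per-lane PCIe 3.0 bandwidth and a hypothetical 24GB set of weights:

```python
# Back-of-the-envelope: time to move model weights across PCIe 3.0 links.
GB_PER_LANE = 0.985   # approx. usable PCIe 3.0 bandwidth per lane, GB/s
MODEL_GB = 24         # hypothetical model that fills a 24GB card

for lanes in (16, 8, 4, 1):
    bandwidth = GB_PER_LANE * lanes
    seconds = MODEL_GB / bandwidth
    print(f"x{lanes:<2}: ~{bandwidth:5.1f} GB/s -> ~{seconds:5.1f} s to load {MODEL_GB} GB")

# Once the weights are resident in VRAM, each generated token reads them
# from VRAM, not across PCIe -- which is why a narrow link mostly costs
# you at load time, or when the model doesn't fit and spills to RAM.
```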


  • I’m not sure Intel has great drivers or compatibility for AI; it might be sort of janky and limiting. Even with an AMD card I’ve struggled a lot, and there are still plenty of things that only support Nvidia, full stop.

    Edit to add: If you’re willing to consider this option, you might also be interested in, say, an Nvidia P40 or similar card. P40s have 24GB of VRAM and you can pick them up cheap as can be, in the $100-$200 price range. You need to 3D print or buy a fan cooler bracket for them. They’re janky and limiting in a different way: they’re old datacenter-style cards, they don’t have fans or do video output at all, they’re a bit slow at AI tasks, and they only run on specific Nvidia drivers. But having plenty of VRAM for that low a price is nice, and you can tune them down to about 125W (a sketch of how is below), so you can run a few of them if you can find enough slots or risers. AI usage does NOT require PCIe x16 bandwidth; in my experience x4 is plenty, and even x1 is probably okay with some penalty.

    Edit again: Ah, it looks like P40s have doubled in price now too; the VRAM scavengers finally got to them. Still, if you were willing to go up to a ~$1k price point, it’s still viable.
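    On the 125W tuning: `nvidia-smi -pl 125` does it in one line. The sketch below does the same through the NVML Python bindings (nvidia-ml-py), with the GPU index as an assumption and root privileges required to actually set the limit:

```python
# Sketch: cap a GPU's power draw to ~125W via NVML (pip install nvidia-ml-py).
# Setting the limit requires root; reading the constraints does not.
import pynvml

TARGET_WATTS = 125   # the tuned-down figure mentioned above
GPU_INDEX = 0        # assumption: first GPU; adjust per card on multi-GPU rigs

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(GPU_INDEX)
    # NVML speaks milliwatts; clamp the target to the card's allowed range.
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
    target_mw = max(min_mw, min(TARGET_WATTS * 1000, max_mw))
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
    print(f"GPU {GPU_INDEX}: power limit set to {target_mw / 1000:.0f} W")
finally:
    pynvml.nvmlShutdown()
```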


  • I have no experience with AI lab or what formats it supports, but you can download raw model files from Hugging Face, or use a tool like Ollama that can pull them (either from its own model repos or from Hugging Face directly; there’s a copy-paste ollama command on Hugging Face model pages). There are lots of formats, but GGUF is usually the easiest, and it’s a single file.
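    For the “download raw model files” route, a minimal sketch with the huggingface_hub library looks like this; the repo and file names are hypothetical placeholders, not a recommendation:

```python
# Sketch: fetch a single GGUF file from a Hugging Face repo.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Hypothetical repo and filename -- substitute the model you actually want.
path = hf_hub_download(
    repo_id="some-org/some-model-GGUF",
    filename="some-model.Q4_K_M.gguf",
)
print(f"Saved to: {path}")
```

    The Ollama route is just `ollama pull <model name>`, which downloads and manages the files for you instead.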