• 0 Posts
  • 105 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • Absolutely, over here we’ve recently elected a horrible party as the biggest one, with 25% of the votes. Dark times.

    The difference is that in many European countries the head of state is more of a ceremonial position (at least in practice), and the head of government holds nowhere near the amount of power a US president does. With proportional representation, the biggest party often doesn’t have an absolute majority and needs to form a government together with other parties, or might even end up in the opposition. Together they agree on who will be the head of government (usually the leader of the largest party), who the ministers will be and what the policy will be. If it doesn’t work out because of disagreements, the government breaks up and new elections are held.

    My point is: the risk is real, populism is growing, policy is shifting, but the dynamics are different. Having a first-past-the-post system and concentrating so much power in a single political position feels like an accelerator.


  • Exactly. Same as with sleep data. When it says that you’ve been awake 3 times last night, that doesn’t really mean much. That kind of data shouldn’t be presented as being accurate. However, it could still be made accessible behind a button or menu option. For example, it might show you that the signal is intermittent because your watch band isn’t tight enough, or reveal other anomalies. And of course you’re right: they won’t tell you that the data is of low quality, and as a user you don’t necessarily know that, so in that sense it can be very misleading.


  • I’m not familiar with how Bluesky works, but I bet that if lemmy.world goes down, it will feel like Lemmy as a whole is down for a lot of people. Is this similar, in the sense that a major instance went down because of the large influx of users? Somewhat comparable to the influx of Lemmy users from Reddit that we’ve seen? Or is Bluesky not really federated, and did it go down as a whole?






  • It’s called Markdown. If you know how to use it, it’s very convenient, but if not, it can cause all kinds of unexpected effects. It’s also the reason that **test** is formatted as test. You can often force a line break by ending a line with two spaces or a backslash character. Let’s test:

    This is a line
    And this is another line

    In this case it was a backslash character. So what I typed is:

    This is a line\
    And this is another line

    With spaces:

    This is a line
    And this is another line
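
    The trailing spaces are of course invisible above, so here’s a small sketch that checks both hard-break styles with markdown-it-py (just one CommonMark renderer; Lemmy’s own renderer may differ in the details):

    ```python
    # Render both Markdown hard-break styles and check that each produces a <br>.
    # Assumes the markdown-it-py package: pip install markdown-it-py
    from markdown_it import MarkdownIt

    md = MarkdownIt()

    # Two trailing spaces before the newline (note the two spaces before \n):
    print(md.render("This is a line  \nAnd this is another line"))

    # A backslash at the end of the line (the \\ is a literal backslash):
    print(md.render("This is a line\\\nAnd this is another line"))
    ```

    Both calls should print a single paragraph with a <br> between the two lines; without the trailing spaces or the backslash, the renderer would simply join them into one line.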





  • I agree. I think people might have the idea that the state dictates the content, but that’s not at all how it works in well-functioning democracies. It’s there to serve the public interest: to have a relatively unbiased news outlet that’s accessible to all and has no (or few) commercial interests. It coexists with commercial news outlets.



  • Sure, but I’m just playing around with small quantized models on my laptop with integrated graphics, and the RAM was insanely cheap. It just interests me what LLMs that can run on such hardware are capable of. For example, Llama 3.2 3B only needs about 3.5 GB of RAM and runs at about 10 tokens per second, and while it’s in no way comparable to the LLMs that I use for my day-to-day tasks, it doesn’t seem to be that bad (a rough sketch of that kind of setup is below). Llama 3.1 8B runs at about half that speed, which is a bit slow, but still bearable. Anything bigger than that is too slow to be useful, but still interesting to try for comparison.

    I’ve got an old desktop with a pretty decent GPU with 24 GB of VRAM in it, but it’s collecting dust. It’s noisy and power-hungry (an older-generation dual-socket Intel Xeon) and still incapable of running large LLMs without additional GPUs. Even if it were capable, I wouldn’t want it turned on all the time due to the noise and heat in my home office, so I haven’t even tried running anything on it yet.
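
    A minimal sketch of that kind of setup, assuming the llama-cpp-python bindings and a locally downloaded quantized GGUF file (the filename and the prompt are just placeholders):

    ```python
    # Minimal sketch: load a small quantized GGUF model on CPU and measure
    # a rough tokens-per-second figure.
    # Assumes: pip install llama-cpp-python, plus a GGUF file on disk.
    import time

    from llama_cpp import Llama

    # Hypothetical filename for a ~3.5 GB 4-bit quant of Llama 3.2 3B.
    llm = Llama(model_path="llama-3.2-3b-instruct-q4_k_m.gguf", n_ctx=2048)

    start = time.time()
    result = llm("Explain in one paragraph why the sky is blue.", max_tokens=128)
    elapsed = time.time() - start

    # llama-cpp-python returns an OpenAI-style completion dict.
    generated = result["choices"][0]["text"]
    n_tokens = result["usage"]["completion_tokens"]

    print(generated)
    print(f"~{n_tokens / elapsed:.1f} tokens/s")
    ```

    Swapping in a bigger model (say an 8B quant) is just a matter of pointing model_path at a different GGUF file, at the cost of more RAM and a lower token rate.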


  • The only time I can remember 16 GB not being sufficient for me was when I tried to run an LLM that required a tad more than 11 GB while I had just under 11 GB of memory available due to the other applications that were running.

    I guess my usage is relatively lightweight. A browser with a maximum of about 100 open tabs, a terminal, a couple of other applications (some of them electron based) and sometimes a VM that I allocate maybe 4 GB to or something. And the occasional Age of Empires II DE, which even runs fine on my other laptop from 2016 with 16 GB of RAM in it. I still ordered 32 GB so I can play around with local LLMs a bit more.


  • I’m not going to defend Apple’s profit-maximization strategy here, but I disagree. Most people won’t end up buying a cable or adapter because they already have one, and in contrast to those pieces made of plastic and metal, the packaging is mostly made of paper. I’m pretty confident that the reduction in plastic and metal makes up for the extra packaging that’s produced for the minority that does buy a cable and/or adapter.