tal

joined 2 years ago
[–] tal@lemmy.today 1 points 7 hours ago* (last edited 7 hours ago)

I'm using Debian trixie on two systems with (newer) AMD hardware:

ROCm 7.0.1.70001-42~24.04 on an RX 7900 XTX

ROCm 7.0.2.70002-56~24.04 on an AMD AI Max 395+.

[–] tal@lemmy.today 3 points 8 hours ago (1 children)

No problem. It was an interesting question that made me curious too.

[–] tal@lemmy.today 4 points 8 hours ago* (last edited 8 hours ago) (3 children)

https://stackoverflow.com/questions/30869297/difference-between-memfree-and-memavailable

Rik van Riel's comments when adding MemAvailable to /proc/meminfo:

/proc/meminfo: MemAvailable: provide estimated available memory

Many load balancing and workload placing programs check /proc/meminfo to estimate how much free memory is available. They generally do this by adding up "free" and "cached", which was fine ten years ago, but is pretty much guaranteed to be wrong today.

It is wrong because Cached includes memory that is not freeable as page cache, for example shared memory segments, tmpfs, and ramfs, and it does not include reclaimable slab memory, which can take up a large fraction of system memory on mostly idle systems with lots of files.

Currently, the amount of memory that is available for a new workload, without pushing the system into swap, can be estimated from MemFree, Active(file), Inactive(file), and SReclaimable, as well as the "low" watermarks from /proc/zoneinfo.

However, this may change in the future, and user space really should not be expected to know kernel internals to come up with an estimate for the amount of free memory.

It is more convenient to provide such an estimate in /proc/meminfo. If things change in the future, we only have to change it in one place.

Looking at the htop source:

https://github.com/htop-dev/htop/blob/main/MemoryMeter.c

   /* we actually want to show "used + shared + compressed" */
   double used = this->values[MEMORY_METER_USED];
   if (isPositive(this->values[MEMORY_METER_SHARED]))
      used += this->values[MEMORY_METER_SHARED];
   if (isPositive(this->values[MEMORY_METER_COMPRESSED]))
      used += this->values[MEMORY_METER_COMPRESSED];

   written = Meter_humanUnit(buffer, used, size);

It adds used, shared, and compressed memory to get the amount actually tied up, but disregards cached memory, which, based on the comment above, is problematic: some of that cached memory may not actually be freeable.

free (from the same procps-ng project as top), on the other hand, uses the kernel's MemAvailable directly:

https://gitlab.com/procps-ng/procps/-/blob/master/src/free.c

	printf(" %11s", scale_size(MEMINFO_GET(mem_info, MEMINFO_MEM_AVAILABLE, ul_int), args.exponent, flags & FREE_SI, flags & FREE_HUMANREADABLE));

In short: you probably want to trust /proc/meminfo's MemAvailable (which is what top will show); htop is probably giving a misleadingly low number.
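As a quick illustration of how far apart the two estimates can be, here's a small awk sketch. The numbers in the here-doc are made up; on a real Linux system you'd feed in /proc/meminfo itself instead.

```shell
# Compare the naive "free + cached" estimate with the kernel's MemAvailable.
# The here-doc below is a made-up sample; on a real box, point awk at
# /proc/meminfo instead.
out=$(awk '
   /^MemFree:/      { memfree = $2 }
   /^Cached:/       { cached  = $2 }
   /^MemAvailable:/ { avail   = $2 }
   END {
      printf "naive (MemFree+Cached): %d kB\n", memfree + cached
      printf "MemAvailable:           %d kB\n", avail
   }
' <<'EOF'
MemFree:         1024000 kB
MemAvailable:    3072000 kB
Cached:          1536000 kB
EOF
)
echo "$out"
```

With this sample, the naive sum (2560000 kB) undershoots MemAvailable, but on a box with lots of unfreeable tmpfs pages in Cached it can just as easily overshoot, which is the kernel commit's point.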

[–] tal@lemmy.today 3 points 8 hours ago (6 children)

If oomkiller starts killing processes, then you’re running out of memory.

Well, you could want to not dig into swap.

[–] tal@lemmy.today 1 points 8 hours ago (4 children)

While I agree with the general sentiment, he was already in office for four years during his first term, and we are at the end of the first year of his second term. So 5 out of 8 years down.

[–] tal@lemmy.today 8 points 9 hours ago* (last edited 9 hours ago)

There might be some way to make use of it.

Linux apparently can use VRAM as a swap target:

https://wiki.archlinux.org/title/Swap_on_video_RAM

So you could probably take an Nvidia H200 (141 GB of memory) and set it up as a high-priority swap device, say.

Normally, a typical desktop is liable to have problems powering an H200 (600W max TDP), but that's with all the parallel compute hardware active. I assume that if all you're doing is moving stuff in and out of memory, it won't draw much power, same as with a typical gaming-oriented GPU.

That being said, it sounds like the route on the Arch Wiki above uses vramfs, a FUSE filesystem. Since FUSE runs in userspace rather than kernelspace, it probably carries more overhead than is strictly necessary.
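For the curious, the vramfs route sketches out roughly like this. This is illustrative only (sizes, paths, and the loop device number are made up, and it needs a working vramfs build plus a GPU with spare VRAM); my understanding is that swapon can't use a file on a FUSE filesystem directly, hence the loop-device step.

```shell
# Illustrative sketch of swap-on-VRAM via vramfs; do not run as-is.
mkdir -p /mnt/vram
vramfs /mnt/vram 4G                    # expose 4 GiB of VRAM as a FUSE filesystem

# Back the swap file with a loop device, since swapon can't
# operate on a FUSE-hosted file directly.
dd if=/dev/zero of=/mnt/vram/swapfile bs=1M count=4096
losetup /dev/loop0 /mnt/vram/swapfile
mkswap /dev/loop0
swapon --priority 100 /dev/loop0       # higher priority than typical disk swap
```

The `--priority 100` bit is what makes the kernel prefer the VRAM-backed swap over lower-priority disk swap.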

EDIT: I think that a lot will come down to where research goes. If it turns out that someone figures out that changing the hardware (having a lot more memory, adding new operations, whatever) dramatically improves performance for AI stuff, I suspect that current hardware might get dumped sooner rather than later as datacenters shift to new hardware. Lot of unknowns there that nobody will really have the answers to yet.

EDIT2: Apparently someone made a kernel-based implementation for Nvidia cards to use the stuff directly as CPU-addressable memory, not swap.

https://github.com/magneato/pseudoscopic

In holography, a pseudoscopic image reverses depth—what was near becomes far, what was far becomes near. This driver performs the same reversal in compute architecture: GPU memory, designed to serve massively parallel workloads, now serves the CPU as directly-addressable system RAM.

Why? Because sometimes you have 16GB of HBM2 sitting idle while your neural network inference is memory-bound on the CPU side. Because sometimes constraints breed elegance. Because we can.

Pseudoscopic exposes NVIDIA Tesla/Datacenter GPU VRAM as CPU-addressable memory through Linux's Heterogeneous Memory Management (HMM) subsystem. Not swap. Not a block device. Actual memory with struct page backing, transparent page migration, and full kernel integration.

I'd guess that that'll probably perform substantially better.

It looks like they presently only target older cards, though.

[–] tal@lemmy.today 24 points 11 hours ago* (last edited 11 hours ago) (3 children)

This world is getting dumber and dumber.

Ehhh...I dunno.

Go back 20 years and we had similar articles, just about the Web, because it was new to a lot of people then.

searches

https://www.belfasttelegraph.co.uk/news/internet-killed-my-daughter/28397087.html

Internet killed my daughter

https://archive.ph/pJ8Dw

Were Simon and Natasha victims of the web?

https://archive.ph/i9syP

Predators tell children how to kill themselves

And before that, I remember video games.

It happens periodically: something new shows up, and then you'll have people concerned about any potential harm associated with it.

https://en.wikipedia.org/wiki/Moral_panic

A moral panic, also called a social panic, is a widespread feeling of fear that some evil person or thing threatens the values, interests, or well-being of a community or society.[1][2][3] It is "the process of arousing social concern over an issue",[4] usually elicited by moral entrepreneurs and sensational mass media coverage, and exacerbated by politicians and lawmakers.[1][4] Moral panic can give rise to new laws aimed at controlling the community.[5]

Stanley Cohen, who developed the term, states that moral panic happens when "a condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests".[6] While the issues identified may be real, the claims "exaggerate the seriousness, extent, typicality and/or inevitability of harm".[7] Moral panics are now studied in sociology and criminology, media studies, and cultural studies.[2][8] It is often academically considered irrational (see Cohen's model of moral panic, below).

Examples of moral panic include the belief in widespread abduction of children by predatory pedophiles[9][10][11] and belief in ritual abuse of women and children by Satanic cults.[12] Some moral panics can become embedded in standard political discourse,[2] which include concepts such as the Red Scare[13] and terrorism.[14]

Media technologies

Main article: Media panic

The advent of any new medium of communication produces anxieties among those who deem themselves as protectors of childhood and culture. Their fears are often based on a lack of knowledge as to the actual capacities or usage of the medium. Moralizing organizations, such as those motivated by religion, commonly advocate censorship, while parents remain concerned.[8][40][41]

According to media studies professor Kirsten Drotner:[42]

[E]very time a new mass medium has entered the social scene, it has spurred public debates on social and cultural norms, debates that serve to reflect, negotiate and possibly revise these very norms.… In some cases, debate of a new medium brings about – indeed changes into – heated, emotional reactions … what may be defined as a media panic.

Recent manifestations of this kind of development include cyberbullying and sexting.[8]

I'm not sure that we're doing better than people in the past did on this sort of thing, but I'm not sure that we're doing worse, either.

[–] tal@lemmy.today 16 points 22 hours ago* (last edited 22 hours ago)

Actually, whether or not it's permitted is, surprisingly, an undecided point in case law.

The case law here is Goldwater v. Carter, but the Supreme Court ruled on a technicality rather than the major question.

https://en.wikipedia.org/wiki/Goldwater_v._Carter

Goldwater v. Carter, 444 U.S. 996 (1979), was a United States Supreme Court case in which the Court dismissed a lawsuit filed by Senator Barry Goldwater and other members of the United States Congress challenging the right of President Jimmy Carter to unilaterally nullify the Sino-American Mutual Defense Treaty, which the United States had signed with the Republic of China, so that relations could instead be established with the People's Republic of China.

EDIT: I've brought it up before because a somewhat-analogous issue was also surprisingly undecided in UK case law: whether the Prime Minister had the power to withdraw the UK from the EU without going to Parliament. There was a major legal tussle in the UK over it.

[–] tal@lemmy.today 6 points 1 day ago

!patientgamers@sh.itjust.works looked smug as hell. They'd been telling everyone for years.

[–] tal@lemmy.today 43 points 1 day ago

Summary created by Smart Answers AI

chuckles

[–] tal@lemmy.today 10 points 1 day ago (7 children)

Frankly, if I were about to initiate a conflict with the US, I'd rather have Trump running the country, but you do you.

[–] tal@lemmy.today 0 points 1 day ago (1 children)

And why Bash and not another shell?

I chose it for my example because I happen to use it. You could use another shell, sure.

Should we consider “throwaway” anything that supports interactive mode of your daily driver you chose in your default terminal prompt?

Interactive mode is a good case for throwaway code, but one-off scripts would also work.

144
submitted 1 week ago* (last edited 1 week ago) by tal@lemmy.today to c/technology@lemmy.world
 

I think that it's interesting to look back at calls that were wrong to try to help improve future ones.

Maybe it was a tech company that you thought wouldn't make it and did well or vice versa. Maybe a technology you thought had promise and didn't pan out. Maybe a project that you thought would become the future but didn't or one that you thought was going to be the next big thing and went under.

Four from me:

  • My first experience with the World Wide Web was a rather unstable version of lynx on a terminal. I was pretty unimpressed. Compared to the gopher clients of the time, it was harder to read and harder to navigate around, and the VAX/VMS build I was using crashed frequently. I wasn't convinced that it was going to go anywhere. The Web has obviously done rather well since then.

  • In the late 1990s, Apple was in a pretty dire state, and a number of people, including myself, didn't think that they likely had much of a future. Apple turned things around and became the largest company in the world by market capitalization for some time, and remains quite healthy.

  • When I first ran into it, I was skeptical that Wikipedia would manage to stave off spam and parties with an agenda sufficiently to remain useful as it became larger. I think that it's safe to say that Wikipedia has been a great success.

  • After YouTube throttled per-stream download speeds, rendering youtube-dl much less useful, the yt-dlp project came to the fore, which worked around this with parallel downloads. I thought that it was very likely that YouTube wouldn't tolerate this; it seems to me to have all the drawbacks of youtube-dl from their standpoint, plus maybe more, and it shouldn't be too hard to detect. But at least so far, they haven't throttled or blocked it.

Anyone else have some of their own that they'd like to share?

367
submitted 5 months ago* (last edited 5 months ago) by tal@lemmy.today to c/world@lemmy.world
 

Japan recorded the highest ever temperature of 41.2 degrees Celsius on Wednesday, beating the previous high of 41.1 C marked in 2018 and 2020. Authorities are strongly urging people to take precautions to avoid risks of heatstroke.

The mercury hit 41.2 C, above human body temperature, in the city of Tanba, Hyogo Prefecture, at 14:39, while two cities — Fukuchiyama in Kyoto and Nishiwaki in Hyogo — also recorded extremely high temperatures of 40.6 C and 40 C, respectively.
