[–] litchralee@sh.itjust.works 1 points 14 hours ago

I've seen the suggestion of buying a GUA subnet, purely to use as a routable-but-unique prefix that will never collide, and will always win over ULA or Legacy IP routes. When I last checked, it was something like €1 for a /48 off of someone's /32 prefix, complete with a letter of authorization and reverse IP delegation. So it could be routable, if one so chooses.

[–] litchralee@sh.itjust.works 15 points 1 day ago* (last edited 1 day ago) (3 children)

https://ipv6now.com.au/primers/IPv6Reasons.php

Basically, Legacy IP (v4) is a dead end. Under the original allocation scheme, it should have run out in the early 1990s. But the Internet explosion meant TCP/IP(v4) was locked in, and so NAT was introduced to stave off address exhaustion. That caused huge problems that persist to this day, like mismanagement of firewalls and the need to do port-forwarding. It also broke end-to-end connectivity, which requires additional workarounds like STUN/TURN that continue to plague gamers and video conferencing software.

And because of that scarcity, it's become a land grab where rich companies and countries hoard the limited addresses in circulation, creating haves (North America, Europe) and have-nots (Africa, China, India).

The case for v6 is technical, moral, and even economic: one cannot escape Big Tech or American hegemony while still having to buy IPv4 space on the open market. Czechia and Vietnam are case studies in pushing for all-IPv6, both to bolster their domestic technological familiarity and to escape the broad problems with Business As Usual.

Accordingly, there are now three classes of Internet users: v4-only, dual-v4-and-v6, and v6-only. Surprisingly, v6-only is very common now on mobile networks in countries that never had many v4 addresses to begin with. And Apple requires all App Store apps to function correctly in a v6-only environment. At a minimum, everyone should have access to dual-stack IP networks, so they can reach services that might be v4-only or v6-only.
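
If you want to see which class a given service falls into, a quick way is to ask DNS what it publishes. Here's a rough Python sketch (the hostnames are just placeholders I picked, not anything specific to this thread):

```python
# Minimal sketch: check which address families a service publishes.
import socket

def address_families(host: str, port: int = 443) -> set[str]:
    """Return the set of IP versions that DNS advertises for a host."""
    families = set()
    for family, *_ in socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP):
        if family == socket.AF_INET:
            families.add("IPv4")
        elif family == socket.AF_INET6:
            families.add("IPv6")
    return families

if __name__ == "__main__":
    for host in ("example.com", "ipv6.google.com"):  # example hostnames only
        print(host, "->", address_families(host))
```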

In due course, the unstoppable march of time will leave v4-only users in the past.

[–] litchralee@sh.itjust.works 16 points 1 day ago* (last edited 1 day ago) (2 children)

You might also try asking on !ipv6@lemmy.world .

Be advised that even if a VPN offers IPv6, they may not necessarily offer it sensibly. For example, some might only give you a single address (aka a routed /128). That might work for basic web fetching, but it's wholly inadequate if you want the VPN to also give addresses to any VMs, or if you want each outbound connection to use a unique IP. And that's a fair ask, because a normal v6 network can usually do that, even though a typical Legacy IP network can't.

Some VPNs will offer you a /64 subnet, but their software might not check if your SLAAC-assigned address is leaking your physical MAC address. Your OS should have privacy-extensions enabled to prevent this, but good VPN software should explicitly check for that. Not all software does.
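
If you're curious what that leak looks like, the tell-tale sign is an EUI-64 interface identifier, which has ff:fe stuffed into the middle of your NIC's MAC address. A rough Python sketch of the check I'd want VPN software to perform (the sample address below is made up for illustration):

```python
# Rough sketch: does a SLAAC address embed an EUI-64 (MAC-derived) interface ID?
import ipaddress

def looks_like_eui64(addr: str) -> bool:
    """EUI-64 interface IDs have 0xfffe stuffed into the middle of the MAC."""
    packed = ipaddress.IPv6Address(addr).packed  # 16 bytes
    iid = packed[8:]                             # last 64 bits = interface ID
    return iid[3] == 0xFF and iid[4] == 0xFE

def embedded_mac(addr: str) -> str:
    """Recover the MAC address an EUI-64 identifier was derived from."""
    iid = bytearray(ipaddress.IPv6Address(addr).packed[8:])
    iid[0] ^= 0x02                               # flip the universal/local bit back
    mac_bytes = iid[0:3] + iid[5:8]              # drop the ff:fe filler
    return ":".join(f"{b:02x}" for b in mac_bytes)

if __name__ == "__main__":
    sample = "2001:db8::211:22ff:fe33:4455"      # hypothetical SLAAC address
    if looks_like_eui64(sample):
        print("MAC is leaking:", embedded_mac(sample))
```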

[–] litchralee@sh.itjust.works 1 points 2 days ago (1 children)

Connection tracking might not be strictly necessary for a reverse proxy, but it's worth discussing what happens if connection tracking is disabled or if the known-connections table runs out of room. For a well-behaved protocol like HTTP(S), which has a fixed inbound port (eg 80 or 443) and uses TCP, tracking a connection means being aware of the TCP connection state, which the destination OS already has to do. And since a reverse proxy terminates the TCP connection anyway, the effort for connection tracking is minimal.

For a poorly-behaved protocol like FTP -- which receives initial packets on a fixed inbound port but then spawns a separate port for outbound packets -- the effort of connection tracking means setting up the firewall to allow ongoing (ie established) traffic to pass in.

But these are the happy cases. In the event of a network issue that affects an HTTP payload sent from your reverse proxy toward the requesting client, a mid-way router will send back to your machine an ICMP packet describing the problem. If your firewall is not configured to let all ICMP packets through, then the only way in would be if conntrack looks up the connection details from its table and allows the ICMP packet in, as "related" traffic. This is not dissimilar to the FTP case above, but rather than a different port number, it's an entirely different protocol.

And then there's UDP tracking, which is relevant to QUIC. For hosting a service, UDP is connectionless, so for any inbound packet received on port XYZ, conntrack will permit an outbound packet from port XYZ. But that's redundant, since we presumably had to explicitly allow inbound port XYZ to expose the service. In the opposite case, where we want to access UDP resources on the network, an outbound packet to port ABC means conntrack will keep an entry to permit an inbound packet on port ABC. If you are doing lots of DNS lookups (typically using UDP), then that alone could swamp the conntrack table: https://kb.isc.org/docs/aa-01183

It may behoove you to first look at what's filling conntrack's table before looking to disable it outright. It may be possible to specifically skip connection tracking for anything already explicitly permitted through the firewall (eg 80/443). Or if the issue is due to numerous DNS resolution requests from trying to look up spam source IPs, then perhaps the logs should not do a synchronous DNS lookup, or you can also skip connection tracking for DNS.
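
As a starting point, something like this rough Python sketch (my own, not from the ISC article) can tally what's currently in the table by protocol and destination port; if port-53 UDP dominates, you know where to look:

```python
# Quick-and-dirty summary of the conntrack table. Assumes /proc/net/nf_conntrack
# is readable (usually needs root and the nf_conntrack module loaded); the exact
# line layout can vary between kernels.
from collections import Counter

def conntrack_summary(path: str = "/proc/net/nf_conntrack") -> Counter:
    """Count conntrack entries per (protocol, destination port)."""
    counts: Counter = Counter()
    with open(path) as f:
        for line in f:
            fields = line.split()
            proto = fields[2]                     # e.g. "tcp", "udp", "icmp"
            dport = next((fld.split("=")[1] for fld in fields
                          if fld.startswith("dport=")), "-")
            counts[(proto, dport)] += 1
    return counts

if __name__ == "__main__":
    for (proto, dport), n in conntrack_summary().most_common(10):
        print(f"{proto:5} dport={dport:>5}  {n} entries")
```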

[–] litchralee@sh.itjust.works 9 points 5 days ago (1 children)

https://github.com/Overv/vramfs

Oh, it's a user space (FUSE) driver. I was rather hoping it was an out-of-tree Linux kernel driver, since using FUSE will: 1) always pass back to userspace, which costs performance, and 2) destroy any possibility of DMA-enabled memory operations (DPDK is a possible exception). I suppose if the only objective was to store files in VRAM, this does technically meet that, but it's leaving quite a lot on the table, IMO.
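
If you want to quantify what's being left on the table, a crude benchmark against tmpfs would show it. This is just a sketch; /mnt/vram is an assumed mount point for vramfs, and /dev/shm stands in for tmpfs:

```python
# Crude throughput comparison: write/read the same file on a vramfs mount
# versus tmpfs. Mount points are assumptions; adjust as needed.
import os, time

def throughput_mb_s(directory: str, size_mb: int = 256) -> tuple[float, float]:
    """Return (write, read) throughput in MB/s for a test file in `directory`."""
    path = os.path.join(directory, "bench.bin")
    payload = os.urandom(1024 * 1024)            # 1 MiB of random data

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(payload)
        f.flush()
        os.fsync(f.fileno())
    write_speed = size_mb / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    read_speed = size_mb / (time.perf_counter() - start)

    os.remove(path)
    return write_speed, read_speed

if __name__ == "__main__":
    for mount in ("/mnt/vram", "/dev/shm"):      # hypothetical vramfs mount vs tmpfs
        w, r = throughput_mb_s(mount)
        print(f"{mount}: write {w:.0f} MB/s, read {r:.0f} MB/s")
```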

If this were a kernel module, the filesystem performance would presumably improve, limited by how the VRAM is exposed by OpenCL (ie very fast if it's just all mapped into PCIe). And if it were basically offering VRAM as PCIe memory, then the VRAM could potentially be used for certain niche RAM use cases, like hugepages: some applications need large quantities of memory, plus a guarantee that it won't be evicted from RAM, and whose physical addresses can be resolved from userspace (eg DPDK, high-performance compute). If such a driver could offer special hugepages which are backed by VRAM, then those applications could benefit.

And at that point, on systems where the PCIe address space is unified with the system address space (eg x86), then it's entirely plausible to use VRAM as if it were hot-insertable memory, because both RAM and VRAM would occupy known regions within the system memory address space, and the existing MMU would control which processes can access what parts of PCIe-mapped-VRAM.

Is it worth re-engineering the Linux kernel memory subsystem to support RAM over PCIe? Uh, who knows. Though I've always liked the thought of DDR on PCIe cards. All technologies are doomed to reinvent PCIe, I think someone from Level1Techs once said.

[–] litchralee@sh.itjust.works 10 points 5 days ago (3 children)

Ok, I have to know: how is this done, and what do people use it for?

[–] litchralee@sh.itjust.works 2 points 6 days ago

It might not be used frequently, but perhaps "incomprehension"?

[–] litchralee@sh.itjust.works 14 points 1 week ago* (last edited 1 week ago)

In the past, we did have a need for purpose-built skyscrapers meant to house dense racks of electronic machines, but it wasn't for data centers. No, it was for telephone equipment. See the AT&T Long Lines building in NYC, a windowless monolith of a structure in Lower Manhattan. It stands at 170 meters (550 ft).

This NYC example shows that it's entirely possible to build telephone equipment upward, and doing so was very necessary considering the cost of real estate in that city. But if we look at the differences between a telephone exchange and a data center, we quickly realize why the latter can't practically achieve skyscraper heights.

Data centers consume enormous amounts of electric power, and this produces a near-equivalent amount of heat. The chiller units for a data center are themselves estimated to consume something around a quarter of the site's power consumption, just to dissipate the heat energy of the computing equipment. For a data center that's a few stories tall, the heat density per unit of land area is low enough that a rooftop chiller can cool it. But if the data center grows taller, it has a lower ratio of rooftop area to interior volume.

This is not unlike the ratio of surface area to interior volume, which is a limiting factor for how large (or small) animals can be, before they overheat themselves. So even if we could mount chiller units up the sides of a building -- which we can't, because heat from the lower unit would affect an upper unit -- we still have this problem of too much heat in a limited land area.
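
To put some made-up numbers on that rooftop argument (these are purely illustrative densities, not real data-center figures):

```python
# Back-of-the-envelope sketch: the roof area stays fixed while the heat load
# grows with every extra floor. All numbers are illustrative assumptions.
FOOTPRINT_M2 = 10_000          # land area of the building
KW_PER_M2_FLOOR = 2.0          # assumed IT power density per floor area
KW_REJECTED_PER_M2_ROOF = 20   # assumed heat a rooftop chiller can reject per m^2

for floors in (1, 3, 10, 40):
    heat_kw = floors * FOOTPRINT_M2 * KW_PER_M2_FLOOR
    roof_capacity_kw = FOOTPRINT_M2 * KW_REJECTED_PER_M2_ROOF
    ratio = heat_kw / roof_capacity_kw
    print(f"{floors:>2} floors: {ratio:.1f}x the rooftop cooling capacity needed")
```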

[–] litchralee@sh.itjust.works 2 points 1 week ago (1 children)

For my own networks, I've been using IPv6 subnets for years now, and have NAT64 translation for when they need to access Legacy IP (aka IPv4) resources on the public Internet.

Between your two options, I'm more inclined to recommend the second solution: although it requires renumbering existing containers to the new subnet, you would still have a single subnet for all your containers, just a bigger one. Whereas the first solution would either: A) preclude containers on the first bridge from directly talking to containers on the second bridge, or B) require you to enable some sort of awful NAT44 translation to make the two work together.

So if IPv6 and its massive, essentially-unlimited ULA subnets are not an option, then I'd still go with the second solution, which is a bigger-but-still-singular subnet.
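
For what it's worth, carving up a ULA prefix is about as simple as it gets. A quick Python sketch of the RFC 4193 approach (a random 40-bit Global ID under fd00::/8), handing out one /64 per container bridge:

```python
# Minimal sketch of picking a random ULA prefix (RFC 4193 style): fd00::/8
# plus a 40-bit random Global ID gives a /48, which holds 65,536 /64 subnets.
import secrets
import ipaddress

def random_ula_48() -> ipaddress.IPv6Network:
    """Generate a random fdXX:XXXX:XXXX::/48 ULA prefix."""
    global_id = secrets.randbits(40)
    prefix_int = (0xFD << 120) | (global_id << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

if __name__ == "__main__":
    prefix = random_ula_48()
    print("ULA /48:", prefix)
    # carve the first few /64s out of it, one per container bridge
    for i, subnet in zip(range(3), prefix.subnets(new_prefix=64)):
        print("  bridge", i, "->", subnet)
```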

[–] litchralee@sh.itjust.works 2 points 1 week ago* (last edited 1 week ago)

The French certainly benefitted from the earlier Jesuit work, although the French did do their own attempts at "westernizing" parts of the language. I understand that today in Vietnam, the main train station in Hanoi is called "Ga Hà Nội", where "ga" comes from the French "gare", meaning train station (eg Gare du Nord in Paris). This kinda makes sense since the French would have been around when railways were introduced in the 19th Century.

Another example is what is referred to in English as the "Gulf of Tonkin incident", referring to the waters off the coast of north Vietnam. Here, Tonkin comes from the French transliteration of Đông Kinh (東京), which literally means "eastern capital".

So far as I'm aware, neither English nor French uses the name Tonkin anymore (it's very colonialism-coded), and modern Vietnamese calls those waters by a different name anyway. There's also another problem: that name is already in use by something else, namely the Tokyo metropolis in Japan.

In Japanese, Tokyo is written as 東京 (eastern capital) in reference to it being east of the cultural and historical seat of the Japanese Emperor in Kyoto (京都, meaning "capital metropolis"). Although most Vietnamese speakers would just say "Tokyo" to refer to the city in Japan, if someone did say "Đông Kinh", people are more likely to think of Tokyo (or have no clue) than to think of an old bit of French colonial history. These sorts of homophones exist between the CJKV languages all the time.

And as a fun fact, if Tokyo is the most well-known "eastern capital" when considering the characters in the CJKV languages, we also have the northern capital (北京, Beijing, or formerly "Peking") and the southern capital (南京, Nanjing). There is no real consensus on where the "western capital" is.

Vietnamese speakers will in fact say Bắc Kinh when referring to the Chinese capital city rather than "Beijing", and I'm not totally sure why it's an exception like that. Then again, some newspapers will also print the capital city of the USA as Hoa Thịnh Đốn (華盛頓) rather than "Washington, DC", because that's how the Chinese wrote it down first, which was then brought into Vietnamese and later rendered in the modern script. To be abundantly clear, it shouldn't be surprising to have a progression from something like "Wa-shing-ton" to "hua-shen-dun" to "hoa-thinh-don".

[–] litchralee@sh.itjust.works 7 points 1 week ago* (last edited 1 week ago) (2 children)

As a case study, I think Vietnamese is especially apt to show how the written language develops in parallel and sometimes at odds with the spoken language. The current alphabetical script of Vietnamese was only adopted for general use in the late 19th Century, in order to improve literacy. Before that, the grand majority of Vietnamese written works were in a logographic system based on Chinese characters, but with extra Vietnamese-specific characters that conveyed how the Vietnamese would pronounce those words.

The result was that Vietnamese scholars pre-20th Century basically had to learn most of the Chinese characters and their Cantonese pronunciations (not Mandarin, since that's the dialect that's geographically farther away), then memorize how they are supposed to be read in Vietnamese, compounded by characters that only sort-of convey hints about the pronunciation. This is akin to writing a whole English essay using Japanese katakana; try writing "ornithology" like that.

Also, the modern Vietnamese script is a work of Portuguese Jesuit scholars, who were interested in rendering the Vietnamese language into a more familiar script that could be read phonetically, so that words are pronounced letter-by-letter. That process, however faithful they could manage it, necessarily obliterates some nuance that a logographic language can convey. For example, the word bầu can mean either a gourd or to be pregnant. But in the old script, no one would confuse 匏 (gourd) with 保 (to protect; pregnant) in the written form, even though the spoken form requires context to distinguish the two.

Some Vietnamese words were also imported into the language from elsewhere, having not previously existed in spoken Vietnamese. So the pronunciation would hew closer to the origin pronunciation, and then to preserve the lineage of where the pronunciation came from, the word might also be written slightly differently. For example, nhôm (meaning aluminum) draws from the last syllable of the French pronunciation of aluminium. Loanwords -- and there are many in Vietnamese, going back centuries -- will mess up the writing system too.

[–] litchralee@sh.itjust.works 3 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I'm not a computer engineer, but I did write this comment for a question on computer architecture. At the very outset, we should clarify that RAM capacity (# of GBs) and clock rate (aka frequency; eg 3200 MHz) are two entirely different quantities, and generally cannot be used to compensate for each other. It is akin to trying to halve an automobile's fuel tank in order to double the car's top speed.

Since your question is about performance, we have to look at both the technical impacts to the system (primarily from the reduced clock rate) and also the perceptual changes (due to having more RAM capacity). Only by considering both together can we arrive at some sort of coherent answer.

You've described your current PC as having an 8 GB stick of DDR4 3200 MHz. This means that the memory controller in your CPU (pre-DDR4 era CPUs would have put the memory controller on the motherboard) is driving the RAM at an effective rate of 3200 million transfers per second (3200 MT/s). A single clock cycle is a square wave that goes up and then goes down. DDR stands for "Double Data Rate", and means that a group of bits (called a transaction) is sent on both the up and the down of that single clock cycle. So the actual clock only runs at 1600 MHz, but because two transactions fit into each cycle, the module is advertised as DDR4-3200.
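
To put numbers on that (a 64-bit memory channel moves 8 Bytes per transfer), here's the quick arithmetic as a Python snippet:

```python
# Quick arithmetic sketch: theoretical peak bandwidth per memory channel.
def ddr_bandwidth_gb_s(mega_transfers_per_sec: int, bus_bytes: int = 8) -> float:
    """Bandwidth in GB/s for a 64-bit (8-Byte) channel at the given MT/s."""
    return mega_transfers_per_sec * 1e6 * bus_bytes / 1e9

for name, mt_s in (("DDR4-3200", 3200), ("DDR4-2400", 2400)):
    clock_mhz = mt_s / 2   # double data rate: two transfers per clock cycle
    print(f"{name}: {clock_mhz:.0f} MHz clock, "
          f"{ddr_bandwidth_gb_s(mt_s):.1f} GB/s per channel")
```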

Some background about DDR versus other RAM types, when used in PCs: the DDR DIMMs (aka sticks) are typically made of 8 visually-distinct chips on each side of the DIMM, although some ECC-capable DIMMs will have 9 chips. These are the small black boxes that you can see, but they might be underneath the DIMM's heatsink, if it has one. The total capacity of these sixteen chips on your existing stick is 8 GB, so each chip should be 512 MB. A rudimentary way to store data would be for the first 512 MB to be stored in the first chip, then the next 512 MB in the second chip, and so on. But DDR DIMMs do a clever trick to increase performance: the data is "striped" across all 8 or 16 chips. That is, to retrieve a single Byte (8 bits), the eight chips on one face of the DIMM are instructed to return their stored bit simultaneously, and the memory controller composes these into a single Byte to send to the CPU. This all happens in the time of a single transaction.

We can actually do that on both sides of the DIMM, so two Bytes could be retrieved at once. This is known as dual-rank memory. But why should each chip only return a single bit? What if each chip could return 4 bits at a time? If all sixteen chips return this 4-bit quantity (a chip width of x4), we would get 64 bits (8 Bytes), still in the time of a single transaction. Compare this to earlier, where we didn't stripe the bits across all sixteen chips: it would have taken 16 times longer for one chip to return what 16 chips can return in parallel. Free performance!

But why am I mentioning these engineering details, which has already been built into the DIMM you already have? The reason is that it's the necessary background to explain the next DDR hat-trick for memory performance: multi-channel memory. The most common is dual channel memory, and I'll let this "DDR4 for Dummies" quote explain:

A memory channel refers to DIMM slots tied to the same wires on the CPU. Multiple memory channels allow for faster operation, theoretically allowing memory operations to be up to four times as fast. Dual channel architecture with 64-bit systems provides a 128-bit data path. Memory is installed in banks, and you have to follow a couple of rules to optimize performance.

Basically, dual-channel is kinda like having two memory controllers for the CPU, each driving half of the DDR in the system. On an example system with two 1 GB sticks of RAM, we could have each channel driving a single stick. A rudimentary use would be if the first 1 GB of RAM came from channel 1, and then the second 1 GB came from channel 2. But from what we saw earlier with dual-rank memory, this is leaving performance on the table. Instead, we should stripe/interlace memory accesses across both channels, so that each stick of RAM returns 8 Bytes, for a total of 16 Bytes in the time of a single transaction.

So now let's answer the technical aspect of your question. If your system supports dual-channel memory, and you install that second DIMM into the correct slot to make use of that feature, then in theory, memory throughput should double, because the accesses are striped across two independent channels. The downside is that for that whole striping thing to work, all channels must be running at the same speed, or else one channel would return data too late. Since you have an existing 3200 MHz stick but the new stick would be 2400 MHz, the only thing the memory controller can do is run the existing stick at the lower speed of 2400 MHz. Rough math says that the existing stick is now operating at only 75% of its performance, but with the doubling from the second channel, that could lead to roughly 150% of the original performance. So still a net gain, but less than ideal.
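
Written out, that rough math looks like this (the bandwidth figures are the theoretical per-channel peaks from above):

```python
# Illustrative only: mixed-speed dual-channel versus the original single stick.
single_3200 = 25.6            # GB/s, one channel at DDR4-3200
single_2400 = 19.2            # GB/s, one channel at DDR4-2400

# Mixing sticks forces both channels down to the slower speed,
# but dual-channel still doubles the width.
mixed_dual = 2 * single_2400
print(f"one stick  @3200:     {single_3200:.1f} GB/s (100%)")
print(f"two sticks @2400 x2:  {mixed_dual:.1f} GB/s "
      f"({mixed_dual / single_3200:.0%} of the single-stick figure)")
```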

The perceptual impact has to do with how a machine might behave now that it has 16 GB of memory, having increased from 8 GB. If you were only doing word processing, your existing 8 GB might not have been fully utilized, with the OS basically holding onto the spare capacity. But if instead you had 50 browser tabs open, then your 8 GB of RAM might have been entirely utilized, with the OS having to shuffle memory onto your hard drive or SSD. This is because those unused tabs still consume memory, despite not being actively in front of you. In some very extreme cases, this "thrashing" causes the system to slow to a crawl, because the shuffling effort is taking up most of the RAM's bandwidth. If increasing from 8 GB to 16 GB would prevent thrashing, then the computer would overall feel faster than before, and that's on top of the theoretical 50% performance gain from earlier.

Overall, it's not ideal to mix DDR speeds, but if the memory controller can drive all DIMMs at the highest common clock speed and with multi-channel memory, then you should still get a modest boost in technical performance, and possibly a boost in perceived performance. But I would strongly recommend matched-speed DDR, if you can.

1
submitted 9 months ago* (last edited 9 months ago) by litchralee@sh.itjust.works to c/newpipe@lemmy.ml
 

(fairly recent NewPipe user; ver 0.27.6)

Is there a way to hide particular live streams from showing up on the "What's New" tab? I found the option in Settings->Content->Fetch Channel Tabs, which will prevent all live streams from showing in the tab. But I'm looking for an option to selectively hide only certain live streams from the tab.

Some of my YouTube channels have 24/7 live streams (eg Arising Empire), which will always show at the top of the page. But I don't want to hide all live streams from all channels, since I do want to see if new live streams appear, usually ones that aren't 24/7.

Ideally, there'd be an option to long-press on a live stream in the tab, one which says "Hide From Feed", which would then prevent that particular stream ID from appearing in the feed for subsequent fetches.

From an implementation perspective, I imagine there would be some UI complexity in how to un-hide a stream, and to list out all hidden streams. If this isn't possible yet, I can try to draft a feature proposal later.

 

I'm trying to recall a sort-of back-to-back chaise longue or sofa, probably from a scene in American TV or film -- possibly of the mid-century or modern style -- where I think two characters are having an informal business meeting. But the chaise longue itself is a single piece of furniture with two sides, such that each character can stretch their legs while still being able to face the other for the meeting, with a short wall separating them.

That is to say, they are lying anti-parallel along the chaise longue, if that makes any sense. The picture here is the closest thing I could find on Google Images.

So my questions are: 1) what might this piece of furniture be called? A sofa, chaise longue, settee, something else? And 2) does anyone know of comparable pieces of furniture from TV or film? Additional photos might help me narrow my search, as I'm somewhat interested in trying to buy such a thing. Thanks!

EDIT 1: it looks like "tete a tete chair" is the best keyword so far for this piece of furniture

EDIT 2: the term "conversation chair" also yields a number of results, including a particular Second Empire style known as the "indiscreet", having room for three people!
