litchralee

joined 2 years ago
[–] litchralee@sh.itjust.works 14 points 4 days ago* (last edited 4 days ago)

In the past, we did have a need for purpose-built skyscrapers meant to house dense racks of electronic machines, but it wasn't for data centers. No, it was for telephone equipment. See the AT&T Long Lines building in NYC, a windowless monolith of a structure in Lower Manhattan. It stands at 170 meters (550 ft).

This NYC example shows that it's entirely possible to build telephone equipment upward, and doing so was very necessary considering the cost of real estate in that city. But if we look at the difference between a telephone exchange and a data center, we quickly realize why the latter can't practically achieve skyscraper heights.

Data centers consume enormous amounts of electric power, and this produces a near-equivalent amount of heat. The chiller units for a data center are themselves estimated to consume something around a quarter of the site's power, just to dissipate the heat energy of the computing equipment. For a data center that's a few stories tall, the heat per unit of land area is low enough that a roof-top chiller can cool it. But if the data center grows taller, it has a lower ratio of rooftop area to interior volume.

This is not unlike the ratio of surface area to interior volume, which is a limiting factor for how large (or small) animals can be, before they overheat themselves. So even if we could mount chiller units up the sides of a building -- which we can't, because heat from the lower unit would affect an upper unit -- we still have this problem of too much heat in a limited land area.
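To make the rooftop limit concrete, here's a toy back-of-the-envelope sketch; every number in it is an invented assumption for illustration, not a real data-center figure:

```python
# Toy sketch of the rooftop-cooling limit: heat grows with every floor
# added, but the roof (where the chillers sit) never gets any bigger.
# All constants below are made-up illustrative values.

FOOTPRINT_M2 = 5_000      # land area of the building (assumed)
KW_PER_FLOOR = 2_000      # IT heat load added per floor (assumed)
ROOF_KW_PER_M2 = 3        # heat a rooftop chiller can reject per m^2 (assumed)

# Fixed ceiling on cooling: the roof area never grows with height.
roof_capacity_kw = FOOTPRINT_M2 * ROOF_KW_PER_M2

for floors in (2, 4, 8, 16):
    heat_kw = floors * KW_PER_FLOOR        # heat scales linearly with height
    ratio = heat_kw / roof_capacity_kw
    print(f"{floors:2d} floors: {heat_kw} kW of heat, {ratio:.0%} of roof capacity")
```

With these made-up numbers, the building blows past its rooftop cooling budget somewhere between 4 and 8 floors, which is the same surface-to-volume squeeze described above.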

[–] litchralee@sh.itjust.works 2 points 6 days ago (1 children)

For my own networks, I've been using IPv6 subnets for years now, and have NAT64 translation for when they need to access Legacy IP (aka IPv4) resources on the public Internet.

Between your two options, I'm more inclined to recommend the second solution, because although it requires renumbering existing containers to the new subnet, you would still have one subnet for all your containers, but it's bigger now. Whereas the first solution would either: A) preclude containers on the first bridge from directly talking to containers on the second bridge, or B) require some sort of awful NAT44 translation to make the two work together.

So if IPv6 and its massive, essentially-unlimited ULA subnets are not an option, then I'd still go with the second solution, which is a bigger-but-still-singular subnet.
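As an illustration of the second option, and assuming (hypothetically) a bridge that started life as a /24, Python's ipaddress module shows why widening the prefix keeps every existing address valid:

```python
import ipaddress

# Sketch of the "one bigger subnet" option. The concrete prefixes here
# are assumptions for illustration; substitute your actual bridge subnet.
old = ipaddress.ip_network("172.18.0.0/24")   # existing, nearly-full bridge
new = ipaddress.ip_network("172.18.0.0/23")   # same base address, one bit wider

# Every address in the old subnet remains valid inside the new one, so
# "renumbering" is mostly just updating the bridge's prefix length.
assert old.subnet_of(new)

print(f"{old} -> {new}: {new.num_addresses - 2} usable hosts")
```

The same check works for any prefix pair, which is a quick way to confirm the grown subnet fully contains the old one before touching any container configs.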

[–] litchralee@sh.itjust.works 2 points 6 days ago* (last edited 6 days ago)

The French certainly benefitted from the earlier Jesuit work, although the French made their own attempts at "westernizing" parts of the language. I understand that today in Vietnam, the main train station in Hanoi is called "Ga Hà Nội", where "ga" comes from the French "gare", meaning train station (eg Gare du Nord in Paris). This kinda makes sense since the French would have been around when railways were introduced in the 19th Century.

Another example is what is referred to in English as the "Gulf of Tonkin incident", referring to the waters off the coast of north Vietnam. Here, Tonkin comes from the French transliteration of Đông Kinh (東京), which literally means "eastern capital".

So far as I'm aware, neither English nor French uses the name Tonkin anymore (it's very colonialism-coded), and modern Vietnamese calls those waters by a different name anyway. There's also another problem: the name Đông Kinh is already in use by something else, namely the Tokyo metropolis in Japan.

In Japanese, Tokyo is written as 東京 (eastern capital) in reference to it being east of the cultural and historical seat of the Japanese Emperor in Kyoto (京都, meaning "capital metropolis"). Although most Vietnamese speakers would just say "Tokyo" to refer to the city in Japan, if someone did say "Đông Kinh", people are more likely to think of Tokyo (or have no clue) than to think of an old bit of French colonial history. These sorts of homophones exist between the CJKV languages all the time.

And as a fun fact, if Tokyo is the most well-known "eastern capital" when considering the characters in the CJKV languages, we also have the northern capital (北京, Beijing, or formerly "Peking") and the southern capital (南京, Nanjing). There is no real consensus on where the "western capital" is.

Vietnamese speakers will in fact say Bắc Kinh when referring to the Chinese capital city rather than "Beijing", and I'm not totally sure why it's an exception like that. Then again, some newspapers will also print the capital city of the USA as Hoa Thịnh Đốn (華盛頓) rather than "Washington, DC", because that's how the Chinese wrote it down first, and it was then brought into Vietnamese and later converted to the modern script. To be abundantly clear, it shouldn't be surprising to have a progression from something like "Wa-shing-ton" to "hua-shen-dun" to "hoa-thinh-don".

[–] litchralee@sh.itjust.works 7 points 6 days ago* (last edited 6 days ago) (2 children)

As a case study, I think Vietnamese is especially apt to show how the written language develops in parallel and sometimes at odds with the spoken language. The current alphabetical script of Vietnamese was only adopted for general use in the late 19th Century, in order to improve literacy. Before that, the vast majority of Vietnamese written works were in a logographic system based on Chinese characters, but with extra Vietnamese-specific characters that conveyed how the Vietnamese would pronounce those words.

The result was that Vietnamese scholars pre-20th Century basically had to learn most of the Chinese characters and their Cantonese pronunciations (not Mandarin, since that's the dialect that's geographically farther away), and then memorize how they are supposed to be read in Vietnamese, compounded further by characters that only sort-of convey hints about the pronunciation. This is akin to writing a whole English essay using Japanese katakana; try writing "ornithology" like that.

Also, the modern Vietnamese script is a work of Portuguese Jesuit scholars, who were interested in rendering the Vietnamese language into a more familiar script that could be read phonetically, so that words are pronounced letter-by-letter. That process, however faithful they could manage it, necessarily obliterates some nuance that a logographic language can convey. For example, the word bầu can mean either a gourd or to be pregnant. But in the old script, no one would confuse 匏 (gourd) with 保 (to protect; pregnant) in the written form, even though the spoken form requires context to distinguish the two.

Some Vietnamese words were also imported into the language from elsewhere, having not previously existed in spoken Vietnamese. So the pronunciation would hew closer to the origin pronunciation, and then to preserve the lineage of where the pronunciation came from, the written word might also be written slightly differently. For example, nhôm (meaning aluminum) draws from the last syllable of how the French pronounce aluminum. Loanwords -- and there are many in Vietnamese, going back centuries -- will mess up the writing system too.

[–] litchralee@sh.itjust.works 3 points 1 week ago* (last edited 1 week ago) (1 children)

I'm not a computer engineer, but I did write this comment for a question on computer architecture. At the very onset, we should clarify that RAM capacity (# of GBs) and clock rate (aka frequency; eg 3200 MHz) are two entirely different quantities, and generally one cannot be used to compensate for the other. It is akin to trying to halve an automobile's fuel tank in order to double the top speed of the car.

Since your question is about performance, we have to look at both the technical impacts to the system (primarily from the reduced clock rate) and also the perceptual changes (due to having more RAM capacity). Only by considering both together can we arrive at some sort of coherent answer.

You've described your current PC as having an 8 GB stick of DDR4 3200 MHz. This means that the memory controller in your CPU (pre-DDR4 era CPUs would have put the memory controller on the motherboard) is driving the RAM at an effective rate of 3200 mega-transfers per second. A single clock cycle is a square wave that goes up and then comes down. DDR stands for "Double Data Rate", and means that a group of bits (called a transfer) is sent on both the up and the down of that single clock cycle. So the underlying I/O clock is really 1600 MHz, and the double-pumping yields 3200 million transfers per second (3200 MT/s). For this reason, this memory is advertised as DDR4-3200.
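To put numbers on the double-data-rate naming (the I/O clock of DDR4-3200 is 1600 MHz; the "3200" is the transfer rate), here is the arithmetic, assuming the standard 64-bit DDR bus:

```python
# DDR naming arithmetic for "DDR4-3200", assuming a standard 64-bit bus.
io_clock_mhz = 1600        # actual I/O clock of DDR4-3200
transfers_per_clock = 2    # "Double Data Rate": one on the rise, one on the fall
bus_width_bytes = 64 // 8  # a 64-bit channel moves 8 bytes per transfer

mt_per_s = io_clock_mhz * transfers_per_clock    # mega-transfers per second
bandwidth_mb_s = mt_per_s * bus_width_bytes      # peak bandwidth per channel

print(f"{mt_per_s} MT/s, {bandwidth_mb_s} MB/s peak per channel")
```

That 25.6 GB/s figure is the theoretical per-channel peak; real workloads see less due to refresh cycles, row activation, and so on.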

Some background about DDR versus other RAM types, when used in PCs: the DDR DIMMs (aka sticks) are typically made of 8 visually-distinct chips on each side of the DIMM, although some ECC-capable DIMMs will have 9 chips. These are the small black boxes that you can see, but they might be underneath the DIMM's heatsink, if it has one. The total capacity of these sixteen chips on your existing stick is 8 GB, so each chip should be 512 MB. A rudimentary way to store data would be for the first 512 MB to be stored in the first chip, then the next 512 MB in the second chip, and so on. But DDR DIMMs do a clever trick to increase performance: the data is "striped" across all 8 or 16 chips. That is, to retrieve a single Byte (8 bits), the eight chips on one face of the DIMM are instructed to return their stored bit simultaneously, and the memory controller composes these into a single Byte to send to the CPU. This all happens in the time of a single transaction.

We can actually do that on both sides of the DIMM, so two Bytes could be retrieved at once. This is known as dual-rank memory. But why should each chip only return a single bit? What if each chip could return 4 bits at a time? If all sixteen chips support this wider 4-bit data path, we would get 64 bits (8 Bytes), still in the same time as a single transaction. Compare to earlier where we didn't stripe the bits across all sixteen chips: it would have taken 16 times longer for one chip to return what 16 chips can return in parallel. Free performance!
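The striping idea can be sketched in a few lines. This toy model just splits one byte across eight hypothetical chips and recomposes it, the way the memory controller does within a single transaction:

```python
# Toy model of bit-striping: one byte spread across eight chips, each
# chip storing and returning a single bit in parallel.

def stripe_byte(value: int) -> list[int]:
    """Split a byte into the 8 bits that the 8 chips each store."""
    return [(value >> i) & 1 for i in range(8)]

def gather_byte(bits: list[int]) -> int:
    """The memory controller recomposes the chips' bits into one byte."""
    return sum(bit << i for i, bit in enumerate(bits))

# Round trip: what was striped out to the chips comes back intact.
original = 0b10110010
assert gather_byte(stripe_byte(original)) == original
print(stripe_byte(original))
```

The payoff in the real hardware is that all eight lookups happen simultaneously, so the latency is that of one chip access rather than eight sequential ones.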

But why am I mentioning these engineering details, which have already been built into the DIMM you already have? The reason is that they're the necessary background to explain the next DDR hat-trick for memory performance: multi-channel memory. The most common is dual-channel memory, and I'll let this "DDR4 for Dummies" quote explain:

A memory channel refers to DIMM slots tied to the same wires on the CPU. Multiple memory channels allow for faster operation, theoretically allowing memory operations to be up to four times as fast. Dual channel architecture with 64-bit systems provides a 128-bit data path. Memory is installed in banks, and you have to follow a couple of rules to optimize performance.

Basically, dual-channel is kinda like having two memory controllers for the CPU, each driving half of the DDR in the system. On an example system with two 1 GB sticks of RAM, we could have each channel driving a single stick. A rudimentary use would be if the first 1 GB of RAM came from channel 1, and then the second 1 GB came from channel 2. But from what we saw earlier with dual-rank memory, this is leaving performance on the table. Instead, we should stripe/interlace memory accesses across both channels, so that each stick of RAM returns 8 Bytes, for a total of 16 Bytes in the time of a single transaction.

So now let's answer the technical aspect of your question. If your system supports dual-channel memory, and you install that second DIMM into the correct slot to make use of that feature, then in theory, memory throughput should double, because accesses are striped across two independent channels. The downside is that for that whole striping thing to work, all channels must run at the same speed, or else one channel would return data too late. Since you have an existing 3200 MHz stick but the new stick would be 2400 MHz, the only thing the memory controller can do is run the existing stick at the lower speed of 2400 MHz. Rough math says that the existing stick would then operate at only 75% of its rated performance, but the doubling from dual-channel might lead to 150% of the original performance. So still a net gain, but less than ideal.
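The rough math in that paragraph, spelled out (the 2x factor is the idealized dual-channel doubling; real-world gains are smaller):

```python
# Mixing a 3200 MT/s stick with a 2400 MT/s stick: the controller clocks
# both down to the slower speed, then dual-channel striping doubles it.
fast_stick, slow_stick = 3200, 2400

common_speed = min(fast_stick, slow_stick)  # both sticks run at 2400
per_stick = common_speed / fast_stick       # fast stick at 75% of its rating
dual_channel = per_stick * 2                # idealized doubling across channels

print(f"per-stick: {per_stick:.0%}, dual-channel total: {dual_channel:.0%}")
```

So even in this idealized model, 150% of the single-stick baseline is the ceiling; matched 3200 MT/s sticks would instead give a theoretical 200%.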

The perceptual impact has to do with how a machine might behave now that it has 16 GB of memory, having increased from 8 GB. If you were only doing word processing, your existing 8 GB might not have been fully utilized, with the OS basically holding onto the spare capacity. But if instead you had 50 browser tabs open, then your 8 GB of RAM might have been entirely utilized, with the OS having to shuffle memory onto your hard drive or SSD. This is because those unused tabs still consume memory, despite not being actively in front of you. In some very extreme cases, this "thrashing" causes the system to slow to a crawl, because the shuffling effort takes up most of the system's time. If increasing from 8 GB to 16 GB would prevent thrashing, then the computer would overall feel faster than before, and that's on top of the theoretical 50% performance gain from earlier.

Overall, it's not ideal to mix DDR speeds, but if the memory controller can drive all DIMMs at the highest common clock speed and with multi-channel memory, then you should still get a modest boost in technical performance, and possibly a boost in perceived performance. But I would strongly recommend matched-speed DDR, if you can.

[–] litchralee@sh.itjust.works 6 points 1 week ago* (last edited 1 week ago)

Overall, it looks like you've done your homework, covering the major concerns. What I would add is that keeping an RPi cool is a consideration, since without even a tiny heatsink, the main chip gets awfully hot. Active cooling with a fan should be considered to prevent thermal throttling.

The same can apply to a laptop, since the intended use-case is with the monitor open and with the machine perched upon a flat and level surface. But they already have automatic thermal control, so the need for supplemental cooling is not very big.

Also, it looks like you've already considered an OS. But for other people's reference, an old x86 laptop (hopefully newer than i686) has a huge realm of potential OSes, including all the major *BSDs. Whereas I think only Ubuntu, Debian, and Raspbian are the major OSes targeting the RPi.

One last thing in favor of choosing the laptop: using what you have on hand is good economics and reduces premature ewaste, as well as fomenting the can-do attitude that's common to self hosting (see: !selfhosted@lemmy.world).

TL;DR: not insane. Don't forget IPv6 support.

[–] litchralee@sh.itjust.works 3 points 1 week ago

At the very minimum, gym in the morning (but after coffee/caffeine, plus the time for it to kick in) is the enlightened way. It helps if your gym is nearby or you have a !homegym@lemmy.world .

I personally also use the wee morning hours to reconcile my financial accounts, since ACH transactions in the USA will generally process a day faster if submitted before 10:30 ET.

[–] litchralee@sh.itjust.works 15 points 2 weeks ago* (last edited 2 weeks ago)

The photos taken by the sorting machines are of the outside of the envelope, and are necessary in order to perform OCR of the destination address and to verify postage. There is no general mechanism to photograph the contents of mailpieces, and given how enormous the operations of the postal service are, casting a wide surveillance net to capture the contents of mailpieces would be simply impractical, and could hardly stay secret before someone eventually spilled the beans.

That said, what you describe is a method of investigation known as mail cover, where the info on the outside of a recipient's mail can be useful. For example, getting lots of mail from a huge number of domestic addresses in plain envelopes, the sort that victims of remittance fraud would have on hand, could be a sign that the recipient is laundering fraudulent money. Alternatively, sometimes the envelope used by the sender is so thin that the outside photo accidentally reveals the contents. This is no different than holding up an envelope to the sunlight and looking through it. Obvious data is obvious to observe.

In electronic surveillance (a la NSA), looking at just the outside of an envelope is akin to recording only the metadata of an encrypted messaging app. No, you can't read the messages, but seeing that someone received a 20 MB message could indicate a video, whereas 2 KB might just be one message in a rapid convo.

[–] litchralee@sh.itjust.works 17 points 2 weeks ago (1 children)

So no, no billion dollar company can make their own training data

This statement brought along with it the terrifying thought that there's a dystopian alternative timeline where companies do make their own training data, by commissioning untold numbers of scientists, engineers, artists, researchers, and other specialists to undertake work that no one else has. But rather than trying to further the sum of human knowledge, or even directly commercializing the fruits of that research, it's all just fodder to throw into the LLM training set. A world where knowledge is not only gatekept as with Elsevier, but isn't even accessible by humans: only the LLM gets to read it and digest it for human consumption.

Written by humans, read by AI, spoonfed to humans. My god, what an awful world that would be.

[–] litchralee@sh.itjust.works 8 points 2 weeks ago* (last edited 2 weeks ago)

A few factors:

  • Human population centers historically were built by natural waterways and/or by the sea, to enable access to trade, seafood, and obviously, water for drinking and agriculture
  • When the fastest mode of land transport is a horse (ie no railways or automobiles), the long-distance roads between nations which existed up to the 1700s were generally unimproved and dangerous, both from the risk of breakdown but also highway robbery. Short-distance roads made for excellent invasion routes for an army, and so those tended to fall under control of the same nation.
  • Water transport was (and still is) capable of moving large quantities of tonnage, and so was the predominant form of trade, only seeing competition when land transport improved and air transport was introduced.

So going back centuries when all the "local" roads are still within the same country (due to conquest), and all the long-distance roads were treacherous, slow, and usually uncomfortable (ie dysentery on the Oregon Trail), the most obvious way to get to another country would have been to get a ride on a trading ship. An island nation would certainly regard all other countries as being "overseas", but so would an insular nation hemmed in by mountains but sitting directly on the sea. When land transport is limited, sea routes are the next best. And whereas roads only connect places situated along the route, the sea (and the sky) allow point-to-point trading, exposing faraway countries to each other when their ships arrive at the port.

TL;DR: for most of human history, other countries were most reasonably reached by sea. Hence "overseas".

[–] litchralee@sh.itjust.works 16 points 3 weeks ago* (last edited 3 weeks ago)

Truly, it could be anything that unsettles the market. A bubble popping is essentially a cascading failure: the dominos fall, the house of cards collapses, fear turns into panic, even when everyone is of sound mind.

The Panic of 1907 is said to have started because of a colossally bad attempt to corner the market on copper shares. That caused some investment firms to go broke, which then meant trust overall was shaken. And then things spiraled out of control thereafter, irrespective of whether the underlying industries were impacted or not.

So too with the Great Financial Crisis in 2008, where the USA housing market collapsed, and the extra leverage that borrowers had against their home value worked against them, plunging both individuals and mortgage companies into financial ruin. In that situation, the fact that some people lost their homes, coupled with losing their jobs due to the receding market, was an unvirtuous cycle that fed itself.

I can't speculate as to what will pop the current bubble, but more likely than not, it will be as equally messy as bubbles of yore. But much like the Big One -- which here in California refers to another devastating earthquake to come -- it's not a question of if but when.

Until it (and the AI bubble popping) happens though, it is not within my power to do much about it, and so I'll spend my time preparing. That doesn't mean I'm off to move my retirement funds into S&P500 ex-AI though, since even the Dot Com bubble produced gains before it went belly up. I must reiterate that no one knows when the bubble will pop, so getting in or getting out now is a financial risk.

Preparation means to build resilience, to decouple my home from my job, to keep my family and community safe even when the shaking starts. For some, this means stocking food and water. For others, it means building mutual aid networks. And for some still, it means downsizing and making their lives more financially sustainable, before the choice is made for them.

This is a rollercoaster and we're all strapped in, whether we like it or not.

[–] litchralee@sh.itjust.works 12 points 3 weeks ago* (last edited 3 weeks ago)

All I can offer you are notable rail vs road innovations in the 18th Century, North American electricity supplies, and bicycle wheel construction.

1
submitted 8 months ago* (last edited 8 months ago) by litchralee@sh.itjust.works to c/newpipe@lemmy.ml
 

(fairly recent NewPipe user; ver 0.27.6)

Is there a way to hide particular live streams from showing up on the "What's New" tab? I found the option in Settings->Content->Fetch Channel Tabs, which will prevent all live streams from showing in the tab. But I'm looking for an option to selectively hide only certain live streams from the tab.

Some of my YouTube channels have 24/7 live streams (eg Arising Empire), which will always show at the top of the page. But I don't want to hide all live streams from all channels, since I do want to see if new live streams appear, usually ones that aren't 24/7.

Ideally, there'd be an option to long-press on a live stream in the tab, one which says "Hide From Feed", which would then prevent that particular stream ID from appearing in the feed for subsequent fetches.

From an implementation perspective, I imagine there would be some UI complexity in how to un-hide a stream, and to list out all hidden streams. If this isn't possible yet, I can try to draft a feature proposal later.

 

I'm trying to recall a sort-of back-to-back chaise longue or sofa, probably from a scene in American TV or film -- possibly of the mid-century or modern style -- where I think two characters are having an informal business meeting. But the chaise longue itself is a single piece of furniture with two sides, such that each character can stretch their legs while still being able to face the other for the meeting, with a short wall separating them.

That is to say, they are lying anti-parallel along the chaise longue, if that makes any sense. The picture here is the closest thing I could find on Google Images.

So my questions are: 1) what might this piece of furniture be called? A sofa, chaise longue, settee, something else? And 2) does anyone know of comparable pieces of furniture from TV or film? Additional photos might help me narrow my search, as I'm somewhat interested in trying to buy such a thing. Thanks!

EDIT 1: it looks like "tete a tete chair" is the best keyword so far for this piece of furniture

EDIT 2: the term "conversation chair" also yields a number of results, including a particular Second Empire style known as the "indiscreet", having room for three people!
