iOS
iOS already does this.
Are there other alternatives to Apple and Google phones?
There are phones that run on other platforms, but their app libraries and hardware aren't competitive.
https://en.wikipedia.org/wiki/List_of_open-source_mobile_phones
You could also move most of what you do to a tablet or laptop if you're willing to carry that, and just use the phone as an Internet access device and for phone calls.
EDIT: Or use a cell modem for data and a SIP service for calls and texts, though then you need a device that you keep powered on if you want to receive incoming calls when they come in. Cell phones are pretty well optimized for low idle power usage.
You're probably going to have the easiest time just cleaning them periodically.
If you want to have something that's intrinsically antifungal, instead of stainless steel, you could get a copper-alloy water bottle, like bronze or brass.
https://en.wikipedia.org/wiki/Antimicrobial_copper-alloy_touch_surfaces
Antimicrobial copper-alloy touch surfaces can prevent frequently touched surfaces from serving as reservoirs for the spread of pathogenic microbes. This is especially true in healthcare facilities, where harmful viruses, bacteria, and fungi colonize and persist on doorknobs, push plates, handrails, tray tables, tap (faucet) handles, IV poles, HVAC systems, and other equipment.[1] These microbes can sometimes survive on surfaces for more than 30 days.
I wouldn't bet on it stopping growth on the gasket, though.
It looks like this one claims to be copper (which, if correct, I would think would make it really prone to denting):
https://www.amazon.com/Adonai-Hardware-Hammered-Copper-Bottle/dp/B09MZ9VYJS
This vacuum flask says that it has a copper internal lining:
https://www.amazon.com/OUTSIDER-Stainless-Vacuum-Insulated-Bottle-Thermos/dp/B0BX7C1MDK
The new lawsuit said Li began working as an engineer for xAI last year, where he helped train and develop Grok. The company said Li took its trade secrets in July, shortly after accepting a job from OpenAI and selling $7 million in xAI stock.
I must say that it's going to be a bitch-and-a-half to retain core engineers if people are walking away at what amounts to $7 million/year in effective compensation.
kagis
Looks like he only started working in industry in 2023, too (though was doing relevant work as a graduate student prior to that).
If I recall correctly, at least for non-group chats, they do use end-to-end encryption. That being said, there are obviously practical limits to how much that protects you if you think that WhatsApp itself would actively try to be malicious, since they also provide the client software and could hypothetically backdoor it.
kagis
According to this, they do use end-to-end encryption for group chats too.
Maybe I'm recalling some other service or a default setting or something. Some service had group messages that weren't end-to-end encrypted for at least some period of time.
$3-10k...not getting the speeds and quality
I mean, that's true. But the hardware that OpenAI is using costs more than that per pop.
The elephant in the room is that unless the tech nerds you mention are using the hardware for something that requires keeping it under constant load (which occasionally interacting with a chatbot isn't going to do), it's probably going to be cheaper to share the hardware with others, because that keeps the (quite expensive) hardware at a higher utilization rate.
I'm also willing to believe that there is some potential for technical improvement. I haven't been closely following the field, but one thing that I'll bet is likely technically possible (if people aren't banging on it already) is redesigning how LLMs work so that they don't need to be fully loaded into VRAM at any one time.
Right now, the major limiting factor is the amount of VRAM available on consumer hardware. Models get fully loaded onto a card. That makes for nice, predictable computation times on a query, but it's the equivalent of...oh, having video games limited by needing to load an entire world onto the GPU's memory. I would bet that there are very substantial inefficiencies there.
The largest consumer GPU you're going to get has something like 24GB of VRAM, though some workloads can be split across multiple cards to make use of the VRAM on each.
You can partially mitigate that with something like a 128GB Ryzen AI Max 395+ processor-based system. But you're still not going to be able to stuff the largest models into even that.
My guess is that it's probably possible to segment sets of neural-net edge weights into "chunks" that are unlikely to be needed at the same time, leave the unimportant chunks unloaded, and simply not run chunks that aren't loaded. You'd need a mechanism to identify when a chunk likely does become important and swap it in. That will make query times less predictable, but it'll probably also be a lot more memory-efficient.
IIRC from my brief skim, these models do have specialized sub-networks ("experts"), in an architecture called "MoE", for "Mixture of Experts". It might be possible to unload some of those, though you'd need more logic to decide when to include and exclude them, and existing systems probably aren't optimal for that:
kagis
Yeah, sounds like it:
https://arxiv.org/html/2502.05370v1
fMoE: Fine-Grained Expert Offloading for Large Mixture-of-Experts Serving
Despite the computational efficiency, MoE models exhibit substantial memory inefficiency during the serving phase. Though certain model parameters remain inactive during inference, they must still reside in GPU memory to allow for potential future activation. Expert offloading [54, 47, 16, 4] has emerged as a promising strategy to address this issue, which predicts inactive experts and transfers them to CPU memory while retaining only the necessary experts in GPU memory, reducing the overall model memory footprint.
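If you want a concrete feel for the idea, here's a minimal toy sketch in PyTorch (my own illustration, not fMoE's actual scheme; the expert count, layer sizes, and the naive load-then-evict policy are all made up). It routes each token to its top two experts, copies only the experts that actually got picked onto the GPU, and pushes them back to host memory afterwards:

# Toy expert-offloading sketch (illustrative only): experts live in host RAM
# and are copied to the GPU only when the router actually selects them.
import torch
import torch.nn as nn

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

class OffloadedMoE(nn.Module):
    def __init__(self, n_experts=8, d_model=64, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts).to(DEVICE)  # tiny router, stays resident
        # Experts are deliberately left on the CPU ("offloaded").
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                        # x: (n_tokens, d_model), already on DEVICE
        weights, idx = self.gate(x).topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for e in idx.unique().tolist():          # only touch experts that were picked
            expert = self.experts[e].to(DEVICE)  # load on demand
            hit = (idx == e)                     # which tokens routed to expert e
            rows = hit.any(dim=-1)
            w = (weights * hit).sum(dim=-1, keepdim=True)[rows]
            out[rows] += w * expert(x[rows])
            self.experts[e].to("cpu")            # evict immediately (naive policy)
        return out

tokens = torch.randn(4, 64, device=DEVICE)
print(OffloadedMoE()(tokens).shape)              # torch.Size([4, 64])

A real serving system would do what the paper describes, predicting which experts are about to be needed and prefetching them asynchronously rather than blocking on each transfer, but the memory story is the same: only the currently-relevant slice of the model occupies GPU memory.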
Oh, wait, yeah, you're right, and in fact a number of packages do take that when binding to an address. Sorry, that's on me.
cannot bind to local IPv4 socket: Cannot assign requested address
inet 169.254.210.0
Yeah. That'll be because you need an interface with that address assigned.
ifconfig
Going from memory, I believe that if you've got ifconfig available, this is a Linux system, and you need to keep the current address on the interface (to keep the system connected to the Internet or something), you can use something like ifconfig enp7s0:0 10.10.10.3 to create an interface alias and use both addresses (169.254.210.0 and 10.10.10.3) at the same time. You might also need ifconfig enp7s0:0 up after that. That being said, (a) I don't think I've set up an interface alias in probably a decade, and it's possible something has changed, and (b) it's a bit of additional complexity, so if you aren't super familiar with Linux networking and don't mind just setting the interface's address to something else instead, you might not want to bother.
There's probably an iproute2-based way to do this too (the ip command rather than the ifconfig command), but I haven't bothered to pick up the iproute2 equivalents for a bunch of stuff.
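(For what it's worth, my understanding is that the iproute2 version is something along the lines of ip addr add 10.10.10.3/24 dev enp7s0 to add a second address to the interface, plus ip link set enp7s0 up if the interface isn't already up. The /24 is a guess at the netmask you'd want, and enp7s0 is just the interface name from above, so adjust both as needed.)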
EDIT: Sounds like you can assign the address and bring the interface alias up in one step (or could a decade ago, when the comment I'm quoting was written):
To setup eth0:0 alias type the following command as the root user:
# ifconfig eth0:0 192.168.1.6 up
So probably give ifconfig enp7s0:0 10.10.10.3 up a try, then see if the TFTP server package can bind to the 10.10.10.3 address.
I haven't done anything with OpenWRT for a long time, but...
I have the IP of the server set to 0.0.0.0:69. When I try to set it to 10.10.10.3 (per the wiki), the server on my PC won't start and gives an error.
I'm pretty sure that you can't use all zeroes as an IP address.
kagis
https://en.wikipedia.org/wiki/0.0.0.0
RFC 1122 refers to 0.0.0.0 using the notation {0,0}. It prohibits this as a destination address in IPv4 and only allows it as a source address during the initialization process, when the host is attempting to obtain its own address.
As it is limited to use as a source address and prohibited as a destination address, setting the address to 0.0.0.0 explicitly specifies that the target is unavailable and non-routable.
You probably need to figure out why your TFTP server is unhappy with 10.10.10.3, and there's not enough information here to provide guidance on that: I don't know what OS or software package you're using, what the error is, or what the network config looks like.
It may be that you don't have any network interface with 10.10.10.3 assigned to it, which I believe might cause the TFTP server to fail to bind a socket to that address and port when it attempts to do so.
If you are manually invoking the TFTP server as a non-root user and trying to bind to port 69, and this is a Linux system, it will probably fail, as ports below 1024 are privileged ports and processes running as ordinary users cannot bind to them. That might cause a TFTP server package to bail out.
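If it's easier to take the TFTP server package out of the picture while testing, a few lines of Python will reproduce both of those failure modes directly (the address and port here are just the ones from this thread):

# Try binding a UDP socket the way a TFTP server would. The exception says
# which problem you have: "Cannot assign requested address" means no interface
# currently has 10.10.10.3, while a PermissionError means port 69 needs root.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # TFTP runs over UDP
try:
    s.bind(("10.10.10.3", 69))
    print("bind succeeded")
except OSError as e:  # PermissionError is a subclass of OSError
    print("bind failed:", e)
finally:
    s.close()

If that bind succeeds, the problem is more likely something specific to the TFTP server package or its configuration.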
But I'm really just taking wild stabs in the dark, without any information about the software involved and the errors you're seeing. I would probably recommend trying to make 10.10.10.3 work, though, not 0.0.0.0.
If this is a Linux system, you might use a packet sniffer on the TFTP host, like Wireshark or tcpdump, to diagnose any additional issues that come up, since that will let you see how the two devices are talking to each other. But if you can't get the TFTP server to even run on an IP address, then you're not to that point yet.
I don't use this plugin myself, but if you're using Firefox, you might take a look at it, as it provides a bunch of browser-side configurability. I don't know whether the feature you're looking for is there, but as far as I can tell, it aims to be a big grab bag of just about every add-on YouTube feature one might want.
I was looking at it a while back for something unrelated, a UI tweak that I was hoping that it might do.
Thank you! Corrected!
I haven't watched the video (I'd generally rather have content in text form), but if Rossmann is announcing the same thing that I just read about elsewhere, it's not a removal of sideloading. It requires that a developer register and provide Google with personal information before Google will let them create packages. Assuming that Google is willing to let the F-Droid developers register an account (which I assume they have) and sign the F-Droid package, it should not restrict installation of the F-Droid package.
However, you wouldn't be able to use F-Droid to install any packages that didn't conform to Google's new requirements.
I doubt that the restriction is at the store-app level; I'd expect it to be at the package-installation level. That is, I would expect that F-Droid or Google's store app or whatever says "install this package" and the OS refuses.
https://developer.android.com/developer-verification