That's fair and justified. I have the label maker right now in my hands. I can fix this at any moment and yet I choose not to.
I'm the man feeding orphans to the orphan crushing machine. I can stop this at any moment.
Oh, and my home office setup uses Tiny-in-One monitors, so I configured these by plugging them into my monitor, which was sick.
I'm a huge fan of this all-in-one idea that's actually upgradable.
These are M715q ThinkCentres with a Ryzen 5 PRO 2400GE.
Not a lot of thought went into the rack choice, really. I wanted something smaller and more powerful than the several OptiPlexes I had.
I also decided I didn't want storage to happen here anymore, because I am stupid and only knew how to pass through disks for TrueNAS. So I had 4 TrueNAS servers on my network and I hated it.
This was just what I wanted at a price I was good with, like $120. There's a 3D-printable version, but I wasn't interested in that. I do want to 3D print racks eventually, and I want to make my own custom ones for the Pis to save space.
But that setup is way cheaper if you have a printer and some patience.
Not much. As much as I like LLMs, I don't trust them for more than rubber duck duty.
Eventually I want to have a Copilot-at-Home setup where I can feed in a notes database and whatever manuals and books I've read, so it can draw from that when I ask it questions (rough sketch of what I mean below).
The problem is that my best GPU is my gaming GPU, a 5060 Ti, and it's in a Bazzite gaming PC, so it's hard to get AI onto it because of Bazzite's "no, I won't let you break your computer" philosophy, which is exactly why I picked Bazzite. And my second-best GPU is a 3060 12GB, which is really good, but if I made a dedicated AI server, I'd want it to be better than my current server.
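That Copilot-at-Home idea is basically just retrieval-augmented generation. A minimal sketch of what I'm picturing, assuming a local Ollama instance plus the `chromadb` and `requests` packages (the model names and the `notes/` folder are placeholders, not an actual working setup):

```python
# Minimal RAG sketch: index local notes, then answer questions from them.
# Assumes Ollama is running on localhost with the named models pulled.
import pathlib
import requests
import chromadb

OLLAMA = "http://localhost:11434"

def embed(text: str) -> list[float]:
    # Ollama's embeddings endpoint; nomic-embed-text is one common choice.
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    return r.json()["embedding"]

# Index every note once (placeholder directory of plain-text notes).
col = chromadb.Client().create_collection("notes")
for i, path in enumerate(pathlib.Path("notes").glob("*.txt")):
    text = path.read_text()
    col.add(ids=[str(i)], embeddings=[embed(text)], documents=[text])

# Pull the closest notes back in and answer only from them.
question = "What did I write about setting up VLANs?"
hits = col.query(query_embeddings=[embed(question)], n_results=3)
context = "\n---\n".join(hits["documents"][0])
r = requests.post(f"{OLLAMA}/api/generate", json={
    "model": "llama3",
    "prompt": f"Answer using only these notes:\n{context}\n\nQ: {question}",
    "stream": False,
})
print(r.json()["response"])
```

Real manuals and books would need chunking instead of one embedding per file, but that's the whole shape of it.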
I just built a mini rack with 3 ThinkCentre Tiny PCs I bought for $175 (USD) on eBay. All work great.
Fuck, we were talking to an expert.
I always tell the interviewer what they want to hear. It's very obviously a game of correct answers.
I lie on my resume too, but not in ways I can't back up.
For instance, I imply I have a degree because I did go to college for 4 consecutive years for a multitude of degrees. So I have different resumes with different majors depending on which job I'm applying to. I mostly use my CS/Engineering degree nowadays. I'm able to talk the talk enough that they've never checked or asked for a transcript.
But it sounds more like your job wanted to, on paper, be compliant with workers' rights stuff:
"We offer a break at X and Y hours."
But had a cultural expectation to not follow it. Which is dumb and they can, in my humble opinion, get fucked. Nursing has a massive burnout rate and shit like that is why.
I think you should recognize you dodged a bullet more than you should think about "lying" in an interview.
Ya know. I don't know. Every state does this as far as I can tell and so I've never questioned it.
If I had to guess, it's how the DOT or highway department shills to the new governor:
"Hey look boss, we put ya name on da side of Interstate 69 from Illinois!,"
The fact that gods and magic also seemingly exist really fucks me up, because it's explicit in the original book that god is just a tool for smarter people (the Foundation) to manipulate dumber people (everyone else).
Obnoxious atheist take? Sure I guess.
But it feels as if someone rebooted Harry Potter and had the kids say something nice about trans people or Jews.
Legitimately, if they had just done "A Foundation Story: Empire" and only done the genetic dynasty stuff, I don't think any of us would be mad.
But I don't think general audiences have read much Foundation these days so they would have struggled to set it in that universe without an established Foundation Cinematic Universe.
Anyways, I'm super excited for the Foundation supercut that's just Empire.
With an RTX 3060 12GB, I have been perfectly happy with the quality and speed of the responses. It's much slower than my 5060 Ti, which I think is the sweet spot for text-based LLM tasks. A larger context window, provided by more VRAM or a web-based AI, is cool and useful, but I haven't found the need for that yet in my use case.
As you may have guessed, I can't fit a 3060 in this rack; that's in a different server that houses my NAS. I have done AI on my 2018 EPYC server CPU and it's just not usable. Even with 109GB of RAM, not usable. Even clustered, I wouldn't try running anything on these machines; they are for Docker containers and Minecraft servers. Jeff Geerling probably has a video on trying to run an AI on a bunch of Raspberry Pis. I just saw his video using Ryzen AI Strix boards, and that was ass compared to my 3060.
But for my use case, I am just asking AI to generate simple scripts based on manuals I feed it, or some sort of writing task. I either get it to take my notes on a topic and make an outline that makes sense, which I then fill in, or I feed it finished writing and ask for grammar or tone fixes. That's fucking it, and it boggles my mind that anyone is doing anything more intensive than that. I am not training anything, and 12GB of VRAM is plenty if I wanna feed it like 10-100 pages of context. Would it be better with a 4090? Probably, but for my uses I haven't noticed a difference in quality between my local LLM and the web-based stuff.
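For what it's worth, that whole "feed it a manual, get a script" workflow is only a few lines against a local endpoint. A sketch, assuming Ollama serving a model on the 3060 box (the model name and `manual.txt` are placeholders):

```python
# Sketch of the "feed it a manual, ask for a script" loop.
# Assumes Ollama is running locally; the model and file are placeholders.
import requests

manual = open("manual.txt").read()  # 10-100 pages of text is fine here

resp = requests.post("http://localhost:11434/api/generate", json={
    "model": "llama3",  # whatever fits comfortably in 12GB of VRAM
    "prompt": (f"Here is a manual:\n\n{manual}\n\n"
               "Write a short shell script that automates the setup "
               "steps this manual describes."),
    "stream": False,
})
print(resp.json()["response"])
```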