Smokeydope

joined 2 years ago
[–] Smokeydope@lemmy.world 1 points 2 months ago* (last edited 2 months ago)

IMO 4K resolution is overkill; it's way past the optimal trade-off between file storage and visual fidelity. Nobody has ever complained about the visual quality of my 720p or 1080p sourced stuff, much in the same way most sane people won't notice the difference between FLAC and MP3 in average listening. Back in my day we were lucky to get 480p on a square box TV.

[–] Smokeydope@lemmy.world 8 points 2 months ago* (last edited 2 months ago)

Less danger than the OPSEC nerds hype up, but enough of a concern that you want at least a reverse proxy. The new FOSS replacement for Cloudflare on the block is Anubis (https://github.com/TecharoHQ/anubis). While I'm not the biggest fan of seeing a chibi anime Funko-Pop girl thing wag its finger at me for a second or two while it tests the connection, I can't deny the results seem effective enough that all the cool kids in FOSS circles are switching to it over Cloudflare.

I just learned how to get my first website and domain set up locally this summer, so there's some network-admin stuff I'm still figuring out. I don't run any complex scripting or PHP, so all the bots scanning for admin pages are never going to hit anything; it just pollutes the logs. People are all nuts about scraping bots these days, but when I was a kid, letting your site be indexed and crawled was what let people discover it through search engines. I don't care if botnets scan through my permissively licensed public writing.

[–] Smokeydope@lemmy.world 2 points 2 months ago* (last edited 2 months ago) (1 children)

Similar story here! A couple of years ago I learned about Gemini/Gopher/the smallnet from Mental Outlaw videos. I joined a public-access Unix server (first SDF, later tilde.team) and spent a few years learning to write my own capsule site. I picked up some basic CGI and awk processing to build a gemtext-to-EPUB converter that made small ebooks out of the daily posts in an Atom feed. It was like training wheels; it really helped prepare me for the transition to fully self-hosting a capsule and website.

[–] Smokeydope@lemmy.world 2 points 2 months ago* (last edited 2 months ago)

Thanks for sharing! It was a good read. They make good points about security and clarity revisions.

A lot of the Gemini spec choices were made to discourage feature creep. You're probably never going to do banking through Gemini, but it's also pretty much guaranteed you'll never need an adblocker either.

Gemini is appealing from the perspective of novice self-hosters. It's simple enough that most people can set up a server and publish to their site within a few hours. Its minimalism enforces getting the most readable content out of the fewest bits. Most of a modern webpage isn't even for reading or reference; it's back-end trackers and scripts and fancy CSS. NewsWaffle shows just how bad it is.

When I read through a gemtext capsule I get the impression I'm looking at something distilled to its essentials. No popups, no ads, no inline images, tracking scripts, or complex page layouts. My computer connects to the server, I get back a page of text or an image or a zip file. Once and done.
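The whole exchange is simple enough that a toy client fits in a screenful of Python. A rough sketch (the capsule URL is just an example, and a real client should pin certificates trust-on-first-use rather than skipping verification):

```python
# Fetch a Gemini URL: open TLS to port 1965, send the URL, read header + body.
import socket
import ssl

url = "gemini://geminiprotocol.net/"            # example capsule
host = url.split("//", 1)[1].split("/", 1)[0]

ctx = ssl.create_default_context()
ctx.check_hostname = False                      # capsules often use self-signed certs;
ctx.verify_mode = ssl.CERT_NONE                 # real clients pin them (TOFU) instead

with socket.create_connection((host, 1965)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        tls.sendall((url + "\r\n").encode())    # the entire request is just the URL
        response = b""
        while chunk := tls.recv(4096):
            response += chunk

header, _, body = response.partition(b"\r\n")   # header looks like "20 text/gemini"
print(header.decode())
print(body.decode(errors="replace")[:300])
```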

[–] Smokeydope@lemmy.world 3 points 2 months ago (4 children)
[–] Smokeydope@lemmy.world 6 points 2 months ago* (last edited 2 months ago) (6 children)

I'm a hobbyist who just learned how to self-host my own static website on a spare laptop over the summer. I went with what I knew and was comfortable with: a fresh install of Linux and installing from the apt package manager.

As I'm getting more serious I'm starting to take another look at Docker. Unfortunately my OS package manager only has old, outdated versions of Docker, so I may need to reinstall with something like Ubuntu/Debian LTS server that has more current software in its repos. I don't care much for building from scratch and navigating dependency roulette.

 

This is a simple toolchain that allows you to focus on writing your website instead of getting distracted with HTML formatting.

It works by taking in a gemtext file and converting it into an HTML file.

Gemtext:
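(An illustrative snippet, not taken from the repo; gemtext generally looks like this:)

```
# Page title
A plain paragraph of text.
=> gemini://example.org/notes.gmi My notes
* a list item
> a short quote
```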

HTML:
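(And roughly the markup a converter like this emits for it:)

```html
<h1>Page title</h1>
<p>A plain paragraph of text.</p>
<p><a href="gemini://example.org/notes.gmi">My notes</a></p>
<ul>
  <li>a list item</li>
</ul>
<blockquote>a short quote</blockquote>
```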

Code can be found here on the public Git:

https://codeberg.org/TomCon/gem2web/src/branch/master

[–] Smokeydope@lemmy.world 10 points 2 months ago* (last edited 2 months ago)

There have been at least five rapture days since I've been alive. Cult members REALLY like the idea that they're super special moral beings with God's favor. For some reason people get off on the idea that everyone they don't like, or who isn't part of their tribe, gets locked out of 'paradise' or, better yet, actively burns in hell forever. This vengeance fantasy extends to the idea of the rapture, where God personally singles them out as a favored one for heavenly ascension or whatever. I guess there's something appealing about this concept that really resonates with the kind of people who drink the Kool-Aid; they just can't let go and take the L. So we see the same story every ten years or so: a cult leader predicts the end of the world, all their followers give up their life savings and destroy their lives, and then when the day passes they quietly sweep it under the rug and pretend it never happened to save themselves the embarrassment.

[–] Smokeydope@lemmy.world 3 points 2 months ago* (last edited 2 months ago)

Almost certainly the lion; this isn't an anime where a knight can casually bisect an 800 lb raging bag of muscle with a flick of the wrist and an energy-beam swipe. The lion would absolutely knock down, crush, and maul even if it couldn't penetrate armor right away. I'd give the knight maybe a 1-in-50 chance of striking a lucky blow as the lion lunges, or if you're smart you might come out alive by going limp and playing dead.

[–] Smokeydope@lemmy.world 4 points 2 months ago

I wrote my own set of tools in Python that convert a simple gemtext-formatted .gmi file into a static HTML file that's served by Apache.

I'm a big fan of the Gemini protocol project and found that handwriting pages in gemtext is ideal for focusing on text content and not worrying about formatting. Converting it to HTML+CSS with some scripts is pretty easy.
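The core of it only takes a few dozen lines. A stripped-down sketch (not the exact scripts I use, just the gist):

```python
# gmi2html.py - minimal gemtext to HTML converter (headings, links, lists, quotes)
import html
import sys

def convert(gemtext: str) -> str:
    out, in_list = [], False
    for line in gemtext.splitlines():
        if line.startswith("* "):                     # list items
            if not in_list:
                out.append("<ul>")
                in_list = True
            out.append(f"<li>{html.escape(line[2:])}</li>")
            continue
        if in_list:                                   # close the list when it ends
            out.append("</ul>")
            in_list = False
        if line.startswith("### "):
            out.append(f"<h3>{html.escape(line[4:])}</h3>")
        elif line.startswith("## "):
            out.append(f"<h2>{html.escape(line[3:])}</h2>")
        elif line.startswith("# "):
            out.append(f"<h1>{html.escape(line[2:])}</h1>")
        elif line.startswith("=>"):                   # link lines: "=> url optional label"
            parts = line[2:].split(maxsplit=1)
            if parts:
                url = html.escape(parts[0])
                label = html.escape(parts[1]) if len(parts) > 1 else url
                out.append(f'<p><a href="{url}">{label}</a></p>')
        elif line.startswith(">"):
            out.append(f"<blockquote>{html.escape(line[1:].strip())}</blockquote>")
        elif line.strip():                            # everything else is a paragraph
            out.append(f"<p>{html.escape(line)}</p>")
    if in_list:
        out.append("</ul>")
    return "\n".join(out)

if __name__ == "__main__":
    print(convert(open(sys.argv[1]).read()))
```

Wrap the output in a page template with a stylesheet link and Apache serves it like any other static file.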

If anyone's interested I can give a link; currently I'm just hosting the source locally on my website, and I really should get a public GitHub going.

[–] Smokeydope@lemmy.world 5 points 3 months ago (1 children)

Tape some bubble wrap around a mason jar *taps forehead*

[–] Smokeydope@lemmy.world 4 points 4 months ago (1 children)

Sure, whatever nerd.

[–] Smokeydope@lemmy.world 12 points 4 months ago* (last edited 4 months ago) (6 children)

The secret nerd technique they don't want you to know is to get a USB enclosure for a proper M.2 SSD. Form-wise it's a slightly chunkier USB stick, but inside is a proper drive you can buy from a reputable source, with terabytes of storage and USB 3.0 speeds. A reputable SSD will easily last a decade.

As far as store-bought goes, a regular old SanDisk will last a long, long time.

 

I think I've discovered something important in the field I dabble in as an advanced hobbyist. It was enough of a breakthrough and perspective shift that I stayed awake all night into the morning, testing that it works and drafting the abstract and paper. I constructed a theoretical framework and a practical implementation, and statistically analyzed experimental results across numerous test cases. I then put my findings into as good a technical paper as I could write, and did as much research as I could to make sure nobody else had written about this before.

At this point, though, I don't really know how to proceed. I'm an outsider systems engineer, not an academic, and arXiv requires you to be endorsed/recognized as a member of the scientific community, with something like a university email or a written recommendation from someone already known. And whenever I look at papers on arXiv they always look a very specific way that I can't reproduce with LibreOffice Writer; there are apparently a whole bunch of rules about formatting, fonts, and style. It's overwhelming and kind of scary.

So, what do I do here? I have something I think is important enough to get off my ass and get in touch with a local college to maybe get a recommendation. I'd like to have my name in the community and contribute.

 

I now do some work with computers that involves making graphics cards do computational work on a headless server. The computational work it does has nothing to do with graphics.
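As a tiny illustration of what I mean (assuming an Nvidia card, with PyTorch used purely as an example):

```python
# The "graphics" card here is just a massively parallel number cruncher.
import torch

a = torch.randn(4096, 4096, device="cuda")   # allocate matrices in GPU memory
b = torch.randn(4096, 4096, device="cuda")
c = a @ b                                     # the matrix multiply runs on the GPU
print(c.sum().item())                         # pull a single number back to the CPU
```

Not a pixel in sight, yet it all happens on the 'graphics' card.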

The name is really a consumer label based on the most common use for graphics cards and why they were first made in the 90s, but now they're used for all sorts of computational workloads. So what are some more fitting names for the part?

I now think of them as 'computation engines', analogous to an old car engine: it's where the computational horsepower is really generated. But how would RAM make sense in this analogy?

 

Setting up a personal site on local hardware has been on my bucket list for a long time. I finally bit the bullet and got a basic website running with Apache on an Ubuntu-based Linux distro. I bought a domain name, pointed it at my IP, got SSL via Let's Encrypt for HTTPS, and added header rules until Security Headers and Mozilla Observatory both gave it a perfect score.
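(For reference, a quick way to see which headers the server actually sends back; the URL is a placeholder:)

```python
# Print the security-relevant response headers a site returns.
import urllib.request

url = "https://example.org/"   # placeholder, swap in your own domain
checks = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Referrer-Policy",
    "Permissions-Policy",
]

with urllib.request.urlopen(url) as resp:
    for name in checks:
        print(f"{name}: {resp.headers.get(name, '(missing)')}")
```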

Am I basically in the clear? What more do I need to do to protect my site and local network? I'm so scared of hackers and shit; I do not want to be an easy target.

I would like to make a page about the hardware it's running on, since I intend to have it run entirely off solar power like solar.lowtechmagazine, and I wanted to share the technical specifics. But I heard somewhere that revealing the internal state of your server is a bad idea since it can make exploits easier to find. Am I being stupid for wanting to share details like the computer model and the software running it?

 


So it's been almost 10 years since I've swapped computer parts, and I'm nervous about this. I've never done any homelab-type thing involving big powerful parts; I've just dealt with average mid-range consumer parts in standard desktop cases.

I do computational work now and want to convert a desktop PC into a headless server with a beefy GPU. I bit the bullet and ordered a used Tesla P100 16GB. Based on what I'm reading, a new PSU may be in order as well, if nothing else. I haven't actually read the labels yet, but online info on the desktop model indicates it's probably around a 450-watt PSU.

The P100's power draw is rated at 250 W maximum. The card I'm using now draws 185 W maximum. I'm reading that 600 W would be better for just-in-case overhead. I plan to get this 700 W one, which I hope is enough overhead to cover an extra GPU if I want to take advantage of Nvidia CUDA with the 1070 Ti from my other desktop.
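My back-of-the-envelope math, using nameplate maximums and a rough guess for everything that isn't a GPU:

```python
# Worst-case power budget (nameplate maximums, not typical draw)
p100            = 250   # W, Tesla P100 board power
gtx_1070ti      = 180   # W, if I add it later for extra CUDA
ryzen_5_2600    = 65    # W, CPU TDP
everything_else = 75    # W, rough guess: board, RAM, SSD, fans

total = p100 + gtx_1070ti + ryzen_5_2600 + everything_else
print(total)         # 570 W worst case with both GPUs installed
print(700 - total)   # ~130 W of headroom on a 700 W unit
```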

How much does the rest of the system use on average, with a Ryzen 5 2600 six-core in an AM4 motherboard and 16 GB of DDR4 RAM?

When I read up on powering the P100, though, I stumbled across a Reddit post from someone confused about how to connect it to a regular consumer Corsair PSU. Apparently the P100 uses a CPU (EPS) power connector instead of a PCIe one? But you can't use the regular CPU power output from the PSU. According to the post, people buy adapter cables for these cards that take two GPU (PCIe) cables in and put one CPU-style cable out.

Can you please help me with a sanity check so I understand what I've gotten myself into? I don't exactly understand what I'm supposed to do with those adapter cables. Do modern PSUs come with multiple GPU (PCIe) power outputs these days, so I'd run two separate lines into that adapter?

Thank you all for your help on the last post; I'm deeply grateful for all the input I've gotten here. I'll do my best not to spam-post my tech concerns, but this one has me really worried.

 

Do I need to worry about upgrading the motherboard along with the GPU if it's old, or will it work okay just buying a new GPU?

 

I have a memory-foam mattress on top of a cot. Every now and then I need to sun-dry the mattress and cot because of a decent amount of moisture trapped between the two. Is there a way to keep the moisture out, or even just reduce it?

 
 

Smokey's Simple Guide To Search Engine Alternatives

This post was inspired by the surge of people mentioning the new Kagi search engine in various Lemmy comments. I happen to be somewhat knowledgeable on the topic and wanted to tell everyone about some other alternative search engines available to them, as well as the difference between meta-search engines and true search engines. This guide was written with the average person in mind; I have done my best to avoid technical jargon and speak plainly in a way most people should be able to understand without a background in IT.

Understanding Search Engines Vs. Meta-Search Engines

There are many alternative search engines floating around that people use; however, most of them are meta-search engines. That means they are a kind of search-result reseller, middlemen to the true search engines: they query the big engines for you and aggregate their results.

Examples of Meta-search engines:

Format: Meta Search Engine / Sourced True Engines (and a hyperlink to where I found that info)

DuckDuckGo / Bing (has some web crawling of its own, but mostly relies on Bing)

Ecosia / Bing + Google. A portion of profit goes to tree planting.

Kagi / Google, Mojeek, Yandex, Marginalia. Requires email signup; $10/month for unlimited searches.

SearXNG / Too many to list, basically all of them; configurable. Free & Open Source Software (AGPL-3.0).

Startpage / Google + Bing

4get / Google, Bing, Yandex, Mojeek, Marginalia, Wiby. Open-source software made by one person as an alternative to SearX.

Swisscows / Bing

Qwant / Bing. Relied on Bing for most of its life, but in 2019 started building up its own web crawlers and infrastructure, putting it in a unique transitional phase.

True Search Engines & The Realities Of Web-Crawling

As you can see, the vast majority of alternative search engines rely on some combination of Google and Bing. The reason is that the technologies which power search engines, web crawling and indexing, are extremely computationally heavy, non-trivial things.

Powering a search engine requires costly enterprise computers. The more popular the service (as in, the more people connecting to and using it per second), the more internet bandwidth and processing power are needed. It takes a lot of money to pay for power, maintenance, and development/security. At the scale of Google and Bing, which serve many millions of visitors each second, huge warehouses full of specialized computers, known as data centers, are needed.

This is a big financial ask for most companies interested in making a profit out of the gate, so they decide it's worth just paying Google and Bing for access to their enormous pre-existing infrastructure, without the headaches of maintenance and security risk.

True Search Engines

True search engines are powered by their own internally owned and operated web crawlers, indexers, and everything else that goes into making a search engine under the hood. They tend to be owned by big tech companies with the financial resources to afford huge arrays of computers that process and store all that information for millions of active users each second. The last two entries below are unique exceptions we will discuss later.

Examples of True Search Engines:

Bing / Owned by Microsoft

Google / Owned by Google/Alphabet

Mojeek / Owned by Mojeek Ltd.

Yandex / Owned by Yandex Inc.

YaCy / Free & Open Source Software (GPL-2.0), powered by peer-to-peer technology, created by Michael Christen

Marginalia Search / Free & Open Source Software (AGPL-3.0), developed by Viktor Lofgren (Marginalia)

How Can Search Engines Be Free?

You may be wondering how any service can stay free if it isn't making a profit. Well, that is where altruistic computer hobbyists come in. The internet allows knowledgeable, tech-savvy individuals to host public services on their own hardware, capable of serving many thousands of visitors per second.

The financially well-off hobbyist eats the fairly small hosting cost out of pocket. A thousand hobbyists running the same service all over the world lets the load be distributed and lets people choose the geographically closest instance for the fastest connection. Users of these free public services are encouraged to donate directly to the individual operators if they can.

An important takeaway is that services don't need to turn a profit if they aren't a business's product. Sometimes people are happy to sacrifice a bit of their own resources for the benefit of thousands of others.

Companies that live and die by profit margins have to choose between owning massive computer infrastructure of their own or renting access to someone else's. You and I just have to pay a few extra cents on the electric bill for a spare computer sitting in the basement running a public service, plus some time investment to get it all set up.

As Lemmy users, you should at least vaguely understand the power of a decentralized service spread among many individually operated and maintained instances that can cooperate with each other. Spreading users across multiple instances helps prevent any one of them from exceeding the free/cheap allotment of API calls (in the case of meta-search engines like SearXNG) or being rate limited (like third-party YouTube front-ends such as Invidious and Piped).

In the case of YaCy, decentralization goes a step further: individual YaCy instances communicate with each other through peer-to-peer technology to act as one big collective web crawler and indexer.

SearXNG

I love SearXNG. I use it every day, so it's the engine I want to impress on you the most. SearX/SearXNG is a free and open-source, highly customizable, self-hostable meta-search engine. SearX instances act as a middleman: they query the other search engines for you, strip out all their spyware and ad crap, and your connection never touches those engines' servers.

Here is a list of all public SearX instances; I personally prefer paulgo.io. Each instance is configured differently to index different engines, so if one doesn't seem to give good results, try a few others.

Did I mention it has bangs like DuckDuckGo? If you really need Google, like for maps and business info, just put !!g in the query.
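For the tinkerers: if you self-host an instance (or find a public one with the JSON API enabled; many turn it off), you can even script searches against it. A rough sketch, with the instance URL as a placeholder:

```python
# Query a SearXNG instance's JSON API (the instance must have it enabled).
import json
import urllib.parse
import urllib.request

instance = "https://searx.example.org"   # placeholder, substitute a real instance
params = urllib.parse.urlencode({"q": "gemini protocol", "format": "json"})

with urllib.request.urlopen(f"{instance}/search?{params}") as resp:
    data = json.load(resp)

for result in data.get("results", [])[:5]:
    print(result.get("title"), "-", result.get("url"))
```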

Other Free As In Freedom Search Engines

Here is Marginalia Search, a completely novel search engine written and hosted by one dude, which prioritizes indexing lighter websites with little to no JavaScript. These tend to be personal websites and homepages with poor Search Engine Optimization (SEO) scores, which means the big search engines won't rank them well. If you remember the internet of the early 2000s and want a nostalgia trip, this one's for you. It's also open source and self-hostable.

Finally, YaCy is another completely novel search engine that uses peer-to-peer technology to power a big web crawler, which prioritizes what it indexes based on user queries and feedback. Anyone can download YaCy and devote a bit of computing power to run their own local instance and help out the collective search engine. Companies can also download YaCy and use it to index their private intranets.

They have a public instance available through a web portal. To be upfront, YaCy is not a great search engine for what most people usually want, which is quick, relevant information within the first few clicks. But it is an interesting use of technology, and it shows what a true, honest-to-god community-operated search engine looks like, untainted by SEO gaming or corporate money-making shenanigans.

Free As In Freedom: People-Run vs. Company-Run Services

I personally trust some FOSS-loving sysadmin who hosts social services for free out of altruism, accepts hosting donations, and whose server is located on the other side of the planet with my query info over Google/Alphabet any day. I have had several conversations with Marginalia over the years through the Gemini protocol and the small web; they are more than happy to talk over email. You can have a human conversation with a search engine provider who's just a knowledgeable everyday Joe who genuinely believes in the project and freely dedicates their resources to it. Consider sending some cash their way to help with upkeep if you like the services they provide.

Self-Hosting For Maximum Privacy

Of course, you have to trust the service provider with your information and trust that their systems are secure and maintained. Trust is a big concern with every engine you use, because while they can promise not to log anything or sell your info for profit, they often provide no way of proving those claims beyond 'just trust me bro'. The one thing I really liked about Kagi is that they went through a public security audit by an outside company that specializes in breaking into systems to find vulnerabilities. They got a great result and shared it publicly.

The other concern is that there is no way to be sure companies won't slowly change their policies over time, creeping in advertisements and the other things they once set out to reject, once they've lured in a big enough user base and the hunger for ever-increasing profit margins to appease shareholders kicks in. Companies have been shown again and again to employ this boiling-frog practice, so beware.

Still, if you are seriously concerned about privacy and knowledgeable with computers, then self-hosting FOSS software on your own instance is the best option for maintaining control of your data.

Conclusion

I hope this has been informative for those who believe there are only a few options to pick from, and that you find something which works for you. During this difficult time, when companies and advertisers are trying their hardest to squeeze us dry and chip away at our basic rights, we need to find ways to push back: to say no to subscriptions and ads and convenient services that don't treat us right. The internet started as something made by everyday people, to connect with each other and exchange ideas, for fun and whimsy and enjoyment. Let's do our best to keep it that way.
