I’d like to have my own server at home sorta like a home AWS.
How do I set one up and make it available to anyone over the Internet? What tech specs should I buy (RAM, CPU, number of cores, operating system, etc.)?
How much does it cost to keep one running all the time?
Be extremely careful. Plenty of people are really smart and malicious, so you need to isolate it from everything on your network. You’re giving random people remote code execution on your local network, which is like the worst case scenario for security.
Your basic requirements are:
- Some kind of domain / subdomain, paid or free;
- Home ISP that provides public IP addresses - no CGNAT BS;
- Ideally a static IP at home, but you can do just fine with a dynamic DNS service such as https://freedns.afraid.org/ (it will update your domain with your dynamic IP when it changes - see the sketch after this list);
- Ideally a home ISP that allows for “bridged” mode or has an ONT device + router where you can add a switch in between and have the server directly connected to the Internet, with its own public IP, outside of your home network (more below);
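To make the dynamic DNS part concrete, the "update your domain when the IP changes" job can be a tiny script run from cron. This is only a minimal sketch, assuming your provider hands out a token-based update URL (the URL/token below are placeholders, and api.ipify.org is just one of many "what's my IP" services):

```python
# Minimal dynamic DNS update job (sketch). The update URL/token are placeholders -
# use whatever your DDNS provider (e.g. freedns.afraid.org) gives you.
import urllib.request

UPDATE_URL = "https://freedns.afraid.org/dynamic/update.php?YOUR_TOKEN_HERE"  # placeholder
IP_CHECK_URL = "https://api.ipify.org"  # returns your current public IPv4 as plain text

def current_public_ip() -> str:
    with urllib.request.urlopen(IP_CHECK_URL, timeout=10) as resp:
        return resp.read().decode().strip()

def update_dns() -> None:
    ip = current_public_ip()
    # Most providers update the record to whatever IP the request comes from,
    # so a plain GET on the tokenised update URL is usually all it takes.
    with urllib.request.urlopen(UPDATE_URL, timeout=10) as resp:
        print(f"Public IP is {ip}; provider replied: {resp.read().decode().strip()}")

if __name__ == "__main__":
    update_dns()
```

Run it every few minutes from cron or a systemd timer and your domain keeps pointing at your home connection.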
Hardware choices:
Don’t get server hardware; regular desktop/laptop machines will be more than enough for you. Server hardware is way more expensive and won’t give you any advantage. If you’re looking to buy, you can even get very good 9th/10th gen Intel CPUs and motherboards that are perfect for running servers (very high performance) but that people don’t want because they aren’t good for playing the latest games.
This hardware is also way more power efficient and sometimes even more powerful than any server hardware that you might get for the same price. Get this hardware for cheap and enjoy.
If you don’t require a TON of compute power some people might suggest an ARM board, such as the Raspberry Pi, but be careful with those. ARM is great for power savings, but compared to consumer x86 hardware it is shit when it comes to performance and reliability. Also, I personally like to avoid the Raspberry Pi and their stuff as much as possible. They’ve done good things for the community, however they’ve pulled some predatory tactics and shenanigans that aren’t cool. Here are a few examples of what people usually fail to see:
- Requires a special tool to flash. In the past it was all about getting an image and using Etcher, dd or whatever to flash it onto a card; now they’re pushing people to use the Raspberry Pi Imager. Without it you won’t be able to easily disable telemetry and/or log in over the network out of the box;
- Includes telemetry;
- No alternative open Debian-based OS such as Armbian (only the Ubuntu variant);
- The Raspberry Pi 5 finally has PCIe. But instead of doing what was right they decided to include some proprietary bullshit connector that requires yet another board made by them. For those who are unaware, other SBC manufacturers simply include a standard PCIe slot OR a standard NVMe M.2 slot. Both are great options as hardware for them is common and cheap;
- It is overpriced and behind the times.
For what it’s worth, the NanoPi M4 released in 2018 with an RK3399 already had a PCIe interface, 4GB of RAM and whatnot, and was cheaper than the Raspberry Pi 3 Model B+ from the same year, which had its Ethernet shared with the USB bus. If you still want ARM and you’re just about serving a few websites, a small cloud service or whatever, pick a Chinese brand such as FriendlyElec or Rock Pi. More computing for less money and a lot less proprietary BS.
Mini computers from big brands are the better deal though: for 100€ you can get an HP Mini with an 8th gen i5 + 16GB of RAM + a 256GB NVMe drive that obviously has a case, a LOT of I/O and PCIe (M.2), comes with a power adapter and, more importantly, outperforms an RPi5 in all possible ways. Note that the RPi5 with 8GB of RAM will cost you 80€ + case + power adapter + bullshit PCIe adapter + SD card + whatever other money grab.
Side note on alternative brands: HP Mini units are reliable, the BIOS is good and things work. The trendy MINISFORUM units are cool, however their BIOSes come out of the factory with weird bugs and the hardware isn’t as reliable - missing ESD protection on USB ports in some models and whatnot.
Quick checklist for outward-facing servers:
- Isolate them from your main network. If possible have them on a different public IP, either using a VLAN or, better yet, an entire physical network just for that - this avoids VLAN-hopping attacks and stops a DDoS against the server from also taking your internet down;
- If you’re using VLANs then configure your switch properly. Decent switches allow you to restrict the WebUI to a certain VLAN / physical port - this makes sure that if your server is hacked, the attacker can’t access the switch’s UI and reconfigure their own port to reach the entire network. Note that cheap TP-Link switches usually don’t have a way to specify this;
- Only expose required services (nginx, game server, program x) to the Internet. Everything else such as SSH, configuration interfaces and whatnot can be moved to another private network and/or a WireGuard VPN you can connect to when you want to manage the server;
- Use custom ports with 5 digits for everything - something like 23901 (up to 65535) to make your service(s) harder to find;
- Disable IPv6? It might be easier than dealing with a dual-stack firewall and/or other complexities;
- Use nftables / iptables / another firewall and set it to drop everything except the ports you need for your services and management VPN access to work - it’s a 10-minute job with any guide (see the sketch after this list for a starting point);
- Use your firewall to restrict which countries are allowed to access your server. If you’re just doing it for a few friends, only allow incoming connections from your country (https://wiki.nftables.org/wiki-nftables/index.php/GeoIP_matching)
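To give an idea of the "drop everything except what you need" rule above, here's a rough sketch that builds a default-drop nftables ruleset and checks it with the nft CLI. It assumes nftables is installed and root privileges; the ports (23901 for an example service, 51820 for WireGuard) are placeholders:

```python
# Sketch of a default-drop nftables ruleset, checked (not yet applied) via the nft CLI.
# Assumes nftables is installed and this runs as root. Port 23901 (example service)
# and 51820 (WireGuard) are placeholders - change them to whatever you actually expose.
import subprocess
import tempfile

RULESET = """
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        tcp dport 23901 accept          # the one service you expose to the internet
        udp dport 51820 accept          # WireGuard for management access
        icmp type echo-request accept   # optional: allow ping
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}
"""

with tempfile.NamedTemporaryFile("w", suffix=".nft", delete=False) as f:
    f.write(RULESET)
    path = f.name

# -c only checks the syntax; run "nft -f <file>" yourself to actually load it.
subprocess.run(["nft", "-c", "-f", path], check=True)
print(f"Ruleset at {path} parses fine; apply it with: nft -f {path}")
```

Keeping the -c flag only validates the file; apply it with nft -f once you're sure you haven't locked yourself out.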
Realistically speaking, if you’re doing this just for a few friends, why not require them to access the server through a WireGuard VPN? This will reduce the risk a LOT and probably won’t impact performance. This is a decent setup guide https://www.digitalocean.com/community/tutorials/how-to-set-up-wireguard-on-debian-11 and you might use this GUI to add/remove clients easily https://github.com/ngoduykhanh/wireguard-ui (see the sketch below for what a client config boils down to)
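This isn't the full setup from the guide, just a sketch of what adding one friend boils down to: generate a keypair and hand them a small config file. It assumes wireguard-tools is installed ("wg" on PATH) and uses placeholder values for the server key, endpoint and tunnel subnet:

```python
# Sketch: generate a WireGuard keypair and a client config for one friend.
# Assumes wireguard-tools is installed. The server key, endpoint and the
# 10.8.0.x tunnel addresses are placeholders - use your own values.
import subprocess

SERVER_PUBLIC_KEY = "REPLACE_WITH_SERVER_PUBLIC_KEY"
SERVER_ENDPOINT = "your.domain.example:51820"
CLIENT_TUNNEL_IP = "10.8.0.2/32"

def wg(*args, stdin=None):
    return subprocess.run(["wg", *args], input=stdin, text=True,
                          capture_output=True, check=True).stdout.strip()

client_private = wg("genkey")
client_public = wg("pubkey", stdin=client_private)

client_conf = f"""[Interface]
PrivateKey = {client_private}
Address = {CLIENT_TUNNEL_IP}

[Peer]
PublicKey = {SERVER_PUBLIC_KEY}
Endpoint = {SERVER_ENDPOINT}
AllowedIPs = 10.8.0.1/32       # only route traffic for the server through the tunnel
PersistentKeepalive = 25
"""

print(client_conf)  # hand this file to your friend
print(f"# On the server: wg set wg0 peer {client_public} allowed-ips {CLIENT_TUNNEL_IP}")
```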
Point of order on the raspberry pi:
Here’s your Debian https://raspi.debian.net/tested-images/
There are multiple issues with those Debian images and, while I would love to run them, they don’t cut it. Generic images might underperform on your board, the GPIO and other low-level components will most likely not work, and you might burn your storage as logging and other I/O-intensive operations aren’t tweaked for SD cards.
There’s also Armbian (https://www.armbian.com/rpi4b/) but only Ubuntu-based right now. Armbian could be a great solution, however there hasn’t been much interest in the RPi boards, most likely due to what I pointed out before.
Also WRT telemetry: https://forums.raspberrypi.com/viewtopic.php?t=341514
The only telemetry pertains to what the Imager is burning to the card. So if you don’t use the Imager, there’s no telemetry; if you use the Imager but disable telemetry, there’s no telemetry; if you don’t disable it, it just sends back what you’re installing.
Here’s the problem: they’re forcing people into the Raspberry Pi Imager with shady tactics. Without it you won’t be able to log in over the network out of the box, and by default it enables telemetry. This isn’t okay.
I’ve already spoken about the “telemetry”, but here’s your SSH login. Literally all the installer is doing is adding a blank file named “ssh” to the boot partition.
Then if you don’t want to do that every time, just create an image for it. That’s your new image to flash onto the SD cards.
There’s nothing stopping you from not using the imager. dd works just fine. There’s no telemetry on the OS itself, so here’s how you personally get what you’re looking for.
- dd the base image to a card
- verify the card and image are working properly by booting it on a Pi
- turn off the Pi
- insert the card into a computer and create the blank “ssh” file in the boot partition
- create a new bootable backup image from the card, and save it on the computer it’s plugged into, cloud or local backup storage you’re running, whatever
- dd that image as the base image for all new cards (a scripted version of the last few steps is sketched below).
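If you do this often, here's a rough scripted version of those last steps. The mount point and /dev/sdX are placeholders; double-check the device with lsblk before letting dd anywhere near it:

```python
# Sketch: enable SSH on a freshly flashed card and clone it into a reusable image.
# BOOT_MOUNT and CARD_DEVICE are placeholders - verify them with lsblk first.
import subprocess
from pathlib import Path

BOOT_MOUNT = Path("/media/you/bootfs")   # where the card's boot partition is mounted
CARD_DEVICE = "/dev/sdX"                 # the whole card, not a partition
IMAGE_OUT = "rpi-base-ssh-enabled.img"

# A blank file called "ssh" in the boot partition tells Raspberry Pi OS to
# start the SSH server on first boot - that's all the Imager option does.
(BOOT_MOUNT / "ssh").touch()

# Unmount the card so nothing is written to it while we image it.
subprocess.run(["sudo", "umount", str(BOOT_MOUNT)], check=True)

# Clone the whole card into an image you can dd onto every new card.
subprocess.run(
    ["sudo", "dd", f"if={CARD_DEVICE}", f"of={IMAGE_OUT}",
     "bs=4M", "status=progress", "conv=fsync"],
    check=True,
)
print(f"Done. Flash new cards with: sudo dd if={IMAGE_OUT} of=/dev/sdX bs=4M conv=fsync")
```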
but here’s your ssh login. Literally all the installer is doing is adding a blank file.
Yes, and why are they forcing us to jump through hoops / non-standard BS instead of doing it like any other SBC and just enabling it by default? Armbian does it, and once you log in you’re required to change the password for security.
I remember that before the Imager the RPi also had SSH enabled by default. Don’t sugar-coat it as security; this is bullshit to force people into their Imager.
None of this forces you to use their imager though… It’s barely a hoop; most people running multiple Pis as servers will have done this for a reason other than SSH anyway.
And yes, one solution to this security problem is to require changing the username and password; the more effective solution is to not have the service running at all unless specifically enabled. I’m sure that sentence sounds familiar from your company’s security team.
Raspberry Pis serve a lot of purposes, and many of those purposes don’t need SSH. But if you enable it by default, that opens the Pi up to being a target, which we saw become a huge problem before this change.
Also, this is not the only distribution that has SSH disabled by default. It’s just the only popular distribution I’m aware of that doesn’t have a server image option 🤷♂️ It’s actually standard security procedure.
For example, if you install Ubuntu desktop, it’ll have SSH disabled, because that is the standard. Pretty much any distro does this as long as it’s not their “server” ISO.
In any case it’s good practice to back up your images regardless of what hardware you’re running on; especially if you’re running a cluster, it allows for easy reproduction across the cluster.
I just found your comment, that was very helpful thank you!
You’re welcome.
make it available to anyone
To do what?
Say: let them use a web app?
Generally speaking, not a well-advised idea, especially for someone who has to ask how to do it (truly not being snarky).
I was a Cisco instructor in the ’90s (so teaching networking and security were my bread and butter for a while) and I wouldn’t think of doing this - except… if the only access was via a mesh network client such as Tailscale, the server was dedicated to just this purpose, it was isolated on its own LAN segment/DMZ with no routing path to my home network segment, the server was not Windows but Linux, and I had a robust backup plan, access control plan, and access monitoring with alerts.
There’s just too much risk exposing a port to the world.
If you’re only accessing the server remotely via Tailscale and no ports are open, is it necessary to have the server on its own isolated VLAN? I like accessing my server locally most of the time and via Tailscale when I’m out and about.
I’d still do this.
Security isn’t one thing, it’s layers. So if any single layer fails another still prevents access.
With just Tailscale, if a bad actor gets access via a compromised user machine, they could potentially get access to the rest of your network. If the server is on an isolated LAN, there’s nothing for them to access - it’s a rock-solid guarantee that the most they can do is damage to that server and network segment.
We (us IT folks) see users get compromised almost daily, largely through social engineering. It’s a huge risk.
And it’s trivial to have something on its own LAN segment.
If you let random people install stuff on your server, all you get is assholes installing Monero miners to gain 1 cent while wasting 30 dollars of your electricity.
You can host most basic web apps off a Raspberry Pi. You just need to:
- connect your device to the internet
- start your server application (see the sketch after this list for a minimal placeholder)
- set up port forwarding on your router to forward the port your application is being hosted on
- get a domain name
- configure ddns
- Maybe get some SSL certs
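For the "start your server application" step, a minimal placeholder app can be as small as this (Python's built-in http.server on port 8080 here; swap in your real app and put proper TLS and hardening in front before exposing anything):

```python
# Minimal placeholder web app: serves one plain-text page on port 8080.
# Forward external port 80/443 on the router to this machine's 8080 to test the chain.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"It works - greetings from the Pi in the living room.\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Hello).serve_forever()  # listen on all interfaces
```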
Edit: BearOfaTime brings up a great point. I’m telling you how to do what you asked but you probably shouldn’t. If you do, try to airgap the server from your personal network as best as you can
Edit edit: You know people will let you use their servers for small projects for free, right? Check out https://ctrl-c.club/#what or hang out in the LowEndTalk forums, provide quality input, and enter some of the giveaways for server space.
Although the drawback to ctrl-c club is that you won’t get full control over how you install libraries and applications.
That’s a REALLY broad definition.
A web app that does what?
Are you running your own Netflix-ish server? Transaction processing? Cloud storage? Ai chatbot?
Each one has very different requirements, and these are just the first four that came to mind.
AWS has hundreds of buildings filled with millions of servers, so you aren’t going to compete with that, even on a small scale.
But could you run your own little Facebook-type thing? For a handful of users, sure. Could you handle the number of users that Facebook actually serves in a day? You’d be looking at buildings filled with computers, not a single machine’s spec.
I think you should take baby-steps and focus first on just getting something running for you to use. Maybe first experiment with configuring an application you’d like in a virtual machine before you spend money on hardware too.
You can install OpenStack, but I’d probably not let any random person run code on your machine
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- CGNAT: Carrier-Grade NAT
- DNS: Domain Name Service/System
- HTTP: Hypertext Transfer Protocol, the Web
- HTTPS: HTTP over SSL
- IP: Internet Protocol
- NAT: Network Address Translation
- RPi: Raspberry Pi brand of SBC
- SBC: Single-Board Computer
- SSH: Secure Shell for remote terminal access
- SSL: Secure Sockets Layer, for transparent encryption
- TCP: Transmission Control Protocol, most often over IP
- VPN: Virtual Private Network
- nginx: Popular HTTP server
12 acronyms in this thread; the most compressed thread commented on today has 5 acronyms.
I am a big fan of this bot. The worst thing about tech is all the bloody acronyms.
Tailscale is your friend, and stupid simple!
If OP wants to expose it to the public, Tailscale isn’t an option. Cloudflare Tunnels are the way to go.
Tailscale Funnel exposes whatever service you have running to the public. I personally use this, so yes, it is an option.
Are these part of the official tailscale protocol? Do you have a link to documentation or something?
https://tailscale.com/kb/1223/tailscale-funnel/
It’s like a reverse exit node. Tailscale sets up a DNS domain for you and directs TCP connections (encrypted) to your designated node.
The big downside is that they only offer the use of their own (sub)domains and their HTTPS certificates; you can’t use your own domains/certs.
Edit: it may be possible to forgo the use of their certs and instead set up a CNAME record in your DNS that points at the funnel node’s address, and add your own certs (for your domain) in a reverse proxy running on the node. I haven’t tried.
Awesome. I didn’t know about this. Thank you
I’m personally using Headscale; it would be interesting to see if it has that. I guess I can also reverse proxy from my VPS into my tailnet.
Thanks !
Install Proxmox on a computer with plenty of RAM and CPU and you’ll be able to create VMs which you can give out or rent out to anyone.
In regards to access, IPv4 is not a good idea, especially not residential IP addresses. You should get IPv6 addresses, maybe from a tunnel broker. But anyway, first you need the server with the hypervisor (which is what you’re looking for) and then you can slowly run tests, learn and eventually figure out networking.
Btw, it might be cheaper to simply rent a server, which would solve the issue of IP addresses. OVH has cheap servers and a Proxmox install wizard.
Just please don’t use it for anything sensitive until you can find someone to give it a quick security check-up to make sure you haven’t missed anything. Unlike a regular PC, this one is expected to receive inbound connections, which has its risks.
But don’t worry about that too much now. Find an old computer or rent a server, install proxmox and start testing, playing around and learning.
Edit: ChatGPT is good for learning this stuff, especially GPT-4, but even GPT-3.5 will do. Just don’t trust it blindly as it still messes up about 20% of the time. But it’s often better than googling for tutorials since you often can’t find what you’re looking for.
Edit2: the setup I propose will allow you to divide a regular computer into hundreds of virtual ones, limited only by the total RAM, disk and CPU. If you only want a web server on dedicated hardware, get a Raspberry Pi, because my proposal would be overkill. But it’s the closest to “being your own cloud provider”.
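If you go the Proxmox route and later want to script the "hand out VMs" part, something like the proxmoxer Python library can drive the API. This is only a rough sketch on my part, not from any guide: host, credentials, node name and bridge are placeholders, and the parameters mirror the Proxmox VE API, so double-check them against the docs:

```python
# Rough sketch: create and start a small VM on a Proxmox host through its API,
# using the proxmoxer library (pip install proxmoxer requests). The host address,
# credentials, node name "pve" and bridge "vmbr0" are placeholders - adjust them
# and verify the parameters against the Proxmox VE API documentation.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("192.168.1.10", user="root@pam",
                     password="change-me", verify_ssl=False)

node = proxmox.nodes("pve")

# Grab the next free VM id and create a bare VM with 2 cores / 2 GiB RAM.
vmid = proxmox.cluster.nextid.get()
node.qemu.create(
    vmid=vmid,
    name=f"tenant-{vmid}",
    cores=2,
    memory=2048,                    # MiB
    net0="virtio,bridge=vmbr0",     # NIC on the default bridge
)

node.qemu(vmid).status.start.post()  # boot it (add disks/ISO before doing this for real)
print(f"Created and started VM {vmid}")
```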
What does a “home AWS” mean?
I just bought a decommissioned computer from a public institution for 40€ (they are usually relatively cheap and still fairly modern, since companies replace their computers every 2 to 4 years for tax reasons).
For this I just bought 2 HDDs (I can only fit 2 in; they are relatively cheap for a lot of storage space), and an NVMe SSD holding the OS was already included. To make the server publicly accessible on a private internet connection (not a business connection), I bought a domain (at Namecheap) and then set up DynDNS with my domain provider and my router. This was relatively easy (with Namecheap and a FritzBox).
I added a DNS entry that forwards all subdomains to my DynDNS name. Any software I want to run, I simply install in a Docker/Podman container and set up a reverse proxy to the container via Nginx. This lets multiple applications share the HTTP(S) ports from the outside via subdomains, so the URL doesn’t need a port.
I can post the specs later, with an edit.
Edit: I don’t know how much the electricity costs, because I currently don’t pay for the electricity. But I have a 200W power supply and the machine idles most of the time, as the services are only used sporadically.
First of all, you need your ISP to actually give you an IP that points back to your home network. It’s not uncommon for your IP to point to some ISP NAT (CGNAT) that routes the internet to many houses, making it impossible to expose a device on your network to the internet.
That was my case; I had to call them and ask for an IP that goes directly to my gateway.
After that you can go to your gateway and set up port forwarding from the internet to your server in your home. For example, you can forward port 80 from the internet to your server’s private IP on port 80, so when someone browses to your IP they get whatever page is hosted on your server.
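A quick way to check the forward actually works is to listen on the forwarded port and watch what connects; for example, a throwaway listener like this (binding port 80 needs root, otherwise forward to a higher port):

```python
# Throwaway listener to verify a port forward: run it on the server, then open
# http://your-public-ip/ from outside your network (e.g. a phone on mobile data).
import socket

PORT = 80  # must match the internal port you forwarded to

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen()
    print(f"Listening on port {PORT}; every line below is a connection that made it through")
    while True:
        conn, addr = srv.accept()
        with conn:
            print(f"Connection from {addr[0]}:{addr[1]}")
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nok\n")
```

Leave it running for a while and you’ll also see why everyone here keeps warning about security: random scanners from all over the internet will show up within minutes.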
About server tech specs, it depends on what you want to host. I used to host a personal Nextcloud server on a Raspberry Pi, which is really power-efficient and cheap to maintain. Maybe you’ll want a server with higher specs that draws more power. It really comes down to what you want to do specifically.
None of that is needed with Cloudflare Zero Trust + tunneling. You simply install an agent on the machine and, in the Cloudflare portal, configure the internal IP:port you want to expose, pointing it at a domain you own. You can even put password protection in front of internal IPs if you want.
Of course you can use a reverse proxy to expose your apps to the internet.
Here’s another similar solution that you can self host in a cheap cloud VM:
That also requires you to manage updates and security on that device… If you want less work, not more, Cloudflare Zero Trust is a great free solution. I used to use Nginx Proxy Manager, which is a free reverse proxy, but again, this way it’s one less machine to worry about. I literally just migrated days ago and I couldn’t be happier.
That’s a question ChatGPT would excel at… It sounds like it might be too big a task for your shoes right now, but here goes (100% organic reply, mind you):
- get a Raspberry Pi, install your web app, make sure you can reach it on the LAN
- forward the port your web app is running on to the Raspberry Pi (app is now reachable at IP:port from the internet)
- set up a domain with a provider that supports DynDNS, and set up a job on the Pi to regularly check in with the DynDNS provider and tell it your current WAN IP (app is now reachable on your own domain; success)
This is leaving out any security measures and you will want to take care of those before opening your home LAN to the world! Have fun learning!