Well, this is a hardware question more so than a Linux software one, but I figured I'd come here since I run Debian + Apache + vsftpd and I didn't want a bunch of Windows people commenting on something they're not familiar with.
OK, thus far I have been using a desktop as my web server/FTP server/file server. It's been running great for the past few years, but the internals have failed here and there, and while it's still running I am thinking about switching to something I know will last for the long haul. Enter my X200s. It was used as a temp file server at work for a while, but is now at home and I have contemplated switching it into these duties. But I'm wondering if the hardware is going to be good enough to run my website, FTP site, and file server for the next couple of years. Plus I'd love to lower the power consumption of my home data center.
The hardware is as follows:
SU9400
4GB ram
320GB 7.2k rpm HDD
eSata 2TB HDD (ftp and home file server)
So what do y'all think? Is it worth the effort to try this out?
-
Should work. Nothing there is really CPU intensive if done right and you don't get /.ed.
*Pets his nginx webserver on his tablet + script to convert movies on the fly* -
-
Yeah, I figured it would be fine. It runs Debian very well and can even run a VM or two pretty well. I guess I have my weekend workload set out for me.
Sent from my PG06100 using Tapatalk -
but I'm now looking at a lower-power dual-core setup (same as yours), so it'll have less heat and lower power consumption. -
I have a web server, an FTP server, a SAMBA server, and a DLNA server all running on my TP-Link OpenWrt router.
And it consumes the same amount of power as any other router but does a lot more. -
-
Enable etherwake on the Router to remotely boot your servers from WAN.
Move the FileShare and other less intensive services to the router.
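Under the hood, etherwake just broadcasts a Wake-on-LAN "magic packet": 6 bytes of 0xFF followed by the target MAC repeated sixteen times. A minimal sketch with a placeholder MAC; the actual command on the router would be something like `etherwake -i br-lan 00:11:22:33:44:55`:

```shell
# Build a WoL magic packet by hand to show what etherwake sends.
# The MAC address below is a placeholder, written without separators.
MAC=001122334455
# 6 bytes of 0xFF (12 hex chars), then the MAC 16 times over:
PKT=$(printf 'ffffffffffff'; for i in $(seq 16); do printf '%s' "$MAC"; done)
echo "packet is $(( ${#PKT} / 2 )) bytes"   # 102 bytes
```

The target machine also needs Wake-on-LAN enabled in its BIOS and NIC settings (e.g. `ethtool -s eth0 wol g` on the host) or the packet is ignored.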
How cool is that? -
-
I have one of those SFF PCs with the dual-core Atoms as my Ubuntu server. Aside from when I'm using it to build something, it's not slow at all for the task. I currently have it running as a TM target, a home-dir NFS server, and an nginx + Rails + Node web server, and it will convert my videos on the fly for playing on my tablet if needed (using ffmpeg as its engine).
-
LoL, then my SU9400 should crush Debian Squeeze + Apache + vsftpd + DynDNS via ddclient. Granted, I have a few other daemons to run, but I doubt I'll have any issues. To think I was worried about the hardware being enough!
-
The power savings alone are worth it!
-
I would just like to say that if this web server is public, I would be very uneasy about running it alongside a file server. At the very least, make sure you have hardened your system against attacks. If it's private, ignore my post.
-
If only that Windows server didn't have to be up 24/7, then I could do away with a MIPS router with NAS support.
-
I use DynDNS and ddclient for my dynamic IP.
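For anyone setting this up, a minimal /etc/ddclient.conf for a dyndns.org hostname looks roughly like this; the login, password, and hostname are placeholders:

```
protocol=dyndns2
use=web, web=checkip.dyndns.org
server=members.dyndns.org
login=your_username
password='your_password'
yourhost.dyndns.org
```

`use=web` makes ddclient ask an external checker for the current public IP, which is what you want behind a NAT router.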
-
So it's been a couple of days with it as my server, and I must say it was a transparent switch. A few apt-get commands and an rsync, and bam, my server was up and running.
-
ALLurGroceries Vegan Vermin Super Moderator
rsync ftw
congrats -
-
ALLurGroceries
OS X actually has rsync, it's saved me numerous times from Finder's crappy copy functionality.
-
ALLurGroceries
Dunno.
I see where you were going with that. -
Yeah
Code:rsync -a [email protected]:/etc/ /etc/
-
You can use dd or diskutil and do full clones.
As for backups/clones/transfers, OS X has two major advantages that I don't know of in other OSes:
1) Target Disk Mode. By pressing a key combo at boot, you can turn one Mac into an external hard drive, connect the two machines FireWire-to-FireWire, and the second machine can boot from it and clone it. Basically, I boot my MacBook as an HDD, plug it into a Mac mini, run some commands, and the mini becomes an exact clone of the MacBook.
2) 90% of the time there is no need to rebuild drivers for different hardware, because Apple has a limited hardware set. I've done rsyncs off Xserves to Mac minis as live failovers. When the Xserve fails, a cloned Mac mini boots right up without reconfiguring boot/kernel/drivers/X display. It then chugs along as if it were an Xserve.
You can have one set of clone images for multiple Macs. -
Sent from my EVO using Tapatalk 2 -
You forget that OS X is a certified UNIX operating system. It isn't just *NIX-like; it is 100% certified UNIX:
Register of Open Branded Products
It can do everything you say except SSH. You can't just copy over SSH settings; you need to set up cryptographic keys. But
SAMBA and Apache configs you can just copy over to a new build. In fact, I have Apache builds running on CentOS that I copy over to OS X via rsync/SSH.
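A sketch of what "setting up cryptographic keys" means in practice; the key path and the user@newserver target are placeholders:

```shell
# Generate a key pair locally (empty passphrase only for this demo):
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t rsa -b 2048 -f /tmp/demo_key -N '' -q
# Then install the public half on the new machine, e.g.:
#   ssh-copy-id -i /tmp/demo_key.pub user@newserver
ls /tmp/demo_key /tmp/demo_key.pub
```

The private key stays put; only the .pub file gets appended to the new box's authorized_keys, which is why SSH access can't simply be rsynced over like a config directory.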
This is not unique to OS X: if you are running a locked-down CentOS with Bastille, you can't just copy the files over like in your example.
But that isn't the point. Mac sysadmins prefer to restore clone images, which is faster than a clean build plus copying your settings over via SFTP/SSH, especially when you consider that the new server may need a bunch of dependencies installed.
So if it takes you 4 hours to install a new Linux box with bespoke modules and 5 minutes to SSH the settings files over, isn't restoring a clone on a Mac in 15 minutes faster?
Your builds must be very simple. Mine tend to be more complicated.
With Linux, take Ubuntu as the example (I can give a CentOS one): you need to install the OS, and if you are not running the server edition or doing a JeOS appliance build, you need to apt-get install lamp-server^ or use tasksel.
In all cases, all you get is the barebones PHP/MySQL LAMP stack.
Want NoSQL with MongoDB? First you need to add the PPA, authenticate it, update your apt list, and download; that takes more than 5 minutes. Want it to work with Apache and PHP? Then you need to add the PEAR library via PECL, then add the .so to your config file. The path to mongo.so may differ for each version, so you can't just copy the path from your source machine to your new one.
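Roughly the dance being described, sketched from memory of that era's Ubuntu; the repository line, key ID, and package names are illustrative, not exact:

```
# Add the 10gen repo and its key, then refresh apt:
apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' \
    > /etc/apt/sources.list.d/10gen.list
apt-get update && apt-get install mongodb-10gen php-pear php5-dev
# Build the PHP driver via PECL and wire it into the config:
pecl install mongo
echo 'extension=mongo.so' > /etc/php5/conf.d/mongo.ini
```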
Then if you want FFmpeg compiled for an 8-core box with CUDA-centric GPU optimization as a PHP module, with certain flags for h.264, you can't just apt-get whatever vanilla version Ubuntu wants to provide. You need to build from source, and that can take 1-2 hours. Same for ImageMagick, ODBC drivers, memcache, or anything bespoke as Apache modules.
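The FFmpeg part of that, sketched; the flags are examples of the kind of bespoke build meant here, not a recommended set:

```
# From an FFmpeg source checkout:
./configure --enable-gpl --enable-libx264 --enable-nonfree
make -j8      # parallel build, one job per core on the 8-core box
make install
```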
Trust me, you can't just rsync/SFTP your /etc/apache2 config files in less than 5 minutes. Many of those modules need to be downloaded and compiled.
The linked .so files often have version-specific names, so you still need to redo symlinks.
On a Mac, you already have a working clone, and it takes 15 minutes to set up a new machine with all those specific build requirements.
In your use-case scenario, I can only see it being faster if you already use a pre-built appliance from somebody like TurnkeyLinux.
For my Linux installs, I have a JeOS build, and it is as plain-jane as you can get, with just LAMP and the full OS under 600 MB.
I'm not saying macs are better. I use both platforms and see the strengths in both.
Calling all webserver admins.
Discussion in 'Linux Compatibility and Software' started by Thaenatos, Mar 20, 2012.