Saturday, January 29, 2011

Server Hardware Recommendation

I'm ready to buy a few servers and I'm wondering what the best setup would be.

My site serves about 1M PHP pages a day, and the database has about 4M rows that are constantly growing (60% read, 40% write, 100 qps).

I just need to know what exactly would be the best hardware for what I'm doing.

I use CentOS, MySQL 5, and lighttpd.

I have the money to invest in good hardware; at the moment I'm looking at:

http://www.newegg.com/Product/Product.aspx?Item= N82E16816101260 N82E16819117185 N82E16820148259 N82E16822116059

(I had to list them like this because I can't post 4 links.)

Would this be a compatible setup? Mainly I want to move away from renting boxes and colocate instead.

  • You'll probably want a dual quad-core setup; Intel Harpertown processors are quite affordable, but Nehalem processors with Hyper-Threading (HT) will offer more processor threads.

    RAM is important: 12GB or more, so DB queries don't hit the disks too often.

    Buying several SAS (Serial Attached SCSI) drives and placing them in RAID-10 will give you added speed and fault tolerance. You will need at least 4 drives for a proper RAID-10 setup (a layout sketch follows this answer).

    Beyond this, you will also need to keep MySQL and your queries optimized if they are not already (a rough my.cnf sketch also follows this answer).

    From gekkz
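
To make gekkz's RAM and tuning advice concrete, here is a minimal sketch of the kind of InnoDB settings being described. The values are assumptions for illustration only, not recommendations for this specific workload; size the buffer pool to the real dataset and leave headroom for the OS.

    # /etc/my.cnf (illustrative values only)
    [mysqld]
    # Keep the working set in memory; a common rule of thumb on a dedicated
    # database box is roughly 70-80% of physical RAM.
    innodb_buffer_pool_size        = 8G
    # Larger redo logs help a 40%-write workload; on MySQL 5.0/5.1, changing this
    # requires a clean shutdown and removing the old ib_logfile* files first.
    innodb_log_file_size           = 256M
    # Flush to disk once per second: trades a little durability for write throughput.
    innodb_flush_log_at_trx_commit = 2

And a sketch of the four-drive RAID-10 layout mentioned above, using software RAID purely to illustrate the minimum layout; the device names are made up, and the later answers recommend a hardware controller with a battery-backed write cache instead.

    # Four hypothetical disks (sdb-sde) striped across two mirrored pairs.
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext3 /dev/md0          # or ext4 on newer CentOS releases
    cat /proc/mdstat            # confirm the array is active and rebuilding
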
  • You can do a lot with 1 or 2 web servers with 1-4GB of RAM and minimal disk. Spend your dollars on the MySQL box for now. The more of your dataset that can be crammed into RAM, the happier MySQL will be. MySQL doesn't utilize many cores well, so raw CPU speed (MHz) matters more than the number of CPUs/cores.

    We can't give specific recommendations without considering your application (do you use memcached or another caching layer, and if not, why not? What are your redundancy needs? How much of the app is static or semi-static vs. dynamic?). A quick memcached sketch follows this answer.

    If your app is straightforward, you might want to just consider cloud hosting it. Rackspace, Amazon and others offer affordable services that allow you to ramp the number of systems you have online up and down. This can be very cost-effective, especially if your load changes a lot (Slashdot/Digg effects) or you have limited information about how your app usage will grow over time.

    From mfarver
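
For mfarver's memcached question, here is a rough operational sketch, with placeholder sizes and addresses, of what bolting on a cache layer and checking whether it actually relieves MySQL might look like.

    # Start a 256MB memcached instance (size and port are placeholders).
    memcached -d -m 256 -p 11211 -u nobody
    # After pointing the PHP app at it, watch the hit/miss counters.
    printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | grep -E 'get_hits|get_misses'
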
  • That's only an average of about 11.5 req/sec (1M requests / 86,400 seconds), which, assuming normal traffic patterns, will mean 25-30 req/sec peak on a normal afternoon. Even if you assume 10x peaks, that could run anywhere from a 512MB Slicehost VM ($40/month) to an 8-core/64GB/6-disk-RAID-10-with-BBWC box ($12K up front, plus ongoing colo/bandwidth/power per month). It will vary widely with what your PHP scripts are doing and your DB schema.

    You mention your traffic in the present tense, so presumably your existing server is handling the workload and you're looking to improve performance? If you are not currently swapping, it's unlikely you'll need more memory on a new server, though you'll probably want some room for growth (a quick check is sketched after this answer).

    Just about the only no-brainer I can recommend without more info is that if you intend to keep this on your own hardware, make sure it's RAID-10 on a controller with a battery-backed write cache.

    From cagenut
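
The arithmetic and the swap check cagenut mentions are easy to reproduce on the existing box; a rough sketch, assuming a standard CentOS userland:

    # Back-of-the-envelope average: 1,000,000 requests / 86,400 seconds per day.
    echo $((1000000 / 86400))    # ~11 req/sec
    # Before buying more RAM, confirm whether the current server is actually swapping.
    free -m
    vmstat 1 5                   # non-zero si/so columns mean you are hitting swap
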
  • I would recommend spending the extra money and getting a system from one of the big guys (Dell, HP, IBM, etc.). When you buy a server from these guys you get a support contract, and that support contract gives you replacement parts within hours of a failure so that you aren't waiting days for a new part to arrive. Unless, of course, you want to be down while you wait for a new motherboard to ship out after a BIOS upgrade bricked the thing.

    Also, you get some assurance that the RAM, board, disk controllers, etc. will all work together without any funky driver issues, as the vendor will already have taken care of this for you.

    From mrdenny
  • I hate to say this, but you don't mention anything about what is probably the most important piece: is your application designed to scale out to a farm of multiple servers? If it can't be split up well, you're going to hit a fairly hard cap on your capacity; you can go higher, but you'll be climbing the expensive part of the cost/performance curve (a load-balancing sketch follows this answer).

    (posted as an answer because I lack the level to comment instead)

    From fencepost
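
Since the stack already uses lighttpd, fencepost's point about scaling out to a farm might look roughly like the sketch below: one front end proxying PHP requests to two backend web servers. The addresses are invented and this assumes lighttpd 1.4 with mod_proxy; it also only works once sessions and uploads stop living on a single server's local disk.

    # lighttpd.conf sketch (addresses are placeholders)
    server.modules += ( "mod_proxy" )
    # Spread .php requests round-robin across two backend servers.
    proxy.balance = "round-robin"
    proxy.server  = ( ".php" => (
        ( "host" => "10.0.0.11", "port" => 80 ),
        ( "host" => "10.0.0.12", "port" => 80 )
    ) )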
