XFileSharing Pro - alternative web server beside apache - Page 2

sherayusuf3
Posts: 94
Joined: Jan 18, 2009 4:29 am

#16 Postby sherayusuf3 » Nov 16, 2009 11:02 am

cuty wrote:There are benchmarks out there with that module. You don't have to take my word for it.

Note: the more you increase max connections, the more resources get sucked up, so have a beast of a server.

Using nginx produces a considerable difference in CPU usage, but I still prefer to keep Apache on some servers, because when the CPU load gets fairly high (i.e. 30 points) under nginx, or a DDoS attack pushes it there, nginx times out or takes up to 3 minutes to start the download. With Apache the load has to reach about 900 before you notice.
I already tested XFileSharing and it works perfectly on the LiteSpeed web server, but it is not free. Maybe you can use it.

OK, try tuning your system kernel.

For a large hard disk, add to /etc/rc.local:

Code: Select all

echo 1024 > /sys/block/sda/queue/read_ahead_kb
echo 256 > /sys/block/sda/queue/nr_requests
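If the box has more than one disk, the same two values can be pushed to every device with a small helper. This is only a sketch (tune_queues is a hypothetical name, not from this thread); it assumes sysfs-style device/queue directories. Point it at a scratch directory to dry-run it, or at /sys/block as root to apply it for real:

```shell
# tune_queues DIR: write the read-ahead and queue-depth values into
# DIR/<device>/queue/{read_ahead_kb,nr_requests} for every device under DIR.
tune_queues() {
    for q in "$1"/*/queue; do
        [ -d "$q" ] || continue
        echo 1024 > "$q/read_ahead_kb"
        echo 256  > "$q/nr_requests"
    done
}
```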

For 512 MB of RAM or more, add to /etc/sysctl.conf:

Code: Select all

vm.swappiness=50


kernel.sem = 250 32000 100 128
# total shared memory limit, in pages
kernel.shmall = 2097152
# a single application can allocate up to 2 GB of shared memory
kernel.shmmax = 2147483648
# maximum number of shared memory segments
kernel.shmmni = 4096

fs.file-max = 256000

vm.vfs_cache_pressure = 50

net.core.rmem_default = 16777216
net.core.rmem_max = 16777216
net.core.wmem_default = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 10240 87380 16777216
net.ipv4.tcp_wmem = 10240 87380 16777216
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.core.netdev_max_backlog = 5000 
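Once the lines are saved, the values can be loaded without a reboot (run as root; the sysctl utility is standard on these systems):

```shell
# Reload every setting from /etc/sysctl.conf
sysctl -p

# Spot-check that a value actually took effect
sysctl net.core.rmem_max
```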

Please post your output of:

Code: Select all

cat /proc/sys/vm/overcommit_memory 
cat /proc/sys/vm/overcommit_ratio 

PilgrimX182
Posts: 2186
Joined: Mar 22, 2006 1:39 pm

#17 Postby PilgrimX182 » Nov 16, 2009 5:42 pm

sherayusuf3, my personal thanks for sharing this.

cuty
Posts: 103
Joined: Apr 14, 2009 11:20 pm

#18 Postby cuty » Nov 16, 2009 9:16 pm

sherayusuf3 wrote:
[... full kernel tuning settings quoted from post #16 above ...] please post your output of:

Code: Select all

cat /proc/sys/vm/overcommit_memory 
cat /proc/sys/vm/overcommit_ratio 
why would I do this? I'm not having any issues.

sherayusuf3
Posts: 94
Joined: Jan 18, 2009 4:29 am

#19 Postby sherayusuf3 » Nov 17, 2009 1:48 am

PilgrimX182 wrote:sherayusuf3, my personal thanks for sharing this.
You're welcome, PilgrimX182.
cuty wrote:why would I do this? I'm not having any issues.
Let me explain.

Code: Select all

# a single application can allocate up to 2 GB of shared memory
kernel.shmmax = 2147483648 

Code: Select all

# maximum number of shared memory segments
kernel.shmmni = 4096 
The kernel parameter SHMMAX is too small by default on Linux.
As a consequence, an Informix instance that needs to allocate one shared memory segment of 320 MB will actually use 10 Linux segments of 32 MB each. This causes a performance problem due to excessive operating-system shared memory allocation.
A value set at runtime will be lost when the machine reboots. To make it take effect on every reboot, edit the file /etc/sysctl.conf and add the kernel.shmmax line shown above.

Why does your Apache consume so much memory? Can you explain?

Code: Select all

vm.swappiness=50 
vm.swappiness controls how aggressively the kernel swaps (0 = avoid swapping, 100 = swap aggressively); 50 is a middle setting, not a 50:50 split of physical RAM to swap space.
If on average you have a lot of memory free (or used for cache), you probably want to prevent your system from swapping. This is done by setting vm.swappiness to 0. In my personal experience this is what you want for Ubuntu with 512 MB of memory or more.
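Before committing a value to /etc/sysctl.conf you can experiment at runtime (as root); these are standard procfs/sysctl operations:

```shell
# Check the current value
cat /proc/sys/vm/swappiness

# Change it on the fly, without a reboot
sysctl -w vm.swappiness=50
```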

Code: Select all

echo 1024 > /sys/block/sda/queue/read_ahead_kb
echo 256 > /sys/block/sda/queue/nr_requests
As Linux comes out of the box, the kernel does not expect to work with a single large disk several TB in size. A few tuning parameters come to the rescue. Note, though, that the effect of these settings depends heavily on the workload of your machine; larger numbers are not necessarily better.

First, increase the read-ahead of your RAID device. The number is given in kilobytes. The default is 128 KB; we set it to 1 MB in this example.

The 2.6.x Linux kernel has seen quite some work put into optimizing disk access by properly scheduling the I/O requests. The queue depth of the device and the scheduler seem to interact somehow. I have not looked at the code, but mailing-list evidence suggests that things work better if the device queue depth is lower than the scheduler depth, hence the nr_requests setting above.
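Both knobs can be inspected per device; sda here is just an example device name:

```shell
# The active I/O scheduler is shown in square brackets
cat /sys/block/sda/queue/scheduler

# Current device queue depth
cat /sys/block/sda/queue/nr_requests
```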

You can use it if you want; I'm not forcing you to.
cuty wrote:why would I do this? I'm not having any issues.
My bad, sorry. I thought you were the thread starter.

cuty
Posts: 103
Joined: Apr 14, 2009 11:20 pm

#20 Postby cuty » Nov 17, 2009 2:07 am

I have no doubt that the info you copied here is very useful.

thanks for sharing

ihavenoidea
Posts: 19
Joined: Oct 14, 2009 11:21 pm

#21 Postby ihavenoidea » Dec 06, 2009 7:55 pm

Has anyone tried using SpeedyCGI?
The heaviest process on my main server is index.cgi, which is literally KILLING the server (dual Xeon 3 GHz, 8 GB RAM).

chennaihomie
Posts: 8
Joined: Dec 19, 2009 5:59 pm

#22 Postby chennaihomie » Dec 19, 2009 6:14 pm

There is a fair amount of tuning that can be done on a server. Some basic things are:

1) Remove all unnecessary Apache modules.
2) Use hosted DNS with your domain registrar. This can offload some load from the server.
3) Stop all unnecessary services on the server.
4) Increase max connections in Apache. This raises the number of simultaneous connections but will increase your memory usage.
5) Serving static files through lighttpd or nginx can offload a lot of load.
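For point 5, a minimal sketch of what such an nginx server block could look like; the hostname and paths are made up for illustration, not taken from this thread:

```nginx
# Hypothetical vhost that serves only static files, leaving Apache to handle CGI.
server {
    listen      80;
    server_name static.example.com;
    root        /var/www/static;

    # Let browsers cache static assets for a month
    location ~* \.(jpg|jpeg|png|gif|css|js|zip)$ {
        expires    30d;
        access_log off;
    }
}
```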

komi
Posts: 161
Joined: Nov 27, 2009 12:41 pm

#23 Postby komi » Dec 19, 2009 6:56 pm

chennaihomie wrote:5) Serving static files through lighttpd or nginx can offload a lot of load.
Hi, can you tell us how this can be done with a working XFS? I'm interested in doing this.

chennaihomie
Posts: 8
Joined: Dec 19, 2009 5:59 pm

#24 Postby chennaihomie » Dec 19, 2009 9:52 pm


komi
Posts: 161
Joined: Nov 27, 2009 12:41 pm

#25 Postby komi » Dec 19, 2009 10:20 pm

Thanks. Did you already manage to do this on your own XFS site?

chennaihomie
Posts: 8
Joined: Dec 19, 2009 5:59 pm

#26 Postby chennaihomie » Dec 19, 2009 10:24 pm

No, my site only gets about 2k visits per day (the max has gone up to 6k) and it manages easily on Apache. I might implement this in the future. I have seen sites implement this for static content delivery (image hosting sites).

komi
Posts: 161
Joined: Nov 27, 2009 12:41 pm

#27 Postby komi » Dec 19, 2009 11:40 pm

I understand. Thanks for the info.

sherayusuf3
Posts: 94
Joined: Jan 18, 2009 4:29 am

#28 Postby sherayusuf3 » Sep 01, 2010 2:11 pm

Up Up Up

Bump, guys!!!