Hiawatha and NFS server

18 July 2009, 02:21
Hi Hugo,

I have been continuing to use Hiawatha on the file repository. I wanted to move a huge amount of data from the main server to a backup server. This is what I did:
- Enabled NFS server on the main server
- Exported the /var/www/html (webroot) as a shared folder
- Ran NFS client from the backup server
- Started copying from the NFS share to another folder on the backup server
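The steps above can be sketched roughly as follows. This is only an illustration: the hostnames (`main-server`, `backup-server`) and the target folder `/backup/webroot` are placeholders, and the exact way to start the NFS services varies by distribution.

```
# On the main server: export the webroot, e.g. with a line in /etc/exports:
#   /var/www/html  backup-server(ro,sync)
# then reload the export table:
#   exportfs -ra

# On the backup server: mount the share and copy it to a local folder:
#   mount -t nfs main-server:/var/www/html /mnt/webroot
#   cp -a /mnt/webroot/. /backup/webroot/
```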

When running "top" on the main server, I noticed that the CPU usage of Hiawatha went way up - almost to 100%.

I wanted to compare it with Apache, so I started Apache instead of Hiawatha and did the same thing as above. Apache's CPU and memory usage did not show up as high in "top", but looking at "ps auxf", I found that Apache had started 32 instances of itself.

This is not a bug report, just an observation of what I found. I'm not sure how NFS impacts the HTTP server - maybe because they are using the same folder.

BTW, do you have any suggestions/ideas on how to copy large amounts of data between these VMs in a fast/multithreaded way? I know... this is outside Hiawatha's scope.

Hiawatha version: 6.15
Operating System: CentOS 5
Hugo Leisink
18 July 2009, 03:22
I don't understand. Hiawatha used 100% CPU because you were using NFS?? What has NFS got to do with a webserver??

Copying a large amount of data: tar and write to netcat. On the receiving machine, untar from netcat.
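That tip can be sketched as follows. The hostname `backup-server` and port 9000 are placeholders, and the listen flags differ between netcat builds (some need `nc -l -p 9000` instead of `nc -l 9000`). To keep the sketch runnable anywhere, the same tar pipeline is also demonstrated locally, with a plain pipe standing in for netcat:

```shell
# Over the network (placeholders, shown for illustration):
#   on the backup server:  nc -l 9000 | tar -xf - -C /backup/webroot
#   on the main server:    tar -cf - -C /var/www/html . | nc backup-server 9000

# The same tar streaming pipeline, run locally through a plain pipe:
mkdir -p /tmp/tarpipe/src /tmp/tarpipe/dst
echo "index" > /tmp/tarpipe/src/index.html
# Pack the source tree to stdout, unpack it from stdin at the destination.
tar -cf - -C /tmp/tarpipe/src . | tar -xf - -C /tmp/tarpipe/dst
cat /tmp/tarpipe/dst/index.html
```

Because tar streams a single archive through the pipe, there is no per-file protocol overhead, which is what makes this fast for large trees of files.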
Hugo Leisink
18 July 2009, 13:14
I did a download test myself. One large file from my server to my laptop. Hiawatha's load was 1.0, the server's load was 0.25.

The big difference between Hiawatha and Apache is that Hiawatha is multithreaded, while Apache preforks. You can see this difference in load when doing multiple simultaneous downloads: one Hiawatha process does all the uploads, while each of the many Apache forks does only one or a few. The total load is the same; it is just divided over all the running processes.
This topic has been closed.