Upload Bandwidth Cap
April 20, 2007
I’m currently uploading a 100MB file, which will take about an hour at a sustained rate of 30KB/sec. This is intolerably bad, and there is no (legal) way around the upload cap set by the ISP. So how can we improve bandwidth? I can see 16 other wireless APs in my apartment building. If we dynamically constructed a P2P mesh network, each AP could upload at 30KB/sec. If all 16 contributed, that’s 16 × 30KB/sec = 480KB/sec, and the file would be uploaded in about 3.5 minutes. If only 3 contributed, it would upload in 19 minutes.

The problem is that the file must be reconstructed somewhere before it can be sent to its final destination. Assume the file is chopped into 33MB chunks and all 3 contributors start sending. The chunks will arrive out of sequence, so you need a server somewhere that gathers the chunks and puts the file back together on some high-speed storage out on the Internet (e.g. Amazon’s S3). It has already been done by P2P networks; now we just need a wireless mesh network. And some nice neighbors.
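The reassembly step is the easy part. Here’s a minimal sketch of the idea (the function names are mine, not any real P2P library’s): tag each chunk with its index before handing it to a contributor, and the server just sorts by index and concatenates, no matter what order the chunks arrive in.

```python
def split(data, chunk_size):
    """Chop the file into (index, bytes) chunks so out-of-order arrival is fine."""
    return [(i, data[off:off + chunk_size])
            for i, off in enumerate(range(0, len(data), chunk_size))]

def reassemble(chunks):
    """Server side: sort by index and concatenate to rebuild the original file."""
    return b"".join(part for _, part in sorted(chunks))

# Chunks may arrive in any order; the index restores the sequence.
original = bytes(range(256)) * 4
chunks = split(original, chunk_size=100)
chunks.reverse()  # simulate out-of-sequence arrival
assert reassemble(chunks) == original
```

The only real requirement on the protocol is that the chunk index travels with the chunk; everything else is plumbing.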
While I was typing this, I had uploaded half my file when IE suddenly crashed. Now I have to start over. Why can’t DropSend implement resume after failure? The technique has been around for 20 years, since before NcFTP did it. They could just stick the code into their desktop client.
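Resume is not hard either. The core of the technique (what FTP’s REST command does, and what an HTTP Range upload would do) is: ask the server how many bytes it already has, seek past them in the local file, and send only the remainder. A sketch, where `send` and `server_received_bytes` are hypothetical stand-ins for the real transport:

```python
def upload_with_resume(path, send, server_received_bytes):
    """Resume an interrupted upload.

    `server_received_bytes` reports how much the server already stored
    (hypothetical stand-in for e.g. FTP's SIZE/REST handshake);
    `send` transmits one chunk (stand-in for the real socket write).
    """
    offset = server_received_bytes()   # bytes that survived the crash
    with open(path, "rb") as f:
        f.seek(offset)                 # skip what already arrived
        while chunk := f.read(64 * 1024):
            send(chunk)
```

After a crash you just call it again: the offset query makes the retry send only the missing tail instead of the whole file.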