<br><br><div class="gmail_quote">On Sun, Jul 10, 2011 at 3:02 PM, Mike Miller <span dir="ltr"><<a href="mailto:mbmiller%2Bl@gmail.com">mbmiller+l@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="im">On Sat, 9 Jul 2011, Robert Nesius wrote:<br>
>
>> I'd use sftp over scp for big files like that. You might also use wget
>> (through an ssh tunnel or some other means), as wget can resume
>> downloads that didn't complete without retransmitting bits already
>> sent. (I'm not sure whether sftp does the same; it's worth reading the
>> man page to see if it does.)
>
> It doesn't look like sftp/scp can do that. wget does it with the -c
> option, but for me it is stalling a lot, roughly every 20 MB. sftp did
> the whole file on the first attempt and with good speed. I'm using wget
> just to test that I get the exact same file after stalling 20 times.
> This result just in: the md5sums are identical, so "wget -c" did the
> job. It would be a much bigger hassle if I had to establish an ssh
> tunnel every time I restarted wget, but this wasn't a secure transfer,
> so wget was fine.
>
> I wonder if they changed something in the network at the U. I didn't
> use to have this kind of problem, but now it's happening all the time.

I'd be less suspicious of the U and more suspicious of the ISPs. Bummer
that wget kept stalling too, but at least you have it in your back pocket
now for a similar situation in the future.
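
For anyone following along, here is a rough sketch of the resume-and-verify
workflow discussed above; the hostnames, path, and port are placeholders,
not the actual machines from this thread:

    # Resume an interrupted HTTP download; re-running the same command
    # continues from wherever the partial file left off.
    wget -c http://downloads.example.edu/pub/bigfile.tar.gz

    # Compare against a checksum computed on the server side.
    md5sum bigfile.tar.gz

    # If the transfer has to go over ssh, forward a local port to the
    # remote web server and point wget at the tunnel instead:
    ssh -N -L 8080:downloads.example.edu:80 user@login.example.edu &
    wget -c http://localhost:8080/pub/bigfile.tar.gz

(-c only helps when the server honors byte-range requests, which most
HTTP servers do.)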

-Rob