Slow transfer over LAN #5037
are you sure that you're requesting it off the LAN and not the internet? there's a chance your machine could be fetching it off the internet. why are you running the garbage collect before attempting to download the file?
It could be fetching off the internet, but I would still expect it to be fast. My hunch right now is it's CPU-bound due to older hardware. [edit: The report of 5000% CPU usage might just be a display bug with htop over ssh.]
Ah yeah, if swarm can find the peer then it should be connecting to the peer.
@piedar hrm... another thing to try that would help us debug is to run the daemon fetching the file with `--routing=none`.
Yes @whyrusleeping, it's certainly faster without the routing!
That's very interesting... It would appear then that getting PR #4333 merged should help transfer speeds overall.
(well, that PR and its follow-ups)
Could DHT announce be adjusted to run in the background, only when there is no active transfer in progress? Though maybe that's too much complexity if the root cause can be solved by speeding up the operation in general. Anyway, I'll run this test every couple of versions and report back if the results change significantly.
@piedar running the DHT announce in the background is pretty much what we want to do. The main sticking point for why that hasn't happened yet is that, technically, that's what's happening right now. The reason it's slowing things down is that there is backpressure from the DHT provide process, which slows down anything that sends hashes to it. Since bitswap fetches each block of a graph independently, it sends one provide call per hash (which can be millions of calls). The change we need to make is to make the DHT providing process a bit smarter, so we can tell it 'here are the objects/pins we care about, make sure the world knows' and it can enumerate hashes on demand (entirely separate from the process of us receiving them).
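For a sense of the scale involved, `ipfs refs -r` enumerates every block reachable from a root, and under the current behavior each of those blocks triggers its own provide call. A quick way to count them, where `<hash>` is a placeholder for the root of a large pinned file:

```sh
# Count the blocks reachable from a pinned root; with the current
# behavior, each block results in a separate DHT provide call.
ipfs refs -r <hash> | wc -l
```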
:) Please do! This is so helpful to us
Yes, this would be a tremendous improvement. Please include this fix in an upcoming release.
I faced this issue too. It seems it does not download from local peers.
I could confirm this issue today with go-ipfs 0.4.15 under Linux openSUSE x64.

The setup: I have two computers connected to the same home router, mine in one room and my mother's in the hallway. The daemon on mine contains a group of large video files (for DTube) which are pinned, I'd estimate 10 GB in total (the size of my ~/.ipfs directory). I ran the bash script I created to pin this list of videos on my mother's computer, with the daemon on mine also running, since I thought that would cause the files to be served more quickly.

The result: despite being directly connected by a 10 MB/s or 100 MB/s cable, the pinning process hadn't finished after over 6 hours. Judging by my network traffic monitor, my computer appeared to only serve content periodically: for roughly 5 seconds I'd see it sending data at over 1 MB/s, after which the transfer rate would drop to roughly 300 KB/s or less and stay there.

I know the two daemons were exchanging data over LAN because one of them was posting generic errors, all about IPs of the format 192.168.0.1 (the local IPs assigned to our machines by the router). I immediately found that surprising but thought I must be missing something else. I asked on the IRC channel and someone pointed me to this bug. I figured sharing this experience might help.
So, make sure you're not confusing megabits and megabytes. Those cables are probably 10 Mbps and 100 Mbps, 8x slower than 10 MB/s and 100 MB/s. For comparison, I'd try connecting the two machines with netcat and piping data directly over that connection to measure the actual transfer speed. However, that still looks wrong.
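A minimal raw-TCP throughput check along those lines, assuming the receiving machine's LAN address is 192.168.0.2 and port 5001 is free (both placeholders; flag syntax also varies between netcat variants, and this uses the traditional `-l -p` form):

```sh
# On the receiving machine: accept a connection and discard the data.
nc -l -p 5001 > /dev/null

# On the sending machine: push 1 GiB of zeros through the link;
# GNU dd prints the achieved throughput when it finishes.
dd if=/dev/zero bs=1M count=1024 | nc 192.168.0.2 5001
```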
I was referring to megabytes, of course. Both machines have classic hard drives, no SSD yet. Resource usage: on my mother's old computer (slow single-core CPU), the IPFS process kept using roughly 40% CPU; memory-wise it was over 350 MB. I used netcat in an unrelated test weeks ago and have since rather forgotten how to use it, but I could look into it again. Those `ipfs stat` commands seem like a better test; I might look into them first.
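For reference, the daemon's own bandwidth counters can be watched live, which avoids guessing from a system-wide network monitor. A quick way, assuming a reasonably recent go-ipfs:

```sh
# Poll the daemon's bandwidth counters once per second.
ipfs stats bw --poll --interval 1s

# Summarize bitswap activity (blocks and data received, duplicates, wantlist).
ipfs bitswap stat
```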
This is a screenshot from KSysGuard on my mother's computer showing the network transfer rate. This is all go-ipfs; no other process should have been sending or receiving any significant amount of data. Sharing it here because the transfer rate is abnormally erratic: one moment it's receiving at over 1 MB/s, the next at 100 KB/s. I see no explanation as to why it wouldn't stay above 1 MB/s the whole time.
Hm. Yeah, that doesn't look right at all. I'd expect it to be a bit erratic (known issues) but not that slow.
@MirceaKitsune Have you tried running the receiver with `--routing=none`?
I have the same issue. Freshly installed.
@hannahhoward
This issue predates the bitswap refactor. There are still known issues, but nothing here is likely to be relevant.
It takes over 1 minute to transfer 21 MB between two machines on a local network. For most of this test, the receiver's `ipfs daemon` uses 150% CPU, suggesting it bottlenecks the old dual-core hardware. However, `ipfs add` hashes new 30 MB files in under 10 seconds. I don't yet understand why the network transfer adds so much overhead.

Workaround: run the slow node with `ipfs daemon --routing=none`.

Sender

Receiver
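A plausible reconstruction of the sender and receiver steps as described in this issue, assuming a 21 MB file named `video.mp4` and its returned hash `<hash>` (both placeholders, since the original listings aren't shown here):

```sh
# Sender: add and pin the file, noting the hash it prints.
ipfs add video.mp4

# Receiver: drop any cached blocks first, then time the fetch over the LAN.
ipfs repo gc
time ipfs get <hash>
```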
After a few seconds, the transfer starts, but it stutters at 1.38 MB, 3.75 MB, 7.50 MB, etc. Without `ipfs repo gc` it takes about 5 seconds. As `iperf` shows, the network connection is fine.
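For anyone reproducing this, a baseline measurement of the link with classic iperf, assuming the receiver at 192.168.0.2 (a placeholder) runs the server:

```sh
# On one machine, start the iperf server.
iperf -s

# On the other, run a TCP throughput test against it (10 seconds by default).
iperf -c 192.168.0.2
```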
X-Post: discuss.ipfs.io