This repository has been archived by the owner on Dec 14, 2018. It is now read-only.

FileStreamResult very slow #6045

Closed
angelsix opened this issue Mar 29, 2017 · 61 comments

@angelsix

On a local machine, create a brand new ASP.NET Core website.
In any controller action do:

        new FileExtensionContentTypeProvider().TryGetContentType("test.m4v", out string contentType);
        Response.Headers["Content-Disposition"] = "attachment; filename=\"movie.m4v\"";
        return new FileStreamResult(System.IO.File.OpenRead(@"C:\somemovie.m4v"), contentType);

This will pop up a download save dialog on request. The local SSD can do 1800MB/s, yet the ASP.NET Core download tops out at 17MB/s.

@benaadams
Contributor

On a local box you are doing both the send and the receive, so 17MB/s is a network transfer rate of 136Mbit/s plus both a disk read and a disk write.

Are you running in release mode (-c Release)? Are you going direct to Kestrel, or via IIS/IIS Express/Nginx? Are you running from the command line or from Visual Studio?

Is the file being virus scanned (#6042 (comment)) on both load and save?

Are you downloading directly, or via a browser that is doing its own secondary scanning, etc.?

@angelsix
Author

Tried direct through Kestrel, and also via IIS and IIS Express; all give the same results.

Disabled all antivirus.

Tried debug and release mode.

Tried command line and VS.

Tried on a Windows 10 machine (64GB RAM, i7 6950X, SSD) and on a watercooled Windows Server 2016 beast.

Tried via browser (Firefox), as that's how users will download from the website.

Tested the SSD: manually copying the 1GB file takes less than a second.

If I read it into memory first:

            var memory = new MemoryStream();
            result.Response.ContentStream.CopyTo(memory);
            memory.Seek(0, SeekOrigin.Begin);

            // Return the file stream
            return new FileStreamResult(memory, result.Response.ContentType);

Then I get 100MB/s if the file is < 900MB. Above that it goes back to 17MB/s or worse (probably due to having over 1GB of stuff in RAM).

Still, it should download at well over that. I can download files from the other side of the world on a loaded server at 210MB/s, so locally I should be seeing practically the SSD speeds, or at worst 200MB/s, not 17.

@pranavkm
Contributor

Does setting up the FileStream to be read async (https://github.com/aspnet/FileSystem/blob/dev/src/Microsoft.Extensions.FileProviders.Physical/PhysicalFileInfo.cs#L48-L57) make a difference?

@angelsix
Author

I'm also using the very latest VS and dotnet 1.1.1

@angelsix
Author

Setting it up as:

new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite, 1, FileOptions.Asynchronous | FileOptions.SequentialScan);

This slows it down to 7MB/s.

Increasing the buffer to 10240 increases it to 39MB/s.
Increasing the buffer to 102400 increases it to 92MB/s.
Increasing the buffer to 409600 causes the same stuttering effect as loading over 1GB into RAM: it pauses, goes, pauses, goes, and averages 40MB/s.

So we are getting somewhere. It is how it's reading the file from the disk: reading the entire file from that same stream using CopyTo is instant (as fast as reading it from the SSD), but passing that same stream into FileStreamResult gives much worse performance.
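
For clarity, this is roughly the check I'm doing when I say CopyTo from the same stream is instant (a quick sketch, same path as above):

// Quick sketch: time a raw sequential read of the same file, outside of MVC.
var sw = System.Diagnostics.Stopwatch.StartNew();

using (var file = new FileStream(@"C:\somemovie.m4v", FileMode.Open, FileAccess.Read, FileShare.ReadWrite,
    1, FileOptions.Asynchronous | FileOptions.SequentialScan))
{
    // Copy to a throwaway destination so only the disk read is measured.
    file.CopyTo(Stream.Null);
}

Console.WriteLine($"Read took {sw.ElapsedMilliseconds}ms");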

@benaadams
Contributor

Just to get the max throughput metrics of your loopback, could you run NTttcp?

Extract the exe, then in two command windows enter the NTttcp-v5.33\amd64fre directory. For the server, run

ntttcp.exe -s -m 1,*,127.0.0.1 -l 128k -a 1 -t 15

Then for client

ntttcp.exe -r -m 1,*,127.0.0.1 -rb 2M -a 1 -t 15

And you should get an output similar to

   Bytes(MEG)    realtime(s) Avg Frame Size Throughput(MB/s)
================ =========== ============== ================
    11926.000000      15.000       1409.344          795.067

@angelsix
Author

On the Windows 10 machine:

Copyright Version 5.33
Network activity progressing...

Thread  Time(s)  Throughput(KB/s)  Avg B / Compl
======  =======  ================  =============
     0   15.001        234465.436      17856.987

Totals:

      Bytes(MEG)  realtime(s)  Avg Frame Size  Throughput(MB/s)
================  ===========  ==============  ================
     3434.781250       15.001        1394.081           228.970

Throughput(Buffers/s)  Cycles/Byte        Buffers
=====================  ===========  =============
             3663.522       47.876      54956.500

DPCs(count/s)  Pkts(num/DPC)  Intr(count/s)  Pkts(num/intr)
=============  =============  =============  ==============
     1965.936         87.604      75595.294           2.278

Packets Sent  Packets Received  Retransmits  Errors  Avg. CPU %
============  ================  ===========  ======  ==========
     2583424           2583515           54      23      19.171

On the Server 2016 machine:

Copyright Version 5.33
Network activity progressing...

Thread  Time(s)  Throughput(KB/s)  Avg B / Compl
======  =======  ================  =============
     0   15.002        748273.732      26348.401

Totals:

      Bytes(MEG)  realtime(s)  Avg Frame Size  Throughput(MB/s)
================  ===========  ==============  ================
    10962.502472       15.001        1379.706           730.785

Throughput(Buffers/s)  Cycles/Byte        Buffers
=====================  ===========  =============
            11692.556        3.758     175400.040

DPCs(count/s)  Pkts(num/DPC)  Intr(count/s)  Pkts(num/intr)
=============  =============  =============  ==============
       21.199      26199.673      14214.452          39.073

Packets Sent  Packets Received  Retransmits  Errors  Avg. CPU %
============  ================  ===========  ======  ==========
     8331504           8331496            4       0      62.600

Both machines max out the download at 92MB/s with the 102400 buffer on the stream, yet the server loopback is capable of 730MB/s.

@angelsix
Author

Running it over the web (on a fiber line, though) the speed is insanely slow: 1MB/s. So it gets worse when going over the internet too.

@benaadams
Contributor

What if you go a bit more wild?

// using System.IO.MemoryMappedFiles

new FileExtensionContentTypeProvider().TryGetContentType("test.m4v", out string contentType);
Response.Headers["Content-Disposition"] = "attachment; filename=\"movie.m4v\"";

var mmf = MemoryMappedFile.CreateFromFile(@"C:\somemovie.m4v");

Response.OnCompleted(
    (state) => {
        ((MemoryMappedFile)state).Dispose();
        return Task.CompletedTask;
    }, mmf);

return new FileStreamResult(mmf.CreateViewStream(), contentType);

@angelsix
Author

Runs at around 70MB/s for 1-2 seconds, then pauses for 3-4 seconds, then does it again, over and over, similar to the large buffer issue. So it averages back out to the same speed.

@benaadams
Contributor

You have server GC on?

@angelsix
Author

Out of the box web template from VS. How do I enable/disable it?

@Yves57
Contributor

Yves57 commented Mar 31, 2017

Just for curiosity, why is the buffer size 4KB in FileStreamResultExecutor and 64KB in StaticFileContext?

@angelsix
Author

angelsix commented Mar 31, 2017

OK, so I found a culprit: Kaspersky had created a network adapter that it was routing everything through. Removing that, I see improvements.

Original code: 38MB/s
MemoryMapped: 127MB/s
Async Filestream with 102400 buffer: 113MB/s

However, on the Windows server machines running Server 2016, with faster loopbacks, I get far less improvement. They didn't have Kaspersky on, and I disabled Defender, but that changed nothing. The servers are actually worse:

Original code: 14MB/s
MemoryMapped: 13MB/s
Async Filestream with 102400 buffer: 34MB/s

I've tested this on 2 different Windows Server 2016 machines with identical results; the faster Windows 10 numbers (using memory mapped or the async FileStream) I have only tested on 1 Windows 10 PC so far. I'll test on another tomorrow.

So even on my Windows 10 dev machine, 127MB/s is half the speed the local loop is capable of, and on the servers it's 21x slower than the loop is capable of.

Also, the other bug with the memory-mapped approach is that Response.OnCompleted never fires, at all. So after running it once the file stays locked by the previous memory-mapped file, and obviously we keep the memory usage too.
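
As a possible workaround (an untested sketch), tying the MemoryMappedFile to the request lifetime with RegisterForDispose instead of OnCompleted might at least release the file:

// Untested sketch: RegisterForDispose disposes the object when the request ends,
// which should release the mapped file even though OnCompleted never fires here.
var mmf = MemoryMappedFile.CreateFromFile(@"C:\somemovie.m4v");
Response.RegisterForDispose(mmf);

return new FileStreamResult(mmf.CreateViewStream(), contentType);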

@benaadams
Contributor

benaadams commented Apr 1, 2017

You can try using the TwoGBFileStream to test without File IO and see what that does.

Also try OverlappedFileStreamResult from AspNetCore.Ben.Mvc, to see what overlapping the reads and writes with a larger buffer size does (usage is in the samples).

Though I didn't see much difference between them, with Chrome being the main bottleneck (at 100MB/s) on loopback:

[Task Manager screenshot]

But it may help isolate the issue...

@benaadams
Contributor

benaadams commented Apr 1, 2017

Using wrk from WSL (Bash on Ubuntu on Windows 10)

ben@BEN-LAPTOP:~$ wrk -c 1 -t 1 http://localhost:5000/Home/OverlappedFileStreamResult  --latency --timeout 120
Running 10s test @ http://localhost:5000/Home/OverlappedFileStreamResult
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    12.81s     0.00us  12.81s   100.00%
    Req/Sec     0.00      0.00     0.00    100.00%
  Latency Distribution
     50%   12.81s
     75%   12.81s
     90%   12.81s
     99%   12.81s
  1 requests in 12.81s, 2.00GB read
Requests/sec:      0.08
Transfer/sec:    159.90MB
ben@BEN-LAPTOP:~$ wrk -c 1 -t 1 http://localhost:5000/Home/FileStreamResult  --latency --timeout 120
Running 10s test @ http://localhost:5000/Home/FileStreamResult
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    12.89s     0.00us  12.89s   100.00%
    Req/Sec     0.00      0.00     0.00    100.00%
  Latency Distribution
     50%   12.89s
     75%   12.89s
     90%   12.89s
     99%   12.89s
  1 requests in 12.89s, 2.00GB read
Requests/sec:      0.08
Transfer/sec:    159.22MB

Both types come out about the same for me: 159MB/s, which is about 1.27 Gbit/s on the network/loopback.

@angelsix
Author

angelsix commented Apr 1, 2017

Short test on the Windows 10 machine so far...
Firefox using TwoGBFileStream gets 113MB/s.
Chrome, however, (almost) hits the local loop limit at 209MB/s using 8% CPU. So I think it's safe to say that removes the bottleneck there, and the issue is in the reading of the file.

wrk for bash doesn't seem to be there any more. I've got developer mode on, installed bash, and can run it, but typing wrk fails because it's not a command:

luke@LUKE-M:/mnt/c/Windows/System32$ wrk
No command 'wrk' found, did you mean:
Command 'wrc' from package 'wine1.6' (universe)
Command 'ark' from package 'ark' (universe)
Command 'irk' from package 'irker' (universe)
Command 'wmk' from package 'wml' (universe)
wrk: command not found

I spun up an Ubuntu Docker container instead, but that doesn't have wrk either.

I'll run the tests with OverlappedFileStream, and run both on the Windows server machines (which had much worse performance originally), and see what I get.

@angelsix
Author

angelsix commented Apr 1, 2017

So on the servers, a totally different result:

Original
firefox: 17MB/s
chrome: 18MB/s

With 102400 buffer
firefox: 36MB/s
chrome: 48MB/s

With TwoGBFileStream
firefox: 13MB/s
chrome: 14MB/s

With overlapped:
firefox: 33MB/s
chrome: 36MB/s
(identical to the original change of just adding a 102400 async buffer)

Now, running all this again without going through IIS (so Kestrel direct from the command line):

Original
firefox: 17MB/s (no change)
chrome: 18MB/s (no change)

With 102400 buffer
firefox: 36MB/s (no change)
chrome: 56MB/s (slight improvement)

With TwoGBFileStream
firefox: 38MB/s (3x improvement)
chrome: 48MB/s (4x improvement)

With overlapped:
firefox: 38MB/s (slight improvement)
chrome: 48MB/s (slight improvement)

So it seems that on Server 2016 we have totally different bottlenecks. However, in all of the tests so far, Server 2016 (which is what this will run on) cannot get above 40-50MB/s, even when not reading from disk.

I ran this test on a local watercooled server and on a server on Amazon AWS, two totally different specs, both very powerful, with identical results.

I did notice, though, that on the servers the bottleneck seemed to be the IIS worker and the browser both reaching 35% CPU. Going straight through Kestrel, the bottleneck is still the browser. Yet with the same setup running on Windows 10 the download goes at 210MB/s and the browser's CPU is only 8%.

What's worse is that when accessing those Server 2016 websites from anything but local (so over the internet), the download speed is between 300KB/s and 1MB/s max. The server's processes show no CPU usage (3-4% max) and the client's don't either, so there is another bottleneck in that process too.

@benaadams
Contributor

benaadams commented Apr 1, 2017

wrk for bash doesn't seem to be there any more.

You have to install it per the Linux steps: https://github.com/wg/wrk/wiki/Installing-Wrk-on-Linux

What's worse is that when accessing those Server 2016 websites from anything but local (so over the internet), the download speed is between 300KB/s and 1MB/s max.

That's more likely due to the nature of the TCP protocol and the round-trip time.

If you want to optimize for large file transfers on a single connection with round-trip latency, at the expense of more memory consumed, you will have to tune a few things.

If you are on the Windows Server 2016 Anniversary Update you'll already have the Initial Congestion Window at 10 MSS; otherwise you may want to increase it with the following 3 PowerShell commands (Windows Server 2012+):

New-NetTransportFilter -SettingName Custom -LocalPortStart 80 -LocalPortEnd 80 -RemotePortStart 0 -RemotePortEnd 65535
New-NetTransportFilter -SettingName Custom -LocalPortStart 443 -LocalPortEnd 443 -RemotePortStart 0 -RemotePortEnd 65535
Set-NetTCPSetting -SettingName Custom -InitialCongestionWindow 10 -CongestionProvider CTCP

On Windows Server 2016 you should be able to go up to

 -InitialCongestionWindow 64

However you probably don't want to go too high.

You can also configure the amount Kestrel will put on the wire before awaiting (defaults to 64 KB) e.g.

.UseKestrel(options =>
{
    // 3 * 1MB = 3MB
    options.Limits.MaxResponseBufferSize = 3 * 1024 * 1024;
})

Then you want to check the network adapter settings and bump up anything like Max Transmit, Send Descriptors, Transmit Buffers, Send Buffers, etc. (this varies by NIC).

You may also want to adjust other settings like your TCP receive window (assuming you've done the earlier ones), e.g.

Set-NetTCPSetting -SettingName Custom -AutoTuningLevelLocal Experimental -InitialCongestionWindow 10 -CongestionProvider CTCP 

But this is more about TCP tuning than the HTTP server itself...

@angelsix
Author

angelsix commented Apr 1, 2017

Ok I'll try all that, but we still have the underlying issue that the speed is slow when actually reading a file?

@benaadams
Contributor

As @Yves57 pointed out earlier, the buffer size is likely too small at 4KB per read; Kestrel will only put 3 uncompleted writes on the wire, so that may be a maximum of 12KB on the wire when using FileStreamResult. Though that is what I am trying to determine.

Also, it uses Stream.CopyToAsync, which I think has been improved in .NET Core App 2.0, but a larger buffer size would help. There's also StreamCopyOperation, which StaticFiles uses and which this should probably use but doesn't.
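
To rule the small copy buffer in or out, a rough sketch of bypassing FileStreamResult and copying to the response body with a 64KB buffer yourself (path and content type here are placeholders):

// Rough sketch: stream the file straight to the response with a 64KB copy buffer.
public async Task Download()
{
    Response.ContentType = "video/mp4";   // placeholder content type
    Response.Headers["Content-Disposition"] = "attachment; filename=\"movie.m4v\"";

    using (var file = new FileStream(@"C:\somemovie.m4v", FileMode.Open, FileAccess.Read, FileShare.ReadWrite,
        64 * 1024, FileOptions.Asynchronous | FileOptions.SequentialScan))
    {
        // The bufferSize argument controls how much is handed to the response per write.
        await file.CopyToAsync(Response.Body, 64 * 1024, HttpContext.RequestAborted);
    }
}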

@benaadams
Contributor

benaadams commented Apr 1, 2017

Re: garbage collection; if you are using a project.json project, ensure it has

  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true
    }
  },

And if you are using the newer csproj then the Web project type should already cover it

<Project Sdk="Microsoft.NET.Sdk.Web">

@benaadams
Contributor

Also, you can try the new FileResult, which may use the SendFile API (for example behind IIS), and see how that goes.
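
In case it helps, a minimal sketch of what I mean (assuming the relevant type is PhysicalFileResult, which hands the physical path to the server so it can use a SendFile feature where one is available):

// Sketch: let the server send the file from its path rather than copying a managed stream.
new FileExtensionContentTypeProvider().TryGetContentType("test.m4v", out string contentType);

return new PhysicalFileResult(@"C:\somemovie.m4v", contentType)
{
    FileDownloadName = "movie.m4v"   // sets the Content-Disposition header for you
};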

@davidfowl
Member

SendFile no longer works on IIS since it isn't in-proc. It does work with WebListener, though.

@benaadams
Contributor

benaadams commented Apr 2, 2017

@angelsix, as an aside to whether FileStreamResult is slow (which hopefully can be resolved if so):

Other options to consider for serving the files are: adding ResponseCaching if you have memory to spare; the StaticFileMiddleware; or getting IIS to serve the static files directly. The latter two are covered in @RickStrahl's blog: More on ASP.NET Core Running under IIS.
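
For the static file route, a rough sketch of what that looks like in Startup.Configure (the folder and request path here are just placeholders):

// using Microsoft.AspNetCore.StaticFiles;
// using Microsoft.Extensions.FileProviders;

// Sketch: serve the media folder directly via the static file middleware.
app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(@"C:\media"),         // placeholder folder
    RequestPath = "/downloads",                                    // placeholder URL prefix
    ContentTypeProvider = new FileExtensionContentTypeProvider()   // same provider as in the action
});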

@Yves57
Contributor

Yves57 commented Apr 2, 2017

More generally, I was asking myself why all the XxxxResultExecutor classes are in the Internal namespace and not in a ResultExecutor namespace. All the code (like here) is developed to be compatible with dependency injection, so for example if FileStreamResultExecutor.ExecuteAsync() were virtual, it would be possible to call "services.AddSingleton<FileStreamResultExecutor, MySpecialFileStreamResultExecutor>();" at startup. The same remark applies to PhysicalFileResult / PhysicalFileResultExecutor.
It would be nice to be able to optimize performance in some specific cases, add custom logging, add a custom cache, etc.

@benaadams
Contributor

If FileStreamResultExecutor.ExecuteAsync() were virtual

Yeah, I was thinking that when making the test overlapped version; its methods should either be virtual, or it should register against an interface, e.g. IFileStreamResultExecutor, rather than against a non-overridable class. Instead you have to create a new result type and call that.

@angelsix
Author

angelsix commented Apr 2, 2017

OK, so with the file stream buffer increased to 64KB using new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite, 65536, FileOptions.Asynchronous | FileOptions.SequentialScan); I get an acceptable speed: around 120-150MB/s, stable, on Windows 10.

I'd like to see more, but I think that will do for now.

Changing the max response buffer in Kestrel did nothing. None of the other suggestions improved on this speed either, so this seems to be the fastest speed possible right now.

Doing the TwoGBFileStream direct from RAM reached the local link limit on Windows 10, so I think the bottleneck in the current situation is still the reading of the file stream; but none of the suggestions so far have gained us more than 150MB/s, and I know the network can at least reach the 230MB/s we see with the TwoGBFileStream.

I'm moving on to trying to fix the Windows Server 2016 issues now. Nothing I've done so far can get the files to transfer faster than 58MB/s locally, and I cannot even get files to transfer faster than 1MB/s over the internet, even on a fresh Amazon AWS 2016 server and my fiber internet, so there are definitely issues there.
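
For reference, the full action as it stands now, pieced together from the snippets above:

// Current working version: async sequential FileStream with a 64KB buffer.
new FileExtensionContentTypeProvider().TryGetContentType("test.m4v", out string contentType);
Response.Headers["Content-Disposition"] = "attachment; filename=\"movie.m4v\"";

var stream = new FileStream(@"C:\somemovie.m4v", FileMode.Open, FileAccess.Read, FileShare.ReadWrite,
    65536, FileOptions.Asynchronous | FileOptions.SequentialScan);

// FileStreamResult disposes the stream once the response has been written.
return new FileStreamResult(stream, contentType);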

@benaadams
Contributor

The changes for TCP/Kestrel response buffers would be for network rather than loopback

@angelsix
Author

angelsix commented Apr 2, 2017

Ah ok I'll add that back and do some internet speed testing too

@angelsix
Author

angelsix commented Apr 2, 2017

Also, any suggestions on improving the localhost/127.0.0.1 loopback speed on Windows 10? Out of the box all my machines top out at 180MB/s. I read that Windows 7 had NetDMA, which could easily reach 800MB/s, but it was removed in Windows 8 and later in favour of Fast TCP, which isn't natively supported (it has to be supported in software), so regular TCP loopback tests still top out at 180MB/s. I think that's the limit I am reaching on Windows 10 when running Kestrel directly.

Never mind, I just realised this would be kind of pointless anyway, as the end purpose will be over real Ethernet, at best 1Gbit/s, which is 125MB/s, so I'm above the speed I need to be on the Windows 10 machines at least. I just need to solve the issue with the 2016 server machines.

@RickStrahl

RickStrahl commented Apr 4, 2017 via email

@angelsix
Author

angelsix commented Apr 4, 2017

I don't think that relates to my issue, though. I run Kestrel through IIS on a new 2016 server, with no other connections, just the one, to download a file, and it maxes out at 38MB/s on a machine that tests as capable of TCP traffic up to 125MB/s.

Also, the exact same code runs on Windows 10 in IIS at 75MB/s, so totally different results. Then remove IIS and you get 125MB/s on Windows 10, and 58MB/s on Server 2016.

@Eilon
Member

Eilon commented May 17, 2017

@jbagga - can you take a look?

@jbagga
Contributor

jbagga commented May 25, 2017

On Windows 10 x64, I downloaded video content of size 987MB using the code here.

I tried ASP.NET Core 1.1 and the current dev branch (there have been recent changes as part of this PR which I thought might affect the results, like the use of StreamCopyOperation).

Note: When Content-Length is not set, the Transfer-Encoding header is set to "chunked"
Here's what I found:

[screenshot of the download speed results table]

Conclusions:

  • Setting Content-Length helps when using IIS
  • Increasing the buffer size helps when using IIS and Transfer-Encoding is chunked
  • Kestrel is faster

cc @Eilon

@davidfowl
Member

/cc @pan-wang @shirhatti

@rynowak
Member

rynowak commented May 31, 2017

A few takeaways from the experiment done by @jbagga

  • We should set the content length where possible. It makes a big difference if you're behind IIS, and is a modest improvement if you're not. It also gives better behavior with browser progress bars (a sketch of setting it from user code follows below).

  • We should use a larger buffer size than 1KB. Bumping up the buffer size showed a big improvement with chunked responses, and was a wash or a small improvement with a fixed content length. This is a relatively easy way to improve the scenario where we're worst today (19.1MB/s -> 47MB/s).
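
For anyone hitting this before a framework change lands, a minimal sketch of setting the content length from user code (assuming the file length is known up front, path and contentType as in the earlier snippets):

// Sketch: setting Content-Length explicitly avoids the chunked Transfer-Encoding path.
var fileInfo = new System.IO.FileInfo(@"C:\somemovie.m4v");   // placeholder path
Response.ContentLength = fileInfo.Length;

return new FileStreamResult(System.IO.File.OpenRead(fileInfo.FullName), contentType);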

@jbagga
Contributor

jbagga commented Jun 1, 2017

#6347 results in an expected speed of ~60 MB/s for IIS and ~134 MB/s for Kestrel for the same resource as in my comment above.

jbagga added a commit that referenced this issue Jun 1, 2017
@Eilon
Member

Eilon commented Jun 2, 2017

I think what we have here is a great improvement, so it's OK to close this bug.

@angelsix - please let us know if you have any further feedback on this change.

@jbagga jbagga closed this as completed Jun 2, 2017
@angelsix
Author

angelsix commented Jun 3, 2017

Thanks, I will test this coming week, but I expect to see the same improvements everyone else has. Thanks for looking into this.

@JesperTreetop

Is there any chance of this fix getting into ASP.NET Core 2.0? I am writing a middleware which looks at Content-Length after a response has finished and logs statistics for some file-serving requests to a database; Content-Length not being there, except for Range requests (which explicitly set it as part of handling ranges), limits the usability of this.

(In addition, it would speed up the site when run with Visual Studio Code attached as a debugger, where sending a 26 MB file with FileStreamResult in my testing takes on the order of ~30 seconds for my scenario, compared to 0.6 seconds (50x faster) when running the same unoptimized Debug build with the Development environment but without a debugger attached. This is not the primary reason, and it is probably more Visual Studio Code's or the C# extension's fault, but I can't deny that it would help with this too.)
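
For context, the middleware is roughly this shape (a sketch; the statistics store call is a placeholder, not a real API):

// Sketch: record Content-Length once the rest of the pipeline has produced the response.
app.Use(async (context, next) =>
{
    await next();

    if (context.Response.ContentLength.HasValue)
    {
        // Placeholder for writing the statistics row to the database.
        await statisticsStore.RecordAsync(context.Request.Path, context.Response.ContentLength.Value);
    }
});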

@jbagga
Contributor

jbagga commented Jul 12, 2017

@JesperTreetop https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/Internal/FileResultExecutorBase.cs#L91
Content-Length is set for all requests with a valid file length but is overwritten with the length of the range for range requests. Have you tried the 2.0.0 preview? If you have, and it does not work for you, please file a new issue with more details and I'd be happy to investigate the problem.

@JesperTreetop

@jbagga That's exactly the line I want in 2.0. The same file in 2.0.0 preview 2 (which I'm currently using) doesn't have it (SetContentLength is only called from SetRangeHeaders which of course is only called for range requests) and therefore also does not have this fix. As long as 2.0 is being spun from the dev branch, I'm guessing this fix will make its way in, but I don't know if 2.0 comes from the dev branch or by manually porting fixes to some sort of continuation of the 2.0.0-preview2 tag, which seems like something Microsoft would do with higher bug bars ahead of releases.

@pranavkm
Contributor

@JesperTreetop it's in our 2.0.0 RTM release: https://github.com/aspnet/Mvc/blob/rel/2.0.0/src/Microsoft.AspNetCore.Mvc.Core/Internal/FileResultExecutorBase.cs#L91

@JesperTreetop

JesperTreetop commented Jul 12, 2017

@pranavkm Wonderful - thanks!

@chintan3100

chintan3100 commented Sep 5, 2017

@jbagga I have tried changing the buffer size and other settings, plus changing the .NET version from 1.1 to 2.0, but I am getting a max 20 MB/sec speed when downloading the file.
Does anyone have a git repo to test this issue?
Thanks in advance.

@JesperTreetop

@chintan3100 I don't have a repo, but I did see a massive slowdown when running with a debugger attached from VS Code. Running without a debugger attached (dotnet run in a terminal, or Ctrl+F5 in Visual Studio) sped it up a lot for me, even without changing the build configuration to Release.

@chintan3100

chintan3100 commented Sep 5, 2017

@JesperTreetop: I have checked on Azure in release mode with an S3 plan, and also tested on an Azure VM.
But it shows only a max 20MB/sec speed. I have tried a lot, but there is no change in the download speed :(

@benaadams
Contributor

benaadams commented Sep 5, 2017

@chintan3100 there are other factors also: how fast can you read from the disk in that setup? Is it faster than 20MB/s? Can you save to disk faster than 20MB/s on the client?

20MB/s is 160 Mbps; is the bandwidth capped by either the receiver or the server, etc.?

Also, what is the RTT latency between the client and the server? That will determine the maximum throughput due to the TCP bandwidth-delay product.
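
As a rough worked example (numbers assumed purely for illustration): with a 64 KB effective window and a 60 ms round trip, a single connection tops out at about 64 KB / 0.06 s ≈ 1 MB/s, regardless of how fast the link, the disks, or the server are.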

@chintan3100

I tested the same API with .NET Framework 4.0 and it gets a 150 MB/sec download speed, whereas in .NET Core it is 20 MB/sec.

@JesperTreetop

@chintan3100 Posting the code you're using for the action in both ASP.NET MVC 5 and ASP.NET Core 2 would probably help diagnosing this, at least how the action result is constructed.

@chintan3100

chintan3100 commented Sep 6, 2017

public ActionResult Index()
{
    var stream = new FileStream(@"Your File Path", FileMode.Open, FileAccess.Read, FileShare.ReadWrite, 65536, FileOptions.Asynchronous | FileOptions.SequentialScan);
    return File(stream, "application/octet-stream", "aa.zip");
}

Please run the same code in a new .NET Core project and in an ASP.NET project.
It simply reads a file and returns the stream. Please point it at a large file.

This is what I am using in both ASP.NET MVC 5 and .NET Core 2.0.
Result:
20 MB/sec in .NET Core
150+ MB/sec in the ASP.NET MVC 5 API
