Spiked HttpResponsePipeWriter #22916
Conversation
@davidfowl @pranavkm @Tratcher - Alessio was looking for a bigger project to contribute to ASP.NET Core, and I suggested that he look at the Razor rendering process and removing some of the layers it has. One of those layers is this boi, which basically functions like a […]. The pipe model of I/O is better for Razor because […]. Typically an […]
/// <summary>
/// A <see cref="Stream"/> that buffers content to be written to disk.
/// </summary>
public sealed class FileBufferingPipeWriter : PipeWriter, IAsyncDisposable
I'm not totally sure what the use case is for buffering output to disk; I can't think of a place where we need that today. I know in Razor's use case we already have a pretty efficient in-memory representation of the data, and most of the complexity is in avoiding sync-over-async.
We've also gotten some significant negative performance feedback about using disk for buffering I/O, so we'd like to avoid it when it's not absolutely required.
Revisiting this comment, it looks like we do this for XML serialization (which is sync). It's probably fine to leave the XML serialization alone and just let it use streams.
This is mainly where I've been stuck for a long time. My understanding is that views cannot do chunking, so the whole response needs to be buffered.
So it is not used only by the XML and Newtonsoft serializers, but also by the view executor:
await using var bufferingStream = new FileBufferingWriteStream();
If this is instead only done to avoid sync over async, it will change everything in this PR.
Or maybe I am missing something on how to avoid chunking.
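For context, here is a rough sketch of the buffer-then-drain pattern the view executor relies on today. `FileBufferingWriteStream` and `DrainBufferAsync` are the real `Microsoft.AspNetCore.WebUtilities` APIs, but the surrounding code is illustrative only, not the actual executor:

```csharp
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.WebUtilities;

public static class BufferedViewWriteSketch
{
    // Buffers the rendered view in memory (spilling to a temp file past a threshold),
    // then copies the whole buffer to the response in one go, so the body is not
    // emitted as many small chunks while the view is still rendering.
    public static async Task WriteBufferedAsync(HttpContext httpContext, string renderedView)
    {
        await using var bufferingStream = new FileBufferingWriteStream();

        await using (var writer = new StreamWriter(bufferingStream, Encoding.UTF8, bufferSize: 4096, leaveOpen: true))
        {
            // In MVC this is where the view/page would be rendered into the writer.
            await writer.WriteAsync(renderedView);
        }

        // Drain the buffered content to the real response body.
        await bufferingStream.DrainBufferAsync(httpContext.Response.Body, httpContext.RequestAborted);
    }
}
```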
var length = _encoder.GetByteCount(value, false);
var buffer = _writer.GetSpan(length);
_encoder.GetBytes(value, buffer, false);
@GrabYourPitchforks - is there a more efficient way to do this? I'm wondering about the pattern of GetByteCount immediately followed by GetBytes.
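For what it's worth, one alternative (a hedged sketch, not something proposed in the thread) is `Encoder.Convert`, which encodes directly into whatever span the PipeWriter hands back and loops until the input is consumed, avoiding the separate counting pass:

```csharp
using System;
using System.IO.Pipelines;
using System.Text;

public static class EncoderConvertSketch
{
    // Encodes 'value' into the PipeWriter without a preliminary GetByteCount pass.
    public static void WriteEncoded(PipeWriter writer, Encoder encoder, ReadOnlySpan<char> value)
    {
        while (!value.IsEmpty)
        {
            // Assuming a UTF-8 response: a hint of 4 bytes guarantees room for at
            // least one encoded character (even a surrogate pair), so Convert
            // always makes progress.
            var destination = writer.GetSpan(sizeHint: 4);

            encoder.Convert(value, destination, flush: false,
                out var charsUsed, out var bytesUsed, out _);

            writer.Advance(bytesUsed);
            value = value.Slice(charsUsed);
        }
    }
}
```

The trade-off is an extra loop iteration when the pipe's segment is smaller than the encoded output, versus paying for GetByteCount on every write.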
_disposed = true;
FlushEncoder();
// TODO: flush
_writer.Complete();
This probably needs to be surfaced in the public API (or not done), since it seems similar to disposing the underlying stream. HttpResponseStreamWriter never disposes the stream.
Write(value.Span);
Write(NewLine);
It's a little bit of a red flag that the WriteAsync methods don't call PipeWriter.FlushAsync. This means that it's going to buffer until Flush is called by the caller, even if you're using all of the async methods.
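For illustration, a hedged sketch of the shape being described here: an async write path that ends in `PipeWriter.FlushAsync`, so awaiting WriteAsync actually pushes data onward. The type and method names are hypothetical, not the PR's API:

```csharp
using System;
using System.IO.Pipelines;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public sealed class PipeTextWriterSketch
{
    private readonly PipeWriter _writer;
    private readonly Encoder _encoder;

    public PipeTextWriterSketch(PipeWriter writer, Encoding encoding)
    {
        _writer = writer;
        _encoder = encoding.GetEncoder();
    }

    // The async path encodes into the pipe's buffer and then flushes it, so data
    // actually moves when the caller awaits. Without the FlushAsync call everything
    // would sit in the PipeWriter until an explicit Flush from the caller.
    public ValueTask<FlushResult> WriteAsync(ReadOnlyMemory<char> value, CancellationToken cancellationToken = default)
    {
        WriteCore(value.Span);
        return _writer.FlushAsync(cancellationToken);
    }

    private void WriteCore(ReadOnlySpan<char> value)
    {
        var length = _encoder.GetByteCount(value, flush: false);
        var buffer = _writer.GetSpan(length);
        _encoder.GetBytes(value, buffer, flush: false);
        _writer.Advance(length);
    }
}
```

Flushing on every write is of course exactly the chunking concern discussed below, so the open design question is when to flush rather than whether to expose it.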
Would it not switch to chunking if I flush?
Hi @rynowak, if I flush after every write it will cause lots of small chunks of a few bytes to be sent, as rendering a view involves a call to Write for every element. What am I missing?
Currently it is buffered on disk so it is sent as a single chunk. I'm still not sure if that is a requirement (I believe it is, given the comment below):
aspnetcore/src/Http/WebUtilities/src/HttpResponseStreamWriter.cs
Lines 453 to 454 in 0889a62
// Note: our FlushInternal method does NOT flush the underlying stream. This would result in
// chunking.
Thank you!
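For readers without the file open, a hedged sketch of the pattern that quoted comment describes (not the actual HttpResponseStreamWriter code): the writer drains its char buffer into the response stream, but never calls Stream.Flush, because flushing the response stream is what forces a chunk onto the wire:

```csharp
using System.IO;
using System.Text;

public sealed class BufferedResponseWriterSketch
{
    private readonly Stream _stream;
    private readonly Encoding _encoding;
    private readonly char[] _charBuffer = new char[4096];
    private int _charsBuffered;

    public BufferedResponseWriterSketch(Stream stream, Encoding encoding)
    {
        _stream = stream;
        _encoding = encoding;
    }

    public void Write(char value)
    {
        if (_charsBuffered == _charBuffer.Length)
        {
            FlushInternal();
        }

        _charBuffer[_charsBuffered++] = value;
    }

    // Drains the char buffer into the response stream but deliberately does NOT call
    // _stream.Flush(): flushing the response stream is what would force the server to
    // emit the accumulated bytes as a chunk on the wire.
    private void FlushInternal()
    {
        if (_charsBuffered == 0)
        {
            return;
        }

        var bytes = _encoding.GetBytes(_charBuffer, 0, _charsBuffered);
        _stream.Write(bytes, 0, bytes.Length);
        _charsBuffered = 0;
    }
}
```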
@@ -20,7 +21,7 @@ public class OutputFormatterWriteContext : OutputFormatterCanWriteContext
 /// <param name="writerFactory">The delegate used to create a <see cref="TextWriter"/> for writing the response.</param>
 /// <param name="objectType">The <see cref="Type"/> of the object to write to the response.</param>
 /// <param name="object">The object to write to the response.</param>
-public OutputFormatterWriteContext(HttpContext httpContext, Func<Stream, Encoding, TextWriter> writerFactory, Type objectType, object @object)
+public OutputFormatterWriteContext(HttpContext httpContext, Func<PipeWriter, Encoding, TextWriter> writerFactory, Type objectType, object @object)
Unfortunately this isn't something we can change. We'd have to add a new property and a new constructor.
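To make that concrete, a hedged sketch of what an additive change could look like; the property and constructor shown here are hypothetical, not an approved API shape:

```csharp
using System;
using System.IO;
using System.IO.Pipelines;
using System.Text;
using Microsoft.AspNetCore.Http;

// Sketch: keep the existing Stream-based constructor for compatibility and add a
// new constructor and property for the PipeWriter-based factory alongside it.
public class OutputFormatterWriteContextSketch
{
    // Existing shape: unchanged, so current callers keep compiling.
    public OutputFormatterWriteContextSketch(HttpContext httpContext, Func<Stream, Encoding, TextWriter> writerFactory, Type objectType, object @object)
    {
        HttpContext = httpContext;
        WriterFactory = writerFactory;
        ObjectType = objectType;
        Object = @object;
    }

    // New, additive constructor for the pipe-based path.
    public OutputFormatterWriteContextSketch(HttpContext httpContext, Func<PipeWriter, Encoding, TextWriter> pipeWriterFactory, Type objectType, object @object)
    {
        HttpContext = httpContext;
        PipeWriterFactory = pipeWriterFactory;
        ObjectType = objectType;
        Object = @object;
    }

    public HttpContext HttpContext { get; }
    public Func<Stream, Encoding, TextWriter> WriterFactory { get; }
    public Func<PipeWriter, Encoding, TextWriter> PipeWriterFactory { get; }
    public Type ObjectType { get; }
    public object Object { get; }
}
```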
/// </summary>
-public interface IHttpResponseStreamWriterFactory
+public interface IHttpResponseWriterFactory
Really, the whole reason for this API was to abstract away the memory pool. Another option is to just not do that when it comes to pipes. This change could be much more minimal if it didn't try to update this.
Agreed. I started without it but later brought it back to have similar abstractions with the same level of indirection. I don't think there was a technical reason to use it, but I could be wrong as it was a while back. I'll try to remove it.
@@ -49,7 +49,7 @@ private class NullLoggerFactory : ILoggerFactory, ILogger

 private class BenchmarkViewExecutor : ViewExecutor
 {
-    public BenchmarkViewExecutor(IOptions<MvcViewOptions> viewOptions, IHttpResponseStreamWriterFactory writerFactory, ICompositeViewEngine viewEngine, ITempDataDictionaryFactory tempDataFactory, DiagnosticListener diagnosticListener, IModelMetadataProvider modelMetadataProvider)
+    public BenchmarkViewExecutor(IOptions<MvcViewOptions> viewOptions, IHttpResponseWriterFactory writerFactory, ICompositeViewEngine viewEngine, ITempDataDictionaryFactory tempDataFactory, DiagnosticListener diagnosticListener, IModelMetadataProvider modelMetadataProvider)
It would be nice to see some changes that avoid PagedCharBufferTextWriter, as well as what the impact is on perf 👍
_memoryThreshold = memoryThreshold;
_bufferLimit = bufferLimit;
_tempFileDirectoryAccessor = tempFileDirectoryAccessor ?? AspNetCoreTempDirectory.TempDirectoryFactory;
PagedByteBuffer = new PagedByteBuffer(ArrayPool<byte>.Shared);
I'm not sure we should be using this but now I'm convinced we should expose something in Pipelines for this.
Well, this is not actually used here; I just brought it over from the other implementation because I thought I would need it later to avoid chunking. Based on the outcome of that conversation, this can hopefully be avoided, as otherwise it would not be possible to get big performance benefits.
Hey @davidfowl, any progress on exposing something in Pipelines to support this?
Hey @davidfowl, is there any way with Pipelines to avoid excessive chunking? Or should I keep track of how many bytes I have written to avoid calling FlushAsync too often?
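In the meantime, one workable approach (a hedged sketch; the threshold and names are made up) is exactly that: track how many bytes have been advanced and only call FlushAsync once a threshold is crossed, plus a final flush at the end of the response:

```csharp
using System;
using System.IO.Pipelines;
using System.Threading;
using System.Threading.Tasks;

public sealed class ThresholdFlushingWriterSketch
{
    private const int FlushThreshold = 16 * 1024; // illustrative, not a tuned value

    private readonly PipeWriter _writer;
    private int _unflushedBytes;

    public ThresholdFlushingWriterSketch(PipeWriter writer) => _writer = writer;

    // Copies the payload into the pipe but only flushes once enough bytes have
    // accumulated, so the response is not split into many tiny chunks.
    public ValueTask WriteAsync(ReadOnlyMemory<byte> payload, CancellationToken cancellationToken = default)
    {
        CopyToPipe(payload.Span);
        _unflushedBytes += payload.Length;

        if (_unflushedBytes < FlushThreshold)
        {
            return default;
        }

        _unflushedBytes = 0;
        return FlushCoreAsync(cancellationToken);
    }

    // A final flush is still required when the response is complete.
    public async ValueTask CompleteAsync(CancellationToken cancellationToken = default)
    {
        await _writer.FlushAsync(cancellationToken);
        _unflushedBytes = 0;
    }

    private void CopyToPipe(ReadOnlySpan<byte> payload)
    {
        payload.CopyTo(_writer.GetSpan(payload.Length));
        _writer.Advance(payload.Length);
    }

    private async ValueTask FlushCoreAsync(CancellationToken cancellationToken)
    {
        await _writer.FlushAsync(cancellationToken);
    }
}
```

(If I remember correctly, later framework versions also expose PipeWriter.UnflushedBytes behind CanGetUnflushedBytes, which could replace the manual counter where available.)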
Does it make sense for me to resume the work on this to see if it could be included in 6.0?
We'll need to look at this change in a bit more depth, but I'm supportive of it. My concern is that the current area is highly fine-tuned, and I want to make sure we don't regress performance for important scenarios. I think I would be more comfortable working on it early in 6.0 than at the end, so I think this is a good time to start.
Hi. It looks like this PR has seen no activity for a long time. What is the latest status, @javiercn? Do we plan to take this or should we close it?
Sorry @alefranz, we dropped the ball on this. Unfortunately there are a number of merge conflicts here, and it's unlikely we'll be able to spend time reviewing the changes for correctness during the first half of .NET 7. I'm going to close this for now; we can engage with you if we end up deciding to do work here.
Hello!
This is an experiment related to #7836, which I spiked a while back as mentioned in #7836 (comment).
This introduces an HttpResponsePipeWriter to write encoded output to a PipeWriter instead of a Stream, to be used by MVC including Razor views. Please note that the buffering is not implemented yet; it writes directly to the pipe instead of buffering in memory (and spooling to a file).
I'm opening this draft PR to see if there is interest in this change, and in the hope of gathering feedback to understand what the right approach for this feature would be.
The goal is also to understand what changes would be required in terms of public API surface, and eventually create an issue to discuss them later.
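To make the idea concrete, here is a minimal sketch of a TextWriter that encodes straight into a PipeWriter. It is illustrative only; the HttpResponsePipeWriter in this PR differs in API surface and in how it handles flushing, buffering, and encoder trailing state:

```csharp
using System;
using System.IO;
using System.IO.Pipelines;
using System.Text;
using System.Threading.Tasks;

public sealed class PipeWriterTextWriterSketch : TextWriter
{
    private readonly PipeWriter _writer;
    private readonly Encoding _encoding;
    private readonly Encoder _encoder;

    public PipeWriterTextWriterSketch(PipeWriter writer, Encoding encoding)
    {
        _writer = writer;
        _encoding = encoding;
        _encoder = encoding.GetEncoder();
    }

    public override Encoding Encoding => _encoding;

    // Encodes directly into the pipe's buffer, skipping the intermediate byte[]
    // copies a Stream-based writer needs. Encoder trailing state and the flushing
    // policy (the chunking discussion above) are deliberately left out.
    public override void Write(ReadOnlySpan<char> buffer)
    {
        var length = _encoder.GetByteCount(buffer, flush: false);
        var destination = _writer.GetSpan(length);
        _encoder.GetBytes(buffer, destination, flush: false);
        _writer.Advance(length);
    }

    public override void Write(char value)
    {
        Span<char> single = stackalloc char[1];
        single[0] = value;
        Write(single);
    }

    public override async Task FlushAsync()
    {
        await _writer.FlushAsync();
    }
}
```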
I've filed a separate issue, as I am currently unable to run the benchmark: #22915
Thank you,
Alessio