
16 November, 2020 Meeting Notes


Attendees:

| Name | Abbreviation | Organization |
| ---- | ------------ | ------------ |
| Michael Ficarra | MF | F5 Networks |
| Rob Palmer | RPR | Bloomberg |
| Daniel Rosenwasser | DRR | Microsoft |
| Waldemar Horwat | WH | Google |
| Bradford C. Smith | BSH | Google |
| Jack Works | JWK | Sujitech |
| Jordan Harband | JHD | Invited Expert |
| Chip Morningstar | CM | Agoric |
| Ujjwal Sharma | USA | Igalia |
| Daniel Ehrenberg | DE | Igalia |
| Michael Saboff | MLS | Apple |
| Devin Rousso | DRO | Apple |
| Shaheer Shabbir | SSR | Apple |
| Richard Gibson | RGN | OpenJS Foundation |
| Leo Balter | LEO | Salesforce |
| Yulia Startsev | YSV | Mozilla |
| Sergey Rubanov | SRV | Invited Expert |
| Robin Ricard | RRD | Bloomberg |
| Aki Rose Braun | AKI | PayPal |
| Jason Yu | JYU | PayPal |
| Caio Lima | CLA | Igalia |
| Istvan Sebestyen | IS | Ecma |
| Marja Hölttä | MHA | Google |
| Myles Borins | MBS | GitHub / MSFT |
| Chengzhong Wu | CZW | Alibaba |
| Zhe Jie Li | LZJ | 360 |
| Shu-yu Guo | SYG | Google |

Opening, welcome, housekeeping

Presenter: Aki Braun (AKI)

AKI: (presents slides)

AKI: Welcome to the 79th Meeting of TC39, here in historic Budapest, Hungary. I'm sorry I didn't say that in Hungarian, I spent about 15 minutes trying to learn it until I got discouraged. I already knew this but Hungarian is really hard to learn for people who speak American English as a first language. Maybe Istvan, can you give us a welcome one sentence?

IS: ["Welcome" sentence in Hungarian]

AKI: My name is Aki Rose Braun, I'm co-chair of this committee and a delegate of PayPal. These are my fellow 2020 chairs: Brian Terlson from Microsoft, Myles Borins from GitHub, Rob Palmer from Bloomberg, and chair emeritus Yulia Startsev from Mozilla.

It is a requirement of Ecma bylaws that we keep attendance at our meetings. I assume that you all, by virtue of your presence on this call, have signed in using the Google Form distributed on the Reflector. If you somehow obtained a link to join without filling out the form, please do so as soon as possible.

I want to start today by addressing our Code of Conduct, which is available on the footer of our website, tc39.es. Anyone participating in TC39 activities is expected to be familiar with the Code of Conduct and behave in a manner that reflects a respectful understanding of it. One of the most important aspects of a Code of Conduct isn't necessarily the expectations it spells out for behavior, but rather what happens when things go wrong. If you have a concern relating to a code of conduct violation, the website has clear reporting instructions. The enforcement process is also outlined on the website, giving you an opportunity to understand what to expect when reporting Code of Conduct violations.

Now that we are all pros at remote meetings I trust that the Teams client is working as well as possible.

CM: Have we made a decision that we're going to go with Teams? Because I really don't like it as much as some of the alternatives.

AKI: No, but right now your chair group has the easiest access to Teams for managing these meetings. So that's how it's gonna be. I'm not saying this is a final decision.

CM: That seems like a fine criterion.

AKI: Yeah, it is not a final decision by any means, but it is what we have that we can sort of manage the best.

If you are new or unfamiliar with TC39 meetings, I want to mention that the “raise hand” feature within the Teams client is not the best way to say your piece. For that we have a tool by the name of TCQ, which is linked in the Reflector. You can log in with GitHub at TCQ.app. Once you're logged in you'll see the agenda view; you can switch to the queue to get an opportunity to see what's getting discussed in greater detail. Yulia, do you want to describe the new temperature check feature real quick?

YSV: If you were here at the very beginning of the meeting, we were already playing around with this. We now have the ability to check the temperature of the room; we're calling it a temperature check. This is just a way to get a vague sense of what people are feeling in the room. We have two positive types of emoji, and then we have three to express other things. The "following" one was requested by a delegate, so I think it will be interesting to see how that one works; I don't think that all TC39 delegates will be familiar with this emoji. We've got two sort of more negative ones, and there is an ongoing bikeshedding discussion about which other emojis are necessary. Waldemar raised the need for a “confused” emoji, and we're looking for a good match there; once we have a sense of what that should look like we can update it. This is a trial: if you want to check the temperature on your topic, ask one of the chairs to take the temperature of the room, or something to that effect, and then we can have people say how they feel about things in this nonverbal way to get a count of where people are. That's how this is going to work. It's totally an experiment. Let's see how it goes.

AKI: Alright, so once you're in the queue you may end up having that temperature check depending on whether we are actively doing a temperature check on a given topic. There's also these buttons near the bottom or middle of the page. Switching to the queue view will give you an opportunity to see what's being discussed in greater detail, and contribute your own voice.

Clicking New Topic allows you to start a conversation on a new question or statement. Consider this the primary button to use in TCQ.

Discuss Current Topic allows you to reply to an active conversation, and should only be used when directly relevant to the topic at hand.

Clarifying Question will jump you to the top of the queue, you should only ever use it if you are otherwise unable to follow the conversation. Abuse of this button will not make you friends, but using it responsibly will.

The Point of Order button is like the nuclear option—it interrupts all conversation to be acknowledged. The best example of using it appropriately is when there are not enough note-takers, though I will come back to that in a moment.

Once it's your turn to speak you'll have a little "I'm done speaking" button added to your view. Please remember to click it when you are done speaking.

TC39 also has several IRC channels. These all have their purpose, but two in particular tend to be very active during meetings. #tc39-delegates is for technical conversations, especially (but not exclusively) about the current agenda item. This channel is moderated, which by the very specific IRC definition means it's public for anyone to join, but only registered TC39 delegates are able to participate. It's also logged.

Our hallway track will once again be Mozilla Hubs; the link can be found in the Reflector. Thus far it's the closest we've found to the real experience—wander between conversations, sit down to put the finishing touches on your slides, or strike up a conversation about the botched PS5 pre-release. The sky's the limit. If your computer is struggling with the rendering, try tinkering with your settings to force Hubs to render at 800x600.

Now, let's talk IPR, or Intellectual Property Rights. Generally, in order to participate in the TC39 plenary or champion a proposal, you must represent an Ecma member organization as a delegate. There are exceptions, including Invited Experts, who may attend and participate with the permission of both the chair group and the Ecma Secretariat. Any participant who is not an active delegate for a member org must register by signing the R-F-T-G agreement. Those letters stand for Royalty Free Task Group, and I am not a lawyer but I'm pretty sure it means you are relinquishing your IP rights over your contribution to Ecma so we can publish the standard every year. For more information, see the CONTRIBUTING document on the Ecma262 repo.

Alright, let's talk notes! Thank you to anyone who signed up on the Reflector for note-taking shifts. Your work is immensely appreciated. We're still a bit short for total coverage though, so if we could get some volunteers, I'd be forever grateful. Note-taking is a little different this meeting, due to a phenomenal new tool from Kevin Gibbons. Kevin, do you want to introduce that?

KG: Sure. So as you all know, notes are vitally important; as you also all know, we never have enough note takers. Thank you as always to the people who've been taking notes. I have tried to hopefully relieve some of the burden on y'all - or us all I should say - for this meeting and hopefully future meetings. So we're going to be trying a new tool which just glues the output of the meeting to the Google Cloud speech-to-text API and then glues that into Google Docs somewhat haphazardly. So if you look at the notes, you will notice that they are being taken in real time. That is my other computer doing that. You will also notice that they are full of typos and trying to capture literally what you said instead of what you meant to say, with all of your umms and ahhs and so on in there and with a bunch of typos, so we definitely will continue to need note takers. I will let the chairs do the calls for those, but hopefully it will be less work to take notes than it has been in meetings past and I am hoping we can get some people who have been hesitant to take notes because it would take too much of their attention to sort of just follow along and correct typos as they go and maybe swap out long sentences with summaries and that sort of thing. Please ping me if it falls over.

AKI: Thank you very much. Talk about a great example of when it's the right time to use a point of order - if the notes transcriber falls over. Alright, so we will still be calling for note-takers. We will still need note-takers. But the task of notetaking hopefully just got quite a bit easier. We'll see.

Our next meeting will be January 25th through the 28th. It was meant to be at the Netflix Sunset Boulevard Penthouse. Never even got fully planned because of the Rona. It'll be Pacific time because it would have been in Los Angeles.

On to some standard housekeeping. Has everyone had an opportunity to review last meeting's minutes? Do we have approval from the 22 of you or whatever who are present? [paused for input; group remained silent] I'm gonna say motion passes. We have approval of minutes for the last meeting.

How about this meeting's agenda, which you all have seen because there was a 10-day deadline for the important bits to get onto it. Can we move ahead with the current agenda? [silence] Great, excellent. All right, time for secretary and editor reports, unless, chairs, you have anything else you wanted to get in before we move on to reports and updates?

Plenary Scheduling

RPR: This is an update on scheduling of the plenaries. Just a recap of last year: we really kept our usual long-standing meeting cadence of once every couple of months, so six meetings a year, and we made sure we got our full 16 hours over four days. We switched to fully remote, quite a big change, but actually everything seemed to be basically OK. Each day has been split into two sessions of two hours each. The reason for going to four days was basically that it's a long time to stare at a computer if you are in there for the full six or seven hours. On time zones, we still kept the notion of a geography (today we are in Budapest, thankfully), and so that's been retained as well. Looking ahead to next year, we've advertised this a few times over the last year in our plans: we're moving to quarterly meetings, and this is intended to reflect the full in-person meetings that we would have done. Obviously it's not really going to be in person; I think the earliest we might consider going to real-life meetings might be the last quarter of next year, but obviously we really have to see how everything plays out.

We then have smaller meetings that are two days. These are still official plenary, so the 10-day advance notice still applies if you want stage advancement, but it's a shorter session, and perhaps the pro forma - the boilerplate we do at the start - might be a little bit shorter as well, with the same structure of the day: two sessions, two hours each. What this amounts to is the same number of hours, but more frequent meetings. This is all based on the feedback that we've had. The actual schedule is here. We've worked hard to look at the delegate survey to influence the time zones and the locations that we've selected, trying to be fair. There is a bias towards specific times, but that is because half our members are on Pacific Time. This is already in the TC39 calendar maintained by Yulia. And just to be clear, this is still full plenary even for the shorter meetings. We haven't completely locked down the start and end times for some of these, but we will be a little bit flexible and make sure that they're always set well in advance. If you have further thoughts, the chairs always like feedback, particularly constructive feedback, so please let us know on the Reflector.

LEO: I just have a question and a suggestion to tweak the four-day meetings later. I'm not complaining about anything, but I'm suggesting we consider starting some of the four-day meetings on a Tuesday, especially when they are too far from the Pacific time zone. So from Tuesday to Friday.

RPR: We have already done that - you'll notice that next year's meeting in the Japanese time zone starts on a Tuesday because we’ve already had that feedback.

LEO: Thank you so much.

ECMA Secretary's Report

IS: [presents slides]

WH: When is the December GA meeting next year?

IS: Ninth and tenth. Wait a second. No, next year it is the eighth and ninth of December, if I have copied it correctly. All right.

WH: Well, this presents a problem because the TC39 meeting will be at the same time as the GA meeting. That's not good for those of us who need to go to both.

IS: Okay, that's a good catch. So apparently we have a conflict between TC39's new schedule, which was presented just half an hour ago, and the GA meeting. Since it is difficult to move the General Assembly meeting, I would say that we have to move the TC39 meeting. It would be better the week before, so that its outcomes can be brought to the General Assembly meeting.

RPR: Alright, we will review that now and make a swift change to the TC39 meeting. Thanks, Waldemar. That was a very good catch.

ECMA262 status update

Presenter: Kevin Gibbons (KG)

KG: Alright, so starting off with an overview of changes since last meeting. The big one is #2007, which was the work of many people over many, many months: making the distinction between various kinds of number in the specification clearer. We have historically, and particularly with the introduction of BigInt, conflated various types of numbers, especially IEEE floating-point numbers (that is, JavaScript Numbers) and actual mathematical numbers. These obviously have different semantics in a number of cases, in a few cases the difference was observable, and in all cases the distinction was unclear - no one thinks that if you say to do a thing until a counter hits a certain point, you actually mean that you want to go forever if that certain point is above 2^53 or whatever. I'll talk more about that particular PR later. A few other changes: we changed the Reference type to an explicit Record. Previously, it was a value with components, but we have a type that represents a value with components in the spec. That's a Record. It's much like a JavaScript object. Hopefully, that's a good bit clearer.
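The Number-versus-mathematical-value distinction is easy to see at the language level, where IEEE doubles stop representing integers exactly above 2^53. A quick illustration in plain JavaScript (not spec text):

```javascript
// IEEE-754 doubles represent integers exactly only up to 2^53.
const limit = 2 ** 53; // 9007199254740992

console.log(limit === limit + 1);                   // true: 2^53 + 1 rounds back to 2^53
console.log(Number.MAX_SAFE_INTEGER === limit - 1); // true

// BigInt arithmetic is exact at any size, which is one reason the spec
// must keep mathematical values, Numbers, and BigInts clearly separated.
console.log(2n ** 53n + 1n === 2n ** 53n);          // false
```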

Also, we, or rather an external contributor - jmdyck, whose real name I can never remember - changed a few places in the spec where a built-in function was defined as a series of overloads into a single thing with explicit switches on the type of the first argument, which everyone thinks is much clearer and nicer. The overloads were extremely strange.

#2110 is just here so that people are aware of it going forward: the spec has various algorithms that are like some of them are defined in terms of static semantics, meaning purely in terms of the parse tree, and some of them are more runtime, meaning they rely on evaluation context, and some of the latter category were prefixed with the words "runtime semantics" in their name and others were not. The distinction was not clear and not well maintained as new parts have been added to the spec. So we just removed the "runtime semantics" prefix entirely so that an abstract operation is by default runtime semantics and those which are used statically are explicitly prefixed as such.

This last one, #1966, is just a very minor tweak to the grammar. I'm sure any of you who have worked on c-like languages are familiar with the dangling "else" problem in grammars. That's where you have an if-else statement within an if statement and you don't necessarily know which if to associate the else with. You want it to be associated with the inner one, but the grammar needs to make that clear somehow. Previously this was done with a normative note that just says it's associated with the closest possible. Now, it's done with a lookahead restriction, to match the rest of this spec. I don't believe there's any problems with that. But if you believe there are, please let us know so we can back that out.
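The dangling-else resolution KG describes is visible directly in JavaScript source: the `else` always binds to the nearest `if`. A small illustration (the function name is just for the example):

```javascript
// The else binds to the innermost if, which is the behaviour the old
// normative note described and the new lookahead restriction encodes.
function classify(a, b) {
  if (a)
    if (b) return "both";
    else return "a only"; // associated with `if (b)`, not `if (a)`
  return "not a";
}

console.log(classify(true, false)); // "a only"
console.log(classify(false, true)); // "not a"
```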

And then just a few normative changes that we've landed. These are both normative PRs that we discussed at the previous meeting. #2164, aligning detached ArrayBuffer semantics with web reality. In particular, just as a reminder, the specified behaviour has long been to throw when performing any operation on a detached buffer, but no one was actually doing that, and trying to do that would have broken the web. So we have made some tweaks to the language so that it better matches what browsers actually do and in fact need to do. Unfortunately that PR was not entirely complete, it turned out. We have since landed it and found new cases where the spec did not match web reality - or I should say that Ross and also the test262 maintainers have found such places. I believe we'll be talking about a couple more of those today, but the original PR was good in the sense that it was progress towards the world we want to live in, so it's still in. Unfortunately we're not done with that project yet, is all.
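The "web reality" behaviour for detached buffers can be sketched as follows. This is a minimal illustration, assuming an environment with `structuredClone` and transfer lists (modern browsers, Node 17+); the detail to notice is that indexing a typed array over a detached buffer does not throw:

```javascript
// Element access on a typed array whose buffer has been detached
// behaves like access on an empty array rather than throwing.
const buf = new ArrayBuffer(8);
const ta = new Uint8Array(buf);
ta[0] = 42;

// Transferring the buffer detaches it in the current realm.
structuredClone(buf, { transfer: [buf] });

console.log(ta.length); // 0
console.log(ta[0]);     // undefined, not a TypeError
```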

And then #2120. This was one of those that fell out of #2007, where the distinction between mathematical values and Number values needed to be made more precise. So this was just making it clear that a particular operation was being done with IEEE doubles rather than mathematical values, so that you would get the thing that everyone actually does.

Upcoming work is basically the same list as it's been the last few meetings. Landing #2007 was the major project this last couple of months, but just to recap, we are still intending to refactor syntax-directed operations so that all of the definitions for a given syntax-directed operation are in a single place rather than being co-located with the production. There will be links from each production to the SDOs that are defined over it. That hopefully will happen before the next meeting; it's the next thing we intend to tackle. There's a few other inconsistencies and general tweaks. I should call out #2045, which is a fairly major change: defining the notion of a built-in generator. This is purely editorial. It's just a different way of specifying iterators than what we currently do that allows you to use a Yield macro in the same way that we have an Await macro. That just makes it easier to specify operations. And that's intended to be used in the iterator helpers proposal, which adds a lot of iterators and was looking for an easier way to specify them. Personally I think that it's a lot nicer.

MM: Generators have this weird bi-directional nature where you can feed values into the next(), whereas iterators are essentially unidirectional - is this really intended just to be sugar for iterators or you are also intending them to be able to accept values?

KG: It's the latter. So moving on, a couple of other consistency and clarity things that I'm not going to call out - actually in fact quite a lot of consistency and clarity things that I'm not going to call out. Oh, I should mention that we track these on the projects tab of the ecma262 repo. And then I wanted to call out specifically, regarding #2007, the mathematical values versus Number thing. Here we have a chart of the various kinds of numbers and operations you might want to do with them. This middle column is the text that you would write in the specification and this rightmost column is the text that you would see rendered in your browser. So specifically, mathematical values are not bolded and do not have a subscript: you just write 0 to mean the actual 0. JS Number values are bolded and all have this little F subscript to indicate that they are talking about the floating-point numbers rather than real numbers, with the exception of NaN. NaN does not get a subscript because NaN could never have been a real number. BigInt values get this little Z subscript, the integer subscript, and are bold. And you can convert between a Number or a BigInt and a mathematical value, but you must do so explicitly. There's no implicit coercion. You can convert using these three operators, and those are all defined precisely in the algorithm conventions or notational conventions portion of the spec. So, try to get these right. The linter will help you if you do something which is obviously wrong, but the linter does not have a type system, so it's only able to identify things that are obviously wrong. We will do our best in reviews to get these right as well. I'll give it over to Jordan.
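The spec-level "no implicit coercion" rule has a language-level counterpart worth remembering: JavaScript itself never implicitly converts between Number and BigInt either. A quick illustration:

```javascript
// Numbers and BigInts never mix implicitly; conversions must be
// explicit, mirroring the spec's explicit conversion operators.
let mixingThrew = false;
try {
  1n + 1; // mixing BigInt and Number at runtime
} catch (e) {
  mixingThrew = e instanceof TypeError;
}
console.log(mixingThrew);    // true

console.log(1n + BigInt(1)); // 2n (explicit conversion to BigInt)
console.log(Number(1n) + 1); // 2  (explicit conversion to Number)
```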

JHD: I've decided to step down as editor after ES2020 is completed. I've been serving as editor since 2018, covering about three years of the specification, and helping to provide continuity during a time when interest from others in the role was inconsistent. Over the last year it's become really clear to me that SYG and MF and KG all have an enduring enthusiasm for the work, and they all have strong and complementary editorial visions. Although I'll continue volunteering to do repository maintenance at the behest of the editor group, I'm confident that fewer cooks in the kitchen will be more efficient, and I’m happy to reclaim a bit of extra time for family and other stuff. Thanks everybody.

AKI: Thank you, Jordan and Kevin.

ECMA-402 (Intl) status update

Presenter: Ujjwal Sharma (USA)

USA: Good morning everyone, if it's morning for you. Shane, like a lot of our friends on Pacific Time, found it really hard to make it here, so I'm here to make sure that you all don't miss the amazing status update for Intl stuff. Just real quick, what is ECMA-402, for the uninitiated? ECMA-402 is JavaScript's own favorite built-in internationalization library. So say you have a Date object. Those are fun, right? If you're printing that Date object to a website, you could use DateTimeFormat to produce formats that would be satisfying to people on both sides of the pond. How cool would that be? So how is Intl developed? Intl is part of a separate specification that is developed by TC39-TG2, which is different from TC39-TG1, and it's called ECMA-402. It's a different specification from this one, though we move proposals through the TC39 stage process, and we have a monthly phone call to discuss the issues in greater detail. If you want to join this monthly call, or if you're interested in getting more involved in this space, just send an email to the ecma402-admin list at chromium.org, and we'll keep badgering you with reminders so that you show up. If you want some more information about the project and about the people who work on it, just follow the link to the repository. TC39-TG2 is a project that requires a lot of people to keep functioning; there are people from all across the board: Google, Mozilla, Igalia, and Salesforce are all helping out. Let's see what we have in store. There are no normative PRs ready for review this time, so let's go straight to the proposals. The first proposal is DateTimeFormat.formatRange, championed by PFC. It's at stage 3, shipped in Chrome 76, and behind a flag right now in Firefox. It's one of the candidates for stage 4, pending a few editorial issues that are still being worked on in the proposal repository.
It's pretty amazing: you can basically format a whole range of dates, so you can say your delivery will happen anywhere between January 10th and 20th, 2007. I don't know why they used that date, but yeah. Okay, on to Intl.Segmenter, which is championed by Richard Gibson. It's shipping in Chrome 87 without a flag, and it's enabled on the JSC trunk. This is also super interesting: you can use it to segment words or sentences or paragraphs and then iterate over them. How fun is that? And you can do that irrespective of which locale you're in, so it's even more fun. For stage 1 and 2 proposals, there are a few. If you remember formatRange for dates, there's formatRange for numbers as well, so you can say your seven-foot-tall plushie would cost anywhere from 50 to 70 dollars, or something like that, I don't know. There's a bunch more going into this than just formatRange, so go check it out. It's pending resolution of design issues before promotion to stage 3; the exact design details are still being worked out, so if you care about this proposal, do check it out. Next up we have duration format, championed by YMD and myself, which is also pending a few design details. If you're familiar with Temporal durations, this can help you format those durations, so you can have a duration of two hours and 30 minutes and format it in Spanish, too.
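The two APIs described above can be sketched like this. A minimal example, assuming an engine with full ICU data; the exact formatted string varies by ICU version, so the outputs shown are indicative:

```javascript
// Intl.DateTimeFormat#formatRange: format a whole date range in one call.
const fmt = new Intl.DateTimeFormat("en-US", {
  year: "numeric",
  month: "long",
  day: "numeric",
});
const range = fmt.formatRange(new Date(2007, 0, 10), new Date(2007, 0, 20));
console.log(range); // e.g. "January 10 – 20, 2007"

// Intl.Segmenter: locale-aware word segmentation (guarded, since it was
// still rolling out to engines at the time of this meeting).
if (typeof Intl.Segmenter === "function") {
  const seg = new Intl.Segmenter("en", { granularity: "word" });
  const words = [...seg.segment("Hello, Budapest!")]
    .filter((s) => s.isWordLike)
    .map((s) => s.segment);
  console.log(words); // e.g. ["Hello", "Budapest"]
}
```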

USA: Well, okay, next up we have the Intl enumeration API. It's at stage 2, championed by Frank, and it's a good API, but it's blocked on a bunch of concerns regarding privacy and fingerprinting raised by certain people. We've been working with those folks to resolve these concerns and see what we can do within the privacy constraints. If you're somebody who knows about these subjects and wants to help out, we'd really appreciate the help, because we aren't experts in privacy. Smart unit preferences is a fun one. It's blocked on a discussion about its scope: there are concerns about whether this proposal is even in scope, because it's not strictly formatting, and now we're considering adding it to NumberFormat. If you have any thoughts on that, please chime in on the issue. Intl DisplayNames v2 is another one. It's championed by Frank and it adds a bunch of amazing things to display names, like calendar names and date-time field names, among other things. It also adds support for dialects, which is really cool, and there's an update scheduled later in this meeting. The same goes for Intl locale info: Frank is doing some amazing work on this one, and it's going to expose a lot of useful information via the Locale object; there's an update also scheduled for later. Talking about stage 1 proposals, the fun one we have in the pipeline right now is user preferences. It's not actually a single coherent proposal yet; it's a bunch of issues with proposals attached to them which, when put together, make for a more comprehensive solution, but we still need to figure out how exactly to navigate the space.
So there are proposals like the navigator locales proposal, adding Accept-Language headers, stuff like that. This is really one of the things that we're expected to work on in the next few years, and one of those proposal spaces that we're really excited about. So if this is something you're interested in, get involved! That's my last ask for you: if you want to help us out with documentation, or with implementing stuff in JS engines and polyfills, or if you're a C++ or Java developer and want to help us with ICU stuff, please help us out. And if you want to join our monthly call, again, that email is going to help you. Thank you.
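For context on the DisplayNames work mentioned above, v1 of Intl.DisplayNames was already shipping at the time. A minimal sketch of what it does (assuming full ICU data; v2 extends the set of supported `type` values):

```javascript
// Intl.DisplayNames: human-readable names for regions, languages,
// scripts, and currencies, localized to the requested locale.
const regions = new Intl.DisplayNames(["en"], { type: "region" });
console.log(regions.of("HU")); // "Hungary"

const languages = new Intl.DisplayNames(["en"], { type: "language" });
console.log(languages.of("hu")); // "Hungarian"
```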

ECMA-404 (JSON) Status update

AKI: Chip, your standard 30 seconds if you don't mind.

CM: Nothing to report

Chairs group update

Presenter: Rob Palmer (RPR)

RPR: We've had feedback that the committee doesn't really want to spend much time on this, so we are streamlining things quite a bit here. This year in 2020 we've had a chair group of four co-chairs. You saw their faces earlier on AKI's wonderful opening slide; you know who they are. We also had previous chair Yulia Startsev assisting us in the Dowager role. This was something that was brought up in the February meeting, but I think it's also worth clarifying for everyone what the Dowager role means. That's someone who has access to pretty much all chair activities, but who does not have the same kind of final say. They also have access to things like the weekly chairs meetings and the messaging channels that we use to organize and set up meetings, but obviously it's a lower commitment and a lower time expectation than the chair role itself. The reason for clarifying this role is that we are taking on a new Dowager in the form of MBS. MBS was a full co-chair this year, and for 2021 he's enthusiastic and very excited to take on the title of Dowager. So we thank him for his service. This is the proposed set for 2021. It's not all that much change, so what this means is that we're not planning to have an election. You may already be a little bit weary of elections, particularly those in the US. So if there are no objections, then we will adopt this in the January meeting, and if you have any feedback, we have an open Reflector issue. Thank you.

Handling of NaN & side effects in Date. prototype.set* methods

Presenter: Kevin Gibbons (KG)

KG: This is a PR that I brought up very briefly at the last meeting and kind of tried to sneak in at the end, but people didn't have time to review, so we didn't advance it there. Hopefully you've all had time to review this PR now. Basically what's happening here is that there are these operations, Date.prototype.setThing. All of these said that you needed to be performing these operations on a time value, where the time value is derived from the this object. So we call, like, setFullYear on a Date, and the Date looks at its time value and then passes that value to these various time calculations that unfortunately didn't really operate on NaN. A thing that you may not have known is that the Date object in JavaScript can represent an invalid time, and the way that is represented is by having a time value of NaN. And again, these operations do not make sense on NaN. So there was this question in implementations of what exactly to do here, because the spec didn't make sense. This was observable in that some implementations, when the time value associated with this was NaN, would return before converting all of the various arguments to Numbers, and other implementations would return after converting all of the various arguments to Numbers. Of course converting to Number is observable, because we have this wonderful valueOf operation in JavaScript that makes any conversion of an object to a Number observable. So we need to pick particular semantics for whether to return before or after performing any side effects in the conversion of these arguments to Numbers.

I have chosen the thing that I think makes the most sense and matches a majority of implementations, which is to perform the coercion for every argument and then do the check to see if the this time value is NaN. So that's what I have specified here on the right hand side, and here's a variety of operations that needed this, and I have done exactly the same thing in each case. Right. So that's the proposal. The proposed semantics impact some implementations, but not others. It's almost certainly web compatible. And I think it makes more sense. Do we have a queue?
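The ordering being chosen can be sketched with setMonth (the argument object here is purely illustrative, not from the PR):

```javascript
// An "Invalid Date" has a time value of NaN.
const d = new Date(NaN);

// A value whose coercion to Number is observable:
let coerced = false;
const month = { valueOf() { coerced = true; return 0; } };

// Under the chosen semantics, the argument is coerced first, and
// only afterwards is the receiver's NaN time value checked (so the
// date stays invalid, but the side effect still happens):
d.setMonth(month);

console.log(Number.isNaN(d.getTime())); // the date remains invalid
console.log(coerced);                    // valueOf ran anyway
```

This is the observable difference between the two behaviors implementations had: an engine that returned before coercion would never call valueOf here.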

DE: No, yeah, sorry for not managing to review this before last meeting, but this looks good to me. Is somebody writing tests for it?

KG: That's an excellent question. I wrote tests for some of the date ones. I don't think we have tests for this yet. But I think that's mostly a matter of I haven't communicated to the test262 maintainers that that needs to happen.

DE: This seems good to change to me. Once there are tests in my opinion it seems good to land. It seems like a good change.

KG: Okay, we'll call that consensus. I'll make sure this has tests before it lands.

AKI: Excellent.

Conclusion/Resolution

  • Add tests and land.

Handling await in left operands of exponentiation

Presenter: Daniel Rosenwasser (DRR)

DRR: All right, I can get this started. Gooooood morning Budapest. I was really hoping to be presenting from Budapest this year, but maybe next year. Do you all see stuff? I'm going to assume you all see stuff. Do you see my presentation? Yes. Okay. Great. Okay. So this presentation - I'm here today because Kevin found an inconsistency between implementations, and it stems from this original issue. So way back in the day when we added exponentiation there was this question of what the order of operations is when you have a unary operator on the left hand side, and depending on your background the order of operations is arguable: in some contexts you might interpret this as negating first and then exponentiating by Y; in other contexts, you might exponentiate X by Y and then negate at the end. The solution that TC39 came to was to disallow this entirely, and so this avoids visual ambiguity. If you want either one of those meanings you have to parenthesize: you can parenthesize negative X and then exponentiate, or parenthesize the exponentiation and then negate. But part of the change also meant that certain other things were disallowed too - specifically you can't have a unary operator on the left side of an exponentiation either. Basically this was another visual ambiguity issue and we disallowed it as well. delete, typeof, etc. were all disallowed. And the one that I want to talk about today is await, because await is one of them. You can see this if you just go through the grammar: when you get to ExponentiationExpression, there are two branches - one where you're allowed to use the exponentiation operator and one where you're not - and UnaryExpression is the one where you're not allowed to use it, and you can clearly see "await" there. So where's this done correctly? Well, TypeScript correctly disallows await x ** y. It doesn't allow that syntax (XS doesn't either), but unfortunately everyone else seems to allow this syntax.
And from what KG has told me, this sort of stems from everyone taking a very similar implementation approach. Just for completeness, these are the specific implementations we know of that have this issue, which KG told me about, and it's very sad as you can see from all the emoji. So the proposal that I have is: implementations should ideally just fix the bugs and become spec compliant, and because this seems to be a common issue across implementations, have some sort of notice within the specification to tell people "hey, just so you know, await really is supposed to be handled specially here" - or, I mean, in a consistent way here. There is an alternative, which is that maybe engines just can't change now - maybe there's some sort of web reality thing that we have to account for here - and to reflect that you can make a naive fix, but that has some poor effects, and specifying it is kind of tough. Specifically, the naive thing that you might be prone to do is: there are those two branches for unary and exponentiation, so just move await into the other branch, where you do allow it. So you move it into UpdateExpression, but now you have this other issue where you allow some of the original syntax that we found to be bad; you end up with at least part of the original problem, in the form of await -x ** y. And so I'd go as far as to say this is actually worse than what we were trying to avoid in the first place. So my recommendation is still that we should continue with what the spec currently says today across implementations. So I'll leave the floor open to discussion at this point and I will stop presenting.
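As a concrete sketch of the grammar rule being defended: await, like the other unary operators, must be parenthesized on the left of `**` (the function name here is illustrative):

```javascript
async function pow(x, y) {
  // return await x ** y;   // SyntaxError per the spec's grammar
  //                        // (UnaryExpression is not allowed before **)
  return (await x) ** y;    // unambiguous: await first, then exponentiate
}

pow(Promise.resolve(2), 3).then(v => console.log(v)); // 8
```

The parenthesized form is the only spec-conforming way to write this, which is exactly what TypeScript and XS enforce today.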

MM: Historically the best way to get the implementations to follow the spec is to have a test262 test for the non-conformance. Does test262 test for this case? If not, adding that should be at least the same priority as adding a note and talking to the implementations.

DRR: I agree with you that we will need a test for this. I don't know the status of that Kevin might be able to speak to that better than I can.

KG: I don't believe there is because I think if there were implementations would have gotten it right.

MM: Exactly, exactly.

DRR: If that's the best way to go about this, then I think that's all right. I think I just want to get a gauge of what implementations feel about this as well at this point, but that sounds like the right path forward.

BSH: Yes, I just wanted to say that I think that this shouldn't be a problem in terms of web reality at least based on data available to me. I did a search through our very large code base in Google and could not find any JavaScript that actually would break if you change this at all now. We don't use this syntax anywhere.

DRR: I agree because I think if you wrote this I'd have questions for you. So seconded.

WH: I'll keep it short and sweet. I think implementations should implement the spec as written. And if there are any web compatibility issues then come back, but I don't expect that to happen here. The best thing is to just implement the spec as it's written.

DRR: Yep, I agree. All right.

SYG: So yes, I agree with what people have been saying. It seems unlikely that these would actually cause compat issues - this is just not code that you would write, right? To be extra careful, we can do a query over HTTP Archive, since this is a syntactic thing that we're trying to change. I am going to fix this without such a query, but if someone wants to be extra careful, feel free to craft a query and then ask me to run it for you, so you don't incur large GCP costs. Yeah, seems good to me to fix the implementations.

DRR: sweet.

LEO: Yeah, just a quick comment on what is disallowed in the grammar and why there might not be tests for things like this. ECMAScript does allow extensions of the language. So the lack of a grammar production allowing something doesn't directly mean that thing is disallowed. So test262 cannot have a test for everything that is simply not in the grammar, because those things are not disallowed. Implementations are allowed to have extensions and still conform to ECMAScript, unless something is explicitly disallowed by early errors or forbidden extensions, etc. So this is very complicated - it's not implicit. So if you want to disallow something I recommend putting this [audio cut off]

DRR: I think your audio cut off. However, that's a really good point and I'm not exactly sure how to address it. It seems like we have a lot of places, or at least a couple of places, where the intent is to explicitly disallow syntax - maybe the most recent case being nullish coalescing, where we basically crafted the syntax in a way that it couldn't be mixed with other operators at a logical level, but then we punted on the idea of having an early error of some sort. So I still think implementations should switch their behavior up. But I don't know how to - Mark, you seem to have a comment.

DE: So I want to disagree with LEO. I think even though there's some text about disallowed extensions, I don't think the traditional way of looking at JavaScript as allowing extensions is useful in the same way today as it was in the ActionScript and JScript era. I think languages that extend JavaScript talk about themselves explicitly as supersets. And I don't think we should worry about the editorial difference between early errors and something not being present in the grammar. I think if we did, then we would just have a huge amount of additional early errors to add in order to accurately describe the language. So I want to revisit this at a future point. If people feel that the distinction is significant, then I would really like to discuss that further.

LEO: yeah, we always discuss this and we always disagree. I'm not sure we can make a productive use of time during this meeting. Okay.

WH: I agree with DE. I don't want to open the Pandora's box of explicitly specifying everything which should result in a syntax error in the spec.

AKI: And also SYG +1 DE and has nothing more to add beyond that. All right. So where does that leave us?

DRR: It sounds like maybe we need a test, but it's not clear to me who would write it or how. I can try to work on it a little bit, but I'd need some guidance. So can we have a volunteer to help coach me on writing a test262 test for now?

KG: Let's just open an issue on that repository and we can talk about it there.

DRR: Sounds good. All right. Thanks Kevin. Thank you all and I'm done. Thank you.

Conclusion/Resolution

  • Add tests.

__proto__ normative optional options

Presenter: Jordan Harband (JHD)

JHD: So this is talking about PR #2125 on the spec. The current status of the pull request is that it is a normative change that moves __proto__ out of Annex B. There was an open question on it about whether implementations are allowed to do a partial implementation of Annex B - in other words, can you have __defineGetter__ and omit __defineSetter__? The editor group decided that we would land 2125, once it had appropriate updates, without addressing the all-or-nothing question, and Gus decided to add this agenda item to discuss that question in plenary, which can be done in a separate PR or this one.

JHD: As we move things out of Annex B, the question will come up whether normative optionality comes in bundles or not. Meaning, is it acceptable to choose one normative optional item in the entire spec, implement it, and then omit the rest, let's say - or should we instead decide that there are certain bundles of normative optional things where you either have to implement the entire bundle or none of it? So for example, __defineGetter__ and __defineSetter__ are things that I don't think anyone has implemented without doing both or neither of them, and it doesn't seem to make sense to anyone I've spoken to that anyone would want to implement one of them without the other one. Perhaps even more relevant: it is likely that if you only implement one of them you will not actually achieve the goal of being compatible with code that expects these things to be present. So essentially the question as I understand it is: should we, and how do we, specify groups of things to be required wholesale or omitted? Hopefully I explained that well - if anyone wants to ask me to clarify before we go to the queue, please feel free.
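For illustration, here is the kind of code that relies on these legacy methods travelling as a set - defining an accessor with one half of the pair and reading it back with the matching lookup method (the property name is just a sketch):

```javascript
const obj = {};
let backing = 0;

// Define a getter/setter pair with the legacy methods:
obj.__defineGetter__('x', () => backing);
obj.__defineSetter__('x', v => { backing = v; });

obj.x = 42;
console.log(obj.x); // 42

// ...and retrieve them again with the matching lookup methods:
console.log(typeof obj.__lookupGetter__('x')); // "function"
console.log(typeof obj.__lookupSetter__('x')); // "function"
```

An implementation that shipped only __defineGetter__ would break code like this, which is the compatibility argument for treating the four as a unit.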

AKI: That's a lot of words that you would have to be in this committee for a really long time to truly appreciate, and I'm not accusing you of being intentionally obtuse—I don't think there's another way to say it.

JHD: I did my best :-)

AKI: It's just like a hard thing to follow right? I think there aren't any clarifying questions on the Queue. So I think we can go to MM.

MM: To clarify the history: for Annex B itself, there was an issue of how to interpret it. The intro language of Annex B implied that Annex B as a whole was considered a single atomic package, which I think was a counterproductive interpretation. When we decided to move things from Annex B into the spec, we also decided on the a la carte approach to normative optional - that the items were separate. But I very much like your question and I don't want to predetermine the answer. Completely fine-grained separation is counterproductive. I like the idea of defining bundles. I think that some bundling is obvious, and it will be clear where it stops being obvious. I think we should err towards finer-grained bundles so that there's more choice, but that's just an opinion I'm offering now as the question comes up.

JHD: The challenge is that making the determination is a normative decision, which is something that the editors feel is beyond the scope of what the editor group is supposed to do. So while I agree with you that it is likely to be a relatively intuitive normative decision with each a la carte question, we would still have to decide that in plenary, I think.

MM: I agree.

AKI: Okay, is the tension resolved?

MM: Yes.

DE: I remain a bit disappointed that we're still leaving some of these parts as normative optional. I saw they are also marked as Legacy in the PR, and I was hoping that if we can't just make them plain normative, at least we would be calling them deprecated but mandatory. I'm pretty concerned about the variation of different flavors of JavaScript. I think we should be focusing on compatibility, not optionality.

JHD: When Annex B discussions have come up before, where the committee landed, based on my understanding, is that similarly we would discuss that question a la carte. As it relates to this specific PR, the syntax for __proto__, for example, is intended to be required. For security reasons, the __proto__ accessor on Object.prototype is intended to be normative optional so that it can be removed. And the rest of it, like __defineGetter__ or __defineSetter__, is continuing to be optional at the moment, and that's something we could certainly discuss with an appropriate agenda item. So I think, Dan, to your question: if there's anything that you think you would like to be made required that goes beyond that, or as other things come up as they're hoisted out of Annex B, it seems like a great idea to talk about that in committee so we can make that decision.
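The split between the required syntax and the removable accessor can be sketched like this (deleting the accessor is shown only to illustrate the hardening scenario, not as a recommendation):

```javascript
// The __proto__ key in an object literal is ordinary, required syntax:
const child = { __proto__: Array.prototype };
console.log(Object.getPrototypeOf(child) === Array.prototype); // true

// The Object.prototype.__proto__ accessor is the normative-optional
// part; an environment can remove it for security hardening:
delete Object.prototype.__proto__;
const o = {};
o.__proto__ = Array.prototype; // now just creates a plain data property
console.log(Object.getPrototypeOf(o) === Object.prototype); // true
```

With the accessor removed, code can no longer mutate an existing object's prototype through `__proto__`, while the object-literal syntax keeps working.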

DE: Yeah, I don't want to get us off topic right now. I just want to register my continued disagreement. I previously proposed that we make all these things normative, but we don't have consensus on that right now.

JHD: okay, so there's nothing else on the Queue. What I think would be great to unblock Gus here is, do we have consensus that defineGetter and defineSetter should be implemented as a bundle, meaning do them both or don't do either. Let's start with that. Then if people are comfortable with that then the follow-up question would be should we then assume that we will continue to ask this question. As for each thing that has been brought out of Annex B so we can make a decision.

MM: The obvious bundle is actually larger, despite my saying that we should err towards smaller bundles. lookupGetter and lookupSetter are clearly part of the same bundle as defineGetter and defineSetter. I agree that each time this question comes up it should be brought to tc39 plenary, but I expect quick answers.

JHD: So then to modify the proposal then Mark you're suggesting that all of __defineGetter__, __defineSetter__, __lookupGetter__, and __lookupSetter__ be treated as one bundle.

MM: exactly.

KG: But not the proto accessor; the proto accessor would be a separate part.

MM: exactly. And in fact, I would be okay with the defineGetter bundle becoming mandatory. Salesforce had a security concern that surprised me, but that is real, that makes me agree that the proto accessor should remain optional. And in any case it's quite distinct from the defineGetter bundle conceptually.

JHD: So I'm going to then assume that we have consensus on making those four items one bundle and leaving the __proto__ accessor separate. And in the future as these questions come up we will bring them to plenary with the understanding that we are likely to decide on various little bundles ad hoc. Thanks everybody.

Conclusion/Resolution

  • __defineGetter__, __defineSetter__, __lookupGetter__, and __lookupSetter__ will be treated as one “bundle”: an implementation must implement either all or none of them
  • The __proto__ accessor will remain independently normative optional
  • Future questions of this nature will be addressed in plenary in an ad-hoc manner as items are lifted out of Annex B

Re-resolve unresolvable bindings in PutValue

Presenter: Shu-yu Guo (SYG)

AKI: Lovely. Alright, so next up we have SYG with re-resolve unresolved resolvable bindings in PutValue. We are using words today man. It is early and we are just going for it.

SYG: All right, so pretty small item. Basically, it's this snippet you see here. The way the spec is written, references are these things that remember the binding they were resolved against. So when you resolve the left-hand side, at that time there is no binding; the name is not declared. But when you execute the right-hand side, you can actually make a binding with the same name. The state of whether the reference was resolved or not is remembered, so come time to do the actual assignment, the spec double-checks: "was the left-hand side resolved at the time it was evaluated, before the right-hand side executed?" And if it was not resolved, it should throw in strict mode. The problem is nobody actually does this, and nobody has done this for a decade - there's a bug on file from 10 years ago in Firefox, and I'm pretty sure this is just web reality now for other implementations as well. So I am proposing to change the spec behavior to match implementations in this case, which is to re-resolve the left-hand side reference when you actually do the assignment.

[transcription error] So if it's defined after you execute the right hand side - you have a binding for the name - then the assignment succeeds. Yeah, that's about it. Let's go to the queue.
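A sketch of the behavior in question (the name `sneaky` is purely illustrative). Under the web-reality semantics being proposed, the assignment succeeds because the left-hand side is resolved again at assignment time:

```javascript
// Indirect eval runs in the global scope; the directive makes it strict.
const result = (0, eval)(`
  "use strict";
  // 'sneaky' is unresolvable when the left-hand side is evaluated...
  sneaky = (globalThis.sneaky = 1, 42); // ...but the RHS creates the binding
  // The spec as written says: throw a ReferenceError anyway.
  // Shipping engines re-resolve and complete the assignment:
  globalThis.sneaky;
`);
console.log(result); // 42 in shipping engines
```

Under the spec as currently written, this would instead throw a ReferenceError, which is the decade-old mismatch the change resolves.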

AKI: Queue is empty.

SYG: All right, thanks. Oh - a few items have come up in the queue. Give it maybe half a minute, and otherwise I'll assume this is not controversial and we have consensus. To repeat here: no engine or other implementation needs to do anything.

MM: Okay, so it says re-resolve, why not just resolve it once but later rather than re-resolving it.

SYG: Good question, I don't remember. I have to check whether the current implementations in fact do two resolutions, or just one late resolution, and whether there is different behavior. Can we actually observe this - we can, right?

MM: Yeah, if there's an accessor there the getter would be called twice. That seems very bad.

SYG: Do we go through that path for PutValue? I'm just thinking out loud - we would go through the setter path, and sure, the setter would be called twice. Hang on. Anyway, to Mark's question: I will investigate and comment on the issue. I will take consensus in this case to mean that we would change the spec. As to exactly how that will change the spec, I need to investigate your question.

MM: Okay, so I agree that we should change the spec. I feel rather strongly that if we can do it with one resolution at a later time, that's way preferable. The other question that I want to make sure is covered is that there are two places with an object environment rather than a lexical environment - the global is one. The other one is "with", and the same question should come up there. Are you proposing the same semantic change with regard to "with"?

SYG: No, I'm pretty sure it just doesn't come up with "with", but I don't remember why - I remember thinking about this, but I don't remember the exact reason why it doesn't come up with "with". I will also get back to you on that.

MM: Yeah, I would really want to understand that before this goes forward.

SYG: Yeah. That's all good. I mean, that seems fair. So let the notes reflect no specific resolution on what to do, because I forgot some of the details.

Conclusion/Resolution

  • Revisit with more details.

IntegerIndexedElementSet should always indicate success

Presenter: Kevin Gibbons (KG)

KG: I'm presenting on behalf of Ross, who I believe is at an all hands or something. This is another follow-up to 2164, "align detached buffer semantics with web reality". The specific issue is that the behaviour specified as of that PR was that if you write past the end of a typed array - not even necessarily a detached array, just past the end of an array - the write would fail, and moreover in strict code that write failure would throw a TypeError, just as would, for example, trying to write to a non-writable property on an object. And unfortunately, it turns out that it is not web compatible to change implementations to match the specified behavior; that is, this little case here [in the PR] must succeed for the web to work. Which is unfortunate. It breaks non-trivial websites like godbolt. So this proposal just makes any integer-indexed write to an integer-indexed collection always succeed. It will still perform any side effects involved in the coercion of the argument; that's not changing, that was true in the previous specification. But it doesn't matter if you are trying to write to a detached buffer or otherwise write past the end of an array: it just always unconditionally succeeds, modulo those side effects. And again, this is a web compatibility thing that implementations have found that they cannot change. Do you have any questions?
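A minimal sketch of what "always succeed" means here, in strict code (the detached-buffer case behaves the same way as the out-of-bounds one shown):

```javascript
"use strict";
const ta = new Uint8Array(4);

// Out-of-bounds write: under this change it silently "succeeds" even
// in strict mode - no TypeError, and no property is created either:
ta[10] = 1;
console.log(ta[10]);   // undefined
console.log(10 in ta); // false

// Side effects of coercing the assigned value still happen, because
// the value is converted to a Number before the bounds check:
let coerced = false;
ta[10] = { valueOf() { coerced = true; return 1; } };
console.log(coerced);  // true
```

This is exactly the integrity loss MM objects to below: the assignment reports success, yet reading the same index back does not return the written value.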

MM: This is terrible! This is the entire reason why when we introduced strict mode with es5 we went through the pain of turning failed assignments into thrown errors. So that code that assumed that the assignment succeeded would not simply proceed on control flow paths that assumed the success. Over here if you're going to indicate success for any assignment and then the code turns around and reads at that same property name you're presumably not going to give back the value that was written because you're not actually writing a property. So you're losing the integrity goal that goes back to the introduction of strict mode in es5.

KG: Yup. ¯\\_(ツ)_/¯

AKI: Okay, do we have more to add to this line of thought or do we want to move on to Waldemar?

WH: Mine is an actual question. Just for my edification, I want to understand: how did we get into this situation? Did we get this wrong in the spec or did the implementations just implement something other than the spec here?

KG: So I believe the history here - and please someone correct me if I'm wrong about this, because this is before my time - the history is that the typed arrays specification wasn't originally ours. The typed array specification was something that the Khronos group did as part of the WebGL stuff, and they did not specify things in quite the way that we would have specified things, and browsers implemented these things before there was a proper TC39 specification, and people started using them in the wild before there was a real specification. This is my understanding of the history. And I think that what happened is that it's not exactly that we got the specification wrong or the implementations got it wrong. It's that those two efforts happened in parallel and produced different results.

DE: To fill in a little more, browsers have been aware of this mismatch the whole time. So browsers shipped the Khronos typed arrays this way, then ES6 came out and said, please make several changes to typed arrays. It seemed like those changes would have web compatibility risks, and even though there were test262 tests indicating a lot of these kinds of failures, browsers didn't implement those changes, to avoid the web compatibility risks. There were other changes that were implemented - the smaller ones, like the toLength change and adding all the methods - but I think it just wasn't really possible for TC39 to adopt the Khronos typed array spec and then make a bunch of changes to it. My understanding of the committee's plan at the time was to write this spec, throw it over the wall, and hope that browsers work things out, much like the Annex B function hoisting incompleteness. And I think this is just a mode that we can't work in, because we've seen time and time again that it doesn't work. Instead, we need to work together and to assess the web compatibility of things before adding them into the specification.

AKI: Did we have an actual conclusion yet?

KG: Yes, so I'd like to ask for consensus for this PR that's been open for a while, which is just as I said: it makes IntegerIndexedElementSet, which is the operation for writing an integer index to a typed array, always succeed, regardless of whether you are writing past the end of the array or to a detached buffer.

MM: I really find it hard not to object.

KG: I should mention that no implementation is ever going to change this.

SYG: The status quo is worse, I think.

MM: [sighs] I do not object, but very reluctantly.

KG: I will capture that in the notes.

AKI: All right. Thank you all. I believe Yulia had something on the queue, but we had a minor blip in our queue. Was there something else?

YSV: it's cleared up in the chat.

BSH: I'm just curious - it was stated that we know that if we fix this it breaks various important websites; do we know why? Do we know what is it they're doing that somehow the program is actually working even though they're setting things that didn't actually get set and yet that's okay.

KG: I don't believe we know why, unfortunately.

BSH: Okay. Just wondering, yeah.

Conclusion/Resolution

  • Accepted, reluctantly.

clarifying conclusion to __proto__ normative optional agenda item

JHD: Should the __proto__ getter and setter be bundled as well? I'm not a hundred percent sure what to suggest, actually, because I believe only the setter has a security concern, not the getter.

MM: I feel strongly they should be bundled together. The way to remove the setting behavior is to remove the accessor property.

JHD: Does anyone else have thoughts on that? if not, I think we'll just go with bundling the two of them together.

MF: My preference aligns with Mark.

Conclusion/Resolution

  • The __proto__ getter and setter will be bundled

Give %TypedArray% methods explicit algorithms

Presenter: Shu-yu Guo (SYG)

SYG: So this is another thing where, in Ross's quest to get the spec to reflect reality about typed arrays, we found some other parts that were underspecified and have some engine disagreement. So this is one of those follow-on PRs I'm presenting for Ross. Currently on %TypedArray%.prototype, the first bullet point lists [on the slide] all the methods that do not have algorithm steps. They only have prose. The only two methods that have algorithm steps are map and filter. So what the PR does is try to give everything algorithm steps, but we ran into some issues. First of all, what the prose currently says is the following - I'm paraphrasing, of course: implement the same thing as the Array.prototype counterparts, except when you would read the length property, instead read the internal slot [[ArrayLength]] on the typed array; and at the start of every method, call ValidateTypedArray, which just throws if the typed array is detached at the beginning of the method call. And what this PR does is try to follow the spirit of that prose. So it first copies the spec text word for word, step for step, from the Array.prototype counterpart; then it inserts a ValidateTypedArray call at the start; then it substitutes all the reads of the length property with reads of the [[ArrayLength]] internal slot. And then - and this is highlighted in red because this is the part that has disagreement and is underspecified - it removes the [[HasProperty]] checks, because typed arrays don't have holes and implementations don't call [[HasProperty]]. Notably, the prose doesn't say anything about what you should do about the [[HasProperty]] checks, so the implementations each interpreted it some way and did something. Luckily, only three methods care about the [[HasProperty]] checks.
So steps 1 to 3 here are sufficient to fully define, in a compatible way, all the methods except these three: includes, indexOf, and lastIndexOf. Once a typed array's underlying buffer is detached, typed arrays behave as if they have no integer-indexed own properties. So all calls to the [[HasProperty]] internal method will return false for those properties.

So first, I'm going to show you how the engines differ. Suppose we have the following code. We make a new Uint8Array, then we make a poison pill that sneakily detaches the typed array when it is used as a value, and then we check whether the typed array includes the value undefined. In the algorithm steps of Array.prototype.includes, which we copied into the typed array method, the second argument - the start index to search from - gets coerced to a Number after the this value has been validated. So the validation happens first: at that point the typed array is not detached, so it doesn't throw, but then it detaches during the coercion. So what should this do? Currently JSC throws, because JSC kind of throws on all the detached stuff; SpiderMonkey doesn't - it returns true, meaning the sneakily detached typed array includes undefined; and V8 returns false, saying the detached typed array does not include undefined. The proposal is to not align step by step with what the array methods do, but instead to align with what we interpret the spirit of the original prose definition to mean, which is to align with the observable output of the array method. So what I think the observable behavior ought to be in the detached case is the analog of setting the length of a regular array to zero: I consider sneakily setting the length of a regular array to zero analogous to sneakily detaching a typed array. So this is basically the same example as before, but instead of detaching, the length is set to 0. Here the engines all agree: we all say that a sneakily truncated array behaves as if it includes undefined. So recall again, the only engine that agrees with arrays today is SpiderMonkey. So the proposal is to adopt the SpiderMonkey semantics, to align with the array analogy. I think the web compat risk is very low. I don't think people are writing code like this.

It seems very unlikely to exist in the wild. indexOf again: here are two examples that sneakily detach a typed array and sneakily truncate a regular array. For indexOf, V8 is the one that aligns with the array behavior, meaning Array indexOf and TypedArray indexOf both have a [[HasProperty]] check, and because a detached typed array behaves as if it has no own properties, it does not find the index of undefined. And the same thing for lastIndexOf, in reverse. So we align with the V8 semantics here, which align with the array semantics. For SpiderMonkey at least, I don't think this is controversial. There's also a join method that can sneakily detach or truncate the input array, and the array that's joined behaves as if it were an array full of undefineds.
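The regular-array side of the analogy can be sketched like this (a plain array truncation stands in for the detach, since detaching an ArrayBuffer needs a host hook):

```javascript
// A "poison pill" start index that empties the array during coercion:
const makePoison = arr => ({
  valueOf() { arr.length = 0; return 0; }
});

// includes: the length (3) is read before the start index is coerced,
// so the now-missing elements read back as undefined:
const a = [0, 0, 0];
console.log(a.includes(undefined, makePoison(a))); // true

// indexOf: performs a [[HasProperty]] check per index, and the
// truncated array has no own index properties left, so it finds nothing:
const b = [0, 0, 0];
console.log(b.indexOf(undefined, makePoison(b))); // -1
```

These are the observable outputs the proposal picks for the typed array methods: SpiderMonkey's semantics for includes and V8's for indexOf/lastIndexOf both reproduce the regular-array behavior above.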

WH: You said that if you call includes on an empty array and you search for undefined it returns true, but it actually returns false.

SYG: If you search an empty array for undefined, you get false, you're right. But here you search an array that is sneakily truncated in the conversion of the second argument: when you initially call includes, the array is not yet empty. Does that clarify?

WH: Yeah, I'm just trying to figure out what the regular array semantics are that you're trying to match here.

SYG: What I'm matching is that in TypedArrays, when I first call includes, the TypedArray isn't yet detached, it's still attached. It is the conversion of the second argument that causes the TypedArray to be detached.

SYG: The thing I'm trying to match is I think the closest thing in regular arrays to a sneaky detached typed array is a sneakily truncated array where I make it empty by setting the length to 0. If you disagree with that we can discuss it after the presentation. I think this is the closest one.

SYG: For join, I don't think there's anything controversial here. We should return empty string, comma, empty string. I'm just calling out that SpiderMonkey, for some reason, stringifies the undefined elements of a detached typed array to the string "undefined", which it does not do when stringifying undefined in regular arrays. And that's about it. We have more questions on the queue. But to be clear, the consensus I'm asking for is: all the methods listed here except join, indexOf, lastIndexOf, and includes have agreement among all the engines once they're given explicit algorithm steps; and for the ones that do disagree, we get consensus on the proposed semantics. Specifically: for includes to have the SpiderMonkey semantics, for indexOf and lastIndexOf to have the V8 semantics, and for join to have the V8 semantics, all of which align with what I think is the closest analogy on regular arrays to the detached typed array case.
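
The "empty string, comma, empty string" result can be seen with the regular-array analogue (a sketch of the described behavior, not the slide code):

```js
// Sketch: truncating an array while its join separator is stringified.
const arr = [1, 2];
const sneaky = {
  toString() {
    arr.length = 0; // truncate while the separator is coerced to a string
    return ",";
  }
};
// join reads the length (2) first; the now-missing elements read as
// undefined, which join renders as the empty string.
console.log(arr.join(sneaky)); // ","
```
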

SYG: All right, that's fine by me. I'll consider this consensus. Yeah, it looks like it's generally positive.

AKI: Thank you for being so flexible. Thank you everyone who continues to be flexible. Thank you. All right.

Conclusion

Consensus on the PR, to wit: algorithm steps are copied exactly from the Array counterparts for all methods, with [[ArrayLength]] read instead of "length" and ValidateTypedArray called at the start; includes, indexOf, and lastIndexOf are called out specifically below. For includes, the TypedArray version does not have HasProperty checks, just like the Array version; SpiderMonkey is the only web engine that currently implements the proposed behavior. For indexOf and lastIndexOf, the TypedArray version does have HasProperty checks, just like the Array version; V8 is the only web engine that currently implements the proposed behavior.

Concurrent JS: A JS vision

Presenter: Shu-yu Guo (SYG)

slides

SYG: So first of all, I am not asking for any stage advancement here. I'm throwing some ideas out, but I'm not presenting a concrete proposal yet. We've had some vision talks before; this is something along those lines: something I would like to see, a direction I would like the language to go. I'm going to describe the general problem space, describe where I think I want to go, and throw out some concrete ideas, but I'm not asking for any stage advancement, and I apologize for not really having any reading material ahead of time until now.

SYG: I'll start with motivation, then list the concurrency models the JS ecosystem currently has and describe them, describe a road map I would like to see us follow, and then throw out some concrete ideas. I think the motivation is actually pretty cut and dried. We've been saying, both in industry and academia, that we need programming languages to take better advantage of cores. We're now at a point where our phones, high-end and low-end, have 6 to 8 cores, and heterogeneous computing is going to be the regular way for things to work. Arm has this thing called big.LITTLE, where there are big, power-hungry cores and there are small, lower-frequency, much more power-efficient cores. The key thing with big.LITTLE is that the little cores are much more power efficient than they are slow: a big core consumes, if not an order of magnitude, at least multiples more power than the little cores, without the same multiple of speedup for that power consumption. In the future, especially as more and more things move to this heterogeneous Arm architecture, it makes a lot of sense to judiciously offload work that's not super urgent to the little cores so you don't drain the battery. And on desktops, of course, we've got a bunch of cores, and using those cores is really clunky in JS right now. There's no great way to do concurrent programming. And there are big, sophisticated web apps with big company backing that are running up against performance pain points around concurrency. So I think this is a thing that we should solve at the language level. On to the concurrency models.

SYG: The JS ecosystem has two models right now. On the one hand, we have something that's web-like: it's run-to-completion, and it's sort of actor-inspired, in that we have these isolated, thread-like things that execute code and pass messages. It's not exactly actors; it doesn't follow the same axioms, and it's heavier weight than something designed from the ground up to be an actor model, but it has some of the trappings of an actor model. We pass messages for everything. Of course there are event loops, and importantly, there are no data races, by construction. By design, data is isolated between the workers. They have their own copies of the primordials, and of some of the web APIs as well. By default, there is isolation.

SYG: At the same time, we also have something that's thread-like. By thread-like I mean synchronous APIs with manual synchronization. We added Atomics with this low-level futex API for locking, for building your own mutexes. We didn't even give you mutexes; we gave you the building blocks of mutexes. It's really low level. By doing something thread-like you opt into data races in shared memory: you have shared memory, you have workers. They have the same memory model, luckily, so you don't have to worry about two different weak memory models, but you are opting into a world that is very difficult to reason about. Pictorially, the web-like model looks like this: you've got some agents, they each have their own separate memory, they each have an event loop, and they pass messages to each other. In the thread-like model, they have their own memory, but they also have some shared memory, they all point to each other, and they all execute concurrently, with some locks to do the synchronization. The reality on the web, and on Node as well (I can't say for Moddable; I guess you may not have multithreading in the IoT environments, but I have no idea), is that we just have both: you have some shared memory with locks, but most of the time we want to stick with the actor-like, async, everything-isolated kind of model.
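
As an illustration of "building blocks of mutexes": the snippet below is a deliberately naive spinlock sketch using the real Atomics API over a SharedArrayBuffer. The helper names (tryAcquire, acquire, release) are mine for illustration, not anything proposed here.

```js
// A minimal lock built from the Atomics building blocks (illustrative only).
const lock = new Int32Array(new SharedArrayBuffer(4)); // 0 = unlocked, 1 = locked

function tryAcquire(l) {
  // Atomically flip 0 -> 1; true only if this call performed the flip.
  return Atomics.compareExchange(l, 0, 0, 1) === 0;
}

function acquire(l) {
  while (!tryAcquire(l)) {
    // In a worker thread, block until some agent calls Atomics.notify.
    Atomics.wait(l, 0, 1);
  }
}

function release(l) {
  Atomics.store(l, 0, 0);  // unlock
  Atomics.notify(l, 0, 1); // wake at most one waiter
}
```

Note that Atomics.wait throws on the main thread of a browser, which is part of why this is awkward to use outside workers.
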

SYG: That's for good reason. The web-like model is easier to reason about and easier to use. Your execution is causal; forget about interleavings, causality applies. That's something you take for granted, but it goes out the window once you leave the web-like model. You don't have to worry about data races, by construction; you have things that are isolated. Asynchronous APIs not only mean more optimization opportunities, they generally mean that your program will be smoother: you're not accidentally blocking something with a lot of work, because you have to think up front about how to deal with things in an asynchronous way. There's less focus on manual synchronization mechanics; you're not blocking things, you're not manually using semaphores or message queues, that's all built into the system. The main downside is that it leaves some performance on the table, especially when you're migrating a large existing code base that already uses threads, which is the Wasm use case: taking an existing large code base and running it on the web. The thread-like side is where WebAssembly is on the uptick. Wasm has things like Wasm GC coming, and in two or three years' time it will probably also have threading support. There is no actor-like model built into Wasm; that's not Wasm's MO. Its MO is to support existing code bases, and for existing code bases it's important to have threading support. Plus, if we do more work on the threading side, we will create a cottage industry of researchers and academics finding bugs in our memory model for years to come. The bad side of threads is basically what I said before: you have to manually synchronize stuff, and you opt into data races. Once every billion executions there's an execution that's acausal, and you're just like, what is going on? To drive home that this is a hard thing to use, there's a funny picture I'll show in a moment. Threads also expose timing side channels, but that is water under the bridge, and on the web at least we have chosen to combat the side channel issue by making sites explicitly opt in to using shared memory with some headers, and that will continue to be the case in the future. So here's a picture from back in 2015 of a sign David Baron printed out at the Mozilla San Francisco office that said "must be this tall to write multi-threaded code", posted much higher than any person could reach. That is the general sentiment around multithreaded code, even among world-class expert programmers like David Baron.

SYG: Now, having presented those two models, my central thesis for this talk is that we need to push to improve both models simultaneously, mainly because both models are already here, and, pragmatically, where they're good and where they're bad complement each other. What I think should be out of scope is any kind of greenfield redesign. There have been good advancements in the PL world in concurrent programming: Rust's ownership model is getting good reception, and honestly, it's pretty great. But that kind of heavy dependence on static checking is a non-starter for JS. Similarly, redesigning JS to be fully actor-like is not going to serve our present needs. So here's where I would like to see the road map for concurrent JS go. On the web-like side, to push the web-like concurrency model, we need language support for async communication and the ability to spawn units of computation. On the thread-like side, we need shared memory, basic synchronization primitives like futexes, and the ability to spawn threads. Luckily, we already have the basic building blocks: we spent years doing promises, we have async/await, which is just really nice, we have workers, we have SharedArrayBuffers, we have Atomics. I think where we are now is phase two, where we need to make data transfer and message passing, the web-like model, more ergonomic and more performant, for both code and data. And for thread-like, I think we need higher-level objects that allow concurrent access, and higher-level synchronization mechanisms than the bare-minimum, close-to-the-metal SharedArrayBuffers and Atomics that we currently have, because unlike Wasm, we're not trying to be purely a compilation target.

SYG: So for web-like, where I would like to see us go is mainly to address what I've observed as the biggest pain points, which is transferring data. One: transferring is expensive, and the set of transferable things is very limited; not many things are transferable. There's weird re-parenting of stuff; it's just not very ergonomic. And if it's not transferable but is structured-cloneable, then you're just copying. We could be much better there. It's too expensive to really use in a way that makes it compelling to spread your work out over the cores. The serialization/deserialization pain point for ergonomic use is about the same: when you copy, not only do you copy, sometimes you can't even copy, because the thing you want to copy is not structured-cloneable, and you have to manually serialize and deserialize it into something else that is in fact copyable and postMessage-able across workers. And finally, transferring code is basically not possible. You can't transfer functions, you can't transfer modules; we can only stringify them. There will be another presentation later in the plenary that specifically addresses this point: the module blocks proposal by Surma. I thought I was going to present after Surma, in which case I'd have said see Surma's presentation, but I guess the schedule moved around, so please wait till later to see how that proposal would help the situation. I'll take a quick detour here to talk about how I think we can make transferring and sharing of data more performant and more ergonomic, with a strawperson proposal. The basic problem with the ergonomics is that, as I said before, this transferable thing exists but is very limited. ArrayBuffers, for example, are transferable, but you can't transfer regular objects, and in most JavaScript applications you are not coming up with your own manual object layout system layered on top of ArrayBuffers; you are using plain JavaScript objects. So if you want to transfer things, you end up having to serialize them into the ArrayBuffer on one side and deserialize them out of it on the other, and if you're doing that, it's tantamount to a copy, so we're not really making it very easy. I think the basic goal should be that plain JavaScript objects should be transferable and shareable across agent boundaries. And in addition to transferring, which is really about transferring ownership, we should also look at the traditional reader/writer lock, and the basic insight of Rust's ownership model: you either allow a single agent exclusive write access, or, if nobody has write access, then it is safe to have everybody read. In this single-exclusive-writer-or-multiple-readers scheme, you cannot have data races, by construction. So that's the ergonomics problem: we want to transfer plain JavaScript objects. On performance: these objects exist in graphs, so fundamentally what we need is not the ability to transfer one object, but the ability to transfer a graph. And the problem is, if you start from some object, you have to find its transitive closure, all objects reachable from it, and closure discovery from a starting point is linear in the size of the graph. You also probably eventually bottom out at stuff you cannot transfer, like Object.prototype and Function.prototype, stuff that's intrinsically tied to a particular global; you can't say, I'm going to transfer that over to another worker. The second performance problem is that, as I alluded to with the serialization/deserialization point around ArrayBuffers earlier, transferred objects probably should not create wrapper copies that point to the same backing store. That's the thing we are trying to solve, because for complex object graphs, if you have wrappers on the order of the number of vertices in the graph, that costs a lot of memory, and it really eats into what you're trying to accomplish by sharing and transferring things to begin with. Transferring should be more or less as cheap, memory-wise, as copying pointers. There might be some constant overhead cost, but that extra cost should not scale with the size of your object graph. The strawperson proposal combines all of these problems: we let the developer manually separate object heaps into shareable/transferable parts and non-shareable/non-transferable parts, and we maintain the invariant that while the non-shared parts can point into the shared parts, the shared parts cannot point back out. This invariant is very important, because it is what allows us to transfer something without having to discover its closure: we know by construction that the object graph of the transferable heap is always closed. So the transferable, shareable heap becomes the unit of sharing, instead of individual objects. This is coarser grained and less expressive, but I think it's a promising avenue of investigation, and it strikes the right balance between performance and ergonomics. Let me show you some code. I've been talking about heaps a lot, but what is this object graph that's closed under itself? It sounds a lot like a realm. This is a different use case for a realm-like thing than the existing Realms proposal and the compartments proposal, but the core of what I mean by a realm here is this idea of a disjoint object graph that's closed under itself, closed by construction, as an invariant that's maintained, not just as an initial state, which is what the current Realms proposal gives you. So, walking through some strawperson code here. The idea is that you transfer this anonymous module block (module blocks are unfortunately not yet a proposal; it hasn't been presented to TC39 yet) over to the shared realm to run, and you get this object out, which is the default export here. This object, because it is allocated in a shared realm, is in fact shareable and transferable. And it has this property that it cannot ever point back out to agent-local stuff: any attempt to assign an agent-local object to any object that originated from a shared realm will throw. Primitives are okay, though, so the string literal "foo" is okay here. Now you make a new worker and you transfer the realm to the worker, and after you transfer it you lose ownership of it and can no longer mutate it as you did here. On the worker side you receive the realm, and with it you receive ownership of everything within the realm, like this object we just created, and you can mutate it.
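
Since the slide code isn't in the notes, here is a rough reconstruction of what the walkthrough describes. Every name here (SharedRealm, the eval method, the module-block syntax, the transfer mechanics) is hypothetical, pieced together only from the talk; this is a non-runnable sketch, not a proposed API:

```js
// HYPOTHETICAL strawperson API, sketched from the description above.
const realm = new SharedRealm();

// Evaluate an anonymous module block inside the shared realm; its default
// export is allocated there, so it is shareable and transferable.
const obj = realm.eval(module {
  export default { status: "init" };
});

obj.status = "foo";  // OK: primitives have no identity
obj.local = {};      // throws: shared objects may not point at agent-local objects

const worker = new Worker("worker.js");
worker.postMessage(realm, [realm]); // transfer ownership to the worker
obj.status = "bar";  // throws: this agent no longer owns the realm
```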

WH: You said you can no longer mutate it. I assume you also cannot read it any more — or can you still read it?

SYG: You can no longer read it either, that's correct. Good point. If you touch any of the references you still have to an object in a shared realm, it'll throw. If you want to read it, what you do is fork these read-only views. Once you fork a read-only view, everybody who has access to a read-only view of the realm can read it, but nobody can write. I ran out of space on the slide, but basically the idea is that all the workers who have read-only access can read, nobody can write anything, and you only regain write access once you join all the views. Each worker has to explicitly give up access: I'm done with the read-only part, I give you the view back, you join it. And once there are no more outstanding views, you regain exclusive write access. But it's not fully fleshed out yet; these are early ideas. It may require re-architecting the memory subsystems of our engines; V8's isolates in particular would probably need to be rethought a bit. And even though, from a language level, the line we draw for what's shareable versus not shareable should be primitives versus objects, precisely because primitives don't have identity and objects do, from an engine point of view that's not what matters; what matters is whether something is heap-allocated, whether it's boxed. And implementations differ here, so we need to figure out the implementation concerns. For example, V8 allocates basically everything as a heap object except small integers, while JavaScriptCore and SpiderMonkey NaN-box values and, for example, do not allocate doubles as heap objects. We need to think through the performance implications of the various implementation techniques. Do we have a unified heap or separate heaps in the implementation?
This is not whether we have one realm or multiple realm, but we're talking about - if we need to allocate primitives like strings out of somewhere. Do we allocate out of a single key or do we allocate out of multiple groups? And they both have trade-offs for how we coordinate GC of these separate workers that they have if they are different they all can have references to a shared GC Realm. This proposal I think should also work for sharing code but more needs to be thought through. like in particular, I think it might work for sharing code because if you make the unit of sharing a realm, the functions always kind of implicitly close over their realm up to the Prototype chain of to the global. So without making the unit of sharing a realm you have to answer the question, well what happens when you transfer function or share a function in read-only mode across different Realms like you re-parent things to the function prototype? What do they close over when they read a global like Math? So these are questions that you can just design out by making the unit of sharing a Realm. So that was a quick detour over a straw person proposal. I plan to propose in the future.

SYG: Coming back to what more needs to be done for thread-like things. The biggest pain point I've observed is that, frankly, nobody knows how to use SharedArrayBuffers and Atomics very well. The impedance mismatch with idiomatic JavaScript is just way too high. It works fine for Wasm integration; it simply does not work well as a first-class thing to build more sophisticated libraries on top of, for use in applications. I think Spectre was in large part to blame here, because we turned shared memory off for a year or two and we're only slowly turning it back on now. But even for Chrome, and for software projects at Google with Google levels of effort, where Chrome desktop had site isolation and SharedArrayBuffers were turned on, SharedArrayBuffers were still not used. They just aren't a good trade-off currently between ergonomics and performance; the amount of serialization and deserialization you have to do to share things via SharedArrayBuffers is not something software projects want to take on, too much maintenance and too slow. So another proposal here, which Dan Ehrenberg is spearheading, could be typed objects: objects with fixed layout that could be concurrently accessed. I think Dan and other champions of that proposal, myself included, have different goals; typed objects solve different problems, and this is a particular lens through which I would like typed objects to help. Looking further out into the future, there are many more things we could do. Do we want a concurrent standard library? Better tooling? Integration with scheduling APIs, like the power considerations I talked about earlier: as heterogeneous hardware gets more mainstream via phones, we need to be able to schedule better. And while the OS will be able to schedule things, should we also give JS runtimes the ability to schedule things?
And on the thread-like side, we're going to need to carefully think through integration with the future of Wasm, which is definitely going to expose multithreading in its own way. And maybe some tools and stuff; that's a throwaway item, you can always do more work on tooling. Now, there's related work, of course. To pick a few: I personally worked on a data-parallel research project, data-parallel JavaScript, at Mozilla way back, seven or eight years ago. That ultimately failed because the JIT and the lack of type stability meant significant warm-up was required before you could start running the same workload in parallel, and the minute you hit type instability, where you hit a type that your just-in-time-compiled code can't handle, you have to de-opt and drop to the synchronization points, and that basically killed all the data-parallel performance. So that experiment ultimately failed. There's also a PhD-thesis-sized blog post from Fil Pizlo, technical lead of JavaScriptCore, that laid out an implementation plan for retrofitting concurrent access onto plain JavaScript objects, kind of like the way Java does it, where you can lock particular fields for concurrent access. It was JSC-focused, and he had many great ideas, but I think retrofitting existing objects is a non-starter. The main high-level difference with that work is that I think we should instead pursue concurrent access via a different kind of object, instead of plain objects.

SYG: I was hoping to be a little bit more excited about this, but I'm kind of tired and sleepy. So thank you for your indulgence. Look for more concrete proposals in the future as I iron out the ideas. And finally, big congrats to Apple on your new Apple silicon; it seems like it has great performance. Let's put those cores to good use. Thank you.

JWK: In the previous slide, I see it mentioned the ergonomics problem of serialization. So is this project going to develop an ergonomic serialization API? I am interested.

SYG: I'm not thinking about that. I think it could fit. I'm trying to address the ergonomic serialization issue, where people can't share plain object graphs, by letting you share plain object graphs. That doesn't mean all use cases are like that; sometimes you really do want to serialize plain object graphs, for different reasons. So I think any serialization improvements here would be complementary, and I would be happy to work with that.

MM: One of the benefits of what you call the “web-like model”, the communicating event loops concurrency model, in the absence of shared array buffers, is that communicating event loops are only asynchronously coupled to each other. That makes them a clean unit of preemptive termination: you can kill one event loop, one worker, without having to kill all the workers coupled to it, because the synchronously accessible inconsistent state is all partitioned. I wanted to make sure that this is also the case in making this shared realm transferable. In particular, are you thinking about having the transfer only occur at turn boundaries, where all the invariants of the mutated objects would be restored before the graph gets transferred, and there would not be any mixed call stack that might involve stack frames from the shared realm? If the stack is empty at the moment of transfer, then you can't have a stack with mixed stack frames.

SYG: I am not yet thinking about those. I think there are more fundamental issues to be worked through but I take your point and they would need to be thought through and the implications that they have. I think I'm not yet convinced that it is implementable. But suppose I were convinced of that then yes, I would need to think through some of the issues that you have raised.

MM: Okay, great. I think this is a very nice speculative direction and I'm glad you're provoking us to think this far ahead. This is great. Thanks.

SYG: And yeah if I ever get there to think through the isolation concerns I definitely would need your review and I would like to work with you on those problems. I think you have a much better grasp of what to look for than I would.

MM: Very much looking forward to working with you on this.

JWK: What if you define a function that mutates local states in the shared realm? It seems like it's only preventing mutating the shared realm states with read-only view or transferred shared realm, but the state is “local” for the function defined in the shared realm. So if I call it can I bypass the read-only view limitation?

SYG: That's a great question. I don't have a good answer to this. I think ideally, if it's possible, purely local state, local variables that do not escape, you should be allowed to mutate however you want. But whether or not that is doable, I don't know yet; doable both from a spec point of view and from an implementation point of view. That is the main problem: if we think of the read-only views naively and want to run functions in them, it seems like they can't do anything. I hope there's a solution there; I don't have one.

JWK: Thanks.

DE: Great presentation. I'm really excited about all the different directions. Just about typed objects, I wanted to clarify where I am: I'm not championing a proposal. I'm more interested in the area, chatting with people to learn more about it and figure out different people's goals. So if you're interested in this area, please get in touch, and maybe we could work together.

SYG: Sorry for putting words in your mouth. I know you've been thinking about it, and I think you were going to champion at some point.

DE: I mean, I hope to but also other people have done good work, and I really don't want to be like claiming this out from under efforts.

WH: I’m quite confused by whether this can actually become a good future direction in terms of how this could be extended to functions. Jack and Mark already presented some of the questions I have which didn't have good answers. Without being able to transfer functions you have issues with composability, because functions are kind of a canary for other kinds of objects which might have hidden state — they all have similar kinds of concerns. I don't see how this can address issues that arise when you try to transfer things like functions or objects with hidden state.

SYG: So I think the transferring case is much easier than Jack's case, which was what happens if you call a function in read-only mode.

WH: I'm using “transfer” more generically. I don't even understand how, when you have a function defined in one agent, you can transfer it to a different agent. What I see is that the realm can define a function that doesn't actually get transferred anywhere — it just stays in the realm it’s defined in. There are a lot of nice goals to aspire to but I don't understand how this proposal holds together.

DE: This is really answered by module blocks. The problem is not transferring bytecode; it's not transferring behavior. The problem is transferring the stuff that you close over. What module blocks give you is realm-independent code: modules that are not yet instantiated in a realm. So you can share that, and then within a realm you can instantiate that module and execute it. It's all about the references to objects; the code itself can be shared.

WH: I don't understand your answer. I have a closure which I got from somewhere. I want to send the closure to a different agent.

SYG: I think if you just have a closure that you made syntactically in your agent-local way, as you normally would for a function, and you would like to transfer it to a different agent, that is not possible in the thing I am envisioning. What I'm envisioning is that the developer basically has to make an up-front design choice to separate their heaps, their object graphs, including function instances, into shareable parts and non-shareable parts. Things that you would want to transfer between agents you would need to instantiate from within the shared realm to begin with, and then the thing you pass back and forth is the realm, at the shared-realm level.

WH: Yes, this will lead to a lot of fights over slicing complex objects, where you have various complex structures, including some built-in ones like maybe regular expressions or arrays. You'll want to instantiate some of their behavior in the transfer realm while keeping some of the behavior local. I'm worried about the ecosystem implications of that.

SYG: I think I would need to see the issue to understand the concerns more concretely. It might be possible that the hypothesis of this, letting the user separate out what's shareable and not shareable, is a non-starter, but at the same time that's kind of the road we went down. That's the only viable path forward that I see, because I don't think we can actually retrofit existing objects to be shareable.

WH: Yeah, I agree about not trying to retrofit existing objects.

SYG: Yeah, it's definitely a question to be answered, because I don't know how to answer that exactly. I'm trying to get some early partner feedback from people who might be interested in using this, to see if we run into any issues.

WH: I would love to make something like this work, but I would need to understand better what the plan was.

MS: So you started out this presentation talking about scheduling down at the CPU level. I wanted to give a little bit of feedback based upon our experience with asymmetric CPUs. I think we've had three or four generations of experience with asymmetric CPUs, depending upon how you count them, and there are a couple of different models. There's one model where your higher-performing cores and your energy-efficient cores are available at the same time, and another model where it's either/or. There are also caching differences you can have in your architecture. I don't think you want to get down to being able to schedule a particular thread of execution to a CPU or CPU type. We've done some work with these asymmetric CPUs in JavaScriptCore, and for example background compilation or garbage collection may be better suited for scheduling on efficient cores, or the whole system may be better off scheduling things on efficient cores depending upon the overall activity or the thermal state or battery state of the device. So I would caution that we don't want to go down to that level, and probably want to leave this to the OS. I do believe that it makes sense to have sharing of code and data as much as possible, modulo the discussions we've already had, across multiple threads of execution, but I just caution against going down to the CPU level.

SYG: Understood. Yeah, you definitely have more experience than I do here, and I would weigh your opinion very highly. There are currently attempts on the web platform to introduce single-thread (within one thread) scheduling APIs for the browser scheduler to better order its tasks, and that topic was mainly about thinking: if there is such a scheduling API, say two or three years in the future, should that scheduling API also take these asymmetric architectures into consideration?

MS: I think that we found that actually the best API is more of a hint API -- that we don't want to constrain the scheduler, but we want to hint to the scheduler the intent of particular threads of execution.

SYG: I imagine that must be how the scheduling API works today. I can't imagine the web API offering any guarantees like what you described.

MS: We're not running real-time environments, so you don't even want that. So yeah, I want the scheduling to be a little bit higher level, because if we implement it at a lower level we can actually cause more damage than good.

SYG: Yeah, that makes a lot of sense to me.

MM: The isolation of the shared realm, while still being able to entangle it in one direction with the other realms: the other realms can point at arbitrary individual objects within the shared realm, but if the shared realm is transferred then all of those pointers have to get cauterized. That can be expensive. It very much reminds me of the problem that Mozilla faced when they wanted to implement the semantics of the weird thing on the web where you can truncate the domain of your web page, and then object graphs that used to be entangled become severed from each other. They did that by putting a full membrane between them and paying the overhead of that level of indirection. That's the only way I think you can get this reliable severing of object references on transfer: basically a full membrane between the plain realms and the shared realms. I want you to consider the alternative: rather than transferring ownership of mutable objects, might we altogether be better off by providing good support for transitively immutable objects and passing them by sharing? In which case there is no loss of access, and no readers/writers issue. There is the need to support the functional programming style of incremental derivation of new objects from old objects at low overhead, but the old objects would be safely shareable.

SYG: I think it's a bridge too far, in my opinion. I think it's very appealing, but mutation as a way to program on the web and Node is just here to stay, and I think supporting mutation is very important. As for the point that this would be very expensive: it seems like Dan anticipated this a little bit. I agree that if you were to do a compartment-style implementation of this, the severing of the membrane would be extremely expensive, given how compartments work. How I am envisioning this to actually be implemented, and how it might be implemented, is that the objects that are allocated in the shared realm are represented by some kind of fat pointer, and there is code in the engine that knows to check the allocator of those objects against some thread-local value: basically, is the current thread on this realm; if not, throw. So you are not doing a full graph walk when you transfer, but instead kind of amortize it out over each access. Each access will then contain an extra branch, which will probably be a couple of loads, but I think that is the right trade-off for the performance.
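The scheme SYG sketches can be illustrated in plain JavaScript (names and structure are mine; a real engine would do this with fat pointers inside the VM, not user-level objects):

```javascript
// Toy model of per-access ownership checks: each shared object carries an
// owner tag, and every access pays one branch comparing it to the current
// agent, instead of walking the whole object graph at transfer time.
let currentAgent = "A"; // stand-in for a thread-local "which agent am I"

function makeShared(payload) {
  return { owner: currentAgent, payload };
}

function access(obj) {
  // The single extra branch per access that SYG mentions.
  if (obj.owner !== currentAgent)
    throw new Error("object was transferred to another agent");
  return obj.payload;
}

function transfer(obj, toAgent) {
  obj.owner = toAgent; // O(1): no graph walk; the cost is amortized per access
}

const obj = makeShared(42);
console.log(access(obj)); // 42: the owning agent can read it
transfer(obj, "B");
try {
  access(obj); // the old owner is now severed
} catch (e) {
  console.log(e.message);
}
```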

MM: Let me clear up some confusion that I think just went by. Compartments do not bundle in membrane separation; compartments and membranes are orthogonal.

SYG: Sorry, I meant the Firefox implementation called compartments. Were you talking about the proposal?

MM: Ah, you're right.

DE: I don't think it makes sense for us to optimize for polyfilled performance, any more than we would when designing something like Atomics, because this is just a thing that you should really only use when you want this multi-threaded impact. If we made modifications like this, it wouldn't take an extension to the object model; I think a membrane could faithfully implement this. So I think that's a relevant lens, but I wouldn't privilege polyfill performance.

SYG: I definitely agree with that, I think. Where I am not clear whether a proxy can faithfully polyfill it is for whatever we design for running functions in read-only mode, but that's only because it's not known; I don't know how that should work there. But for the data sharing case, I think it definitely can be faithfully polyfilled.

DE: I guess I was assuming that you just wouldn't be able to run functions in read-only mode; again, it's too restrictive.

SYG: Yeah, I don't know.

PST: Thank you very much for the presentation. I just want to say that our experiments with XS might be relevant; XS is going to run on several appliances. For example, XS can split machines into separate threads on microcontrollers with two cores, and the way we do that is that there is a big shared realm, and we have two small XS machines, and these machines can have pointers to objects that exist in the shared realm. And of course when such objects are modified, they are aliased by the machine thread and then exchanged by copying, by marshalling and unmarshalling. And we found that programming model very practical and easy to understand.

SYG: Thanks. I would like to read more about it. Is there like a file or something you can point me to offline for me to read?

PST: I will send you the link.

SYG: Thank you very much.

DRR: It could be nice for TS-like tools. As I watched this presentation, there have been so many speculations of: oh, we could probably try to do something like parallelism of basic tasks, or just be able to serve up data based on common data structures, but we just haven't been able to do that because of many of the reasons you allude to: the costs being prohibitive, or the memory model not playing well for that. So we often end up spinning up several servers for editor scenarios, and we end up not being able to do things like parallel processing of trees before a join point in an easy way. So this is just sort of a vote of “I like this, basically”. This looks good. It's a good direction. I'm happy to see that we're exploring it. So I think that's it for me. Thanks.

SYG: Yeah. I want to talk to you more about that. It sounds like where you would like more parallelism is this data-parallel story where you have this big thing that needs processing that you can chunk up in some logical way, if only you could distribute that over a bunch of threads. Is that accurate?

DRR: It's partially that, but also being able to easily respond to requests over the same data structures, because you can't really do this sharing across threads today, right? Like, if an IDE makes multiple requests, some of those requests are independent of each other, right? A semantic request is often different from a syntactic request, and they can be answered independently. We're not wired up to leverage any of this stuff, right? I'm just going to put it out there: other tools could take inspiration and leverage this sort of work. They do leverage it in the C# ecosystem, right? Like the way that their entire IDE experience is wired up to leverage a lot of shared data structures across threads through immutability. So yeah, I'd be happy to talk through some scenarios with you if you like.

SYG: Yeah. I think one very high-level thing I would love to talk to partners about is basically: with how the web works and how Node works and how JS works with these async communicating event loops, I think the granularity is going to be fairly coarse, such that fine-grained data-parallel algorithms aren't going to scale; the overhead is going to be too big. And I am wondering what the use cases are that we can realistically help improve, and what we can instead kind of punt to "let's make the thread-like model better": if you really need that kind of power, it just bleeds through to your own manual multi-threading.

SYG: All right, times up. Thank you very much everybody.

RegExp Match Indices: JSC implementation feedback

Presenter: Michael Saboff (MS)

slides

MS: This is implementation feedback on the RegExp match indices proposal; it's currently stage 3. I'll probably give a more laborious synopsis of the proposal than everybody wants to hear, but maybe some people want to hear it. Basically, when you do a match, for example with RegExp exec, this adds an indices property, which is basically an array of [start, end] index pairs for where the match occurred: one for the whole match, and then one for each sub-pattern capture as well. What is not on this slide is that if there are named captures, then there are also named properties that have these same kinds of arrays. I want to point out that I am not against this proposal; in fact I actually like it. I believe it's very useful for token scanners and other text parsers, especially when you want to report any kind of errors in the stuff you parse, because it tells you exactly where you need to report those errors.
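As a concrete sketch of the shape MS describes (run here with the `d` flag, the spelling engines ultimately shipped for this opt-in; at the time of this discussion the flag was still under debate, as covered below):

```javascript
const re = /(?<year>\d{4})-(?<month>\d{2})/d;
const m = re.exec("Date: 2020-11-16");

console.log(m[0]);                  // "2020-11"
console.log(m.indices[0]);          // [6, 13]  start/end of the whole match
console.log(m.indices[1]);          // [6, 10]  first capture (year)
console.log(m.indices[2]);          // [11, 13] second capture (month)
console.log(m.indices.groups.year); // [6, 10]  named captures mirrored
```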

MS: So there were some concerns that were raised almost a year ago at the December 2019 meeting. Shu raised the issues that V8 had when they were implementing this, which resulted in two issues being raised. First was a greater amount of memory used: obviously the match objects include these properties, and it's a tree of properties as it were. The other was that there are some performance issues: allocations, obviously, every time you create one of these match objects, which means there's more GC work to do. And these penalized all regular expression use. At the end of the day V8 decided that instead of materializing the indices property eagerly, they would do it lazily. What they ended up implementing was that they would only materialize indices properties when they were accessed, and the way that they would do that was to actually re-run the regular expression. So effectively this would only penalize indices usage and basically wouldn't hurt the performance of existing use cases. That's what they decided to do. So let me walk through some of the implementations we tried for this proposal. We actually tried four different implementations and have various results from that. The first implementation was basically to just follow the spec: everything's done eagerly, so you create the indices property on every match. That was done just to see: hey, what's this going to cost performance-wise? JetStream2 is one of the benchmarks that we follow; we think it's fairly indicative of JavaScript usage on the web. It's a couple of years old and contains a lot of other benchmarks, one of which, cited in the V8 experience a year ago, is the Octane 2 RegExp benchmark. We slowed down 17 percent on that particular benchmark, but overall JetStream2 slowed by about 1%. It wasn't that bad.
But clearly for regular-expression-heavy usage, this was probably unacceptable. So the second implementation was very similar to V8's approach, and that is: since we create the indices during matching anyway, why don't we just save those indices, and then when we need to materialize the indices property on the match object, we use those saved indices to construct the tree of arrays and do things that way. That was much, much better on the RegExp benchmark of Octane 2: only a 3% slowdown, and JetStream2 overall is about 1%. Now, let me point out that about one out of eight of the tests in JetStream2 are regex-sensitive to some degree or another. For example, there's a test we call offline-assembler: it's a port of the machine-independent offline assembler that is part of JavaScriptCore, written in Ruby, whose front end we ported to JavaScript. So it's basically a parser; it doesn't generate the code. It was 8 percent slower on the first, direct implementation and four and a half percent slower on the second, so it still had some performance implications. There's another test called FlightPlanner, a regex performance test that I wrote that's part of JetStream2: 5 percent slower on the first, direct implementation, about three and a half percent slower on the second, and by the third it's in the noise. The third implementation is basically doing exactly what V8 did, and that is to save the regular expression and its input, and when we materialize indices, we re-run the regular expression. There was still a slight slowdown on regex tests, but it didn't appear to impact the overall JetStream2 performance. I think that's acceptable. So we have success, right?
Well, here's the concern: we both have the same path forward to implement match indices, but the problem is that while we're keeping the performance for existing regular expressions, or nearly so, we're actually matching twice for match indices. So we're penalizing the use of this new feature: for a regular expression that you want indices for, you're going to run it twice. And I think the web will become aware of this, and it may discourage people from using the feature. We don't want to introduce a new feature where we're basically implying, as at least two implementations are implying: this is a good feature, but don't use it because it'll slow you down, unless you really, really want it. And I think it can complicate performance-sensitive code, because there may be some code where in the past you had to derive indices yourself, and now you want to get them directly, but you may be reluctant to do that because it could actually be slower. The performance concerns were not raised initially by ourselves or by the V8 engine; they have actually been in the GitHub for this proposal for some time, and various alternatives have been considered to alleviate the performance concerns. One was to add a new regex flag; there was talk about different ways that we could hint that we want this information: a callback function, new regex functions that would return this, or adding an option value or options bag. So a lot of things have been discussed, and some of them have been presented to plenary. What I'd like to do is revisit this performance mitigation. I've had some discussion on the GitHub, as well as some private conversation with Ron Buckton, who is the champion of this proposal. And there are a few principles that I'd like to guide this discussion. One is that whoever's writing the code that's going to use this feature knows their intent, and that intent I think is important.
We should use that intent to help mitigate the performance cost, specifically in deciding when we materialize the indices property. And the intent is actually in two places: the regular expression itself, and its use. It's quite often the case, but not always, that a regular expression is defined in one place and used in many places. There are other cases where regular expressions are defined and used directly in one place. And there's also a pattern I've seen where a regular expression is defined in one place in the code but only used in another place in that same code. I propose that expressing that intent on the regular expression itself, specifically by adding a flag, is probably the best place to do it. If you look at the standard, when we go to actually execute a regular expression there's this built-in RegExpExec function; it takes the regular expression as its first argument. I imagine that most engines do what we do in JavaScriptCore and use something like that abstraction, and I think it is a good place to do this. The proposal champions at one point proposed a flag using the letter 'o'. In the implementation where I tried this out, I used 'n'. Note that Perl and .NET have 'n' options, and they mean something a little bit different than what's here. So I don't really care too much, just as long as the flag is somewhat meaningful. I mean, we can't use 'i' for indices because it's used for ignore-case, but something that makes some kind of sense. I would also propose using a want-indices flag internally for subsequent matches inside of JSC. That is, if you have a regular expression that doesn't have this new flag and you go to match it today, the code that we've posted for review is going to save the regular expression and the input and re-match it; but the idea is that we would then flag that regular expression so that the next time it's run, we go ahead and eagerly populate the indices property, to get slightly better performance on all subsequent uses of that regular expression. There are some issues with that having to do with the shapes of the returned objects and optimizations that we make in our engine; possibly others have the same kind of issue. But I would like us to revisit my suggestion that we go with some kind of flag for the regular expression. So in summary: we tried four implementations and had performance issues similar to V8's. There's a stage-3-conformant implementation that we posted for review that hasn't landed yet, but we're a little concerned that we'd punish people who use this feature. So, like I said, I suggest that we consider the developer intent and add a new flag to the regular expression itself. With that, I'm open for discussion and questions. And like I say, I'm not the champion of this proposal, but I'm providing feedback with recommendations, and since Ron's on the call, I think it's something worth discussing.
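The lazy re-execution strategy MS describes can be roughly illustrated in plain JavaScript (function names are mine, and the `computeIndices` helper leans on the engine's own `d`-flag support as a stand-in for the internal re-match; a real engine does all of this inside the VM):

```javascript
// Stand-in for the engine's internal "run the regexp again" step.
function computeIndices(re, input) {
  const flags = re.flags.includes("d") ? re.flags : re.flags + "d";
  return new RegExp(re.source, flags).exec(input).indices;
}

// Lazy strategy: do a normal match, and only if .indices is actually read,
// pay for a second match to build the indices tree.
function execWithLazyIndices(re, input) {
  const match = re.exec(input);
  if (match === null) return null;
  Object.defineProperty(match, "indices", {
    configurable: true,
    get() {
      // This second match is the "matching twice" cost discussed above.
      const indices = computeIndices(re, input);
      Object.defineProperty(match, "indices", { value: indices });
      return indices;
    },
  });
  return match;
}

const m = execWithLazyIndices(/(b)/, "abc");
console.log(m.indices[0]); // [1, 2], computed only on first access
```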

WH: I'm curious if anybody has tried machine learning to try to predict ahead of time which regular expression executions will want indices.

MS: We didn't try that. I think the cost of doing that would hurt performance, right? You can consider various different ways you could do that. Certainly the one thing we're discussing doing, if we keep the current proposal, is that all subsequent matches would produce indices. So, you know, that's very simple machine learning: last time we used this regular expression we needed it, so the next time we'll produce it. But doing some kind of flow analysis of the code, or things like that, is more expensive than other alternatives.

SYG: Yeah, I would like to second.

WH: I am in favor of using the flag here by the way.

DE: I'd like to propose a new method that parallels exec. This is not really feasible today because of the subclassing design, but if we remove that subclassing design, as proposed in Shu and Yulia's proposal, then this could be a different method with a different signature that returns the offsets.

MS: But you have other APIs that also produce matches, like String.prototype.match.

DE: Yeah, this is just those two right?

MS: You've got matchAll; you know, how far down do you go, right? It was discussed at one point. There you're having programmers express intent at time of use, not time of definition. But as I said, a lot of times they're the same location. But yeah, that's another way of doing it.

DE: It could also be like an options bag argument at the usage site. Anyway, the flag does not seem too bad to me. I like the idea of making sure that we're not implicitly giving people this double-matching performance cost.

MS: Right. As far as an options bag or option flag that we put on the use: that could have a performance implication for existing regular expressions as well, because now you have to at least check for that before you do something. I should have added: that's one of the reasons why I like the flag. The flag has no cost for existing code. There's one check you need to do, one compare-and-branch, when you're materializing the match object, and no expense, at least in our case, when you're actually doing the matching.

DE: Yeah, I'm in favor of going ahead with the practical approach, so the flag sounds good.

SYG: Yes, I would like to give a strong +1 to surfacing intent here. I think that would definitely alleviate the issue. And I completely agree with you that if we were to ship this re-execution strategy, it basically dooms the long-term performance viability of the proposal. I would like to surface a concern about the flag which is not a personal one; I am in favor of the flag. I rarely use regular expressions, but when I discussed this internally with the V8 team, there were some concerns from DevTools folks that an extra flag is added user complexity, and that it is perhaps more undesirable than other ways of surfacing intent. I guess I don't really share that view. I wonder if other folks in committee think that an extra flag here is more complexity for the developer learning about regular expressions, to the extent that we should weigh it against fixing the performance issues here.

MS: Others may have better recollection than I do, but we've added, if I recall correctly, two flags in the last three or four years: the 'u' flag, obviously, for Unicode regular expressions, and then the 'y' flag, the sticky flag, in the last several years as well. You know, we went from, what was it, three or four flags to five or six flags now. The reason I raise that is, I'm wondering: is the concern with the Google dev folks the number of flags? There are a lot of languages, like Perl, that have tons of flags.

SYG: Yeah, I think it's twofold. One is indeed the sheer number of flags. I don't think Mathias is on the call; maybe he can speak for himself. I'm not sure; I can't really see the participation list, so he might not be dialed in. So one is the sheer number as the complexity. There was another argument that it's kind of a category error for this to be a flag: it's not a thing that other flags do.

MS: It is arguably a little bit different, right?

SYG: Yeah. Again, as a non-heavy user of regular expressions, I don't really hold a model of what category of things a regular expression flag ought to do. But that is also not an opinion I share, maybe just because I don't use regular expressions.

MS: I think one can argue that all the current flags modify how matching happens, and this would be the first one for JavaScript which modifies how the results are presented. Other languages already have flags that modify how results are presented in addition to how matches happen. It is definitely a new category of flag, as it were, for JavaScript regular expressions, but not necessarily for other languages.

SYG: I see. I think, insofar as I'm plenipotentiary for V8, I am happy with the flag to solve this issue.

RBN: I wanted to say that I've spoken with Michael about this, and there have been some discussions about this in the IRC chat as well. Could you go back to the slide where we talked about the different mitigations that have been discussed on the GitHub before? I think of all of the mitigations that we've investigated for the proposal over the past two years, as it progressed up to stage 2 and up to stage 3: we looked at using a regular expression flag, we looked at adding a callback that could provide different values, adding new methods, adding an optional value or options bag. We looked at all of these options, and the last three in this list each had significant drawbacks. Callbacks were not obvious as to what needed to happen and what kind of thing you wanted to do with the callback; you only ever really wanted one callback, and it was the one that gave you the indices. For most other use cases you could just map over the result if that was necessary. Adding new methods complicates things because you're grossly increasing the size of the RegExp API just for one specific use case, and then you're not able to use things like String.prototype.match, matchAll, etc. And option values and options bags, and even the callback function case, didn't work very well given all of these symbol-based methods that handle RegExp subclassing, which we've already discussed as being complicated; we'd have to thread these things through, and there are possible issues with anyone who has actually tried to subclass a regular expression: this wouldn't work for them if they're not just spreading in all the arguments that they get. It falls apart in a lot of places. The only solution that is simple is adding a flag, at least as far as I can tell. You mentioned 'n' is used in Perl and does something a little different; it actually does something significantly different in .NET.
I believe 'n' in Perl and .NET controls whether or not groups are explicitly captured, capturing only named capture groups, so I wouldn't recommend using 'n': a lot of folks with regular expression experience come from a language like Perl, where it's kind of baked into the language and has been for a long time. And I have a number of RegExp features that I'm considering proposing, and one of them is that capability, and I don't want to step on that if possible. So I do agree that adding a flag is a good idea and is the simplest approach to solving the issue at hand. It doesn't mean we would necessarily be replacing the values you get; it could be that this just means you'll get the indices array in addition to what you would get normally, so that adding the flag doesn't break existing code but gives you additional value. And it allows you to pay the cost of allocating indices only when you know you actually want the indices. So that is, in my opinion, probably the best approach.

DRR: I'm having a hard time understanding exactly how the flag would be used. I mean, I get that you add it to the regex itself, but then what do you do differently? Are all your call sites changed in semantics? Like if I do matchAll, do I get different results, or do I get something extra? Maybe that's something you could answer, Michael.

MS: You're not going to get different match results. You'll just get the indices property filled in with the appropriate values.

DRR: Okay. I guess from the TypeScript side, I'm just trying to think about this, because there's no concept where we change the type based on the flags. So giving users editor support where we say “oh yeah, you'll have the indices” is hard to model in that case. It's not undoable, but it could be misleading. And maybe that's just the compromise that we have to make.

MS: Are you modeling the type beyond that it's an object and a specific type of object?

DRR: Well, you know, whenever you get one of these matches, you need to be able to say whether or not it's going to have those index properties filled in, and really what we're trying to give people is some indication that it's not going to be there unless you use that flag on the regex. So you just kind of forward that information along.

RBN: I can also talk to you about this a little bit more offline, Daniel, because right now we have the same issue with groups. Groups are optional on the exec result in TypeScript, and it's because we don't parse the regex to see whether or not there were any groups.

DRR: Yeah, but I don't really see that changing like fundamentally anytime soon. But again, this is something we can discuss offline I think.

RBN: I think there are solutions for this and even if they're not great solutions, there are solutions for this.

MS: Okay, and it's not just groups; it's also named capture groups, which go a little deeper than just having groups.

DRR: No, I understand that. I get that that's always been a limitation on our end. But yeah, I just don't want to make things worse on our end. Okay. Thanks. Any other questions or discussion? Can we get a temperature check?

RBN: What I'd like to do, Michael, is probably circle back at the next meeting and discuss what we want to do: if we do a flag, what that flag is. I think we can probably make some decisions there. And as I said, I'm fine with the 'o' flag, you know, for offsets; the proposal was once called offsets, if I remember correctly. Not that I'm saying we need to change these to offsets, but okay, that's fine with me.

MS: So should we delay our landing in JavaScriptCore?

RBN: I think this is something that we need a solid solution, a solid answer, to before it lands, and we can make a patch both to the GitHub repository's proposal spec and to the PR for the spec that exists today.

MS: Okay. Thank you.

Conclusion/Resolution

  • Revisit next meeting.

Supporting MDN's documentation about TC39's output

Presenter: Daniel Ehrenberg (DE)

DE: So, MDN staffing to document TC39's work. People told me not to be too emotional in this presentation, but I'm sad to hear about Mozilla's layoffs, which included all but one of the MDN writers, including somebody who was really focusing on TC39. MDN staffing is important for TC39.

DE: So MDN is this documentation website that covers all of the web platform, including detailed documentation for JavaScript, and it's really been staying up to date, for the most part, with the things that TC39 does. It's really the premier reference documentation for our work, besides the specification itself. They document what we do in an accessible way, including an introductory guide, and they also document compatibility: which JavaScript implementations support which parts. It's really trusted by JS developers to be neutral and correct, and although documentation across implementations is incomplete, it's been increasing over time. Although there have been community volunteers the whole time, most things were not documented until MDN had staffing, and I know this because I've been involved in working on some of these community contributions. They're good, but the staff are really necessary, even when there is a contributor, to mentor them and make the changes needed to fix up the work. So I think it would really hurt the adoption and intelligibility of TC39's work if we did not have professional technical writers. We could expect community contributions, we could ask TC39 members to contribute to a fund for writers or even somehow employ a technical writer, or Ecma could contribute to pay for a contract writer. The idea here would be to find a way to fund jobs for the laid-off MDN writers. I'm pushing on a couple of these: one is TC39 members contributing to a writer fund; another is Ecma contributing. So I'm proposing, both within Igalia and at my level, that we jointly sponsor this, and I've been discussing it with the Ecma Execom. They're very interested in supporting TC39, and they're practical, so I discussed the potential of a contribution like this, where we could fund a technical writer position. We discussed this previously on the Reflector and there were a lot of thumbs up. I wanted to ask here for TC39's feedback on whether we could come to consensus on concretely making a request for funding one day a week of work for a technical writer, where we estimate, based on discussing with people involved in this, that we would need somebody basically full time to keep up to date with everything between ECMA-262, ECMA-402, and the different kinds of work that we're doing. And I'm proposing, with Igalia, that we do this, so hopefully all together this will add up to more contributions toward budgeting for technical writers. So I want to ask for consensus on that point, and also ask more broadly: what do we want as a committee from JavaScript documentation? How should we work with MDN? I know, going back a long time, Rick Waldron set up a discussion with them and has been acting as a liaison in some sense, and they have some project management tools. I'm wondering what people think of MDN's current state for JavaScript documentation. Do people have input on the current direction? And in general, what kinds of documentation are we interested in as TC39? We have an Educator Outreach Group, which is working on introductory documentation for proposals. Should we be integrating documentation into our stage process? I want to come back with this proposal to the Ecma GA. Can we focus on whether we have consensus for this and discuss queue items for that, and then we can talk about the broader discussion questions?

MS: My question is, if outside manpower is put towards this, if there are contributors outside, how much does Mozilla still need to provide staffing for that work to make it into MDN? How much of this do we need to work on directly with Mozilla to make it happen, versus how much can be done outside of Mozilla?

DE: The site is a wiki, and the plan is to migrate it to a GitHub repo model. MDN is still a Mozilla-hosted website, and they've made it clear that they will continue to host it and that they have a continued commitment to it. It's just that they're not currently employing as many writers. So the idea is that this would continue to be, in a sense, working with them.

MS: Okay, but do we know how much staffing they need to take input, whether it's just wordsmithing or formatting, or, you know, how much checking that the material is accurate?

DE: The current way that MDN works is it's literally a wiki. So you just edit it and then it's there online.

YSV: I was going to say effectively what Dan just said. It's a wiki that you edit. There's no oversight, which is why they want to move to a GitHub repository, where there may be someone checking it. But not all users are treated the same; not everyone gets carte blanche for writing their own pages. They have to get approval for that, and that's one of the places where extra work comes from. But if we have a professional writer, they will be given the right to edit and create pages specific to TC39, so they would basically be allowed to operate without a lot of supervision.

MS: Thanks

MM: On this slide you mentioned numbers denominated in francs. Can you just give us a ballpark translation of that into dollars?

WH: One Swiss franc is 1.09 dollars.

DE: They estimate it probably takes about a hundred thousand of these currency units to hire a full-time tech writer. If you were in San Francisco it would be more, and maybe the budget is somewhat higher because of labor costs, but it gives us a ballpark.

WH: Having been on the ECMA GA for a while, I'm really not in favor of asking ECMA for any significant amount of money — it's probably one of the poorest organizations attending this meeting. ECMA has had to lay off staff too.

DE: ECMA has expressed that they would be accommodating to this request.

WH: Something like this would have to be approved by the GA. I am a member of the GA and I would push back on that.

DE: I don't understand why, given what I just said.

WH: I told you — ECMA really doesn't have a lot of money. They're running a deficit right now.

DE: Can we discuss this further offline? Maybe come back by the end of the meeting for it.

WH: I don't think my position will change.

DE: I don't understand how to square what I'm hearing from you with what I'm hearing from execom management; that's why I'd like to discuss this with you more offline.

MBS: Waldemar, are you saying you would bring this up at the GA? Is this a personal opinion, or are you representing Google?

WH: This is a personal opinion. No, you're misquoting me. I didn't say I would bring this up to the GA. What I'm saying is that the GA would have to decide on something like that.

DE: Okay, this is input that I can then bring to the execom to form the budget proposal that would go to the GA, which can then vote on it. And Google would then take a position on whether they support or oppose this, but what's proposed to the GA will be informed by, you know, the opinions of TC39 members. So that's how it works procedurally.

CM: Yes. I'm just curious about how this would work structurally. Assuming that we can get some combination of Ecma and others in the community to pony up sufficient funds to underwrite this, how would it work? Would this be giving a grant to Mozilla to say "hey, keep doing what you're doing", or would this involve setting up some outside management or accountability? Just how would it work structurally?

DE: It would not be a grant to Mozilla. We would have to figure out some kind of outside management, and I think there are funding structures like Open Collective that we could use, so a lot of this is kind of TBD. That would obviously have to be worked out in detail before it was fully ready.

CM: This is obviously early in the process. Who would be responsible for supervising the tech writer?

DE: Yeah, that's something we have to work out. But ultimately, I feel like we can trust the individuals who were working for MDN before, even though we probably should have some kind of oversight. I think there should be some way that TC39 can give direct feedback to them.

DE: Can I get a call for temperature from TCQ?

AKI: Okay. Should we take the last topic on the queue?

MF: So, Dan, is this what we would call zero-sum? You know, if Ecma is funding this, are we less likely to get funding for the other things we've previously discussed wanting funding from Ecma for? We've discussed funding for note-taking, and for, you know, maybe Zoom or video conferencing licenses, or audio equipment for in-person meetings. Those things seem to more directly address the goal that Ecma has for us: to effectively work on the spec.

DE: I see this as the opposite of zero-sum; this is good, and it sets us up to be able to get funding for other tasks too. I joined Ecma partly because I really wanted to work out the invited expert issues, and partly because I really wanted to work out these funding issues. Many of us are spending thousands or tens of thousands of Swiss francs a year to provide financing to Ecma, which is great because it helps us, but I feel like TC39 deserves more services provided by Ecma, and the execom really does want to provide services that help us. We just have to identify these services to them in the right way: we have to clearly state as a committee that we want them, do it by the time that budgets are created, and then propose that to the GA. So this is something that I've been trying to work towards for a long time. And my understanding of Ecma's finances is not one of them being extremely short on money; members can look into the financial reports on their website and see more details, and I'm happy to explain to people how to do that.

DE: We're at time, so if there's time at the end for a 15-minute overflow item, we can continue discussing this. I'm very interested in following up on this slide and taking a temperature check on it, so I'm interested in both.

AKI: I want to mention that we will come back to this. There are questions to be answered. All right, I think that closes us out for this Monday, November 16th.