Live variables #8
Maybe this could be solved with processes. I'm starting to feel streams and processes should hang out more together... (Maybe with some syntactic sugar to wrap a command in a new process kind of seamlessly.)
If the plumbing of tubes, ehm, streams is going to be explicit and support both compile-time plumbing (i.e. streams known at compile time) and runtime plumbing (i.e. any number of streams created at will at runtime), then I'm fine with any solution - processes sound generally good. But as always - the devil is in the details. So if you have any specific API in mind, I'm all ears 😉.
Btw. speaking of "streams and processes shall hang out more together", I'd refer you to the analysis and "vision" I described in (a very long) thread vlang/v#1868. TL;DR: each Go routine would be a standalone "processor" (i.e. not a process, because a process can stop itself, unlike processors, which can only be added/removed as a whole and can't themselves influence whether they're running or not - this has the major advantage that a scheduler can spawn 0-N such Go routines depending e.g. on load and just connect them using channel multiplexers). I.e. an infinitely running loop with its content being a … In other words, tailored & tamed dataflow programming with a user-customizable scheduler. It's actually a bit similar to actors, but both more high-level (thus safer, faster & more efficient compared to e.g. traditional actors) and a bit more limited in terms of API (which is good - it allows fully automated infinite scaling with zero lines of additional code). Actors are defined upon single messages, unlike the processors above, for which a message would be just one sample from the channel. It's thus much more performant and provides much stronger safety guarantees while offering a comparable level of expressiveness and overall dynamics.
With Pids being able to read from and write to streams, it would be easy to implement this:

```
proc f (x) {
    receive | foreach msg {
        send $parent_process "$x / $msg"
    }
}

range 5 | [spawn f whatever] | foreach msg { ... }
```

Someone may think about some kind of syntactic sugar around the …
Of course, how to reference the parent process is one concern, and I personally would prefer to simply call … The last solution also kind of addresses in-Til generators, with:

```
proc g (x) {
    range $x | send
}

[spawn g 7] | foreach msg { ... }
```
(Except all that would make the Pid be treated as a command. Not sure about this part...)
About the …:

```
stream.cat     [spawn reader1 "file1.txt"] [spawn reader1 "file2.txt"] | foreach line {...}
stream.zip     [spawn reader1 "file1.txt"] [spawn reader1 "file2.txt"] | foreach line {...}
stream.fan_out [spawn reader1 "file1.txt"] [spawn reader1 "file2.txt"] | foreach line {...}
```

(It's also relevant to note that in Til, data is always being pulled, not pushed, through pipes.)
In a sense this sounds closer to what I've described above. It puts more emphasis on the muxers/demuxers, whereas actors and my "processors" above put more emphasis on the nodes (producers & consumers) and their lifecycle. Overall, though, it feels too simplistic for general-purpose muxing/demuxing (even though it supports arbitrary graphs, it still feels too linear). I like seeing this "linear" principle in smaller areas where all participating procedures conform to "do only one thing and do it really well" (basically everywhere pipes are used in Bourne shell scripting, but nowhere else). Let me shed some light on why I think this doesn't cover what will be needed (if not now, then later - so better to discuss it now, at the design phase 😉).
Maybe the actor and processor models are worth investigating 😉 (as they don't emphasize the connections/muxing/demuxing but rather the nodes - closer to the "data over operations" mantra).
Sure, and sorry for not being clear about my intentions. My point is twofold. The pipeline itself seems not to accept any "events" from outside, so I can't temporarily interrupt it and "reorganize things" (this is the first problem). I can only stop the whole pipeline and thus can't continue where I left off afterwards (this is the second problem). The second problem is the … Any insights how to approach it without …?
Btw. I'm playing with the idea of cutting down on the "live variables" functionality and going for a simple built-in signal-slot mechanism (simple especially syntactically) working across OS-level thread boundaries. I don't know yet...
When trying to come up with a concise and explicit way of splitting/copying & merging streams, I came to the conclusion that either there'll need to be support for "live variables" as e.g. the Mech language has (where every variable is basically live by default - a very cool concept allowing to cut the SLOC count by about an order of magnitude in fullstack apps - but the problem currently is performance, as this requires quite novel, not yet sufficiently researched methods of optimizing the generated assembly).
Or there'll need to be `select`/`poll`/`kpoll` support and, subsequently, support for recognizing which stream a value came from. This recognition of origin is sometimes done with the type system, but that might be too cumbersome for Til. In Til I'd probably try to abuse the extraction syntax 😉. Maybe even rudimentary things like this could be the starting point (because they can presumably be wrapped by a procedure): …

The latter solution sounds more reasonable for Til. It partially overlaps with Til's own pipe primitives.