Use persistent Node processes for compilation #381
This is a great idea, and it will save us the overhead of creating / disposing a Process for each compiler call. We can launch Node in interactive mode in a process and perform the desired operations. That would be a different approach from the current "sending CLI arguments to the compiler" one; it would be like sending automated JS input to a running Node process.
We would need an HTTP server that can accept requests to compile files and serve HTTP responses with the results, as well as a heartbeat check and a shutdown command. We should also consider grunt compatibility / integration.
Perhaps with OWIN we can build the desired HTTP stack, since the web tooling WE requires to function already has OWIN assemblies. Nonetheless, it would be a dilemma: OWIN is a counterpart of Node.js, and we would be encapsulating a Node.js server in OWIN. But some people have already done this kind of thing.
No; we need to run an HTTP server in the Node.js processes, so that WE can send requests to them (and get HTTP responses with compilation results). None of that would do any good; we simply need to use Node's built-in HTTP server (perhaps with higher-level modules like Express for routing).
Which port number should we run on? Also, how would it work in restrictive environments where most unused ports are blocked? Some workplaces have such policies.
Each Node process will need to choose a random unused port to listen on (the C# code can choose a port number, then pass it on the command line). Firewalls block ports from the outside; I've never heard of anything restricting communication within the machine.
We can consider a load-balancer that invokes a fixed or maximum number of workers (say 8), where each worker is capable of serving all kinds of compilation requests: https://mazira.com/blog/introduction-load-balancing-nodejs (also see part 2 for the seaport module). Ideally, if we select the maximum-workers topology (as opposed to the fixed one), we may want to establish logic to spawn a worker on demand and shut it down when it's inactive.
Here is another idea. We can devise a job format like:
{
    "type": "compile",
    "filename": "path\\to\\some\\file.coffee",
    "target": "path\\to\\target.js",
    "module": "coffeescript"
}
and store these JSON-based job files in a directory. On completion, we can use Edge.js (see Hanselman's blog post) to notify the .NET code when a task is finished. (Perhaps we could use a DB instead of a directory for queuing jobs.)
Neither of those options offers any advantage over plain HTTP, and both introduce a lot of complexity that HTTP handles for you. I don't know whether it would be easier to maintain and communicate with multiple Node processes directly from the C# code (as I was saying) or to use a separate Node.js load balancer.
The load-balancer will make it easy to manage workers, as opposed to keeping track of various Node HTTP server processes in the C# code. Only one process would be started, and it would in turn spawn the workers.
In the interest of simplicity, I'd suggest initially going with a single Node process rather than a process per core. We've yet to have an issue with running a single Node process to handle our compilation (as mentioned in another issue, it's ~10 secs to compile with WE, <500 ms to compile with straight Node). Obviously we'd still need to manage respawning this when it falls over...
If it takes 10 seconds to compile with WE, something else is wrong. What processes use CPU during those 10 seconds? (Check the per-process CPU graphs in Process Explorer.)
I've taken a quick video here, showing the save, Task Manager, and the output window. It takes 7 seconds to complete at the moment (a reboot seems to have improved things a bit): https://dl.dropboxusercontent.com/u/3274103/lesssaving.mp4 Here are my LESS settings: Should point out too that this project is on a spinning-rust drive.
Giving this a bump to hopefully get some life into this issue. We're still unable to use WE's LESS option due to the perf issues; we're currently using Gulp to watch the files and compile on save.
Here is what I have got so far:
The URL format is simple. I used the GET method and pass everything via the query string:
The next thing is error handling and preparing the JSON output. Then it will be ready to replace our LESS compilation (only). That's the first part; after this there are eight more compilers to go, and after that, refactoring the WE code. If anyone wants to chip in, I can push the code to a feature branch. :)
This is actually a security hole; any webpage can send a request to the server and end up reading any file on disk. You can fix that by requiring a custom HTTP header in every request to the service. C# can send those easily, but foreign JS cannot (unless you allow CORS).
I haven't started working on the client yet; I'm using the browser for debugging. Here are some security checks I have (currently commented out):

// Unauthorized access checks
if (!request.headers["web-essentials"] ||
    request.headers["origin"] !== "web essentials" ||
    request.headers["user-agent"] !== "web essentials") {
    response.writeHead(404, { "Content-Type": "text/plain" });
    response.write("404 Not found");
    response.end();
    return;
}

Which we can call from C# like this:

// using System.Net.Http;
HttpClient client = new HttpClient();
client.DefaultRequestHeaders.Add("origin", "web essentials");
client.DefaultRequestHeaders.Add("user-agent", "web essentials");
client.DefaultRequestHeaders.Add("web-essentials", "web essentials");
//HttpResponseMessage response = client.GetAsync("http://127.0.0.1:1337").Result;

Plus, we will be running on a random port each time Web Essentials is loaded. For more robust security measures, we can intermittently check something random on the file-system that doesn't change frequently and that both C# and Node.js have access to (for instance, send the name of a random DLL file and its hash via request headers and let Node compute and compare its SHA-1 hash at irregular intervals), since JavaScript in a browser doesn't have access to the file-system.
Let's release 2.3 before this makes it in. It seems like a change that could cause unforeseen issues if not properly tested.
Adding a header should be enough.
"Something went wrong reaching:" in SASS is a safeguard for a C++ fix that I don't know when they will merge in; their release cycle is usually a bit slower. As you can see there, it can happen when we save (to compile) before completing a statement. I have also spotted bugs in their upstream libsass and am waiting for my existing PRs to be merged before I can do more. Meanwhile, there are currently a number of hacks in our code just to prevent node-sass and its upstream from crashing during the lifetime of WE's NodeServer. We should expect the other services to work normally: Autoprefixer, LESS, and the JS compilers and linters.
As an interim workaround, we could try to parse the text with VS's SASS parser, and not compile if that fails. On the other hand, that would completely prevent people from using features that VS cannot parse (I don't know whether any such features exist).
But what about completely valid Scss files?
Since VS's parser would (hopefully) not give an error, they would be compiled.
Can you share some code which is causing the issue? I tried with some zurb-foundation SASS code from a Rails project and can't reproduce the bug; FWIW, it has an import depth of five. Also, if the code is valid and compiles fine with node-sass, it should compile with Web Essentials. If it's valid SASS 3.x code and node-sass / libsass doesn't yet support that particular feature, then it will throw, and we will catch it and show a pretty error.
Sorry, I can't share the code, but let me know when you have a new vsix file for me to test with.
Well, that's a pickle. Without reproduction steps, we can't do much. The current build is at version 2.2.6.1 (released today). @SLaks, can you please test whether you can reproduce this 404 with SASS or LESS? Thanks!
Figured it out; it looks like it must have been a change in the SASS compiler. The Scss built fine the other day with WE but stopped with the upgrade. What was strange, however, was that unrelated files would fail to compile.
* Set server foundation:
  * Established a server.
  * Controller.
  * File/folder structure for services.
  * Added LESS service.
* Set client foundation:
  * Build-time actions.
* Server: Adds SASS service.
* Server: Adds autoprefixer service.
* Server: Integrates autoprefixer with LESS & SASS.
* Server: Using "commander" for CLI options.
* Server: Adds option for PID so a cron-job can collect the detached server process.
* Server: Do not perform any fs.write operation on the server side; instead send the JSON response.
* Client: Empty node-sass' middleware in PreBuild.
* Client: Disposing logic.
* Client: Send self PID, so the cron-job can do its job.
* Client: Adds a method to test heart-beat.
* Client: Launch node-server on package load.
* Misc. fixes for both client and server.
* Server: Adds CoffeeScript service.
* Server: Fixes falsy conditions and null checks.
* Server: Fixes SASS sourceMapURL comment logic.
* Server: Removes commented code.
* Server: Adds LiveScript service.
* Server: Adds SweetJS service.
* Server: Curates autoprefixer maps for LESS.
* Server: End response on throw.
* Server: Adds IcedCoffeeScript service.
* Server: Checks only presence for all boolean options.
* Server: Adds services: CoffeeLint, JSCS, JsHint and TSLint.
* Server: Fixes error messages.
* Server: Uses Regex for error parsing.
* Server: Removes debug messages.
* Server: Robust port discovery method.
* Server: Removed Dispose().
* Client: Adds AsyncLock to NodeServer.
* PreBuild: Adds XRegExp package for named capturing groups.
* Client: Wiring-up with new base.
* Server: Fixed error handling.
* Test: Updated to FluentAssertions 3.0.
I personally think this should be set to
OK, setting Strict Math to true fixed the issue. Unfortunately, however, migrating to a persistent Node process hasn't resolved the performance issues entirely. After a reboot and running a save several times (so everything is warm), this is the best-case scenario I could get: 22/07/2014 11:41:15 AM: LESS: Compiling Soda.less. 5 seconds. Much better than the ~10 it was previously; however, gulp reports times of ~1 ms. This is perhaps a matter for another ticket.
@ctolkien, wait for the next nightly; we have fixed a handful of issues, related to performance as well.
Instead of spawning a separate Node process for each file, we should maintain a pool of persistent processes and invoke the compilers in them.
Without this, Node needs to re-compile all of the JS files for each compiler on each invocation; this can be costly, especially on non-SSDs.
Here are some implementation notes:
We can make the Node background runner listen for JSON instruction payloads on stdin, and have it write resulting file paths (also in JSON) to stdout; the C# compiler would then wait for a single line of output.
Alternatively, we can expose an HTTP server; this would make much of the code simpler & potentially more reliable.