
Threading/Isolate issue ->Spontaneous JVM crash on creation of V8 #306

Closed
matiwinnetou opened this issue Jul 14, 2017 · 15 comments

@matiwinnetou
Contributor

matiwinnetou commented Jul 14, 2017

Let me explain our setup a bit.

We have a new page running, rendered 100% with J2V8 on the server using React, with React on the client as well.

The renderer consists of J2V8 with an Apache Commons Pool whose evictor runs every 10 minutes. The evictor thread has to keep a minimum of 5 J2V8 instances in the pool. Generally this works, but every now and then the JVM crashes.
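For context, the pool/evictor behavior described above can be sketched with plain JDK primitives. This is a minimal model, not our production code: commons-pool2 and the real J2V8 runtime are stubbed out, and all names here are illustrative.

```java
import java.util.concurrent.*;

// JDK-only sketch of the pool/evictor pattern: a periodic task tops the pool
// back up to a minimum number of idle runtimes, like commons-pool's evictor
// with minIdle configured. new Object() stands in for createV8().
public class MinIdleEvictorSketch {
    static final int MIN_IDLE = 5;
    static final BlockingQueue<Object> idle = new LinkedBlockingQueue<>();

    // What the evictor does on each tick: refill the pool to MIN_IDLE.
    static void ensureMinIdle() {
        while (idle.size() < MIN_IDLE) {
            idle.offer(new Object()); // stand-in for creating a V8 runtime
        }
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService evictor = Executors.newSingleThreadScheduledExecutor();
        // The real setup runs every 10 minutes; shortened here for demonstration.
        evictor.scheduleAtFixedRate(MinIdleEvictorSketch::ensureMinIdle,
                0, 100, TimeUnit.MILLISECONDS);
        Thread.sleep(300);
        evictor.shutdown();
        System.out.println(idle.size()); // prints 5
    }
}
```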

mszczap@pubse47-15:/var/tmp/backend-server/pubse47-15_b01_3606$ ls -lat *.log
-rw-rw-rw- 1 mobile mobile_local 212586 Jul 13 11:55 hs_err_pid9112.log
-rw-rw-rw- 1 mobile mobile_local  95270 Jul 11 08:52 hs_err_pid9816.log
-rw-rw-rw- 1 mobile mobile_local  89220 Jul 11 05:31 hs_err_pid2823.log
-rw-rw-rw- 1 mobile mobile_local 211672 Jul 10 06:20 hs_err_pid21594.log
-rw-rw-rw- 1 mobile mobile_local 109212 Jul  7 19:09 hs_err_pid9514.log
-rw-rw-rw- 1 mobile mobile_local 220100 Jul  4 22:36 hs_err_pid10330.log
-rw-rw-rw- 1 mobile mobile_local  98499 Jul  4 20:05 hs_err_pid8388.log
-rw-rw-rw- 1 mobile mobile_local 106891 Jul  4 19:04 hs_err_pid2222.log

And here is the stack trace info; the crash seems to be in native code:

---------------  T H R E A D  ---------------

Current thread (0x00007f3224780800):  JavaThread "commons-pool-EvictionTimer" daemon [_thread_in_native, id=10358, stack(0x00007f3271027000,0x00007f3271128000)]

siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000000000000000

Register to memory mapping:

RAX=0x0000000000000000 is an unknown value
RBX=0x00007f321c261710 is an unknown value
RCX=0x00007f3299f0c2e0: <offset 0x2182e0> in /lib/x86_64-linux-gnu/libpthread.so.0 at 0x00007f3299cf4000
RDX=0x00007f3271127a50 is pointing into the stack for thread: 0x00007f3224780800
RSP=0x00007f32711260c0 is pointing into the stack for thread: 0x00007f3224780800
RBP=0x00007f32711260e0 is pointing into the stack for thread: 0x00007f3224780800
RSI=0x0000000000000001 is an unknown value
RDI=0x0000000000000040 is an unknown value
R8 =0x0000000000000002 is an unknown value
R9 =0x00007f321c082ff0 is an unknown value
R10=0x0000000000000001 is an unknown value
R11=0x00000000173aaf7d is an unknown value
R12=0x00007f321c24d6d0 is an unknown value
R13=0x00007f321c261710 is an unknown value
R14=0x00007f32711264b0 is pointing into the stack for thread: 0x00007f3224780800
R15=0x00007f3224780800 is a thread


Stack: [0x00007f3271027000,0x00007f3271128000],  sp=0x00007f32711260c0,  free space=1020k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C  [libj2v8_linux_x86_64.so+0x8ca0c7]  v8::internal::Isolate::Enter()+0x87
C  [libj2v8_linux_x86_64.so+0x546281]  v8::Isolate::Scope::Scope(v8::Isolate*)+0x27
C  [libj2v8_linux_x86_64.so+0x52c92b]  Java_com_eclipsesource_v8_V8__1createIsolate+0xde
j  com.eclipsesource.v8.V8._createIsolate(Ljava/lang/String;)J+0
j  com.eclipsesource.v8.V8.<init>(Ljava/lang/String;)V+87
j  com.eclipsesource.v8.V8.createV8Runtime(Ljava/lang/String;Ljava/lang/String;)Lcom/eclipsesource/v8/V8;+56
j  pubse.ecs.system.renderer.servlet.j2v8.V8Factory.createV8()Lcom/eclipsesource/v8/V8;+7
j  pubse.ecs.system.renderer.servlet.j2v8.V8PooledObjectPageFactory.makeObject()Lorg/apache/commons/pool2/PooledObject;+0
j  org.apache.commons.pool2.impl.GenericObjectPool.create()Lorg/apache/commons/pool2/PooledObject;+47
j  org.apache.commons.pool2.impl.GenericObjectPool.ensureIdle(IZ)V+39
j  org.apache.commons.pool2.impl.GenericObjectPool.ensureMinIdle()V+6
j  org.apache.commons.pool2.impl.BaseGenericObjectPool$Evictor.run()V+89
j  java.util.TimerThread.mainLoop()V+221
j  java.util.TimerThread.run()V+1
v  ~StubRoutines::call_stub
V  [libjvm.so+0x681a56]
V  [libjvm.so+0x681f61]
V  [libjvm.so+0x682407]
V  [libjvm.so+0x7182b0]
V  [libjvm.so+0xa5c96f]
V  [libjvm.so+0xa5ca9c]
V  [libjvm.so+0x910ee8]
C  [libpthread.so.0+0x80a4]  start_thread+0xc4

makeObject acquires a lock and releases it after the V8 runtime is created. All boxes have 5 GB of RAM, of which 1-2 GB is reserved for the JVM and the rest for native memory (V8). Since moving to J2V8 was risky for us, we decided to host the new use case on a new set of production boxes and observe what happens. On various machines we see sporadic crashes from time to time.

The Commons Pool evictor currently runs every 10 minutes and destroys and then creates J2V8 instances on demand, based on traffic.

makeObject is implemented as follows:

    @Override
    public PooledObject<CompiledV8Bundle> makeObject() throws Exception {
        V8 v8 = createV8();
        V8Locker locker = v8.getLocker();
        try {
            locker.acquire();
            logger.info("makeV8, thread:" + locker.getThread().getName());
            CompiledV8Bundle compiledV8Bundle = compileV8Bundle(v8, compileTimeGlobals, assetMapping);

            return new DefaultPooledObject<>(compiledV8Bundle);
        } finally {
            locker.release();
        }
    }
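The acquire/compile/release shape of makeObject() above can be modeled with plain JDK locks. This is only an illustration of the pattern: ReentrantLock stands in for J2V8's V8Locker, and the "compiled bundle" is a stub.

```java
import java.util.concurrent.locks.ReentrantLock;

// JDK-only model of the makeObject() pattern: acquire the runtime's lock,
// do the expensive compile step, and always release in finally.
public class MakeObjectPattern {
    public static void main(String[] args) {
        ReentrantLock locker = new ReentrantLock(); // stand-in for v8.getLocker()
        String bundle;
        locker.lock();                              // locker.acquire()
        try {
            bundle = "compiled-bundle";             // compileV8Bundle(...)
        } finally {
            locker.unlock();                        // locker.release()
        }
        // The lock is released even if compilation throws.
        System.out.println(bundle + " " + locker.isLocked()); // prints "compiled-bundle false"
    }
}
```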

We observe these crashes on at least two JVM 1.8 versions:
1.8.0_40 and 1.8.0_131, 64-bit on Intel, both running Ubuntu Linux (Xenial).

Any idea what could cause this? Is this a bug in our code, or more likely in the native C++ JNI bridge to V8?

Last but not least, the version of J2V8 we use is 4.8.0.

@matiwinnetou matiwinnetou changed the title Regular JVM crashes on no activity Regular JVM crashes on creation of J2V8 (sometimes) Jul 14, 2017
@matiwinnetou
Contributor Author

To some extent these issues do not surprise me: J2V8 has so far been used primarily for UI development, and we operate a cluster of servers. Some errors occur only under heavy load with lots of users hitting it (we see 1,500 req/s across the cluster). Hardly any bug can slip through this undetected.

@matiwinnetou matiwinnetou changed the title Regular JVM crashes on creation of J2V8 (sometimes) Spontanious JVM crash on creation of V8 Jul 18, 2017
@matiwinnetou matiwinnetou changed the title Spontanious JVM crash on creation of V8 Threading issue ->Spontaneous JVM crash on creation of V8 Jul 19, 2017
@matiwinnetou
Contributor Author

matiwinnetou commented Jul 19, 2017

It looks like Isolate::Current / Isolate::GetCurrent is deprecated and considered an anti-pattern:

Isolate* Isolate::New(const Isolate::CreateParams& params) {
  i::Isolate* isolate = new i::Isolate(false);
  Isolate* v8_isolate = reinterpret_cast<Isolate*>(isolate);
  CHECK(params.array_buffer_allocator != NULL);
  isolate->set_array_buffer_allocator(params.array_buffer_allocator);
  if (params.snapshot_blob != NULL) {
    isolate->set_snapshot_blob(params.snapshot_blob);
  } else {
    isolate->set_snapshot_blob(i::Snapshot::DefaultSnapshotBlob());
  }
  if (params.entry_hook) {
    isolate->set_function_entry_hook(params.entry_hook);
  }
  auto code_event_handler = params.code_event_handler;
#ifdef ENABLE_GDB_JIT_INTERFACE
  if (code_event_handler == nullptr && i::FLAG_gdbjit) {
    code_event_handler = i::GDBJITInterface::EventHandler;
  }
#endif  // ENABLE_GDB_JIT_INTERFACE
  if (code_event_handler) {
    isolate->InitializeLoggingAndCounters();
    isolate->logger()->SetCodeEventHandler(kJitCodeEventDefault,
                                           code_event_handler);
  }
  if (params.counter_lookup_callback) {
    v8_isolate->SetCounterFunction(params.counter_lookup_callback);
  }

  if (params.create_histogram_callback) {
    v8_isolate->SetCreateHistogramFunction(params.create_histogram_callback);
  }

  if (params.add_histogram_sample_callback) {
    v8_isolate->SetAddHistogramSampleFunction(
        params.add_histogram_sample_callback);
  }

  isolate->set_api_external_references(params.external_references);
  SetResourceConstraints(isolate, params.constraints);
  // TODO(jochen): Once we got rid of Isolate::Current(), we can remove this.
  Isolate::Scope isolate_scope(v8_isolate);
  if (params.entry_hook || !i::Snapshot::Initialize(isolate)) {
    isolate->Init(NULL);
  }
  return v8_isolate;
}

@matiwinnetou
Contributor Author

matiwinnetou commented Jul 19, 2017

Most likely this exception is happening somewhere in the private internal Isolate implementation, in isolate.cc:

void Isolate::Enter() {
  Isolate* current_isolate = NULL;
  PerIsolateThreadData* current_data = CurrentPerIsolateThreadData();
  if (current_data != NULL) {
    current_isolate = current_data->isolate_;
    DCHECK(current_isolate != NULL);
    if (current_isolate == this) {
      DCHECK(Current() == this);
      DCHECK(entry_stack_ != NULL);
      DCHECK(entry_stack_->previous_thread_data == NULL ||
             entry_stack_->previous_thread_data->thread_id().Equals(
                 ThreadId::Current()));
      // Same thread re-enters the isolate, no need to re-init anything.
      entry_stack_->entry_count++;
      return;
    }
  }

  PerIsolateThreadData* data = FindOrAllocatePerThreadDataForThisThread();
  DCHECK(data != NULL);
  DCHECK(data->isolate_ == this);

  EntryStackItem* item = new EntryStackItem(current_data,
                                            current_isolate,
                                            entry_stack_);
  entry_stack_ = item;

  SetIsolateThreadLocals(this, data);

  // In case it's the first time some thread enters the isolate.
  set_thread_id(data->thread_id());
}

@matiwinnetou
Contributor Author

matiwinnetou commented Jul 19, 2017

Reproduced on a dev box.

We are able to reproduce it as follows: we create and destroy J2V8 instances in a pool from 20 threads, acquiring the corresponding locks beforehand.
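The shape of such a reproduction harness can be sketched as below. This is a model only: the actual reproducer creates and releases real J2V8 runtimes (V8.createV8Runtime() / v8.release()), which are stubbed here by counters so the harness itself is self-contained.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the reproduction recipe: N threads churn through create/destroy
// cycles against a pooled native resource as fast as possible.
public class CreateDestroyChurn {
    public static void main(String[] args) throws Exception {
        final int threads = 20, cyclesPerThread = 50;
        AtomicInteger created = new AtomicInteger();
        AtomicInteger destroyed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < cyclesPerThread; i++) {
                    created.incrementAndGet();   // stand-in for createV8()
                    destroyed.incrementAndGet(); // stand-in for v8.release()
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(created.get() + " " + destroyed.get()); // prints "1000 1000"
    }
}
```

With the real runtime substituted in, this churn is what triggers the sporadic SIGSEGV in Isolate::Enter().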

Started in mode:DEVELOPMENT
15:09:27,674 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:27,891 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:28,154 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:28,361 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:28,561 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:28,766 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:28,971 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:29,174 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:29,375 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:29,580 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:29,786 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:30,011 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:30,210 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:30,412 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:30,618 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:30,819 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:36,135 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   destroyV8, thread:commons-pool-EvictionTimer
15:09:36,137 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   destroyV8, thread:commons-pool-EvictionTimer
15:09:36,138 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   destroyV8, thread:commons-pool-EvictionTimer
15:09:36,176 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:36,391 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:36,623 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   makeV8, thread:commons-pool-EvictionTimer
15:09:46,135 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   destroyV8, thread:commons-pool-EvictionTimer
15:09:46,138 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   destroyV8, thread:commons-pool-EvictionTimer
15:09:46,140 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   destroyV8, thread:commons-pool-EvictionTimer
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00000001254a8624, pid=47554, tid=0x0000000000005f03
#
# JRE version: Java(TM) SE Runtime Environment (8.0_112-b16) (build 1.8.0_112-b16)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.112-b16 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# C  [libj2v8_macosx_x86_64.dylib+0x517624]  v8::internal::Isolate::Enter()+0x34
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /Users/mszczap/Devel/mobile/public-search-germany-webapp/hs_err_pid47554.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

@matiwinnetou
Contributor Author

I got a core file, though it is 26 GB:
W git:(upstream ⚡ j2v8_crash_init) 16≡ 1M ls -lah /cores/core.81741
-r-------- 1 mszczap admin 26G Jul 19 20:07 /cores/core.81741

@matiwinnetou
Contributor Author

matiwinnetou commented Jul 19, 2017

void Isolate::Enter() {
  printf("1\n");
  fflush(stdout);
  Isolate* current_isolate = NULL;
  PerIsolateThreadData* current_data = CurrentPerIsolateThreadData();
  if (current_data != NULL) {
    printf("2\n");
  fflush(stdout);
    current_isolate = current_data->isolate_;
    DCHECK(current_isolate != NULL);
    printf("3\n");
  fflush(stdout);
    if (current_isolate == this) {
      printf("4\n");
  fflush(stdout);
      DCHECK(Current() == this);
      DCHECK(entry_stack_ != NULL);
      printf("5\n");
  fflush(stdout);
      DCHECK(entry_stack_->previous_thread_data == NULL ||
             entry_stack_->previous_thread_data->thread_id().Equals(
                 ThreadId::Current()));

      printf("6\n");
  fflush(stdout);
      // Same thread re-enters the isolate, no need to re-init anything.
      entry_stack_->entry_count++;
      printf("6a\n");
  fflush(stdout);

      return;
    }
  }

  printf("7\n");
  fflush(stdout);

  PerIsolateThreadData* data = FindOrAllocatePerThreadDataForThisThread();

  printf("8\n");
  fflush(stdout);

  DCHECK(data != NULL);
  DCHECK(data->isolate_ == this);

  printf("9\n");
  fflush(stdout);

  EntryStackItem* item = new EntryStackItem(current_data,
                                            current_isolate,
                                            entry_stack_);
  entry_stack_ = item;

  printf("10\n");
  fflush(stdout);

  SetIsolateThreadLocals(this, data);

  printf("11\n");
  fflush(stdout);

  // In case it's the first time some thread enters the isolate.
  set_thread_id(data->thread_id());

  printf("12\n");
  fflush(stdout);
}

@matiwinnetou
Contributor Author

matiwinnetou commented Jul 19, 2017

21:07:52,831 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   destroyV8, thread:commons-pool-EvictionTimer
21:07:52,833 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   destroyV8, thread:commons-pool-EvictionTimer
21:07:52,835 [commons-pool-EvictionTimer] INFO  p.e.s.r.s.j.V8PooledObjectPageFactory   destroyV8, thread:commons-pool-EvictionTimer
1
2
3
7
8
9
10
11
12
1
2
3
4
5
6
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x000000012b1fb5ac, pid=6396, tid=0x0000000000006103
#
# JRE version: Java(TM) SE Runtime Environment (8.0_131-b11) (build 1.8.0_131-b11)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b11 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# C  [libj2v8_macosx_x86_64.dylib+0x5175ac]  _ZN2v88internal7Isolate5EnterEv+0xbc
#
# Core dump written. Default location: /cores/core or core.6396
#
# An error report file with more information is saved as:
# /Users/mszczap/Devel/mobile/public-search-germany-webapp/hs_err_pid6396.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

@matiwinnetou
Contributor Author

matiwinnetou commented Jul 19, 2017

@irbull Most likely entry_stack_ is NULL; on creation of a V8 runtime we should not even enter this if statement:

if (current_data != NULL) {

in the first place. Again, this is a case where the current isolate does not point to the right data; see #308 and #310. The correct way to fix all these problems, I think, would be to make sure the binary semaphores of J2V8 and the integer semaphores of V8 are properly synchronized.
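One way to illustrate the kind of synchronization suggested above (an illustration of the idea, not a confirmed fix for this bug) is to funnel all runtime creation through a single global lock on the Java side, so that V8's per-thread bookkeeping is never mutated by two creating threads at once. The counters below only exist to prove the serialization works.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical mitigation sketch: serialize all isolate creation through one
// global lock. createRuntime() stands in for V8Factory.createV8().
public class SerializedFactory {
    private static final Object CREATE_LOCK = new Object();
    private static final AtomicInteger inside = new AtomicInteger();
    private static final AtomicInteger maxInside = new AtomicInteger();

    static Object createRuntime() {
        synchronized (CREATE_LOCK) {
            int now = inside.incrementAndGet();
            maxInside.accumulateAndGet(now, Math::max); // track peak concurrency
            try {
                return new Object(); // the native isolate would be created here
            } finally {
                inside.decrementAndGet();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 200; i++) pool.submit(SerializedFactory::createRuntime);
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(maxInside.get()); // prints 1: creation was fully serialized
    }
}
```

Whether serializing creation on the Java side is sufficient depends on what the native bridge does with thread-local isolate data, so this is a workaround sketch rather than a root-cause fix.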

@matiwinnetou
Contributor Author

I will try to run the build with assertions on.

@matiwinnetou
Contributor Author

matiwinnetou commented Jul 20, 2017

12:50:00,833 [GeoLocationResolver RUNNING] INFO p.s.GeoLocationResolver Remote geo data loaded.
1
7
8
9
10
11
12
1
7
8
9
10
11
12

With assertions enabled it is not even possible to debug this; a V8 check fails first:


#
# Fatal error in ../deps/v8/src/api.h, line 490
# Check failed: blocks_.length() == 0.
#

==== C stack trace ===============================

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGILL (0x4) at pc=0x000000012b862001, pid=84230, tid=0x0000000000005f03
#
# JRE version: Java(TM) SE Runtime Environment (8.0_112-b16) (build 1.8.0_112-b16)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.112-b16 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# C  [libj2v8_macosx_x86_64.dylib+0x1018001]  _ZN2v84base2OS5AbortEv+0x11
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /Users/mszczap/Devel/mobile/public-search-germany-webapp/hs_err_pid84230.log
    0   libj2v8_macosx_x86_64.dylib         0x000000012b85d84e v8::base::debug::StackTrace::StackTrace() + 30
    1   libj2v8_macosx_x86_64.dylib         0x000000012b85d885 v8::base::debug::StackTrace::StackTrace() + 21
    2   libj2v8_macosx_x86_64.dylib         0x000000012b856124 V8_Fatal + 452
    3   libj2v8_macosx_x86_64.dylib         0x000000012a9aaaa2 v8::internal::HandleScopeImplementer::Free() + 98
    4   libj2v8_macosx_x86_64.dylib         0x000000012a9aaa35 v8::internal::HandleScopeImplementer::FreeThreadResources() + 21
    5   libj2v8_macosx_x86_64.dylib         0x000000012b68c774 v8::internal::ThreadManager::FreeThreadResources() + 276
    6   libj2v8_macosx_x86_64.dylib         0x000000012b68c622 v8::Locker::~Locker() + 146
    7   libj2v8_macosx_x86_64.dylib         0x000000012b68ca95 v8::Locker::~Locker() + 21
    8   libj2v8_macosx_x86_64.dylib         0x000000012a939215 Java_com_eclipsesource_v8_V8__1createIsolate + 3317
    9   ???                                 0x000000010c93e9f4 0x0 + 4505987572
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

@matiwinnetou matiwinnetou changed the title Threading issue ->Spontaneous JVM crash on creation of V8 Threading/Isolate issue ->Spontaneous JVM crash on creation od destroctionof V8 Jul 20, 2017
@matiwinnetou matiwinnetou changed the title Threading/Isolate issue ->Spontaneous JVM crash on creation od destroctionof V8 Threading/Isolate issue ->Spontaneous JVM crash on creation of destroctionof V8 Jul 20, 2017
@matiwinnetou matiwinnetou changed the title Threading/Isolate issue ->Spontaneous JVM crash on creation of destroctionof V8 Threading/Isolate issue ->Spontaneous JVM crash on creation of V8 Jul 20, 2017
@irbull
Member

irbull commented Jul 21, 2017

I just returned from my summer vacation and am going through my backlog of emails, bugs, Slack messages, etc. I see you raised and commented on a bunch of issues, @matiwinnetou. I will begin looking through them today.

@matiwinnetou
Contributor Author

@irbull Thank you so much; I am willing to support and help you. I have been digging deep into this, but my C++ knowledge is really rusty, I must say. I have provided lots of exceptions and stack traces, and I hope some insight into the problem.

@matiwinnetou
Contributor Author

matiwinnetou commented Jul 22, 2017

I have been thinking that, although in PROD this bug does not cause many issues for us (with the easy workaround of not recreating V8 runtimes very often), it may in fact hide a real problem that we are facing in #313.

Anyway, for your info, I am able to reproduce this bug on a dev box (OSX Sierra) on the latest "isolate_enter_branch" build quite quickly (as before).

@matiwinnetou
Contributor Author

I will also test, tomorrow or on Monday, whether the latest master build fixes this...

@matiwinnetou
Contributor Author

matiwinnetou commented Jul 29, 2017

I have now tested this for a while (about 1 hour) and cannot reproduce it anymore. This seems fixed. I will reopen if I encounter it again.
