models during enumeration omitted #80
Comments
The problem seems to be related to splitting. For example, competition + recording works, but splitting with either backtracking or recording fails randomly. This happens both in configs that use splitting right from the start and in ones that switch from competition to splitting for enumeration. One gets relatively short failing runs using the following program:

#include <clasp/clasp_facade.h>
#include <clasp/solver.h>
#include <cassert>
#include <iostream>
#include <fstream>

int main() {
    while (true) {
        Clasp::ClaspConfig config;
        config.solve.enumMode = Clasp::EnumOptions::enum_record;
        config.solve.numModels = 0;
        config.solve.setSolvers(4);
        config.solve.algorithm.mode = Clasp::mt::ParallelSolveOptions::Algorithm::mode_split;
        Clasp::ClaspFacade libclasp;
        std::ifstream input("queens.aspif");
        libclasp.start(config, input);
        // expected number of models for the n-queens instances in queens.aspif
        size_t num_models[] = { 1, 0, 0, 2, 10, 4, 40, 92, 352, 724 };
        for (size_t i = 0; libclasp.read() && i < 6; ++i) {
            libclasp.prepare();
            for (size_t j = 0; j < 1; ++j) {
                libclasp.solve();
                size_t n = libclasp.summary().numEnum;
                if (num_models[i] != n) {
                    std::cerr
                        << "error expected " << num_models[i]
                        << " models in call " << (i + 1)
                        << " but got " << n << std::endl;
                }
                assert(num_models[i] == n);
            }
        }
    }
}
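As a point of comparison, the configuration that reportedly works (competition-based search plus recording) differs only in the algorithm mode. A sketch of that variant, assuming the competition constant is named mode_compete (the name is not confirmed in this thread):

        config.solve.enumMode = Clasp::EnumOptions::enum_record;
        config.solve.numModels = 0;
        config.solve.setSolvers(4);
        // competition-based (portfolio) search instead of splitting
        config.solve.algorithm.mode = Clasp::mt::ParallelSolveOptions::Algorithm::mode_compete;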
I added a few tracing commands, and it seems a terminate event is sent even though there are still solvers that have not exhausted their search.
Let's see if I can find the reason. 😄
@rkaminsk Thanks for looking into this. Unfortunately, I don't have time this week or next to investigate further :(
No worries. I'll continue looking a bit and post here if I find something.
I dug a little further. Question: could spurious wake-ups in the condition variable's wait cause the problem?

It seems like the problem is indeed caused by spurious wake-ups. With the patched condition variable below, all models are enumerated. I opened #82 to handle this problem.

class condition_variable {
public:
    // Block until notified; spurious wake-ups are filtered out by re-checking
    // the notify counter under the internal mutex.
    void wait(std::unique_lock<std::mutex> &lock) {
        {
            std::lock_guard<std::mutex> guard_{mut_};
            wait_ += 1;
        }
        var_.wait(lock, [this]() {
            std::lock_guard<std::mutex> guard_{mut_};
            if (notify_ > 0) {
                --notify_;
                --wait_;
                return true;
            }
            return false;
        });
    }
    bool wait_for(std::unique_lock<std::mutex> &lock, double timeInSecs) {
        // TODO: spurious wakeups should be suppressed here, too.
        return var_.wait_for(lock, std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::duration<double>(timeInSecs)))
            == std::cv_status::no_timeout;
    }
    void notify_one() noexcept {
        std::lock_guard<std::mutex> guard_{mut_};
        // only record a notification if there is a waiter left to consume it
        if (notify_ < wait_) {
            notify_ += 1;
        }
        var_.notify_one();
    }
    void notify_all() noexcept {
        std::lock_guard<std::mutex> guard_{mut_};
        notify_ = wait_;
        var_.notify_all();
    }
private:
    std::mutex mut_;
    std::condition_variable var_;
    size_t notify_ = 0;
    size_t wait_ = 0;
};
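For reference, the TODO in wait_for could be handled analogously to wait, by registering the caller in the wait counter and coupling the timed wait to the notify counter. This is only a sketch, not part of the posted patch, and the bookkeeping on timeout is simplified:

    bool wait_for(std::unique_lock<std::mutex> &lock, double timeInSecs) {
        {
            std::lock_guard<std::mutex> guard_{mut_};
            wait_ += 1;
        }
        bool signaled = var_.wait_for(
            lock,
            std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::duration<double>(timeInSecs)),
            [this]() {
                std::lock_guard<std::mutex> guard_{mut_};
                if (notify_ > 0) { --notify_; --wait_; return true; }
                return false;
            });
        if (!signaled) {
            // Timed out without consuming a notification: deregister this waiter.
            // A notification arriving exactly in this window is not handled here;
            // a real implementation would need to account for it.
            std::lock_guard<std::mutex> guard_{mut_};
            --wait_;
        }
        return signaled;
    }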
* BarrierSemaphore failed to handle spurious wakeups correctly. In particular, a spurious wakeup in BarrierSemaphore::down() could result in an early termination from requestWork(), since one thread could decrease the semaphore's counter multiple times, thereby activating the barrier condition although not all threads are blocked.
* Drop the unnecessary BarrierSemaphore in favor of two specific wait operations on a shared condition variable and a wait counter. On sync requests, waitSync() is used to wait until the sync flag is no longer set. When a thread needs work, it calls waitWork(), waiting until the work queue becomes non-empty. In either case, the wait fails if the caller is the last non-waiting thread. Since all waits are now coupled to a specific condition, spurious wakeups are benign.
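A minimal sketch of that predicate-based scheme, with hypothetical names (SharedState, waitSync, waitWork, pushWork) and heavily simplified state; it illustrates why re-checking an explicit condition makes spurious wakeups benign, but it is not the actual clasp implementation:

#include <condition_variable>
#include <mutex>
#include <queue>

struct SharedState {
    std::mutex m;
    std::condition_variable cv;
    std::queue<int> work;   // placeholder for split-off subproblems
    bool sync = false;      // set while a synchronization request is active (protocol omitted)
    bool done = false;      // set once the last active thread gives up
    int  waiting = 0;       // number of threads currently blocked in a wait
    int  threads = 1;       // total number of solver threads

    // Wait until the sync flag is cleared; fails if every other thread is already waiting.
    bool waitSync() {
        std::unique_lock<std::mutex> lock(m);
        ++waiting;
        while (sync && !done) {
            if (waiting == threads) { done = true; cv.notify_all(); break; }
            cv.wait(lock);   // a spurious wakeup just re-enters the loop and re-checks the condition
        }
        --waiting;
        return !done;
    }

    // Wait until the work queue is non-empty; fails if every other thread is already waiting.
    bool waitWork(int &out) {
        std::unique_lock<std::mutex> lock(m);
        ++waiting;
        while (work.empty() && !done) {
            if (waiting == threads) { done = true; cv.notify_all(); break; }
            cv.wait(lock);   // a spurious wakeup just re-enters the loop and re-checks the condition
        }
        --waiting;
        if (done) return false;
        out = work.front();
        work.pop();
        return true;
    }

    // Called by a thread that split off a subproblem.
    void pushWork(int w) {
        { std::lock_guard<std::mutex> lock(m); work.push(w); }
        cv.notify_one();
    }
};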
Many thanks for your help. Hopefully fixed in https://github.com/potassco/clasp/tree/dev
The following program, run with queens.aspif.gz, sometimes omits models. Unfortunately, it's a bit hard to reproduce. It happens quite frequently in my Ubuntu 20.04 virtual machine with 2 cores. I managed to trigger the problem using between 3 and 64 threads; the higher the number of threads, the higher the chance that solutions are omitted.
I'll try to dig a bit deeper. Just wanted to put the example somewhere safe.