diff --git a/.gitignore b/.gitignore
index 3ca8940..e57779d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,4 +4,4 @@
 **/obj/*
 **/bin/*
-/.vs
+**/.vs/*
diff --git a/README.md b/README.md
index 8e46231..8149f4b 100644
--- a/README.md
+++ b/README.md
@@ -9,18 +9,19 @@ Check out the SampleApp to see implementations of the following problems:
  * [ChooseSmallestProblem](src/SampleApp/ChooseSmallestProblem.cs) - a fun problem which searches small values in the sequence of random number seeds
  * [Knapsack](src/SampleApp/Knapsack.cs) - the famous {0, 1}-Knapsack, implemented using reversible search (allowing to undo moves), as well as non-reversible
  * [TSP](src/SampleApp/TSP.cs) - the Berlin52 instance of the TSPLIB
+ * [SchedulingProblem](src/SampleApp/SchedulingProblem.cs) - a very simple scheduling problem
 
 These samples should give you an idea on how to use the framework for problem modeling.
 
 The algorithms that are included are:
- * Branch and bound (depth-first search)
- * Breadth-first search
- * Limited Discrepancy Search
- * Beam Search
- * Monotonic Beam Search
- * Rake Search (and a Rake+Beam combination)
- * Pilot Method
- * Monte Carlo Tree Search
+ * Branch and bound (depth-first search), sequential and parallel
+ * Breadth-first search, sequential and parallel
+ * Limited Discrepancy Search, sequential only
+ * Beam Search, sequential and parallel
+ * Monotonic Beam Search, sequential only
+ * Rake Search (and a Rake+Beam combination), sequential and parallel
+ * Pilot Method, sequential and parallel
+ * Monte Carlo Tree Search, sequential only
 
 New hybrid algorithms can be implemented, also by making use of the existing algorithms.
\ No newline at end of file
diff --git a/src/SampleApp/Program.cs b/src/SampleApp/Program.cs
index cc438c1..93575af 100644
--- a/src/SampleApp/Program.cs
+++ b/src/SampleApp/Program.cs
@@ -15,6 +15,8 @@ static void Main(string[] args)
             KnapsackProblem();
             Console.WriteLine("======= TravelingSalesmanProblem ========");
             TravelingSalesman();
+            Console.WriteLine("========= SchedulingProblem =============");
+            SchedulingProblem();
         }
 
         private static void ChooseSmallestProblem()
@@ -58,49 +60,125 @@ private static void KnapsackProblem()
             // The knapsack implementation aims to provide efficient states for reversible search (only DFS), as well as for non-reversible search
             var knapsack = new Knapsack(profits, weights, capacity);
-            var resultBS1 = Maximize.Start(knapsack).BeamSearch(10, state => state.Bound.Value, 2);
-            Console.WriteLine($"BeamSearch(10) {resultBS1.BestQuality} {resultBS1.VisitedNodes} ({(resultBS1.VisitedNodes / resultBS1.Elapsed.TotalSeconds):F2} nodes/sec)");
-
-            var resultBS10 = Maximize.Start(knapsack).BeamSearch(100, state => state.Bound.Value, 2);
-            Console.WriteLine($"BeamSearch(100) {resultBS10.BestQuality} {resultBS10.VisitedNodes} ({(resultBS10.VisitedNodes / resultBS10.Elapsed.TotalSeconds):F2} nodes/sec)");
-
-            var resultRS1 = Maximize.Start(knapsack).RakeSearch(10);
-            Console.WriteLine($"RakeSearch(10) {resultRS1.BestQuality} {resultRS1.VisitedNodes} ({(resultRS1.VisitedNodes / resultRS1.Elapsed.TotalSeconds):F2} nodes/sec)");
-
-            var resultRS10 = Maximize.Start(knapsack).RakeSearch(100);
-            Console.WriteLine($"RakeSearch(100) {resultRS10.BestQuality} {resultRS10.VisitedNodes} ({(resultRS10.VisitedNodes / resultRS10.Elapsed.TotalSeconds):F2} nodes/sec)");
+            var resultBS10 = Maximize.Start(knapsack).BeamSearch(10, state => -state.Bound.Value);
+            Console.WriteLine($"{"BeamSearch(10)",55} {resultBS10.BestQuality,12} {resultBS10.VisitedNodes,6} ({(resultBS10.VisitedNodes / resultBS10.Elapsed.TotalSeconds),12:F2} nodes/sec)");
+            var resultParBS1 = Maximize.Start(knapsack).ParallelBeamSearch(10, state => -state.Bound.Value);
+            Console.WriteLine($"{"ParallelBeamSearch(10)",55} {resultParBS1.BestQuality,12} {resultParBS1.VisitedNodes,6} ({(resultParBS1.VisitedNodes / resultParBS1.Elapsed.TotalSeconds),12:F2} nodes/sec)");
+            var resultBS100 = Maximize.Start(knapsack).BeamSearch(100, state => -state.Bound.Value);
+            Console.WriteLine($"{"BeamSearch(100)",55} {resultBS100.BestQuality,12} {resultBS100.VisitedNodes,6} ({(resultBS100.VisitedNodes / resultBS100.Elapsed.TotalSeconds),12:F2} nodes/sec)");
+            var resultParBS10 = Maximize.Start(knapsack).ParallelBeamSearch(100, state => -state.Bound.Value);
+            Console.WriteLine($"{"ParallelBeamSearch(100)",55} {resultParBS10.BestQuality,12} {resultParBS10.VisitedNodes,6} ({(resultParBS10.VisitedNodes / resultParBS10.Elapsed.TotalSeconds),12:F2} nodes/sec)");
+
+            var resultMonoBS1 = Maximize.Start(knapsack).MonotonicBeamSearch(beamWidth: 1, rank: ksp => -ksp.Bound.Value);
+            Console.WriteLine($"{"MonoBeam(1)",55} {resultMonoBS1.BestQuality,12} {resultMonoBS1.VisitedNodes,6} ({(resultMonoBS1.VisitedNodes / resultMonoBS1.Elapsed.TotalSeconds),12:F2} nodes/sec)");
+            var resultMonoBS2 = Maximize.Start(knapsack).MonotonicBeamSearch(beamWidth: 2, rank: ksp => -ksp.Bound.Value);
+            Console.WriteLine($"{"MonoBeam(2)",55} {resultMonoBS2.BestQuality,12} {resultMonoBS2.VisitedNodes,6} ({(resultMonoBS2.VisitedNodes / resultMonoBS2.Elapsed.TotalSeconds),12:F2} nodes/sec)");
+            var resultMonoBS5 = Maximize.Start(knapsack).MonotonicBeamSearch(beamWidth: 5, rank: ksp => -ksp.Bound.Value);
+            Console.WriteLine($"{"MonoBeam(5)",55} {resultMonoBS5.BestQuality,12} {resultMonoBS5.VisitedNodes,6} ({(resultMonoBS5.VisitedNodes / resultMonoBS5.Elapsed.TotalSeconds),12:F2} nodes/sec)");
+            var resultMonoBS10 = Maximize.Start(knapsack).MonotonicBeamSearch(beamWidth: 10, rank: ksp => -ksp.Bound.Value);
+            Console.WriteLine($"{"MonoBeam(10)",55} {resultMonoBS10.BestQuality,12} {resultMonoBS10.VisitedNodes,6} ({(resultMonoBS10.VisitedNodes / resultMonoBS10.Elapsed.TotalSeconds),12:F2} nodes/sec)");
-            var resultRBS1 = Maximize.Start(knapsack).RakeAndBeamSearch(10, 10, state => state.Bound.Value, 2);
-            Console.WriteLine($"RakeAndBeamSearch(10,10) {resultRBS1.BestQuality} {resultRBS1.VisitedNodes} ({(resultRBS1.VisitedNodes / resultRBS1.Elapsed.TotalSeconds):F2} nodes/sec)");
+            var resultRS10 = Maximize.Start(knapsack).RakeSearch(10);
+            Console.WriteLine($"{"RakeSearch(10)",55} {resultRS10.BestQuality,12} {resultRS10.VisitedNodes,6} ({(resultRS10.VisitedNodes / resultRS10.Elapsed.TotalSeconds),12:F2} nodes/sec)");
+            var resultParRS10 = Maximize.Start(knapsack).ParallelRakeSearch(10);
+            Console.WriteLine($"{"ParallelRakeSearch(10)",55} {resultParRS10.BestQuality,12} {resultParRS10.VisitedNodes,6} ({(resultParRS10.VisitedNodes / resultParRS10.Elapsed.TotalSeconds),12:F2} nodes/sec)");
+            var resultRS100 = Maximize.Start(knapsack).RakeSearch(100);
+            Console.WriteLine($"{"RakeSearch(100)",55} {resultRS100.BestQuality,12} {resultRS100.VisitedNodes,6} ({(resultRS100.VisitedNodes / resultRS100.Elapsed.TotalSeconds),12:F2} nodes/sec)");
+            var resultParRS100 = Maximize.Start(knapsack).ParallelRakeSearch(100);
+            Console.WriteLine($"{"ParallelRakeSearch(100)",55} {resultParRS100.BestQuality,12} {resultParRS100.VisitedNodes,6} ({(resultParRS100.VisitedNodes / resultParRS100.Elapsed.TotalSeconds),12:F2} nodes/sec)");
-            var resultRBS10 =
Maximize.Start(knapsack).RakeAndBeamSearch(100, 100, state => state.Bound.Value, 2); - Console.WriteLine($"RakeAndBeamSearch(100,100) {resultRBS10.BestQuality} {resultRBS10.VisitedNodes} ({(resultRBS10.VisitedNodes / resultRBS10.Elapsed.TotalSeconds):F2} nodes/sec)"); + var resultRBS1010 = Maximize.Start(knapsack).RakeAndBeamSearch(10, 10, state => -state.Bound.Value); + Console.WriteLine($"{"RakeAndBeamSearch(10,10)",55} {resultRBS1010.BestQuality,12} {resultRBS1010.VisitedNodes,6} ({(resultRBS1010.VisitedNodes / resultRBS1010.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParRBS1010 = Maximize.Start(knapsack).ParallelRakeAndBeamSearch(10, 10, state => -state.Bound.Value); + Console.WriteLine($"{"ParallelRakeAndBeamSearch(10,10)",55} {resultParRBS1010.BestQuality,12} {resultParRBS1010.VisitedNodes,6} ({(resultParRBS1010.VisitedNodes / resultParRBS1010.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultRBS100100 = Maximize.Start(knapsack).RakeAndBeamSearch(100, 100, state => -state.Bound.Value); + Console.WriteLine($"{"RakeAndBeamSearch(100,100)",55} {resultRBS100100.BestQuality,12} {resultRBS100100.VisitedNodes,6} ({(resultRBS100100.VisitedNodes / resultRBS100100.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParRBS100100 = Maximize.Start(knapsack).ParallelRakeAndBeamSearch(100, 100, state => -state.Bound.Value); + Console.WriteLine($"{"ParallelRakeAndBeamSearch(100,100)",55} {resultParRBS100100.BestQuality,12} {resultParRBS100100.VisitedNodes,6} ({(resultParRBS100100.VisitedNodes / resultParRBS100100.Elapsed.TotalSeconds),12:F2} nodes/sec)"); var resultPM = Maximize.Start(knapsack).PilotMethod(); - Console.WriteLine($"Pilot Method {resultPM.BestQuality} {resultPM.VisitedNodes} ({(resultPM.VisitedNodes / resultPM.Elapsed.TotalSeconds):F2} nodes/sec)"); - - var resultNaiveLD = Maximize.Start(knapsack).NaiveLDSearch(3); - Console.WriteLine($"NaiveLDSearch(3) {resultNaiveLD.BestQuality} {resultNaiveLD.VisitedNodes} ({(resultNaiveLD.VisitedNodes / resultNaiveLD.Elapsed.TotalSeconds):F2} nodes/sec)"); + Console.WriteLine($"{"Pilot Method",55} {resultPM.BestQuality,12} {resultPM.VisitedNodes,6} ({(resultPM.VisitedNodes / resultPM.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParPM = Maximize.Start(knapsack).ParallelPilotMethod(); + Console.WriteLine($"{"Parallel Pilot Method",55} {resultParPM.BestQuality,12} {resultParPM.VisitedNodes,6} ({(resultParPM.VisitedNodes / resultParPM.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultPMBS10 = Maximize.Start(knapsack).PilotMethod(beamWidth: 10, rank: ksp => -ksp.Bound.Value, filterWidth: int.MaxValue); + Console.WriteLine($"{"Pilot Method with Beam Search(10)",55} {resultPMBS10.BestQuality,12} {resultPMBS10.VisitedNodes,6} ({(resultPMBS10.VisitedNodes / resultPMBS10.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParPMBS10 = Maximize.Start(knapsack).ParallelPilotMethod(beamWidth: 10, rank: ksp => -ksp.Bound.Value, filterWidth: int.MaxValue); + Console.WriteLine($"{"Parallel Pilot Method with Beam Search(10)",55} {resultParPMBS10.BestQuality,12} {resultParPMBS10.VisitedNodes,6} ({(resultParPMBS10.VisitedNodes / resultParPMBS10.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultNaiveLD = Maximize.Start(knapsack).NaiveLDSearch(3); + Console.WriteLine($"{"NaiveLDSearch(3)",55} {resultNaiveLD.BestQuality,12} {resultNaiveLD.VisitedNodes,6} ({(resultNaiveLD.VisitedNodes / resultNaiveLD.Elapsed.TotalSeconds),12:F2} nodes/sec)"); var resultAnytimeLD = Maximize.Start(knapsack).AnytimeLDSearch(3); - 
Console.WriteLine($"AnytimeLDSearch(3) {resultAnytimeLD.BestQuality} {resultAnytimeLD.VisitedNodes} ({(resultAnytimeLD.VisitedNodes / resultAnytimeLD.Elapsed.TotalSeconds):F2} nodes/sec)"); + Console.WriteLine($"{"AnytimeLDSearch(3)",55} {resultAnytimeLD.BestQuality,12} {resultAnytimeLD.VisitedNodes,6} ({(resultAnytimeLD.VisitedNodes / resultAnytimeLD.Elapsed.TotalSeconds),12:F2} nodes/sec)"); var resultDFS1 = Maximize.Start(knapsack).DepthFirst(); - Console.WriteLine($"DFSearch reversible {resultDFS1.BestQuality} {resultDFS1.VisitedNodes} ({(resultDFS1.VisitedNodes / resultDFS1.Elapsed.TotalSeconds):F2} nodes/sec)"); + Console.WriteLine($"{"DFSearch reversible",55} {resultDFS1.BestQuality,12} {resultDFS1.VisitedNodes,6} ({(resultDFS1.VisitedNodes / resultDFS1.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParDFS1 = Maximize.Start(knapsack).ParallelDepthFirst(); + Console.WriteLine($"{"Parallel DFSearch reversible",55} {resultParDFS1.BestQuality,12} {resultParDFS1.VisitedNodes,6} ({(resultParDFS1.VisitedNodes / resultParDFS1.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultBFS = Maximize.Start(knapsack).BreadthFirst(); + Console.WriteLine($"{"BFSearch reversible",55} {resultBFS.BestQuality,12} {resultBFS.VisitedNodes,6} ({(resultBFS.VisitedNodes / resultBFS.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParBFS = Maximize.Start(knapsack).ParallelBreadthFirst(); + Console.WriteLine($"{"Parallel BFSearch reversible",55} {resultParBFS.BestQuality,12} {resultParBFS.VisitedNodes,6} ({(resultParBFS.VisitedNodes / resultParBFS.Elapsed.TotalSeconds),12:F2} nodes/sec)"); var knapsackNoUndo = new KnapsackNoUndo(profits, weights, capacity); + + var resultNoUndoBS10 = Maximize.Start(knapsackNoUndo).BeamSearch(10, state => -state.Bound.Value); + Console.WriteLine($"{"BeamSearch(10) non-reversible",55} {resultNoUndoBS10.BestQuality,12} {resultNoUndoBS10.VisitedNodes,6} ({(resultNoUndoBS10.VisitedNodes / resultNoUndoBS10.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParNoUndoBS10 = Maximize.Start(knapsackNoUndo).ParallelBeamSearch(10, state => -state.Bound.Value); + Console.WriteLine($"{"Parallel BeamSearch(10) non-reversible",55} {resultParNoUndoBS10.BestQuality,12} {resultParNoUndoBS10.VisitedNodes,6} ({(resultParNoUndoBS10.VisitedNodes / resultParNoUndoBS10.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultNoUndoBS100 = Maximize.Start(knapsackNoUndo).BeamSearch(100, state => -state.Bound.Value); + Console.WriteLine($"{"BeamSearch(100) non-reversible",55} {resultNoUndoBS100.BestQuality,12} {resultNoUndoBS100.VisitedNodes,6} ({(resultNoUndoBS100.VisitedNodes / resultNoUndoBS100.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParNoUndoBS100 = Maximize.Start(knapsackNoUndo).ParallelBeamSearch(100, state => -state.Bound.Value); + Console.WriteLine($"{"Parallel BeamSearch(100) non-reversible",55} {resultParNoUndoBS100.BestQuality,12} {resultParNoUndoBS100.VisitedNodes,6} ({(resultParNoUndoBS100.VisitedNodes / resultParNoUndoBS100.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + + var resultParMonoBS1 = Maximize.Start(knapsack).MonotonicBeamSearch(beamWidth: 1, rank: ksp => -ksp.Bound.Value); + Console.WriteLine($"{"MonoBeam(1)",55} {resultParMonoBS1.BestQuality,12} {resultParMonoBS1.VisitedNodes,6} ({(resultParMonoBS1.VisitedNodes / resultParMonoBS1.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParMonoBS2 = Maximize.Start(knapsack).MonotonicBeamSearch(beamWidth: 2, rank: ksp => -ksp.Bound.Value); + Console.WriteLine($"{"MonoBeam(2)",55} 
{resultParMonoBS2.BestQuality,12} {resultParMonoBS2.VisitedNodes,6} ({(resultParMonoBS2.VisitedNodes / resultParMonoBS2.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParMonoBS5 = Maximize.Start(knapsack).MonotonicBeamSearch(beamWidth: 5, rank: ksp => -ksp.Bound.Value); + Console.WriteLine($"{"MonoBeam(5)",55} {resultParMonoBS5.BestQuality,12} {resultParMonoBS5.VisitedNodes,6} ({(resultParMonoBS5.VisitedNodes / resultParMonoBS5.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParMonoBS10 = Maximize.Start(knapsack).MonotonicBeamSearch(beamWidth: 10, rank: ksp => -ksp.Bound.Value); + Console.WriteLine($"{"MonoBeam(10)",55} {resultParMonoBS10.BestQuality,12} {resultParMonoBS10.VisitedNodes,6} ({(resultParMonoBS10.VisitedNodes / resultParMonoBS10.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + + var resultNoUndoRS10 = Maximize.Start(knapsackNoUndo).RakeSearch(10); + Console.WriteLine($"{"RakeSearch(10) non-reversible",55} {resultNoUndoRS10.BestQuality,12} {resultNoUndoRS10.VisitedNodes,6} ({(resultNoUndoRS10.VisitedNodes / resultNoUndoRS10.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParNoUndoRS10 = Maximize.Start(knapsackNoUndo).ParallelRakeSearch(10); + Console.WriteLine($"{"Parallel RakeSearch(10) non-reversible",55} {resultParNoUndoRS10.BestQuality,12} {resultParNoUndoRS10.VisitedNodes,6} ({(resultParNoUndoRS10.VisitedNodes / resultParNoUndoRS10.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultNoUndoRS100 = Maximize.Start(knapsackNoUndo).RakeSearch(100); + Console.WriteLine($"{"RakeSearch(100) non-reversible",55} {resultNoUndoRS100.BestQuality,12} {resultNoUndoRS100.VisitedNodes,6} ({(resultNoUndoRS100.VisitedNodes / resultNoUndoRS100.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParNoUndoRS100 = Maximize.Start(knapsackNoUndo).ParallelRakeSearch(100); + Console.WriteLine($"{"Parallel RakeSearch(100) non-reversible",55} {resultParNoUndoRS100.BestQuality,12} {resultParNoUndoRS100.VisitedNodes,6} ({(resultParNoUndoRS100.VisitedNodes / resultParNoUndoRS100.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + + var resultNoUndoRBS1010 = Maximize.Start(knapsackNoUndo).RakeAndBeamSearch(10, 10, state => -state.Bound.Value); + Console.WriteLine($"{"RakeAndBeamSearch(10,10) non-reversible",55} {resultNoUndoRBS1010.BestQuality,12} {resultNoUndoRBS1010.VisitedNodes,6} ({(resultNoUndoRBS1010.VisitedNodes / resultNoUndoRBS1010.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParNoUndoRBS1010 = Maximize.Start(knapsackNoUndo).ParallelRakeAndBeamSearch(10, 10, state => -state.Bound.Value); + Console.WriteLine($"{"Parallel RakeAndBeamSearch(10,10) non-reversible",55} {resultParNoUndoRBS1010.BestQuality,12} {resultParNoUndoRBS1010.VisitedNodes,6} ({(resultParNoUndoRBS1010.VisitedNodes / resultParNoUndoRBS1010.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultNoUndoRBS100100 = Maximize.Start(knapsackNoUndo).RakeAndBeamSearch(100, 100, state => -state.Bound.Value); + Console.WriteLine($"{"RakeAndBeamSearch(100,100) non-reversible",55} {resultNoUndoRBS100100.BestQuality,12} {resultNoUndoRBS100100.VisitedNodes,6} ({(resultNoUndoRBS100100.VisitedNodes / resultNoUndoRBS100100.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParNoUndoRBS100100 = Maximize.Start(knapsackNoUndo).ParallelRakeAndBeamSearch(100, 100, state => -state.Bound.Value); + Console.WriteLine($"{"Parallel RakeAndBeamSearch(100,100) non-reversible",55} {resultParNoUndoRBS100100.BestQuality,12} {resultParNoUndoRBS100100.VisitedNodes,6} ({(resultParNoUndoRBS100100.VisitedNodes / 
resultParNoUndoRBS100100.Elapsed.TotalSeconds),12:F2} nodes/sec)"); - var resultMonoBS1 = Maximize.Start(knapsackNoUndo).MonotonicBeamSearch(beamWidth: 1, rank: ksp => -ksp.Bound.Value); - Console.WriteLine($"MonoBeam(1) {resultMonoBS1.BestQuality} {resultMonoBS1.VisitedNodes} ({(resultMonoBS1.VisitedNodes / resultMonoBS1.Elapsed.TotalSeconds):F2} nodes/sec)"); - var resultMonoBS2 = Maximize.Start(knapsackNoUndo).MonotonicBeamSearch(beamWidth: 2, rank: ksp => -ksp.Bound.Value); - Console.WriteLine($"MonoBeam(2) {resultMonoBS2.BestQuality} {resultMonoBS2.VisitedNodes} ({(resultMonoBS2.VisitedNodes / resultMonoBS2.Elapsed.TotalSeconds):F2} nodes/sec)"); - var resultMonoBS5 = Maximize.Start(knapsackNoUndo).MonotonicBeamSearch(beamWidth: 5, rank: ksp => -ksp.Bound.Value); - Console.WriteLine($"MonoBeam(5) {resultMonoBS5.BestQuality} {resultMonoBS5.VisitedNodes} ({(resultMonoBS5.VisitedNodes / resultMonoBS5.Elapsed.TotalSeconds):F2} nodes/sec)"); - var resultMonoBS10 = Maximize.Start(knapsackNoUndo).MonotonicBeamSearch(beamWidth: 10, rank: ksp => -ksp.Bound.Value); - Console.WriteLine($"MonoBeam(10) {resultMonoBS10.BestQuality} {resultMonoBS10.VisitedNodes} ({(resultMonoBS10.VisitedNodes / resultMonoBS10.Elapsed.TotalSeconds):F2} nodes/sec)"); - - var resultDFS2 = Maximize.Start(knapsackNoUndo).DepthFirst(); - Console.WriteLine($"DFSearch non-reversible {resultDFS2.BestQuality} {resultDFS2.VisitedNodes} ({(resultDFS2.VisitedNodes / resultDFS2.Elapsed.TotalSeconds):F2} nodes/sec)"); + var resultNoUndoPM = Maximize.Start(knapsackNoUndo).PilotMethod(); + Console.WriteLine($"{"PilotMethod non-reversible",55} {resultNoUndoPM.BestQuality,12} {resultNoUndoPM.VisitedNodes,6} ({(resultNoUndoPM.VisitedNodes / resultNoUndoPM.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParNoUndoPM = Maximize.Start(knapsackNoUndo).ParallelPilotMethod(); + Console.WriteLine($"{"Parallel PilotMethod non-reversible",55} {resultParNoUndoPM.BestQuality,12} {resultParNoUndoPM.VisitedNodes,6} ({(resultParNoUndoPM.VisitedNodes / resultParNoUndoPM.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultNoUndoPMBS10 = Maximize.Start(knapsackNoUndo).PilotMethod(10, state => -state.Bound.Value, filterWidth: int.MaxValue); + Console.WriteLine($"{"PilotMethod with BeamSearch(10) non-reversible",55} {resultNoUndoPMBS10.BestQuality,12} {resultNoUndoPMBS10.VisitedNodes,6} ({(resultNoUndoPMBS10.VisitedNodes / resultNoUndoPMBS10.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParNoUndoPMBS10 = Maximize.Start(knapsackNoUndo).ParallelPilotMethod(10, state => -state.Bound.Value, filterWidth: int.MaxValue); + Console.WriteLine($"{"Parallel PilotMethod with BeamSearch(10) non-reversible",55} {resultParNoUndoPMBS10.BestQuality,12} {resultParNoUndoPMBS10.VisitedNodes,6} ({(resultParNoUndoPMBS10.VisitedNodes / resultParNoUndoPMBS10.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + + var resultNoUndoLD = Maximize.Start(knapsackNoUndo).NaiveLDSearch(3); + Console.WriteLine($"{"NaiveLDSearch(3) non-reversible",55} {resultNoUndoLD.BestQuality,12} {resultNoUndoLD.VisitedNodes,6} ({(resultNoUndoLD.VisitedNodes / resultNoUndoLD.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultNoUndoALD = Maximize.Start(knapsackNoUndo).AnytimeLDSearch(3); + Console.WriteLine($"{"AnytimeLDSearch(3) non-reversible",55} {resultNoUndoALD.BestQuality,12} {resultNoUndoALD.VisitedNodes,6} ({(resultNoUndoALD.VisitedNodes / resultNoUndoALD.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + + var resultNoUndoDFS = Maximize.Start(knapsackNoUndo).DepthFirst(); + 
Console.WriteLine($"{"DepthFirst non-reversible",55} {resultNoUndoDFS.BestQuality,12} {resultNoUndoDFS.VisitedNodes,6} ({(resultNoUndoDFS.VisitedNodes / resultNoUndoDFS.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParNoUndoDFS = Maximize.Start(knapsackNoUndo).ParallelDepthFirst(); + Console.WriteLine($"{"Parallel DepthFirst non-reversible",55} {resultParNoUndoDFS.BestQuality,12} {resultParNoUndoDFS.VisitedNodes,6} ({(resultParNoUndoDFS.VisitedNodes / resultParNoUndoDFS.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultNoUndoBFS = Maximize.Start(knapsackNoUndo).BreadthFirst(); + Console.WriteLine($"{"BreadthFirst non-reversible",55} {resultNoUndoBFS.BestQuality,12} {resultNoUndoBFS.VisitedNodes,6} ({(resultNoUndoBFS.VisitedNodes / resultNoUndoBFS.Elapsed.TotalSeconds),12:F2} nodes/sec)"); + var resultParNoUndoBFS = Maximize.Start(knapsackNoUndo).ParallelBreadthFirst(); + Console.WriteLine($"{"Parallel BreadthFirst non-reversible",55} {resultParNoUndoBFS.BestQuality,12} {resultParNoUndoBFS.VisitedNodes,6} ({(resultParNoUndoBFS.VisitedNodes / resultParNoUndoBFS.Elapsed.TotalSeconds),12:F2} nodes/sec)"); } private static void TravelingSalesman() @@ -136,6 +214,85 @@ private static void TravelingSalesman() var resultAnytimeLD = Minimize.Start(tsp).AnytimeLDSearch(3); Console.WriteLine($"AnytimeLDSearch(3) {resultAnytimeLD.BestQuality} {resultAnytimeLD.VisitedNodes} ({(resultAnytimeLD.VisitedNodes / resultAnytimeLD.Elapsed.TotalSeconds):F2} nodes/sec)"); + + var resultParallelDF = Minimize.Start(tsp) + .WithRuntimeLimit(TimeSpan.FromSeconds(5)) + .ParallelDepthFirst(filterWidth: 2); + Console.WriteLine($"ParallelDepthFirst(16) {resultParallelDF.BestQuality} {resultParallelDF.VisitedNodes} ({(resultParallelDF.VisitedNodes / resultParallelDF.Elapsed.TotalSeconds):F2} nodes/sec)"); + + var resultParallelBS100 = Minimize.Start(tsp) + .ParallelBeamSearch(100, state => state.Bound.Value, 3); + Console.WriteLine($"ParallelBeamSearch(100,3) {resultParallelBS100.BestQuality} {resultParallelBS100.VisitedNodes} ({(resultParallelBS100.VisitedNodes / resultParallelBS100.Elapsed.TotalSeconds):F2} nodes/sec)"); + + var resultParallelPilot = Minimize.Start(tsp) + .WithRuntimeLimit(TimeSpan.FromSeconds(5)) + .ParallelPilotMethod(filterWidth: 2); + Console.WriteLine($"ParallelPilotMethod(16) {resultParallelPilot.BestQuality} {resultParallelPilot.VisitedNodes} ({(resultParallelPilot.VisitedNodes / resultParallelPilot.Elapsed.TotalSeconds):F2} nodes/sec)"); + } + + private static void SchedulingProblem() + { + // generate sample data for jobs and machines + var random = new Random(13); + var now = DateTime.Now.Date.AddHours(7.5); + var jobs = Enumerable.Range(0, 10).Select(x => new Job + { + Id = x + 1, + Name = $"Job {x+1}", + ReadyDate = now.AddMinutes(random.Next(0, 100)), + Duration = TimeSpan.FromMinutes(random.Next(10, 20)) + }).ToList(); + var machines = Enumerable.Range(0, 3).Select(x => new Machine + { + Id = x + 1, + Name = $"Machine {x+1}", + Start = now + }).ToList(); + + var state = new SchedulingProblem(SampleApp.SchedulingProblem.ObjectiveType.Makespan, jobs, machines); + var control = Minimize.Start(state).DepthFirst(); + var result = control.BestQualityState; + Console.WriteLine("===== Makespan ====="); + Console.WriteLine($"Objective: {result.Quality}"); + Console.WriteLine($"Nodes: {control.VisitedNodes}"); + foreach (var group in result.Choices.GroupBy(c => c.Machine)) + { + Console.WriteLine(group.Key.Name); + foreach (var c in group.OrderBy(c => c.ScheduledDate)) + { + 
Console.WriteLine($" {c.Job.Name} {c.Job.ReadyDate} {c.Job.Duration} {c.ScheduledDate}"); + } + } + + state = new SchedulingProblem(SampleApp.SchedulingProblem.ObjectiveType.Delay, jobs, machines); + control = Minimize.Start(state).DepthFirst(); + result = control.BestQualityState; + Console.WriteLine("===== Job Delay ====="); + Console.WriteLine($"Objective: {result.Quality}"); + Console.WriteLine($"Nodes: {control.VisitedNodes}"); + foreach (var group in result.Choices.GroupBy(c => c.Machine)) + { + Console.WriteLine(group.Key.Name); + foreach (var c in group.OrderBy(c => c.ScheduledDate)) + { + Console.WriteLine($" {c.Job.Name} {c.Job.ReadyDate} {c.Job.Duration} {c.ScheduledDate}"); + } + } + + state = new SchedulingProblem(SampleApp.SchedulingProblem.ObjectiveType.TotalCompletionTime, jobs, machines); + control = Minimize.Start(state).DepthFirst(); + result = control.BestQualityState; + Console.WriteLine("===== Total Completion Time ====="); + Console.WriteLine($"Objective: {result.Quality}"); + Console.WriteLine($"Nodes: {control.VisitedNodes}"); + foreach (var group in result.Choices.GroupBy(c => c.Machine)) + { + Console.WriteLine(group.Key.Name); + foreach (var c in group.OrderBy(c => c.ScheduledDate)) + { + Console.WriteLine($" {c.Job.Name} {c.Job.ReadyDate} {c.Job.Duration} {c.ScheduledDate}"); + } + } } } } diff --git a/src/SampleApp/SchedulingProblem.cs b/src/SampleApp/SchedulingProblem.cs new file mode 100644 index 0000000..058ae28 --- /dev/null +++ b/src/SampleApp/SchedulingProblem.cs @@ -0,0 +1,174 @@ +using System; +using System.Collections.Generic; +using System.Linq; +using TreesearchLib; + +namespace SampleApp +{ + public class SchedulingProblem : IMutableState + { + public enum ObjectiveType { Makespan, Delay, TotalCompletionTime } + public ObjectiveType Objective { get; } + + public bool IsTerminal => remainingJobs.Count == 0; + + public Minimize Bound => + Objective switch + { + ObjectiveType.Makespan => new Minimize((int)Math.Max(makespan.TotalSeconds, (maxJobEndDate - baseDate).TotalSeconds)), + ObjectiveType.Delay => new Minimize((int)delay.TotalSeconds), + ObjectiveType.TotalCompletionTime => new Minimize((int)(totalCompletionTime + totalCompletionBound).TotalSeconds), + _ => throw new NotImplementedException(), + }; + + public Minimize? Quality => IsTerminal ? 
(
+                Objective switch
+                {
+                    ObjectiveType.Makespan => new Minimize((int)makespan.TotalSeconds),
+                    ObjectiveType.Delay => new Minimize((int)delay.TotalSeconds),
+                    ObjectiveType.TotalCompletionTime => new Minimize((int)totalCompletionTime.TotalSeconds),
+                    _ => throw new NotImplementedException(),
+                }
+            ) : (Minimize?)null;
+
+        private DateTime[] nextAvailableTime;
+        public IReadOnlyList NextAvailableTime => nextAvailableTime;
+
+        public IEnumerable Choices => choices.Reverse();
+
+        private DateTime baseDate;
+        private HashSet remainingJobs;
+        private List machines;
+        private Stack choices;
+        private TimeSpan makespan, delay, totalCompletionTime;
+        public TimeSpan Makespan => makespan;
+        public TimeSpan Delay => delay;
+
+        private DateTime maxJobEndDate;
+        private TimeSpan totalCompletionBound;
+
+        public SchedulingProblem(ObjectiveType objective, List jobs, List machines)
+        {
+            Objective = objective;
+            this.machines = machines;
+            remainingJobs = new HashSet(jobs);
+            choices = new Stack();
+            makespan = TimeSpan.Zero;
+            delay = TimeSpan.Zero;
+            totalCompletionTime = TimeSpan.Zero;
+            nextAvailableTime = new DateTime[machines.Max(m => m.Id) + 1];
+            baseDate = DateTime.MaxValue;
+            foreach (var m in machines)
+            {
+                nextAvailableTime[m.Id] = m.Start;
+                if (m.Start < baseDate)
+                {
+                    baseDate = m.Start;
+                }
+            }
+            maxJobEndDate = jobs.Max(x => x.ReadyDate + x.Duration);
+            totalCompletionBound = TimeSpan.FromSeconds(jobs.Sum(j => ((j.ReadyDate + j.Duration) - baseDate).TotalSeconds));
+        }
+        public SchedulingProblem(SchedulingProblem other)
+        {
+            this.Objective = other.Objective;
+            this.baseDate = other.baseDate;
+            this.machines = other.machines;
+            this.remainingJobs = new HashSet(other.remainingJobs);
+            this.choices = new Stack(other.choices.Reverse());
+            this.makespan = other.makespan;
+            this.delay = other.delay;
+            this.totalCompletionTime = other.totalCompletionTime;
+            this.nextAvailableTime = (DateTime[])other.nextAvailableTime.Clone();
+            this.maxJobEndDate = other.maxJobEndDate;
+            this.totalCompletionBound = other.totalCompletionBound;
+        }
+
+        public void Apply(ScheduleChoice choice)
+        {
+            remainingJobs.Remove(choice.Job);
+            choices.Push(choice);
+            var endDate = choice.ScheduledDate + choice.Job.Duration;
+            nextAvailableTime[choice.Machine.Id] = endDate;
+            if (endDate - baseDate > makespan)
+            {
+                makespan = endDate - baseDate;
+            }
+            delay += (choice.ScheduledDate - choice.Job.ReadyDate);
+            totalCompletionTime += (endDate - baseDate);
+            totalCompletionBound -= (choice.Job.ReadyDate + choice.Job.Duration) - baseDate;
+        }
+
+        public void UndoLast()
+        {
+            var choice = choices.Pop();
+            remainingJobs.Add(choice.Job);
+            nextAvailableTime[choice.Machine.Id] = choice.PreviousAvailableTime;
+            makespan = choice.PreviousMakespan;
+            delay -= (choice.ScheduledDate - choice.Job.ReadyDate);
+            totalCompletionTime -= (choice.ScheduledDate + choice.Job.Duration) - baseDate;
+            totalCompletionBound += (choice.Job.ReadyDate + choice.Job.Duration) - baseDate;
+        }
+
+        public object Clone()
+        {
+            return new SchedulingProblem(this);
+        }
+
+        public IEnumerable GetChoices()
+        {
+            foreach (var job in remainingJobs.OrderBy(x => x.ReadyDate))
+            {
+                foreach (var machine in machines.OrderBy(x => nextAvailableTime[x.Id]))
+                {
+                    yield return new ScheduleChoice(this, job, machine);
+                }
+            }
+        }
+    }
+
+    public class Machine
+    {
+        public int Id { get; internal set; }
+        public string Name { get; internal set; }
+        public DateTime Start { get; internal set; }
+        public override string ToString() => Name;
+    }
+
+
+    public class Job
+    {
+
public int Id { get; internal set; } + public string Name { get; internal set; } + public DateTime ReadyDate { get; internal set; } + public TimeSpan Duration { get; internal set; } + public override string ToString() => Name; + } + + public class ScheduleChoice + { + public Job Job { get; } + public Machine Machine { get; } + + public DateTime ScheduledDate { get; } + public DateTime PreviousAvailableTime { get; } + public TimeSpan PreviousMakespan { get; } + + public ScheduleChoice(SchedulingProblem state, Job job, Machine machine) + { + Job = job; + Machine = machine; + var availTime = state.NextAvailableTime[machine.Id]; + PreviousAvailableTime = availTime; + if (job.ReadyDate < availTime) + { + ScheduledDate = availTime; + } else + { + ScheduledDate = job.ReadyDate; + } + PreviousMakespan = state.Makespan; + } + } + +} \ No newline at end of file diff --git a/src/TreesearchLib/Algorithms.cs b/src/TreesearchLib/Algorithms.cs index 98de123..1b2ebfe 100644 --- a/src/TreesearchLib/Algorithms.cs +++ b/src/TreesearchLib/Algorithms.cs @@ -164,17 +164,35 @@ public static void DoDepthSearch(ISearchControl control, T state, in where Q : struct, IQuality { if (filterWidth <= 0) throw new ArgumentException($"{filterWidth} needs to be greater or equal than 0", nameof(filterWidth)); + var searchState = new LIFOCollection<(int depth, T state)>((0, state)); - while (searchState.TryGetNext(out var c) && !control.ShouldStop()) + DoDepthSearch(control, searchState, filterWidth, depthLimit); + } + + /// + /// This method performs a depth-first search given the collection of states and depths. + /// + /// The runtime control and tracking + /// The search state collection which captures the visited nodes so far + /// Limits the number of branches per node + /// Limits the depth of the search + /// The state type + /// The type of quality (Minimize, Maximize) + /// + public static void DoDepthSearch(ISearchControl control, LIFOCollection<(int depth, T state)> searchState, int filterWidth, int depthLimit) + where T : IState + where Q : struct, IQuality + { + while (!control.ShouldStop() && searchState.TryGetNext(out var c)) { var (depth, currentState) = c; - foreach (var next in currentState.GetBranches().Reverse().Take(filterWidth)) + foreach (var next in currentState.GetBranches().Take(filterWidth).Reverse()) { if (control.VisitNode(next) == VisitResult.Discard) { continue; } - if (depth + 1 < depthLimit) // do not branch further + if (depth + 1 < depthLimit) // do not branch further otherwise { searchState.Store((depth + 1, next)); } @@ -192,8 +210,8 @@ public static void DoDepthSearch(ISearchControl control, T state, in /// Expands up to a depth with at least this many nodes. 
/// The state type /// The type of quality (Minimize, Maximize) - /// The remaining nodes (e.g., if aborted by depthLimit or nodesReached) - public static IStateCollection DoBreadthSearch(ISearchControl control, T state, int filterWidth, int depthLimit, int nodesReached) + /// The depth and remaining nodes (e.g., if aborted by depthLimit or nodesReached) + public static (int depth, IStateCollection states) DoBreadthSearch(ISearchControl control, T state, int filterWidth, int depthLimit, int nodesReached) where T : IState where Q : struct, IQuality { @@ -202,9 +220,29 @@ public static IStateCollection DoBreadthSearch(ISearchControl con if (nodesReached <= 0) throw new ArgumentException($"{nodesReached} needs to be breater or equal than 0", nameof(nodesReached)); var searchState = new BiLevelFIFOCollection(state); var depth = 0; + depth = DoBreadthSearch(control, searchState, depth, filterWidth, depthLimit, nodesReached); + return (depth, searchState.ToSingleLevel()); + } + + /// + /// This method performs a breadth-first search starting from . + /// + /// The runtime control and tracking + /// The initial state from which the search should start + /// The current depth + /// Limits the number of branches per node + /// Limits the depth up to which the breadth-first search expands. + /// Expands up to a depth with at least this many nodes. + /// The state type + /// The type of quality (Minimize, Maximize) + /// The depth reached + public static int DoBreadthSearch(ISearchControl control, BiLevelFIFOCollection searchState, int depth, int filterWidth, int depthLimit, int nodesReached) + where T : IState + where Q : struct, IQuality + { while (searchState.GetQueueNodes > 0 && depth < depthLimit && searchState.GetQueueNodes < nodesReached && !control.ShouldStop()) { - while (searchState.TryFromGetQueue(out var currentState) && !control.ShouldStop()) + while (!control.ShouldStop() && searchState.TryFromGetQueue(out var currentState)) { foreach (var next in currentState.GetBranches().Take(filterWidth)) { @@ -216,12 +254,15 @@ public static IStateCollection DoBreadthSearch(ISearchControl con searchState.ToPutQueue(next); } } - depth++; - searchState.SwapQueues(); + if (searchState.GetQueueNodes == 0) + { + depth++; + searchState.SwapQueues(); + } } - return searchState.ToSingleLevel(); + return depth; } - + /// /// This method performs a depth-first search starting from . /// @@ -249,7 +290,32 @@ public static int DoDepthSearch(ISearchControl control, T state, searchState.Store(entry); } - while (searchState.TryGetNext(out var next) && !control.ShouldStop()) + stateDepth = DoDepthSearch(control, state, searchState, stateDepth, filterWidth, depthLimit); + return stateDepth; + } + + /// + /// This method performs a depth-first search starting from . + /// + /// + /// Because the state is mutable, the is mutated. You should + /// consider to clone the state if a change is undesired. 
+ /// + /// The runtime control and tracking + /// The initial state from which the search should start + /// The search state + /// The current depth of the state + /// Limits the number of branches per node + /// Limits the depth of the search + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The depth of the state in number of moves applied from the initial given state (assumed depth 0) + public static int DoDepthSearch(ISearchControl control, T state, LIFOCollection<(int, C)> searchState, int stateDepth, int filterWidth, int depthLimit) + where T : class, IMutableState + where Q : struct, IQuality + { + while (!control.ShouldStop() && searchState.TryGetNext(out var next)) { var (depth, choice) = next; while (depth < stateDepth) @@ -275,6 +341,7 @@ public static int DoDepthSearch(ISearchControl control, T state, searchState.Store(entry); } } + return stateDepth; } @@ -289,8 +356,8 @@ public static int DoDepthSearch(ISearchControl control, T state, /// The state type /// The choice type /// The type of quality (Minimize, Maximize) - /// The remaining nodes (e.g., if aborted by depthLimit or nodesReached) - public static IStateCollection DoBreadthSearch(ISearchControl control, T state, int filterWidth, int depthLimit, int nodesReached) + /// The depth and remaining nodes (e.g., if aborted by depthLimit or nodesReached) + public static (int depth, IStateCollection states) DoBreadthSearch(ISearchControl control, T state, int filterWidth, int depthLimit, int nodesReached) where T : class, IMutableState where Q : struct, IQuality { @@ -298,10 +365,30 @@ public static IStateCollection DoBreadthSearch(ISearchControl if (depthLimit <= 0) throw new ArgumentException($"{depthLimit} needs to be breater or equal than 0", nameof(depthLimit)); if (nodesReached <= 0) throw new ArgumentException($"{nodesReached} needs to be breater or equal than 0", nameof(nodesReached)); var searchState = new BiLevelFIFOCollection(state); - var depth = 0; + var depth = DoBreadthSearch(control, searchState, 0, filterWidth, depthLimit, nodesReached); + return (depth, searchState.ToSingleLevel()); + } + + /// + /// This method performs a breadth-first search starting from . + /// + /// The runtime control and tracking + /// The initial collection from which the search should start/continue + /// The current depth of the search + /// Limits the number of branches per node + /// Limits the depth up to which the breadth-first search expands. + /// Expands up to a depth with at least this many nodes. 
+ /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The depth reached + public static int DoBreadthSearch(ISearchControl control, BiLevelFIFOCollection searchState, int depth, int filterWidth, int depthLimit, int nodesReached) + where T : class, IMutableState + where Q : struct, IQuality + { while (searchState.GetQueueNodes > 0 && depth < depthLimit && searchState.GetQueueNodes < nodesReached && !control.ShouldStop()) { - while (searchState.TryFromGetQueue(out var currentState) && !control.ShouldStop()) + while (!control.ShouldStop() && searchState.TryFromGetQueue(out var currentState)) { foreach (var next in currentState.GetChoices().Take(filterWidth)) { @@ -316,10 +403,14 @@ public static IStateCollection DoBreadthSearch(ISearchControl searchState.ToPutQueue(clone); } } - depth++; - searchState.SwapQueues(); + if (searchState.GetQueueNodes == 0) + { + depth++; + searchState.SwapQueues(); + } } - return searchState.ToSingleLevel(); + + return depth; } } diff --git a/src/TreesearchLib/ConcurrentAlgorithms.cs b/src/TreesearchLib/ConcurrentAlgorithms.cs new file mode 100644 index 0000000..1c83a78 --- /dev/null +++ b/src/TreesearchLib/ConcurrentAlgorithms.cs @@ -0,0 +1,669 @@ +using System; +using System.Collections.Concurrent; +using System.Collections.Generic; +using System.Linq; +using System.Threading; +using System.Threading.Tasks; + +namespace TreesearchLib +{ + public static class ConcurrentAlgorithms + { + /// + /// Performs an exhaustive depth-first search in a new Task. The search can be confined, by + /// choosing only the first branches. + /// + /// The runtime control and tracking + /// Limits the number of branches per node + /// Limits the depth of the search + /// Limits the number of parallel tasks + /// The state type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static Task> ParallelDepthFirstAsync(this SearchControl control, + int filterWidth = int.MaxValue, int depthLimit = int.MaxValue, + int maxDegreeOfParallelism = -1) + where T : IState + where Q : struct, IQuality + { + return Task.Run(() => ParallelDepthFirst(control, filterWidth: filterWidth, depthLimit: depthLimit, maxDegreeOfParallelism: maxDegreeOfParallelism)); + } + + /// + /// Performs an exhaustive depth-first search. The search can be confined, by choosing only + /// the first branches. + /// + /// The runtime control and tracking + /// Limits the number of branches per node + /// Limits the depth of the search + /// Limits the number of parallel tasks + /// The state type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static SearchControl ParallelDepthFirst(this SearchControl control, + int filterWidth = int.MaxValue, int depthLimit = int.MaxValue, + int maxDegreeOfParallelism = -1) + where T : IState + where Q : struct, IQuality + { + DoParallelDepthSearch(control, control.InitialState, filterWidth, depthLimit); + return control; + } + + /// + /// Performs an exhaustive breadth-first search in a new Task. The search can be confined, by + /// choosing only the first branches. 
+ /// + /// The runtime control and tracking + /// Limits the number of branches per node + /// Limits the number of parallel tasks + /// The state type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static Task> ParallelBreadthFirstAsync(this SearchControl control, + int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : IState + where Q : struct, IQuality + { + return Task.Run(() => ParallelBreadthFirst(control, filterWidth, maxDegreeOfParallelism)); + } + + /// + /// Performs an exhaustive breadth-first search. The search can be confined, by choosing only + /// the first branches. + /// + /// The runtime control and tracking + /// Limits the number of branches per node + /// Limits the number of parallel tasks + /// The state type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static SearchControl ParallelBreadthFirst(this SearchControl control, + int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : IState + where Q : struct, IQuality + { + DoParallelBreadthSearch(control, control.InitialState, filterWidth, int.MaxValue, int.MaxValue, maxDegreeOfParallelism); + return control; + } + + /// + /// Performs an exhaustive depth-first search in a new Task. The search can be confined, by + /// choosing only the first branches. + /// + /// The runtime control and tracking + /// Limits the number of branches per node + /// Limits the depth of the search + /// Limits the number of parallel tasks + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static Task> ParallelDepthFirstAsync(this SearchControl control, + int filterWidth = int.MaxValue, int depthLimit = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : class, IMutableState + where Q : struct, IQuality + { + return Task.Run(() => ParallelDepthFirst(control, filterWidth, depthLimit, maxDegreeOfParallelism)); + } + + /// + /// Performs an exhaustive depth-first search. The search can be confined, by choosing only + /// the first branches. + /// + /// The runtime control and tracking + /// Limits the number of branches per node + /// Limits the depth of the search + /// Limits the number of parallel tasks + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static SearchControl ParallelDepthFirst(this SearchControl control, + int filterWidth = int.MaxValue, int depthLimit = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : class, IMutableState + where Q : struct, IQuality + { + var state = (T)control.InitialState.Clone(); + DoParallelDepthSearch(control, state, filterWidth, depthLimit, maxDegreeOfParallelism); + return control; + } + + /// + /// Performs an exhaustive breadth-first search in a new Task. The search can be confined, by + /// choosing only the first branches. 
+ /// + /// The runtime control and tracking + /// Limits the number of branches per node + /// Limits the number of parallel tasks + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static Task> ParallelBreadthFirstAsync(this SearchControl control, + int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : class, IMutableState + where Q : struct, IQuality + { + return Task.Run(() => ParallelBreadthFirst(control, filterWidth, maxDegreeOfParallelism)); + } + + /// + /// Performs an exhaustive breadth-first search. The search can be confined, by choosing only + /// the first branches. + /// + /// The runtime control and tracking + /// Limits the number of branches per node + /// Limits the number of parallel tasks + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static SearchControl ParallelBreadthFirst(this SearchControl control, + int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : class, IMutableState + where Q : struct, IQuality + { + var state = (T)control.InitialState.Clone(); + DoParallelBreadthSearch(control, state, filterWidth, int.MaxValue, int.MaxValue, maxDegreeOfParallelism); + return control; + } + + /// + /// This method performs a sequential breadth-first search starting from + /// until at least nodes have been reached + /// and then performs depth-first search in parallel. + /// + /// The runtime control and tracking + /// The initial state from which the search should start + /// Limits the number of branches per node + /// Limits the depth of the search + /// Limits the number of parallel tasks + /// The state type + /// The type of quality (Minimize, Maximize) + /// + public static void DoParallelDepthSearch(ISearchControl control, T state, + int filterWidth = int.MaxValue, int depthLimit = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : IState + where Q : struct, IQuality + { + if (filterWidth <= 0) throw new ArgumentException($"{filterWidth} needs to be greater or equal than 0", nameof(filterWidth)); + if (depthLimit <= 0) throw new ArgumentException($"{depthLimit} needs to be breater or equal than 0", nameof(depthLimit)); + if (maxDegreeOfParallelism == 0 || maxDegreeOfParallelism < -1) throw new ArgumentException($"{maxDegreeOfParallelism} needs to be -1 or greater or equal than 0", nameof(maxDegreeOfParallelism)); + + var (depth, states) = Algorithms.DoBreadthSearch(control, state, filterWidth, depthLimit, maxDegreeOfParallelism < 0 ? 
Environment.ProcessorCount : maxDegreeOfParallelism); + var remainingNodes = control.NodeLimit - control.VisitedNodes; + if (depth >= depthLimit || states.Nodes == 0 || remainingNodes <= 0) + { + return; + } + + var locker = new object(); + Parallel.ForEach(states.AsEnumerable(), + new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism }, s => + { + var localDepth = depth; + var searchState = new LIFOCollection<(int, T)>((localDepth, s)); + while (!control.ShouldStop()) + { + var localControl = SearchControl.Start(s) + .WithRuntimeLimit(TimeSpan.FromSeconds(1)) + .WithCancellationToken(control.Cancellation); + lock (locker) + { + localControl = localControl.WithNodeLimit(remainingNodes); + if (control.BestQuality.HasValue) + { + localControl = localControl.WithUpperBound(control.BestQuality.Value); + } + } + Algorithms.DoDepthSearch(localControl, searchState, filterWidth, depthLimit - localDepth); + localControl.Finish(); + + lock (locker) + { + control.Merge(localControl); + remainingNodes = control.NodeLimit - control.VisitedNodes; + } + if (searchState.Nodes == 0) + { + break; + } + } + } + ); + } + + /// + /// This method performs a sequential breadth-first search starting from + /// until at least nodes have been reached + /// and then performs breadth-first search in parallel. + /// + /// The runtime control and tracking + /// The initial state from which the search should start + /// Limits the number of branches per node + /// Limits the depth up to which the breadth-first search expands. + /// Expands up to a depth with at least this many nodes. + /// Limits the number of parallel tasks + /// The state type + /// The type of quality (Minimize, Maximize) + /// The remaining nodes (e.g., if aborted by depthLimit or nodesReached) + public static IStateCollection DoParallelBreadthSearch(ISearchControl control, T state, + int filterWidth, int depthLimit, int nodesReached, int maxDegreeOfParallelism) + where T : IState + where Q : struct, IQuality + { + if (filterWidth <= 0) throw new ArgumentException($"{filterWidth} needs to be greater or equal than 0", nameof(filterWidth)); + if (depthLimit <= 0) throw new ArgumentException($"{depthLimit} needs to be breater or equal than 0", nameof(depthLimit)); + if (nodesReached <= 0) throw new ArgumentException($"{nodesReached} needs to be breater or equal than 0", nameof(nodesReached)); + if (maxDegreeOfParallelism == 0 || maxDegreeOfParallelism < -1) throw new ArgumentException($"{maxDegreeOfParallelism} needs to be -1 or greater or equal than 0", nameof(maxDegreeOfParallelism)); + + var (depth, states) = Algorithms.DoBreadthSearch(control, state, filterWidth, depthLimit, Math.Min(nodesReached, maxDegreeOfParallelism < 0 ? 
Environment.ProcessorCount : maxDegreeOfParallelism)); + var remainingnodes = control.NodeLimit - control.VisitedNodes; + if (depth >= depthLimit || states.Nodes == 0 || states.Nodes >= nodesReached || remainingnodes <= 0) + { + return states; + } + + var queue = new ConcurrentQueue>(); + var locker = new object(); + Parallel.ForEach(states.AsEnumerable(), + new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism }, s => + { + var localDepth = depth; + var searchState = new BiLevelFIFOCollection(s); + while (!control.ShouldStop()) + { + var localControl = SearchControl.Start(s) + .WithRuntimeLimit(TimeSpan.FromSeconds(1)) + .WithCancellationToken(control.Cancellation); + lock (locker) + { + localControl = localControl.WithNodeLimit(remainingnodes); + if (control.BestQuality.HasValue) + { + localControl = localControl.WithUpperBound(control.BestQuality.Value); + } + } + + localDepth = Algorithms.DoBreadthSearch(localControl, searchState, localDepth, filterWidth, depthLimit - localDepth, nodesReached); + localControl.Finish(); + + lock (locker) + { + control.Merge(localControl); + remainingnodes = control.NodeLimit - control.VisitedNodes; + } + + if (searchState.GetQueueNodes + searchState.PutQueueNodes == 0 + || localDepth >= depthLimit + || searchState.GetQueueNodes >= nodesReached) + { + break; + } + } + queue.Enqueue(searchState.ToSingleLevel().AsEnumerable().ToList()); + } + ); + return new FIFOCollection(queue.SelectMany(x => x)); + } + + /// + /// This method performs a sequential breadth-first search starting from + /// until at least nodes have been reached + /// and then performs depth-first search in parallel. + /// + /// + /// Because the state is mutable, the is mutated. You should + /// consider to clone the state if a change is undesired. + /// + /// The runtime control and tracking + /// The initial state from which the search should start + /// Limits the number of branches per node + /// Limits the depth of the search + /// Limits the number of parallel tasks + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + public static void DoParallelDepthSearch(ISearchControl control, T state, + int filterWidth = int.MaxValue, int depthLimit = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : class, IMutableState + where Q : struct, IQuality + { + if (filterWidth <= 0) throw new ArgumentException($"{filterWidth} needs to be greater or equal than 0", nameof(filterWidth)); + if (depthLimit <= 0) throw new ArgumentException($"{depthLimit} needs to be breater or equal than 0", nameof(depthLimit)); + if (maxDegreeOfParallelism == 0 || maxDegreeOfParallelism < -1) throw new ArgumentException($"{maxDegreeOfParallelism} needs to be -1 or greater or equal than 0", nameof(maxDegreeOfParallelism)); + + var (depth, states) = Algorithms.DoBreadthSearch(control, (T)state.Clone(), filterWidth, depthLimit, maxDegreeOfParallelism < 0 ? 
Environment.ProcessorCount : maxDegreeOfParallelism); + var remainingnodes = control.NodeLimit - control.VisitedNodes; + if (depth >= depthLimit || states.Nodes == 0 || remainingnodes <= 0) + { + return; + } + + var locker = new object(); + Parallel.ForEach(states.AsEnumerable(), + new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism }, + s => + { + var localDepth = depth; + var searchState = new LIFOCollection<(int, C)>(); + foreach (var choice in s.GetChoices().Take(filterWidth).Reverse()) + { + searchState.Store((localDepth, choice)); + } + + while (!control.ShouldStop()) + { + var localControl = SearchControl.Start(s) + .WithRuntimeLimit(TimeSpan.FromSeconds(1)) + .WithCancellationToken(control.Cancellation); + lock (locker) + { + localControl = localControl.WithNodeLimit(remainingnodes); + if (control.BestQuality.HasValue) + { + localControl = localControl.WithUpperBound(control.BestQuality.Value); + } + } + + localDepth = Algorithms.DoDepthSearch(localControl, s, searchState, localDepth, filterWidth, depthLimit - localDepth); + localControl.Finish(); + + lock (locker) + { + control.Merge(localControl); + remainingnodes = control.NodeLimit - control.VisitedNodes; + } + if (searchState.Nodes == 0) + { + break; + } + } + } + ); + } + + /// + /// This method performs a breadth-first search starting from . + /// + /// The runtime control and tracking + /// The initial state from which the search should start + /// Limits the number of branches per node + /// Limits the depth up to which the breadth-first search expands. + /// Expands up to a depth with at least this many nodes. + /// Limits the number of parallel tasks + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The remaining nodes (e.g., if aborted by depthLimit or nodesReached) + public static IStateCollection DoParallelBreadthSearch(ISearchControl control, T state, + int filterWidth, int depthLimit, int nodesReached, int maxDegreeOfParallelism) + where T : class, IMutableState + where Q : struct, IQuality + { + if (filterWidth <= 0) throw new ArgumentException($"{filterWidth} needs to be greater or equal than 0", nameof(filterWidth)); + if (depthLimit <= 0) throw new ArgumentException($"{depthLimit} needs to be breater or equal than 0", nameof(depthLimit)); + if (nodesReached <= 0) throw new ArgumentException($"{nodesReached} needs to be breater or equal than 0", nameof(nodesReached)); + if (maxDegreeOfParallelism == 0 || maxDegreeOfParallelism < -1) throw new ArgumentException($"{maxDegreeOfParallelism} needs to be -1 or greater or equal than 0", nameof(maxDegreeOfParallelism)); + + var (depth, states) = Algorithms.DoBreadthSearch(control, (T)state.Clone(), filterWidth, depthLimit, Math.Min(nodesReached, maxDegreeOfParallelism < 0 ? 
Environment.ProcessorCount : maxDegreeOfParallelism)); + var remainingnodes = control.NodeLimit - control.VisitedNodes; + if (depth >= depthLimit || states.Nodes == 0 || states.Nodes >= nodesReached || remainingnodes <= 0) + { + return states; + } + + var queue = new ConcurrentQueue>(); + var locker = new object(); + Parallel.ForEach(states.AsEnumerable(), + new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism }, s => + { + var localDepth = depth; + var searchState = new BiLevelFIFOCollection(s); + while (!control.ShouldStop()) + { + var localControl = SearchControl.Start(s) + .WithRuntimeLimit(TimeSpan.FromSeconds(1)) + .WithCancellationToken(control.Cancellation); + lock (locker) + { + localControl = localControl.WithNodeLimit(remainingnodes); + if (control.BestQuality.HasValue) + { + localControl = localControl.WithUpperBound(control.BestQuality.Value); + } + } + + localDepth = Algorithms.DoBreadthSearch(localControl, searchState, localDepth, filterWidth, depthLimit - localDepth, nodesReached); + localControl.Finish(); + + lock (locker) + { + control.Merge(localControl); + remainingnodes = control.NodeLimit - control.VisitedNodes; + } + + if (searchState.GetQueueNodes + searchState.PutQueueNodes == 0 + || localDepth >= depthLimit + || searchState.GetQueueNodes >= nodesReached) + { + break; + } + } + queue.Enqueue(searchState.ToSingleLevel().AsEnumerable().ToList()); + } + ); + return new FIFOCollection(queue.SelectMany(x => x)); + } + } + + public static class ConcurrentAlgorithmStateExtensions + { + /// + /// Performs a depth-first search with the given options in a new Task. + /// + /// The state to start from + /// Limits the number of branches per node + /// Limits the depth of the search + /// The maximum runtime + /// A callback when an improving solution has been found + /// A limit on the number of nodes to visit + /// The cancellation token + /// The state type + /// The type of quality (Minimize, Maximize) + /// The resulting best state that has been found (or none) + public static Task ParallelDepthFirstAsync(this IState state, int filterWidth = int.MaxValue, + int depthLimit = int.MaxValue, int maxDegreeOfParallelism = -1, + TimeSpan? runtime = null, QualityCallback callback = null, long? nodeLimit = null, + CancellationToken token = default(CancellationToken)) + where TState : IState + where TQuality : struct, IQuality + { + return Task.Run(() => ParallelDepthFirst((TState)state, filterWidth, depthLimit, maxDegreeOfParallelism, runtime, callback, nodeLimit, token)); + } + /// + /// Performs a depth-first search with the given options. + /// + /// The state to start from + /// Limits the number of branches per node + /// Limits the depth of the search + /// Limits the number of parallel tasks + /// The maximum runtime + /// A callback when an improving solution has been found + /// A limit on the number of nodes to visit + /// The cancellation token + /// The state type + /// The type of quality (Minimize, Maximize) + /// The resulting best state that has been found (or none) + public static TState ParallelDepthFirst(this IState state, int filterWidth = int.MaxValue, + int depthLimit = int.MaxValue, int maxDegreeOfParallelism = -1, + TimeSpan? runtime = null, QualityCallback callback = null, long? 
nodeLimit = null, + CancellationToken token = default(CancellationToken)) + where TState : IState + where TQuality : struct, IQuality + { + var control = SearchControl.Start((TState)state).WithCancellationToken(token); + if (runtime.HasValue) control = control.WithRuntimeLimit(runtime.Value); + if (callback != null) control = control.WithImprovementCallback(callback); + if (nodeLimit.HasValue) control = control.WithNodeLimit(nodeLimit.Value); + return control.ParallelDepthFirst(filterWidth, depthLimit, maxDegreeOfParallelism).BestQualityState; + } + + /// + /// Performs a depth-first search with the given options in a new Task. + /// + /// The state to start from + /// Limits the number of branches per node + /// Limits the depth of the search + /// Limits the number of parallel tasks + /// The maximum runtime + /// A callback when an improving solution has been found + /// A limit on the number of nodes to visit + /// The cancellation token + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The resulting best state that has been found (or none) + public static Task ParallelDepthFirstAsync(this IMutableState state, int filterWidth = int.MaxValue, + int depthLimit = int.MaxValue, int maxDegreeOfParallelism = -1, + TimeSpan? runtime = null, QualityCallback callback = null, long? nodeLimit = null, + CancellationToken token = default(CancellationToken)) + where TState : class, IMutableState + where TQuality : struct, IQuality + { + return Task.Run(() => ParallelDepthFirst((TState)state, filterWidth, depthLimit, maxDegreeOfParallelism, runtime, callback, nodeLimit, token)); + } + + /// + /// Performs a depth-first search with the given options. + /// + /// The state to start from + /// Limits the number of branches per node + /// Limits the depth of the search + /// Limits the number of parallel tasks + /// The maximum runtime + /// A callback when an improving solution has been found + /// A limit on the number of nodes to visit + /// The cancellation token + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The resulting best state that has been found (or none) + public static TState ParallelDepthFirst(this IMutableState state, int filterWidth = int.MaxValue, + int depthLimit = int.MaxValue, int maxDegreeOfParallelism = -1, + TimeSpan? runtime = null, QualityCallback callback = null, long? nodeLimit = null, + CancellationToken token = default(CancellationToken)) + where TState : class, IMutableState + where TQuality : struct, IQuality + { + var control = SearchControl.Start((TState)state).WithCancellationToken(token); + if (runtime.HasValue) control = control.WithRuntimeLimit(runtime.Value); + if (callback != null) control = control.WithImprovementCallback(callback); + if (nodeLimit.HasValue) control = control.WithNodeLimit(nodeLimit.Value); + return control.ParallelDepthFirst(filterWidth, depthLimit, maxDegreeOfParallelism).BestQualityState; + } + + /// + /// Performs a breadth-first search with the given options in a new Task. 
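// Illustrative usage sketch: calling the ParallelDepthFirst wrapper on a
// user-defined state. T stands for any type implementing the library's
// IState<T, Minimize> interface; the concrete limits are arbitrary example values.
public static class ParallelDepthFirstSketch
{
    public static T Solve<T>(T initial) where T : IState<T, Minimize>
    {
        return initial.ParallelDepthFirst<T, Minimize>(
            filterWidth: 2,                                    // at most two branches per node
            maxDegreeOfParallelism: Environment.ProcessorCount,
            runtime: TimeSpan.FromSeconds(10),                 // wall-clock budget
            nodeLimit: 1_000_000);                             // safety cap on visited nodes
    }
}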
+ /// + /// The state to start from + /// Limits the number of branches per node + /// Limits the number of parallel tasks + /// The maximum runtime + /// A callback when an improving solution has been found + /// A limit on the number of nodes to visit + /// The cancellation token + /// The state type + /// The type of quality (Minimize, Maximize) + /// The resulting best state that has been found (or none) + public static Task ParallelBreadthFirstAsync(this IState state, + int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1, + TimeSpan? runtime = null, QualityCallback callback = null, long? nodeLimit = null, + CancellationToken token = default(CancellationToken)) + where TState : IState + where TQuality : struct, IQuality + { + return Task.Run(() => ParallelBreadthFirst((TState)state, filterWidth, maxDegreeOfParallelism, runtime, callback, nodeLimit, token)); + } + + /// + /// Performs a breadth-first search with the given options. + /// + /// The state to start from + /// Limits the number of branches per node + /// Limits the number of parallel tasks + /// The maximum runtime + /// A callback when an improving solution has been found + /// A limit on the number of nodes to visit + /// The cancellation token + /// The state type + /// The type of quality (Minimize, Maximize) + /// The resulting best state that has been found (or none) + public static TState ParallelBreadthFirst(this IState state, + int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1, + TimeSpan? runtime = null, QualityCallback callback = null, long? nodeLimit = null, + CancellationToken token = default(CancellationToken)) + where TState : IState + where TQuality : struct, IQuality + { + var control = SearchControl.Start((TState)state).WithCancellationToken(token); + if (runtime.HasValue) control = control.WithRuntimeLimit(runtime.Value); + if (callback != null) control = control.WithImprovementCallback(callback); + if (nodeLimit.HasValue) control = control.WithNodeLimit(nodeLimit.Value); + return control.ParallelBreadthFirst(filterWidth, maxDegreeOfParallelism).BestQualityState; + } + + /// + /// Performs a breadth-first search with the given options in a new Task. + /// + /// The state to start from + /// Limits the number of branches per node + /// Limits the number of parallel tasks + /// The maximum runtime + /// A callback when an improving solution has been found + /// A limit on the number of nodes to visit + /// The cancellation token + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The resulting best state that has been found (or none) + public static Task ParallelBreadthFirstAsync(this IMutableState state, + int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1, + TimeSpan? runtime = null, QualityCallback callback = null, long? nodeLimit = null, + CancellationToken token = default(CancellationToken)) + where TState : class, IMutableState + where TQuality : struct, IQuality + { + return Task.Run(() => ParallelBreadthFirst((TState)state, filterWidth, maxDegreeOfParallelism, runtime, callback, nodeLimit, token)); + } + + /// + /// Performs a breadth-first search with the given options. 
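// Illustrative usage sketch: the breadth-first counterpart of the wrapper above.
// Same assumption as before (T : IState<T, Minimize>); breadth-first search keeps
// whole layers in memory, so a node limit is used to bound the expansion.
public static class ParallelBreadthFirstSketch
{
    public static T Solve<T>(T initial, CancellationToken token) where T : IState<T, Minimize>
    {
        return initial.ParallelBreadthFirst<T, Minimize>(
            maxDegreeOfParallelism: 4,    // limit to four worker tasks
            nodeLimit: 500_000,           // bound the layer expansion
            token: token);                // allow external cancellation
    }
}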
+ /// + /// The state to start from + /// Limits the number of branches per node + /// Limits the number of parallel tasks + /// The maximum runtime + /// A callback when an improving solution has been found + /// A limit on the number of nodes to visit + /// The cancellation token + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The resulting best state that has been found (or none) + public static TState ParallelBreadthFirst(this IMutableState state, + int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1, + TimeSpan? runtime = null, QualityCallback callback = null, long? nodeLimit = null, + CancellationToken token = default(CancellationToken)) + where TState : class, IMutableState + where TQuality : struct, IQuality + { + var control = SearchControl.Start((TState)state).WithCancellationToken(token); + if (runtime.HasValue) control = control.WithRuntimeLimit(runtime.Value); + if (callback != null) control = control.WithImprovementCallback(callback); + if (nodeLimit.HasValue) control = control.WithNodeLimit(nodeLimit.Value); + return control.ParallelBreadthFirst(filterWidth, maxDegreeOfParallelism).BestQualityState; + } + } +} \ No newline at end of file diff --git a/src/TreesearchLib/ConcurrentHeuristics.cs b/src/TreesearchLib/ConcurrentHeuristics.cs new file mode 100644 index 0000000..0dc7e53 --- /dev/null +++ b/src/TreesearchLib/ConcurrentHeuristics.cs @@ -0,0 +1,1036 @@ +using System; +using System.Collections.Generic; +using System.Linq; +using System.Threading; +using System.Threading.Tasks; + +namespace TreesearchLib +{ + public static class ConcurrentHeuristics { + /// + /// Beam search uses several parallel traces. When called with a rank function, all nodes of the next layer are gathered + /// and then sorted by the rank function (using a stable sort). + /// + /// This is the concurrent version + /// The runtime control and tracking + /// The maximum number of parallel traces + /// The rank function that determines the order of nodes (lower is better) + /// The maximum number of descendents per node + /// The maximum number of threads to use + /// The state type + /// The quality type + /// The runtime control and tracking instance after the search + public static Task> ParallelBeamSearchAsync(this SearchControl control, int beamWidth, Func rank, int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : IState + where Q : struct, IQuality + { + return Task.Run(() => ParallelBeamSearch(control, beamWidth, rank, filterWidth, maxDegreeOfParallelism)); + } + + /// + /// Beam search uses several parallel traces. When called with a rank function, all nodes of the next layer are gathered + /// and then sorted by the rank function (using a stable sort). 
+ /// + /// This is the concurrent version + /// The runtime control and tracking + /// The maximum number of parallel traces + /// The rank function that determines the order of nodes (lower is better) + /// The maximum number of descendents per node + /// The maximum number of threads to use + /// The state type + /// The quality type + /// The runtime control and tracking instance after the search + public static SearchControl ParallelBeamSearch(this SearchControl control, int beamWidth, Func rank, int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : IState + where Q : struct, IQuality + { + DoParallelBeamSearch(control, control.InitialState, beamWidth, rank, filterWidth, maxDegreeOfParallelism); + return control; + } + + /// + /// Beam search uses several parallel traces. When called with a rank function, all nodes of the next layer are gathered + /// and then sorted by the rank function (using a stable sort). + /// + /// This is the concurrent version + /// The runtime control and tracking + /// The maximum number of parallel traces + /// The rank function that determines the order of nodes (lower is better) + /// The maximum number of descendents per node + /// The maximum number of threads to use + /// The state type + /// The quality type + /// The runtime control and tracking instance after the search + public static Task> ParallelBeamSearchAsync(this SearchControl control, int beamWidth, Func rank, int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : class, IMutableState + where Q : struct, IQuality + { + return Task.Run(() => ParallelBeamSearch(control, beamWidth, rank, filterWidth, maxDegreeOfParallelism)); + } + + /// + /// Beam search uses several parallel traces. When called with a rank function, all nodes of the next layer are gathered + /// and then sorted by the rank function (using a stable sort). + /// + /// This is the concurrent version + /// The runtime control and tracking + /// The maximum number of parallel traces + /// The rank function that determines the order of nodes (lower is better) + /// The maximum number of descendents per node + /// The maximum number of threads to use + /// The state type + /// The quality type + /// The runtime control and tracking instance after the search + public static SearchControl ParallelBeamSearch(this SearchControl control, int beamWidth, Func rank, int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : class, IMutableState + where Q : struct, IQuality + { + DoParallelBeamSearch(control, control.InitialState, beamWidth, rank, filterWidth, maxDegreeOfParallelism); + return control; + } + /// + /// Beam search uses several parallel traces. When called with a rank function, all nodes of the next layer are gathered + /// and then sorted by the rank function (using a stable sort). 
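// Illustrative usage sketch: running the concurrent beam search through a
// SearchControl. T is any IState<T, Maximize>; the rank function must order
// states ascending (lower is better), so for maximization a bound is typically negated.
public static class ParallelBeamSearchSketch
{
    public static T Run<T>(T initial, Func<T, float> rank) where T : IState<T, Maximize>
    {
        var control = SearchControl<T, Maximize>.Start(initial)
            .WithRuntimeLimit(TimeSpan.FromSeconds(30));
        return control
            .ParallelBeamSearch(beamWidth: 100, rank: rank, maxDegreeOfParallelism: -1)
            .BestQualityState;
    }
}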
+ /// + /// This is the concurrent version + /// The runtime control and tracking + /// The state to start the search from + /// The maximum number of parallel traces + /// The rank function that determines the order of nodes (lower is better) + /// The maximum number of descendents per node + /// The maximum number of threads to use + /// The state type + /// The quality type + /// + public static void DoParallelBeamSearch(ISearchControl control, T state, int beamWidth, Func rank, int filterWidth, int maxDegreeOfParallelism) + where T : IState + where Q : struct, IQuality + { + if (beamWidth <= 0) throw new ArgumentException("A beam width of 0 or less is not possible"); + if (filterWidth <= 0) throw new ArgumentException($"{filterWidth} needs to be greater or equal than 1", nameof(filterWidth)); + if (rank == null) throw new ArgumentNullException(nameof(rank)); + if (filterWidth == 1 && beamWidth > 1) throw new ArgumentException($"{nameof(beamWidth)} cannot exceed 1 when {nameof(filterWidth)} equals 1."); + if (maxDegreeOfParallelism == 0 || maxDegreeOfParallelism < -1) throw new ArgumentException($"{maxDegreeOfParallelism} needs to be -1 or greater or equal than 0", nameof(maxDegreeOfParallelism)); + + var currentLayer = new List(); + currentLayer.Add(state); + while (!control.ShouldStop() && currentLayer.Count > 0) + { + var nextlayer = new List<(float rank, T state)>(); + + var reaminingTime = control.Runtime - control.Elapsed; + if (reaminingTime < TimeSpan.Zero) + { + break; + } + var remainingNodes = control.NodeLimit - control.VisitedNodes; + var locker = new object(); + Parallel.ForEach(currentLayer, new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism }, + (currentState) => + { + var localNextLayer = new Queue<(float rank, T state)>(); + var localControl = SearchControl.Start(currentState) + .WithCancellationToken(control.Cancellation) + .WithRuntimeLimit(reaminingTime); + lock (locker) + { + localControl = localControl.WithNodeLimit(remainingNodes); + if (control.BestQuality.HasValue) + { + // to discard certain nodes + localControl = localControl.WithUpperBound(control.BestQuality.Value); + } + } + foreach (var next in currentState.GetBranches().Take(filterWidth)) + { + if (localControl.VisitNode(next) == VisitResult.Discard) + { + continue; + } + + localNextLayer.Enqueue((rank(next), next)); + } + localControl.Finish(); + lock (locker) + { + control.Merge(localControl); + nextlayer.AddRange(localNextLayer); + remainingNodes = control.NodeLimit - control.VisitedNodes; + } + }); + + currentLayer.Clear(); + currentLayer.AddRange(nextlayer + .OrderBy(x => x.rank) + .Take(beamWidth) + .Select(x => x.state)); + } + } + + /// + /// Beam search uses several parallel traces. When called with a rank function, all nodes of the next layer are gathered + /// and then sorted by the rank function (using a stable sort). 
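// Small hypothetical helper (not part of the library) illustrating the rank
// convention used by the beam searches above: ranks are float-valued and sorted
// ascending, so "lower is better"; for maximization problems an estimate is negated.
public static class RankSketch
{
    public static Func<T, float> Ascending<T>(Func<T, double> estimate)
        => state => (float)estimate(state);

    public static Func<T, float> Descending<T>(Func<T, double> estimate)
        => state => -(float)estimate(state);
}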
+ /// + /// This is the concurrent version + /// The runtime control and tracking + /// The state to start the search from + /// The maximum number of parallel traces + /// The rank function that determines the order of nodes (lower is better) + /// The maximum number of descendents per node + /// The maximum number of threads to use + /// The state type + /// The quality type + /// + public static void DoParallelBeamSearch(ISearchControl control, T state, int beamWidth, Func rank, int filterWidth, int maxDegreeOfParallelism) + where T : class, IMutableState + where Q : struct, IQuality + { + if (beamWidth <= 0) throw new ArgumentException("A beam width of 0 or less is not possible"); + if (filterWidth <= 0) throw new ArgumentException($"{filterWidth} needs to be greater or equal than 1", nameof(filterWidth)); + if (rank == null) throw new ArgumentNullException(nameof(rank)); + if (filterWidth == 1 && beamWidth > 1) throw new ArgumentException($"{nameof(beamWidth)} cannot exceed 1 when {nameof(filterWidth)} equals 1."); + if (maxDegreeOfParallelism == 0 || maxDegreeOfParallelism < -1) throw new ArgumentException($"{maxDegreeOfParallelism} needs to be -1 or greater or equal than 0", nameof(maxDegreeOfParallelism)); + + var currentLayer = new List(beamWidth); + currentLayer.Add(state); + while (!control.ShouldStop() && currentLayer.Count > 0) + { + var nextlayer = new List<(float rank, T state)>(); + + var locker = new object(); + var remainingNodes = control.NodeLimit - control.VisitedNodes; + var remainingRuntime = control.Runtime - control.Elapsed; + Parallel.ForEach(currentLayer, new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism }, + (currentState) => + { + var localNextLayer = new Queue<(float rank, T state)>(); + var localControl = SearchControl.Start(currentState) + .WithCancellationToken(control.Cancellation) + .WithRuntimeLimit(remainingRuntime); + lock (locker) + { + localControl = localControl.WithNodeLimit(remainingNodes); + if (control.BestQuality.HasValue) + { + localControl = localControl.WithUpperBound(control.BestQuality.Value); + } + } + foreach (var choice in currentState.GetChoices().Take(filterWidth)) + { + var next = (T)currentState.Clone(); + next.Apply(choice); + + if (localControl.VisitNode(next) == VisitResult.Discard) + { + continue; + } + + localNextLayer.Enqueue((rank(next), next)); + } + localControl.Finish(); + lock (locker) + { + control.Merge(localControl); + nextlayer.AddRange(localNextLayer); + remainingNodes = control.NodeLimit - control.VisitedNodes; + } + }); + + currentLayer.Clear(); + currentLayer.AddRange(nextlayer + .OrderBy(x => x.rank) + .Take(beamWidth) + .Select(x => x.state)); + } + } + + /// + /// Rake search performs a breadth-first search until a level is reached with + /// nodes and then from each node a depth-first search by just taking the first branch (i.e., a greedy heuristic). 
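// Illustrative usage sketch: the reversible (IMutableState) beam search, here via
// the state extension defined further below in ParallelHeuristicStateExtensions.
// T and C stand for a user-defined state type and its choice type.
public static class ParallelBeamSearchMutableSketch
{
    public static T Run<T, C>(T initial, Func<T, float> rank)
        where T : class, IMutableState<T, C, Maximize>
    {
        return initial.ParallelBeamSearch<T, C, Maximize>(
            rank, beamWidth: 50, maxDegreeOfParallelism: Environment.ProcessorCount);
    }
}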
+ /// + /// The runtime control and tracking + /// The number of nodes to reach, before proceeding with a simple greedy heuristic + /// The maximum number of threads to use + /// The state type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static Task> ParallelRakeSearchAsync(this SearchControl control, int rakeWidth, int maxDegreeOfParallelism = -1) + where T : IState + where Q : struct, IQuality + { + return Task.Run(() => ParallelRakeSearch(control, rakeWidth, maxDegreeOfParallelism)); + } + + /// + /// Rake search performs a breadth-first search until a level is reached with + /// nodes and then from each node a depth-first search by just taking the first branch (i.e., a greedy heuristic). + /// + /// The runtime control and tracking + /// The number of nodes to reach, before proceeding with a simple greedy heuristic + /// The maximum number of threads to use + /// The state type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static SearchControl ParallelRakeSearch(this SearchControl control, int rakeWidth, int maxDegreeOfParallelism = -1) + where T : IState + where Q : struct, IQuality + { + if (rakeWidth <= 0) throw new ArgumentException($"{rakeWidth} needs to be greater or equal than 1", nameof(rakeWidth)); + if (maxDegreeOfParallelism == 0 || maxDegreeOfParallelism < -1) throw new ArgumentException($"{maxDegreeOfParallelism} needs to be -1 or greater or equal than 0", nameof(maxDegreeOfParallelism)); + + var rake = ConcurrentAlgorithms.DoParallelBreadthSearch(control, control.InitialState, int.MaxValue, int.MaxValue, rakeWidth, maxDegreeOfParallelism); + var remainingTime = control.Runtime - control.Elapsed; + var remainingNodes = control.NodeLimit - control.VisitedNodes; + if (control.ShouldStop() || remainingTime < TimeSpan.Zero || remainingNodes <= 0) + { + return control; + } + var locker = new object(); + Parallel.ForEach(rake.AsEnumerable(), new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism }, + next => + { + var localControl = SearchControl.Start(next) + .WithCancellationToken(control.Cancellation) + .WithRuntimeLimit(remainingTime); + lock (locker) + { + localControl = localControl.WithNodeLimit(remainingNodes); + if (control.BestQuality.HasValue) + { + localControl = localControl.WithUpperBound(control.BestQuality.Value); + } + } + Algorithms.DoDepthSearch(localControl, next, 1); + lock (locker) + { + control.Merge(localControl); + remainingNodes = control.NodeLimit - control.VisitedNodes; + } + }); + return control; + } + + /// + /// Rake search performs a breadth-first search until a level is reached with + /// nodes and then from each node a depth-first search by just taking the first branch (i.e., a greedy heuristic). 
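// Illustrative usage sketch: parallel rake search through a SearchControl, plus the
// rake + beam combination defined later in this class. rakeWidth is the number of
// nodes the initial breadth-first phase tries to reach; T is any IState<T, Minimize>.
public static class ParallelRakeSearchSketch
{
    public static T Rake<T>(T initial, int rakeWidth = 100) where T : IState<T, Minimize>
    {
        var control = SearchControl<T, Minimize>.Start(initial)
            .WithNodeLimit(1_000_000);
        // breadth-first until ~rakeWidth nodes, then one greedy depth-first dive per node
        return control.ParallelRakeSearch(rakeWidth, maxDegreeOfParallelism: -1)
            .BestQualityState;
    }

    public static T RakeAndBeam<T>(T initial, Func<T, float> rank) where T : IState<T, Minimize>
    {
        var control = SearchControl<T, Minimize>.Start(initial)
            .WithRuntimeLimit(TimeSpan.FromMinutes(1));
        // rake to ~100 nodes, then run a beam search of width 10 from each of them
        return control.ParallelRakeAndBeamSearch(rakeWidth: 100, beamWidth: 10, rank: rank)
            .BestQualityState;
    }
}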
+ /// + /// The runtime control and tracking + /// The number of nodes to reach, before proceeding with a simple greedy heuristic + /// The maximum number of threads to use + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static Task> ParallelRakeSearchAsync(this SearchControl control, int rakeWidth, int maxDegreeOfParallelism = -1) + where T : class, IMutableState + where Q : struct, IQuality + { + return Task.Run(() => ParallelRakeSearch(control, rakeWidth, maxDegreeOfParallelism)); + } + + /// + /// Rake search performs a breadth-first search until a level is reached with + /// nodes and then from each node a depth-first search by just taking the first branch (i.e., a greedy heuristic). + /// + /// The runtime control and tracking + /// The number of nodes to reach, before proceeding with a simple greedy heuristic + /// The maximum number of threads to use + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static SearchControl ParallelRakeSearch(this SearchControl control, int rakeWidth, int maxDegreeOfParallelism = -1) + where T : class, IMutableState + where Q : struct, IQuality + { + if (rakeWidth <= 0) throw new ArgumentException($"{rakeWidth} needs to be greater or equal than 1", nameof(rakeWidth)); + if (maxDegreeOfParallelism == 0 || maxDegreeOfParallelism < -1) throw new ArgumentException($"{maxDegreeOfParallelism} needs to be -1 or greater or equal than 0", nameof(maxDegreeOfParallelism)); + + var rake = ConcurrentAlgorithms.DoParallelBreadthSearch(control, (T)control.InitialState.Clone(), int.MaxValue, int.MaxValue, rakeWidth, maxDegreeOfParallelism); + var remainingTime = control.Runtime - control.Elapsed; + var remainingNodes = control.NodeLimit - control.VisitedNodes; + if (control.ShouldStop() || remainingTime < TimeSpan.Zero || remainingNodes <= 0) // the last two are just safety checks, control.ShouldStop() should terminate in these cases too + { + return control; + } + var locker = new object(); + Parallel.ForEach(rake.AsEnumerable(), new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism }, + next => + { + var localControl = SearchControl.Start(next) + .WithCancellationToken(control.Cancellation) + .WithRuntimeLimit(remainingTime); // each thread gets the same time + lock (locker) + { + localControl = localControl.WithNodeLimit(remainingNodes); + if (control.BestQuality.HasValue) + { + localControl = localControl.WithUpperBound(control.BestQuality.Value); + } + } + Algorithms.DoDepthSearch(localControl, next, filterWidth: 1); + lock (locker) + { + control.Merge(localControl); + remainingNodes = control.NodeLimit - control.VisitedNodes; + } + }); + return control; + } + + /// + /// Rake search performs a breadth-first search until a level is reached with + /// nodes and then from each node a beam search is performed. 
+ /// + /// The runtime control and tracking + /// The number of nodes to reach, before proceeding with a simple greedy heuristic + /// Used in the beam search to determine the number of beams + /// The ranking function used by the beam search (lower is better) + /// To limit the number of branches per node + /// The maximum number of threads to use + /// The state type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static Task> ParallelRakeAndBeamSearchAsync(this SearchControl control, int rakeWidth, int beamWidth, Func rank, int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : IState + where Q : struct, IQuality + { + return Task.Run(() => ParallelRakeAndBeamSearch(control, rakeWidth, beamWidth, rank, filterWidth, maxDegreeOfParallelism)); + } + + /// + /// Rake search performs a breadth-first search until a level is reached with + /// nodes and then from each node a beam search is performed. + /// + /// The runtime control and tracking + /// The number of nodes to reach, before proceeding with a simple greedy heuristic + /// Used in the beam search to determine the number of beams + /// The ranking function used by the beam search (lower is better) + /// To limit the number of branches per node + /// The maximum number of threads to use + /// The state type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static SearchControl ParallelRakeAndBeamSearch(this SearchControl control, int rakeWidth, int beamWidth, Func rank, int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : IState + where Q : struct, IQuality + { + if (rakeWidth <= 0) throw new ArgumentException($"{rakeWidth} needs to be greater or equal than 1", nameof(rakeWidth)); + if (beamWidth <= 0) throw new ArgumentException("A beam width of 0 or less is not possible"); + if (filterWidth <= 0) throw new ArgumentException("A filter width of 0 or less is not possible"); + if (maxDegreeOfParallelism == 0 || maxDegreeOfParallelism < -1) throw new ArgumentException($"{maxDegreeOfParallelism} needs to be -1 or greater or equal than 0", nameof(maxDegreeOfParallelism)); + + var rake = ConcurrentAlgorithms.DoParallelBreadthSearch(control, control.InitialState, int.MaxValue, int.MaxValue, rakeWidth, maxDegreeOfParallelism); + var remainingTime = control.Runtime - control.Elapsed; + var remainingNodes = control.NodeLimit - control.VisitedNodes; + if (control.ShouldStop() || remainingTime < TimeSpan.Zero || remainingNodes <= 0) // the last two are just safety checks, control.ShouldStop() should terminate in these cases too + { + return control; + } + var locker = new object(); + Parallel.ForEach(rake.AsEnumerable(), new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism }, + next => + { + var localControl = SearchControl.Start(next) + .WithCancellationToken(control.Cancellation) + .WithRuntimeLimit(remainingTime); // each thread gets the same time + lock (locker) + { + localControl = localControl.WithNodeLimit(remainingNodes); + if (control.BestQuality.HasValue) + { + localControl = localControl.WithUpperBound(control.BestQuality.Value); + } + } + Heuristics.DoBeamSearch(localControl, next, beamWidth, rank, filterWidth); + lock (locker) + { + control.Merge(localControl); + remainingNodes = control.NodeLimit - control.VisitedNodes; + } + }); + return control; + } + + /// + /// Rake search performs a breadth-first search until a level is reached with + /// nodes and then from each node a beam search 
is performed. + /// + /// The runtime control and tracking + /// The number of nodes to reach, before proceeding with a simple greedy heuristic + /// Used in the beam search to determine the number of beams + /// The ranking function used by the beam search (lower is better) + /// To limit the number of branches per node + /// The maximum number of threads to use + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static Task> ParallelRakeAndBeamSearchAsync(this SearchControl control, int rakeWidth, int beamWidth, Func rank, int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : class, IMutableState + where Q : struct, IQuality + { + return Task.Run(() => ParallelRakeAndBeamSearch(control, rakeWidth, beamWidth, rank, filterWidth, maxDegreeOfParallelism)); + } + + /// + /// Rake search performs a breadth-first search until a level is reached with + /// nodes and then from each node a beam search is performed. + /// + /// The runtime control and tracking + /// The number of nodes to reach, before proceeding with a simple greedy heuristic + /// Used in the beam search to determine the number of beams + /// The ranking function used by the beam search (lower is better) + /// To limit the number of branches per node + /// The maximum number of threads to use + /// The state type + /// The choice type + /// The type of quality (Minimize, Maximize) + /// The runtime control instance + public static SearchControl ParallelRakeAndBeamSearch(this SearchControl control, int rakeWidth, int beamWidth, Func rank, int filterWidth = int.MaxValue, int maxDegreeOfParallelism = -1) + where T : class, IMutableState + where Q : struct, IQuality + { + if (rakeWidth <= 0) throw new ArgumentException($"{rakeWidth} needs to be greater or equal than 1", nameof(rakeWidth)); + if (beamWidth <= 0) throw new ArgumentException("A beam width of 0 or less is not possible"); + if (filterWidth <= 0) throw new ArgumentException("A filter width of 0 or less is not possible"); + if (maxDegreeOfParallelism == 0 || maxDegreeOfParallelism < -1) throw new ArgumentException($"{maxDegreeOfParallelism} needs to be -1 or greater or equal than 0", nameof(maxDegreeOfParallelism)); + + var rake = ConcurrentAlgorithms.DoParallelBreadthSearch(control, (T)control.InitialState.Clone(), int.MaxValue, int.MaxValue, rakeWidth, maxDegreeOfParallelism); + var remainingTime = control.Runtime - control.Elapsed; + var remainingNodes = control.NodeLimit - control.VisitedNodes; + if (control.ShouldStop() || remainingTime < TimeSpan.Zero || remainingNodes <= 0) // the last two are just safety checks, control.ShouldStop() should terminate in these cases too + { + return control; + } + var locker = new object(); + Parallel.ForEach(rake.AsEnumerable(), new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism }, + next => + { + var localControl = SearchControl.Start(next) + .WithCancellationToken(control.Cancellation) + .WithRuntimeLimit(remainingTime); // each thread gets the same time + lock (locker) + { + localControl = localControl.WithNodeLimit(remainingNodes); + if (control.BestQuality.HasValue) + { + localControl = localControl.WithUpperBound(control.BestQuality.Value); + } + } + Heuristics.DoBeamSearch(localControl, next, beamWidth, rank, filterWidth); + lock (locker) + { + control.Merge(localControl); + remainingNodes = control.NodeLimit - control.VisitedNodes; + } + }); + return control; + } + + /// + /// In the PILOT method a 
lookahead is performed to determine the most promising branch to continue. + /// In this implementation, an efficient beam search may be used for the lookahead. + /// The lookahead depth is not configurable, instead a full solution must be achieved. + /// + /// + /// Voßs, S., Fink, A. & Duin, C. Looking Ahead with the Pilot Method. Ann Oper Res 136, 285–302 (2005). + /// https://doi.org/10.1007/s10479-005-2060-2 + /// + /// Runtime control and best solution tracking. + /// The parameter that governs how many parallel lines through the search tree should be considered during lookahead. For values > 1, rank must be defined as BeamSearch will be used. + /// A function that ranks states (lower is better), if it is null the rank is implicit by the order in which the branches are generated. + /// How many descendents will be considered per node (in case beamWidth > 1) + /// The maximum number of threads to use for the lookahead. + /// The state type + /// The type of the objective + /// The control object with the tracking. + public static Task> ParallelPilotMethodAsync(this SearchControl control, int beamWidth = 1, Func rank = null, int filterWidth = 1, int maxDegreeOfParallelism = -1) + where T : IState + where Q : struct, IQuality + { + return Task.Run(() => ParallelPilotMethod(control, beamWidth, rank, filterWidth, maxDegreeOfParallelism)); + } + + /// + /// In the PILOT method a lookahead is performed to determine the most promising branch to continue. + /// In this implementation, an efficient beam search may be used for the lookahead. + /// The lookahead depth is not configurable, instead a full solution must be achieved. + /// + /// + /// Voßs, S., Fink, A. & Duin, C. Looking Ahead with the Pilot Method. Ann Oper Res 136, 285–302 (2005). + /// https://doi.org/10.1007/s10479-005-2060-2 + /// + /// Runtime control and best solution tracking. + /// The parameter that governs how many parallel lines through the search tree should be considered during lookahead. For values > 1, rank must be defined as BeamSearch will be used. + /// A function that ranks states (lower is better), if it is null the rank is implicit by the order in which the branches are generated. + /// How many descendents will be considered per node (in case beamWidth > 1) + /// The maximum number of threads to use for the search. + /// The state type + /// The type of the objective + /// The control object with the tracking. + public static SearchControl ParallelPilotMethod(this SearchControl control, int beamWidth = 1, Func rank = null, int filterWidth = 1, int maxDegreeOfParallelism = -1) + where T : IState + where Q : struct, IQuality + { + var state = control.InitialState; + DoParallelPilotMethod(control, state, beamWidth, rank, filterWidth, maxDegreeOfParallelism); + return control; + } + + /// + /// In the PILOT method a lookahead is performed to determine the most promising branch to continue. + /// In this implementation, an efficient beam search may be used for the lookahead. + /// The lookahead depth is not configurable, instead a full solution must be achieved. + /// + /// + /// Voßs, S., Fink, A. & Duin, C. Looking Ahead with the Pilot Method. Ann Oper Res 136, 285–302 (2005). + /// https://doi.org/10.1007/s10479-005-2060-2 + /// + /// Runtime control and best solution tracking. + /// The state from which the pilot method should start operating + /// The parameter that governs how many parallel lines through the search tree should be considered during lookahead. 
For values > 1, rank must be defined as BeamSearch will be used. + /// A function that ranks states (lower is better), if it is null the rank is implicit by the order in which the branches are generated. + /// How many descendents will be considered per node (in case beamWidth > 1) + /// The maximum number of threads to use + /// The state type + /// The type of the objective + /// + public static void DoParallelPilotMethod(ISearchControl control, T state, int beamWidth, Func rank, int filterWidth, int maxDegreeOfParallelism) + where T : IState + where Q : struct, IQuality + { + if (rank != null && beamWidth <= 0) throw new ArgumentException($"{beamWidth} needs to be greater or equal than 1 when beam search is used ({nameof(rank)} is non-null)", nameof(beamWidth)); + if (filterWidth <= 0) throw new ArgumentException($"{filterWidth} needs to be greater or equal than 1", nameof(filterWidth)); + if (filterWidth == 1 && beamWidth > 1) throw new ArgumentException($"{nameof(beamWidth)} parameter has no effect if {nameof(filterWidth)} is equal to 1", nameof(beamWidth)); + if (maxDegreeOfParallelism == 0 || maxDegreeOfParallelism < -1) throw new ArgumentException($"{maxDegreeOfParallelism} is not a valid value for {nameof(maxDegreeOfParallelism)}", nameof(maxDegreeOfParallelism)); + + var locker = new object(); + while (true) + { + T bestBranch = default(T); + Q? bestBranchQuality = null; + var branches = state.GetBranches().ToList(); + if (branches.Count == 0) return; + Parallel.ForEach(branches, + new ParallelOptions() { MaxDegreeOfParallelism = maxDegreeOfParallelism }, + next => + { + Q? quality = default; + if (next.IsTerminal) + { + // no lookahead required + quality = next.Quality; + } else + { + if (rank == null) + { + // the depth search state is a stack + var searchState = new LIFOCollection<(int depth, T state)>((0, next)); + while (!control.ShouldStop() && searchState.Nodes > 0) + { + var localControl = SearchControl.Start(next).WithRuntimeLimit(TimeSpan.FromSeconds(1)); + Algorithms.DoDepthSearch(localControl, searchState, filterWidth: filterWidth, depthLimit: int.MaxValue); + localControl.Finish(); + if (localControl.BestQuality.HasValue) quality = localControl.BestQuality; + lock (locker) + { + control.Merge(localControl); + } + } + } else + { + // the beam search state is a special collection + var searchState = new PriorityBiLevelFIFOCollection(next); + while (!control.ShouldStop() && searchState.CurrentLayerNodes > 0) + { + var localControl = SearchControl.Start(next).WithRuntimeLimit(TimeSpan.FromSeconds(1)); + Heuristics.DoBeamSearch(localControl, searchState, beamWidth: beamWidth, rank: rank, filterWidth: filterWidth); + localControl.Finish(); + if (localControl.BestQuality.HasValue) quality = localControl.BestQuality; + lock (locker) + { + control.Merge(localControl); + } + } + } + } + + if (!quality.HasValue) return; // no solution achieved + lock (locker) + { + if (!bestBranchQuality.HasValue || quality.Value.IsBetter(bestBranchQuality.Value)) + { + bestBranch = next; + bestBranchQuality = quality; + } + } + } + ); + if (!bestBranchQuality.HasValue) return; + state = bestBranch; + if (state.IsTerminal) return; + } + } + + /// + /// In the PILOT method a lookahead is performed to determine the most promising branch to continue. + /// In this implementation, an efficient beam search may be used for the lookahead. + /// The lookahead depth is not configurable, instead a full solution must be achieved. + /// + /// + /// Voßs, S., Fink, A. & Duin, C. 
Looking Ahead with the Pilot Method. Ann Oper Res 136, 285–302 (2005). + /// https://doi.org/10.1007/s10479-005-2060-2 + /// + /// Runtime control and best solution tracking. + /// The parameter that governs how many parallel lines through the search tree should be considered during lookahead. For values > 1, rank must be defined as BeamSearch will be used. + /// A function that ranks states (lower is better), if it is null the rank is implicit by the order in which the branches are generated. + /// How many descendents will be considered per node (in case beamWidth > 1) + /// The maximum number of threads to use + /// The state type + /// The choice type + /// The type of the objective + /// The control object with the tracking. + public static Task> ParallelPilotMethodAsync(this SearchControl control, int beamWidth = 1, Func rank = null, int filterWidth = 1, int maxDegreeOfParallelism = -1) + where T : class, IMutableState + where Q : struct, IQuality + { + return Task.Run(() => ParallelPilotMethod(control, beamWidth, rank, filterWidth, maxDegreeOfParallelism)); + } + + /// + /// In the PILOT method a lookahead is performed to determine the most promising branch to continue. + /// In this implementation, an efficient beam search may be used for the lookahead. + /// The lookahead depth is not configurable, instead a full solution must be achieved. + /// + /// + /// Voßs, S., Fink, A. & Duin, C. Looking Ahead with the Pilot Method. Ann Oper Res 136, 285–302 (2005). + /// https://doi.org/10.1007/s10479-005-2060-2 + /// + /// Runtime control and best solution tracking. + /// The parameter that governs how many parallel lines through the search tree should be considered during lookahead. For values > 1, rank must be defined as BeamSearch will be used. + /// A function that ranks states (lower is better), if it is null the rank is implicit by the order in which the branches are generated. + /// How many descendents will be considered per node (in case beamWidth > 1) + /// The maximum number of threads to use + /// The state type + /// The choice type + /// The type of the objective + /// The control object with the tracking. + public static SearchControl ParallelPilotMethod(this SearchControl control, int beamWidth = 1, Func rank = null, int filterWidth = 1, int maxDegreeOfParallelism = -1) + where T : class, IMutableState + where Q : struct, IQuality + { + var state = (T)control.InitialState.Clone(); + DoParallelPilotMethod(control, state, beamWidth, rank, filterWidth, maxDegreeOfParallelism); + return control; + } + + /// + /// In the PILOT method a lookahead is performed to determine the most promising branch to continue. + /// In this implementation, an efficient beam search may be used for the lookahead. + /// The lookahead depth is not configurable, instead a full solution must be achieved. + /// + /// + /// Voßs, S., Fink, A. & Duin, C. Looking Ahead with the Pilot Method. Ann Oper Res 136, 285–302 (2005). + /// https://doi.org/10.1007/s10479-005-2060-2 + /// + /// Runtime control and best solution tracking. + /// The state from which the pilot method should start operating + /// The parameter that governs how many parallel lines through the search tree should be considered during lookahead. For values > 1, rank must be defined as BeamSearch will be used. + /// A function that ranks states (lower is better), if it is null the rank is implicit by the order in which the branches are generated. 
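// Illustrative usage sketch: the parallel PILOT method via a SearchControl.
// With the defaults (beamWidth = 1, rank = null) every branch is evaluated by a
// greedy depth-first dive; for beamWidth > 1 a rank function must be supplied and
// a beam search performs the lookahead. T is any IState<T, Minimize>.
public static class ParallelPilotMethodSketch
{
    public static T Run<T>(T initial) where T : IState<T, Minimize>
    {
        var control = SearchControl<T, Minimize>.Start(initial)
            .WithRuntimeLimit(TimeSpan.FromMinutes(1));
        return control
            .ParallelPilotMethod(maxDegreeOfParallelism: Environment.ProcessorCount)
            .BestQualityState;
    }
}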
+ /// How many descendents will be considered per node (in case beamWidth > 1) + /// The maximum number of threads to use + /// The state type + /// The choice type + /// The type of the objective + /// + public static void DoParallelPilotMethod(ISearchControl control, T state, int beamWidth, Func rank, int filterWidth, int maxDegreeOfParallelism) + where T : class, IMutableState + where Q : struct, IQuality + { + if (rank != null && beamWidth <= 0) throw new ArgumentException($"{beamWidth} needs to be greater or equal than 1 when beam search is used ({nameof(rank)} is non-null)", nameof(beamWidth)); + if (filterWidth <= 0) throw new ArgumentException($"{filterWidth} needs to be greater or equal than 1", nameof(filterWidth)); + if (filterWidth == 1 && beamWidth > 1) throw new ArgumentException($"{nameof(beamWidth)} parameter has no effect if {nameof(filterWidth)} is equal to 1", nameof(beamWidth)); + if (maxDegreeOfParallelism == 0 || maxDegreeOfParallelism < -1) throw new ArgumentException($"{nameof(maxDegreeOfParallelism)} needs to be -1 or greater or equal to 1", nameof(maxDegreeOfParallelism)); + + var locker = new object(); + while (true) + { + T bestBranch = default(T); + Q? bestBranchQuality = null; + var branches = state.GetChoices().Select(c => { var clone = (T)state.Clone(); clone.Apply(c); return clone; }).ToList(); + if (branches.Count == 0) return; + Parallel.ForEach(branches, + new ParallelOptions() { MaxDegreeOfParallelism = maxDegreeOfParallelism }, + next => + { + Q? quality = default; + if (next.IsTerminal) + { + // no lookahead required + quality = next.Quality; + } else + { + if (rank == null) + { + // the depth search state is a stack + var localDepth = 0; + var searchState = new LIFOCollection<(int depth, C choice)>(); + foreach (var choice in next.GetChoices().Take(filterWidth).Reverse()) { + searchState.Store((localDepth, choice)); + } + while (!control.ShouldStop() && searchState.Nodes > 0) + { + var localControl = SearchControl.Start(next).WithRuntimeLimit(TimeSpan.FromSeconds(1)); + localDepth = Algorithms.DoDepthSearch(localControl, next, searchState, localDepth, filterWidth: filterWidth, depthLimit: int.MaxValue); + localControl.Finish(); + if (localControl.BestQuality.HasValue) quality = localControl.BestQuality; + lock (locker) + { + control.Merge(localControl); + } + } + // reset next to the initial state + while (localDepth > 0) + { + next.UndoLast(); + localDepth--; + } + } else + { + // the beam search state is a special collection + var searchState = new PriorityBiLevelFIFOCollection(next); + while (!control.ShouldStop() && searchState.CurrentLayerNodes > 0) + { + var localControl = SearchControl.Start(next).WithRuntimeLimit(TimeSpan.FromSeconds(1)); + Heuristics.DoBeamSearch(localControl, searchState, beamWidth: beamWidth, rank: rank, filterWidth: filterWidth); + localControl.Finish(); + if (localControl.BestQuality.HasValue) quality = localControl.BestQuality; + lock (locker) + { + control.Merge(localControl); + } + } + } + } + + if (!quality.HasValue) return; // no solution achieved + lock (locker) + { + if (!bestBranchQuality.HasValue || quality.Value.IsBetter(bestBranchQuality.Value)) + { + bestBranch = next; + bestBranchQuality = quality; + } + } + } + ); + if (!bestBranchQuality.HasValue) return; + state = bestBranch; + if (state.IsTerminal) return; + } + } + } + + public static class ParallelHeuristicStateExtensions + { + public static Task ParallelBeamSearchAsync(this IState state, Func rank, + int beamWidth = 100, int filterWidth = 
int.MaxValue, TimeSpan? runtime = null, + long? nodelimit = null, QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : IState + where TQuality : struct, IQuality + { + return Task.Run(() => ParallelBeamSearch((TState)state, rank, beamWidth, filterWidth, runtime, nodelimit, callback, token, maxDegreeOfParallelism)); + } + public static TState ParallelBeamSearch(this IState state, Func rank, + int beamWidth = 100, int filterWidth = int.MaxValue, TimeSpan? runtime = null, + long? nodelimit = null, QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : IState + where TQuality : struct, IQuality + { + var control = SearchControl.Start((TState)state).WithCancellationToken(token); + if (runtime.HasValue) control = control.WithRuntimeLimit(runtime.Value); + if (nodelimit.HasValue) control = control.WithNodeLimit(nodelimit.Value); + if (callback != null) control = control.WithImprovementCallback(callback); + return control.ParallelBeamSearch(beamWidth, rank, filterWidth, maxDegreeOfParallelism).BestQualityState; + } + + public static Task ParallelBeamSearchAsync(this IMutableState state, Func rank, + int beamWidth = 100, int filterWidth = int.MaxValue, TimeSpan? runtime = null, + long? nodelimit = null, QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : class, IMutableState + where TQuality : struct, IQuality + { + return Task.Run(() => ParallelBeamSearch((TState)state, rank, beamWidth, filterWidth, runtime, nodelimit, callback, token, maxDegreeOfParallelism)); + } + + public static TState ParallelBeamSearch(this IMutableState state, Func rank, + int beamWidth = 100, int filterWidth = int.MaxValue, TimeSpan? runtime = null, + long? nodelimit = null, QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : class, IMutableState + where TQuality : struct, IQuality + { + var control = SearchControl.Start((TState)state).WithCancellationToken(token); + if (runtime.HasValue) control = control.WithRuntimeLimit(runtime.Value); + if (nodelimit.HasValue) control = control.WithNodeLimit(nodelimit.Value); + if (callback != null) control = control.WithImprovementCallback(callback); + return control.ParallelBeamSearch(beamWidth, rank, filterWidth, maxDegreeOfParallelism).BestQualityState; + } + + public static Task ParallelRakeSearchAsync(this IState state, int rakeWidth = 100, + TimeSpan? runtime = null, long? nodelimit = null, + QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : IState + where TQuality : struct, IQuality + { + return Task.Run(() => ParallelRakeSearch((TState)state, rakeWidth, runtime, nodelimit, callback, token, maxDegreeOfParallelism)); + } + + public static TState ParallelRakeSearch(this IState state, int rakeWidth = 100, + TimeSpan? runtime = null, long? 
nodelimit = null, + QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : IState + where TQuality : struct, IQuality + { + var control = SearchControl.Start((TState)state).WithCancellationToken(token); + if (runtime.HasValue) control = control.WithRuntimeLimit(runtime.Value); + if (nodelimit.HasValue) control = control.WithNodeLimit(nodelimit.Value); + if (callback != null) control = control.WithImprovementCallback(callback); + return control.ParallelRakeSearch(rakeWidth, maxDegreeOfParallelism: maxDegreeOfParallelism).BestQualityState; + } + + public static Task ParallelRakeSearchAsync(this IMutableState state, int rakeWidth = 100, + TimeSpan? runtime = null, long? nodelimit = null, + QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : class, IMutableState + where TQuality : struct, IQuality + { + return Task.Run(() => ParallelRakeSearch((TState)state, rakeWidth, runtime, nodelimit, callback, token, maxDegreeOfParallelism)); + } + + public static TState ParallelRakeSearch(this IMutableState state, int rakeWidth = 100, + TimeSpan? runtime = null, long? nodelimit = null, + QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : class, IMutableState + where TQuality : struct, IQuality + { + var control = SearchControl.Start((TState)state).WithCancellationToken(token); + if (runtime.HasValue) control = control.WithRuntimeLimit(runtime.Value); + if (nodelimit.HasValue) control = control.WithNodeLimit(nodelimit.Value); + if (callback != null) control = control.WithImprovementCallback(callback); + return control.ParallelRakeSearch(rakeWidth, maxDegreeOfParallelism: maxDegreeOfParallelism).BestQualityState; + } + + public static Task ParallelRakeAndBeamSearchAsync(this IState state, Func rank, + int rakeWidth = 100, int beamWidth = 100, int filterWidth = int.MaxValue, + TimeSpan? runtime = null, long? nodelimit = null, + QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : IState + where TQuality : struct, IQuality + { + return Task.Run(() => ParallelRakeAndBeamSearch((TState)state, rank, rakeWidth, beamWidth, filterWidth, runtime, nodelimit, callback, token, maxDegreeOfParallelism)); + } + + public static TState ParallelRakeAndBeamSearch(this IState state, Func rank, + int rakeWidth = 100, int beamWidth = 100, int filterWidth = int.MaxValue, + TimeSpan? runtime = null, long? nodelimit = null, + QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : IState + where TQuality : struct, IQuality + { + var control = SearchControl.Start((TState)state).WithCancellationToken(token); + if (runtime.HasValue) control = control.WithRuntimeLimit(runtime.Value); + if (nodelimit.HasValue) control = control.WithNodeLimit(nodelimit.Value); + if (callback != null) control = control.WithImprovementCallback(callback); + return control.ParallelRakeAndBeamSearch(rakeWidth, beamWidth, rank, filterWidth, maxDegreeOfParallelism: maxDegreeOfParallelism).BestQualityState; + } + + public static Task ParallelRakeAndBeamSearchAsync(this IMutableState state, Func rank, + int rakeWidth = 100, int beamWidth = 100, int filterWidth = int.MaxValue, + TimeSpan? runtime = null, long? 
nodelimit = null, + QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : class, IMutableState + where TQuality : struct, IQuality + { + return Task.Run(() => ParallelRakeAndBeamSearch((TState)state, rank, rakeWidth, beamWidth, filterWidth, runtime, nodelimit, callback, token, maxDegreeOfParallelism)); + } + + public static TState ParallelRakeAndBeamSearch(this IMutableState state, Func rank, + int rakeWidth = 100, int beamWidth = 100, int filterWidth = int.MaxValue, + TimeSpan? runtime = null, long? nodelimit = null, + QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : class, IMutableState + where TQuality : struct, IQuality + { + var control = SearchControl.Start((TState)state).WithCancellationToken(token); + if (runtime.HasValue) control = control.WithRuntimeLimit(runtime.Value); + if (nodelimit.HasValue) control = control.WithNodeLimit(nodelimit.Value); + if (callback != null) control = control.WithImprovementCallback(callback); + return control.ParallelRakeAndBeamSearch(rakeWidth, beamWidth, rank, filterWidth, maxDegreeOfParallelism: maxDegreeOfParallelism).BestQualityState; + } + + public static Task ParallelPilotMethodAsync(this IState state, + int beamWidth = 1, Func rank = null, int filterWidth = int.MaxValue, + TimeSpan? runtime = null, long? nodelimit = null, + QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : IState + where TQuality : struct, IQuality + { + return Task.Run(() => ParallelPilotMethod((TState)state, beamWidth, rank, filterWidth, runtime, nodelimit, callback, token, maxDegreeOfParallelism)); + } + + public static TState ParallelPilotMethod(this IState state, + int beamWidth = 1, Func rank = null, int filterWidth = int.MaxValue, + TimeSpan? runtime = null, long? nodelimit = null, + QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : IState + where TQuality : struct, IQuality + { + var control = SearchControl.Start((TState)state).WithCancellationToken(token); + if (runtime.HasValue) control = control.WithRuntimeLimit(runtime.Value); + if (nodelimit.HasValue) control = control.WithNodeLimit(nodelimit.Value); + if (callback != null) control = control.WithImprovementCallback(callback); + return control.ParallelPilotMethod(beamWidth, rank, filterWidth, maxDegreeOfParallelism).BestQualityState; + } + + public static Task ParallelPilotMethodAsync(this IMutableState state, + int beamWidth = 1, Func rank = null, int filterWidth = int.MaxValue, + TimeSpan? runtime = null, long? nodelimit = null, + QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : class, IMutableState + where TQuality : struct, IQuality + { + return Task.Run(() => ParallelPilotMethod((TState)state, beamWidth, rank, filterWidth, runtime, nodelimit, callback, token, maxDegreeOfParallelism)); + } + + public static TState ParallelPilotMethod(this IMutableState state, + int beamWidth = 1, Func rank = null, int filterWidth = int.MaxValue, + TimeSpan? runtime = null, long? 
nodelimit = null, + QualityCallback callback = null, + CancellationToken token = default(CancellationToken), int maxDegreeOfParallelism = -1) + where TState : class, IMutableState + where TQuality : struct, IQuality + { + var control = SearchControl.Start((TState)state).WithCancellationToken(token); + if (runtime.HasValue) control = control.WithRuntimeLimit(runtime.Value); + if (nodelimit.HasValue) control = control.WithNodeLimit(nodelimit.Value); + if (callback != null) control = control.WithImprovementCallback(callback); + return control.ParallelPilotMethod(beamWidth, rank, filterWidth, maxDegreeOfParallelism).BestQualityState; + } + } +} \ No newline at end of file diff --git a/src/TreesearchLib/DataTypes.cs b/src/TreesearchLib/DataTypes.cs index 99602f5..c4c6d05 100644 --- a/src/TreesearchLib/DataTypes.cs +++ b/src/TreesearchLib/DataTypes.cs @@ -1,5 +1,6 @@ using System; using System.Collections.Generic; +using System.Linq; using System.Runtime.CompilerServices; namespace TreesearchLib @@ -11,11 +12,6 @@ public interface IStateCollection /// /// int Nodes { get; } - /// - /// The number of successful TryGetNext calls that have been performed on the collection - /// - /// - long RetrievedNodes { get; } /// /// Obtains the next node, or none if the collection is empty @@ -28,6 +24,11 @@ public interface IStateCollection /// /// The node to store void Store(T state); + /// + /// Returns all stored states as an enumerable in no particular order + /// + /// The stored states + IEnumerable AsEnumerable(); } /// @@ -37,13 +38,11 @@ public interface IStateCollection public class LIFOCollection : IStateCollection { public int Nodes => states.Count; - public long RetrievedNodes { get; private set; } private Stack states = new Stack(); public LIFOCollection() { - RetrievedNodes = 0; } public LIFOCollection(T initial) : this() @@ -58,12 +57,13 @@ public bool TryGetNext(out T next) next = default(T); return false; } - RetrievedNodes++; next = states.Pop(); return true; } public void Store(T state) => states.Push(state); + + public IEnumerable AsEnumerable() => states; } /// @@ -73,13 +73,11 @@ public bool TryGetNext(out T next) public class FIFOCollection : IStateCollection { public int Nodes => states.Count; - public long RetrievedNodes { get; private set; } private Queue states = new Queue(); public FIFOCollection() { - RetrievedNodes = 0; } public FIFOCollection(T initial) : this() @@ -87,12 +85,16 @@ public FIFOCollection(T initial) : this() Store(initial); } - internal FIFOCollection(Queue other, long retrievedNodes) + internal FIFOCollection(Queue other) { - RetrievedNodes = retrievedNodes; states = other; } + internal FIFOCollection(IEnumerable other) + { + states = new Queue(other); + } + public bool TryGetNext(out T next) { if (states.Count == 0) @@ -100,32 +102,33 @@ public bool TryGetNext(out T next) next = default(T); return false; } - RetrievedNodes++; next = states.Dequeue(); return true; } public void Store(T state) => states.Enqueue(state); + + public IEnumerable AsEnumerable() => states; } -/// -/// This collection maintains two queues, the first queue (aka the get-queue) is to retrieve items, -/// the second queue (aka the put-queue) to store items. Using the queues -/// may be swapped and their roles switch. -/// -/// Sometimes in breadth-first search, a level should be completed, before the next level is started. -/// This collection supports that case in that the next level is maintained in a separate collection. -/// The point to the number of states in the get-queue. 
-/// -/// -/// Because of the peculiar behaviour, BiLevelFIFOCollection does not implement -/// . It allows to "export" as a regular -/// , by calling . -/// -/// Also, when calling , any remaining nodes in the get-queue are prepended to the -/// nodes in the put-queue before swapping. -/// -/// The type of node to store + /// + /// This collection maintains two queues, the first queue (aka the get-queue) is to retrieve items, + /// the second queue (aka the put-queue) to store items. Using the queues + /// may be swapped and their roles switch. + /// + /// Sometimes in breadth-first search, a level should be completed, before the next level is started. + /// This collection supports that case in that the next level is maintained in a separate collection. + /// The point to the number of states in the get-queue. + /// + /// + /// Because of the peculiar behaviour, BiLevelFIFOCollection does not implement + /// . It allows to "export" as a regular + /// , by calling . + /// + /// Also, when calling , any remaining nodes in the get-queue are prepended to the + /// nodes in the put-queue before swapping. + /// + /// The type of node to store public class BiLevelFIFOCollection { public int GetQueueNodes => getQueue.Count; @@ -191,9 +194,71 @@ public void SwapQueues() public FIFOCollection ToSingleLevel() { while (putQueue.Count > 0) getQueue.Enqueue(putQueue.Dequeue()); - var result = new FIFOCollection(getQueue, RetrievedNodes); + var result = new FIFOCollection(getQueue); getQueue = new Queue(); return result; } } + + public class PriorityBiLevelFIFOCollection + { + public int CurrentLayerNodes => currentLayerQueue.Count; + public int NextLayerNodes => nextLayerQueue.Count; + public long RetrievedNodes { get; private set; } + + private Queue currentLayerQueue = new Queue(); + private LinkedList<(float priority, T state)> nextLayerQueue = new LinkedList<(float, T)>(); + + public PriorityBiLevelFIFOCollection() + { + RetrievedNodes = 0; + } + + public PriorityBiLevelFIFOCollection(T initial) : this() + { + currentLayerQueue.Enqueue(initial); // initially, the items are put into the get-queue + } + + public PriorityBiLevelFIFOCollection(IEnumerable initial) : this() + { + foreach (var i in initial) + { + currentLayerQueue.Enqueue(i); // initially, the items are put into the get-queue + } + } + + public bool TryFromCurrentLayerQueue(out T next) + { + if (currentLayerQueue.Count == 0) + { + next = default(T); + return false; + } + RetrievedNodes++; + next = currentLayerQueue.Dequeue(); + return true; + } + + /// + /// + /// + /// + /// + public void ToNextLayerQueue(T state, float priority) => nextLayerQueue.AddLast((priority, state)); + + /// + /// Discards any remaining states in the current layer queue and replaces it with the + /// best states from the next layer queue. 
+ /// + /// The number of states to choose for the next layer + public void AdvanceLayer(int bestN) + { + currentLayerQueue.Clear(); + foreach (var (priority, state) in nextLayerQueue.OrderBy(x => x.priority).Take(bestN)) + { + currentLayerQueue.Enqueue(state); + } + nextLayerQueue.Clear(); + } + } } \ No newline at end of file diff --git a/src/TreesearchLib/Heuristics.cs b/src/TreesearchLib/Heuristics.cs index 8c94258..abae13a 100644 --- a/src/TreesearchLib/Heuristics.cs +++ b/src/TreesearchLib/Heuristics.cs @@ -94,7 +94,7 @@ public static SearchControl BeamSearch(this SearchControlThe state type /// The quality type /// - public static void DoBeamSearch(ISearchControl control, T state, int beamWidth, Func rank, int filterWidth) + public static void DoBeamSearch(ISearchControl control, T state, int beamWidth, Func rank, int filterWidth) where T : IState where Q : struct, IQuality { @@ -102,17 +102,31 @@ public static void DoBeamSearch(ISearchControl control, T state, int if (filterWidth <= 0) throw new ArgumentException($"{filterWidth} needs to be greater or equal than 1", nameof(filterWidth)); if (rank == null) throw new ArgumentNullException(nameof(rank)); if (filterWidth == 1 && beamWidth > 1) throw new ArgumentException($"{nameof(beamWidth)} cannot exceed 1 when {nameof(filterWidth)} equals 1."); - - var currentLayer = new Queue(); - currentLayer.Enqueue(state); - var nextlayer = new List<(float rank, T state)>(); - while (!control.ShouldStop()) - { - nextlayer.Clear(); - while (currentLayer.Count > 0) + var searchState = new PriorityBiLevelFIFOCollection(state); + DoBeamSearch(control, searchState, beamWidth, rank, filterWidth); + } + + /// + /// Beam search uses several parallel traces. When called with a rank function, all nodes of the next layer are gathered + /// and then sorted by the rank function (using a stable sort). 
+ /// + /// The runtime control and tracking + /// The algorithm's inner state + /// The maximum number of parallel traces + /// The rank function that determines the order of nodes (lower is better) + /// The maximum number of descendents per node + /// The state type + /// The quality type + /// + public static void DoBeamSearch(ISearchControl control, PriorityBiLevelFIFOCollection searchState, int beamWidth, Func rank, int filterWidth) + where T : IState + where Q : struct, IQuality + { + while (!control.ShouldStop() && searchState.CurrentLayerNodes > 0) + { + while (!control.ShouldStop() && searchState.TryFromCurrentLayerQueue(out var currentState)) { - var currentState = currentLayer.Dequeue(); foreach (var next in currentState.GetBranches().Take(filterWidth)) { if (control.VisitNode(next) == VisitResult.Discard) @@ -120,24 +134,12 @@ public static void DoBeamSearch(ISearchControl control, T state, int continue; } - nextlayer.Add((rank(next), next)); - } - - if (control.ShouldStop()) - { - nextlayer.Clear(); - break; + searchState.ToNextLayerQueue(next, rank(next)); } } - - if (nextlayer.Count == 0) + if (searchState.CurrentLayerNodes == 0) { - break; - } - - foreach (var nextState in nextlayer.OrderBy(x => x.rank).Take(beamWidth).Select(x => x.state)) - { - currentLayer.Enqueue(nextState); + searchState.AdvanceLayer(beamWidth); } } } @@ -162,7 +164,7 @@ public static void DoBeamSearch(ISearchControl control, T state, if (filterWidth <= 0) throw new ArgumentException($"{filterWidth} needs to be greater or equal than 1", nameof(filterWidth)); if (rank == null) throw new ArgumentNullException(nameof(rank)); if (filterWidth == 1 && beamWidth > 1) throw new ArgumentException($"{nameof(beamWidth)} cannot exceed 1 when {nameof(filterWidth)} equals 1."); - + var currentLayer = new Queue(); currentLayer.Enqueue(state); var nextlayer = new List<(float rank, T state)>(); @@ -205,6 +207,46 @@ public static void DoBeamSearch(ISearchControl control, T state, } } + /// + /// Beam search uses several parallel traces. When called with a rank function, all nodes of the next layer are gathered + /// and then sorted by the rank function (using a stable sort). + /// + /// The runtime control and tracking + /// The algorithm's inner state + /// The maximum number of parallel traces + /// The rank function that determines the order of nodes (lower is better) + /// The maximum number of descendents per node + /// The state type + /// The choice type + /// The quality type + /// + public static void DoBeamSearch(ISearchControl control, PriorityBiLevelFIFOCollection searchState, int beamWidth, Func rank, int filterWidth) + where T : class, IMutableState + where Q : struct, IQuality + { + while (!control.ShouldStop() && searchState.CurrentLayerNodes > 0) + { + while (!control.ShouldStop() && searchState.TryFromCurrentLayerQueue(out var currentState)) + { + foreach (var choice in currentState.GetChoices().Take(filterWidth)) + { + var next = (T)currentState.Clone(); + next.Apply(choice); + if (control.VisitNode(next) == VisitResult.Discard) + { + continue; + } + + searchState.ToNextLayerQueue(next, rank(next)); + } + } + if (searchState.CurrentLayerNodes == 0) + { + searchState.AdvanceLayer(beamWidth); + } + } + } + /// /// Rake search performs a breadth-first search until a level is reached with /// nodes and then from each node a depth-first search by just taking the first branch (i.e., a greedy heuristic). 
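Note: the beam-search refactoring above now expands one layer at a time from a `PriorityBiLevelFIFOCollection`: every ranked successor is pushed to the next-layer queue, and `AdvanceLayer` then keeps only the `beamWidth` best of them. The following is a minimal, self-contained sketch of that layer-advance mechanic; it uses plain integers as "states" and is an illustration of the idea, not the library's types.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class LayeredBeamSketch
{
    public static void Main()
    {
        var currentLayer = new Queue<int>();              // layer currently being expanded
        currentLayer.Enqueue(0);
        var nextLayer = new List<(float priority, int state)>();
        const int beamWidth = 2;

        for (var depth = 0; depth < 3 && currentLayer.Count > 0; depth++)
        {
            while (currentLayer.Count > 0)
            {
                var state = currentLayer.Dequeue();
                // branch into successors and rank each one (lower rank = better)
                foreach (var next in new[] { state + 1, state + 2, state + 3 })
                    nextLayer.Add(((float)(next % 7), next));
            }
            // keep only the beamWidth best-ranked successors, analogous to AdvanceLayer
            foreach (var (_, s) in nextLayer.OrderBy(x => x.priority).Take(beamWidth))
                currentLayer.Enqueue(s);
            nextLayer.Clear();
            Console.WriteLine($"layer {depth + 1}: {string.Join(", ", currentLayer)}");
        }
    }
}
```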
@@ -234,7 +276,7 @@ public static SearchControl RakeSearch(this SearchControl cont where T : IState where Q : struct, IQuality { - var rake = Algorithms.DoBreadthSearch(control, control.InitialState, int.MaxValue, int.MaxValue, rakeWidth); + var (_, rake) = Algorithms.DoBreadthSearch(control, control.InitialState, int.MaxValue, int.MaxValue, rakeWidth); while (rake.TryGetNext(out var next) && !control.ShouldStop()) { Algorithms.DoDepthSearch(control, next, 1); @@ -273,7 +315,7 @@ public static SearchControl RakeSearch(this SearchControl where Q : struct, IQuality { - var rake = Algorithms.DoBreadthSearch(control, control.InitialState, int.MaxValue, int.MaxValue, rakeWidth); + var (_, rake) = Algorithms.DoBreadthSearch(control, control.InitialState, int.MaxValue, int.MaxValue, rakeWidth); while (rake.TryGetNext(out var next) && !control.ShouldStop()) { Algorithms.DoDepthSearch(control, next, filterWidth: 1); @@ -316,7 +358,7 @@ public static SearchControl RakeAndBeamSearch(this SearchControl where Q : struct, IQuality { - var rake = Algorithms.DoBreadthSearch(control, control.InitialState, int.MaxValue, int.MaxValue, rakeWidth); + var (_, rake) = Algorithms.DoBreadthSearch(control, control.InitialState, int.MaxValue, int.MaxValue, rakeWidth); while (rake.TryGetNext(out var next) && !control.ShouldStop()) { DoBeamSearch(control, next, beamWidth, rank, filterWidth); @@ -361,7 +403,7 @@ public static SearchControl RakeAndBeamSearch(this SearchContr where T : class, IMutableState where Q : struct, IQuality { - var rake = Algorithms.DoBreadthSearch(control, control.InitialState, int.MaxValue, int.MaxValue, rakeWidth); + var (_, rake) = Algorithms.DoBreadthSearch(control, control.InitialState, int.MaxValue, int.MaxValue, rakeWidth); while (rake.TryGetNext(out var next) && !control.ShouldStop()) { DoBeamSearch(control, next, beamWidth, rank, filterWidth); @@ -441,7 +483,7 @@ public static void DoPilotMethod(ISearchControl control, T state, in if (rank != null && beamWidth <= 0) throw new ArgumentException($"{beamWidth} needs to be greater or equal than 1 when beam search is used ({nameof(rank)} is non-null)", nameof(beamWidth)); if (filterWidth <= 0) throw new ArgumentException($"{filterWidth} needs to be greater or equal than 1", nameof(filterWidth)); if (filterWidth == 1 && beamWidth > 1) throw new ArgumentException($"{nameof(beamWidth)} parameter has no effect if {nameof(filterWidth)} is equal to 1", nameof(beamWidth)); - + while (true) { T bestBranch = default(T); @@ -564,8 +606,9 @@ public static void DoPilotMethod(ISearchControl control, T state, Q? 
quality; if (rank == null) { + state.Apply(choice); var wrappedControl = new WrappedSearchControl(control); - var depth = Algorithms.DoDepthSearch(wrappedControl, state, filterWidth: filterWidth); + var depth = 1 + Algorithms.DoDepthSearch(wrappedControl, state, filterWidth: filterWidth); quality = wrappedControl.BestQuality; while (depth > 0) { @@ -709,7 +752,7 @@ public static void DoNaiveLDSearch(ISearchControl control, T state, if (maxDiscrepancy < 0) throw new ArgumentException(nameof(maxDiscrepancy), $"{maxDiscrepancy} must be >= 0"); var searchState = new LIFOCollection<(T, int)>(); searchState.Store((state, 0)); - + while (searchState.TryGetNext(out var tup) && !control.ShouldStop()) { var (currentState, discrepancy) = tup; @@ -728,7 +771,7 @@ public static void DoNaiveLDSearch(ISearchControl control, T state, } } } - + /// /// The limited discrepancy (LD) search assumes branches are sorted according to a heuristic and generally taking the /// first branch leads to better outcomes. It assumes that there is a discrepancy, i.e., penalty, of N for visiting the @@ -841,8 +884,8 @@ public static void DoNaiveLDSearch(ISearchControl control, T stat { searchState.Store(entry); } - - while (searchState.TryGetNext(out var next) && !control.ShouldStop()) + + while (!control.ShouldStop() && searchState.TryGetNext(out var next)) { var (depth, choice, discrepancy) = next; while (depth < stateDepth) @@ -994,7 +1037,7 @@ public static void DoAnytimeLDSearch(ISearchControl control, T state } } } - + /// /// The limited discrepancy (LD) search assumes branches are sorted according to a heuristic and generally taking the /// first branch leads to better outcomes. It assumes that there is a discrepancy, i.e., penalty, of N for visiting the @@ -1134,7 +1177,7 @@ public static void DoAnytimeLDSearch(ISearchControl control, T st } } } - + /// /// Monotonic beam search uses several parallel beams which are iteratively updated. /// Each beam may only choose among branches that beams before it have made available. @@ -1208,7 +1251,7 @@ public static SearchControl MonotonicBeamSearch(this SearchControlThe state type /// The quality type /// - public static void DoMonotonicBeamSearch(ISearchControl control, T state, int beamWidth, Func rank, int filterWidth) + public static void DoMonotonicBeamSearch(ISearchControl control, T state, int beamWidth, Func rank, int filterWidth) where T : IState where Q : struct, IQuality { @@ -1278,7 +1321,7 @@ public static void DoMonotonicBeamSearch(ISearchControl control, T st candidates.Clear(); } } - + /// /// Monotonic beam search uses several parallel beams which are iteratively updated. /// Each beam may only choose among branches that beams before it have made available. 
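Note: the pilot-method hunk earlier in this file now applies the candidate choice before running the greedy depth-first rollout that evaluates it, and counts that extra move in `depth` so it is undone afterwards. To make the evaluate-by-rollout idea concrete, here is a small standalone toy example (not the library implementation): each candidate move is scored by completing the solution greedily, and only the best-scoring move is committed.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class PilotSketch
{
    // toy problem: pick 3 numbers from the list (keeping their order) to maximize their sum
    static readonly int[] Items = { 3, 9, 1, 7, 5 };

    public static void Main()
    {
        var chosen = new List<int>();
        var position = 0;
        while (chosen.Count < 3)
        {
            int bestIndex = -1, bestRolloutValue = int.MinValue;
            // pilot step: evaluate each candidate by a greedy rollout to completion
            for (var i = position; i <= Items.Length - (3 - chosen.Count); i++)
            {
                var rollout = chosen.Sum() + Items[i] + GreedyCompletion(i + 1, 3 - chosen.Count - 1);
                if (rollout > bestRolloutValue) { bestRolloutValue = rollout; bestIndex = i; }
            }
            chosen.Add(Items[bestIndex]);   // commit only the best-evaluated choice
            position = bestIndex + 1;
        }
        Console.WriteLine($"chosen: {string.Join(", ", chosen)} (sum {chosen.Sum()})");
    }

    // greedy rollout: always take the next available item
    static int GreedyCompletion(int from, int remaining)
    {
        var sum = 0;
        for (var i = from; i < Items.Length && remaining > 0; i++, remaining--) sum += Items[i];
        return sum;
    }
}
```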
@@ -1426,31 +1469,31 @@ public static void DoMonotonicBeamSearch(ISearchControl control, candidates.Clear(); } } + } - private class StateNode : Priority_Queue.StablePriorityQueueNode - where TState : IState - where TQuality : struct, IQuality - { - private readonly TState state; - public TState State => state; + internal class StateNode : Priority_Queue.StablePriorityQueueNode + where TState : IState + where TQuality : struct, IQuality + { + private readonly TState state; + public TState State => state; - public StateNode(TState state) - { - this.state = state; - } - } - - private class StateNode : Priority_Queue.StablePriorityQueueNode - where TState : class, IMutableState - where TQuality : struct, IQuality + public StateNode(TState state) { - private readonly TState state; - public TState State => state; + this.state = state; + } + } - public StateNode(TState state) - { - this.state = state; - } + internal class StateNode : Priority_Queue.StablePriorityQueueNode + where TState : class, IMutableState + where TQuality : struct, IQuality + { + private readonly TState state; + public TState State => state; + + public StateNode(TState state) + { + this.state = state; } } @@ -1654,7 +1697,7 @@ public static TState PilotMethod(this IMutableState NaiveLDSearchAsync(this IState state, int maxDiscrepancy = 1, int? seed = null, TimeSpan? runtime = null, long? nodelimit = null, QualityCallback callback = null, @@ -1664,7 +1707,7 @@ public static Task NaiveLDSearchAsync(this IState NaiveLDSearch(state, maxDiscrepancy, seed, runtime, nodelimit, callback, token)); } - + public static TState NaiveLDSearch(this IState state, int maxDiscrepancy = 1, int? seed = null, TimeSpan? runtime = null, long? nodelimit = null, QualityCallback callback = null, @@ -1678,7 +1721,7 @@ public static TState NaiveLDSearch(this IState NaiveLDSearchAsync(this IMutableState state, int maxDiscrepancy = 1, int? seed = null, TimeSpan? runtime = null, long? nodelimit = null, QualityCallback callback = null, @@ -1688,7 +1731,7 @@ public static Task NaiveLDSearchAsync(this IM { return Task.Run(() => NaiveLDSearch(state, maxDiscrepancy, seed, runtime, nodelimit, callback, token)); } - + public static TState NaiveLDSearch(this IMutableState state, int maxDiscrepancy = 1, int? seed = null, TimeSpan? runtime = null, long? nodelimit = null, QualityCallback callback = null, @@ -1712,7 +1755,7 @@ public static Task AnytimeLDSearchAsync(this IState AnytimeLDSearch(state, maxDiscrepancy, seed, runtime, nodelimit, callback, token)); } - + public static TState AnytimeLDSearch(this IState state, int maxDiscrepancy = 1, int? seed = null, TimeSpan? runtime = null, long? nodelimit = null, QualityCallback callback = null, @@ -1726,7 +1769,7 @@ public static TState AnytimeLDSearch(this IState AnytimeLDSearchAsync(this IMutableState state, int maxDiscrepancy = 1, int? seed = null, TimeSpan? runtime = null, long? nodelimit = null, QualityCallback callback = null, @@ -1736,7 +1779,7 @@ public static Task AnytimeLDSearchAsync(this { return Task.Run(() => AnytimeLDSearch(state, maxDiscrepancy, seed, runtime, nodelimit, callback, token)); } - + public static TState AnytimeLDSearch(this IMutableState state, int maxDiscrepancy = 1, int? seed = null, TimeSpan? runtime = null, long? 
nodelimit = null, QualityCallback callback = null, diff --git a/src/TreesearchLib/SearchControl.cs b/src/TreesearchLib/SearchControl.cs index 2c7a0ed..9d1f41d 100644 --- a/src/TreesearchLib/SearchControl.cs +++ b/src/TreesearchLib/SearchControl.cs @@ -70,6 +70,11 @@ public interface ISearchControl /// The state that is visited /// Whether the node is ok, or whether it should be discarded, because the lower bound is already worse than the best upper bound VisitResult VisitNode(TState state); + /// + /// This operation merges the information in the other search control into this one. + /// + /// The search control with the information to be merged. + void Merge(ISearchControl other); } /// @@ -146,6 +151,20 @@ public VisitResult VisitNode(TState state) return result; } + public void Merge(ISearchControl other) + { + if (other.BestQuality.HasValue) + { + if (!BestQuality.HasValue || other.BestQuality.Value.IsBetter(BestQuality.Value)) + { + BestQuality = other.BestQuality; + BestQualityState = other.BestQualityState; // is already a clone + ImprovementCallback?.Invoke(this, other.BestQualityState, other.BestQuality.Value); + } + } + VisitedNodes += other.VisitedNodes; + } + public static SearchControl Start(IMutableState state) { return new SearchControl((TState)state); @@ -225,13 +244,27 @@ public VisitResult VisitNode(TState state) return result; } + public void Merge(ISearchControl other) + { + if (other.BestQuality.HasValue) + { + if (!BestQuality.HasValue || other.BestQuality.Value.IsBetter(BestQuality.Value)) + { + BestQuality = other.BestQuality; + BestQualityState = other.BestQualityState; // is already a clone + ImprovementCallback?.Invoke(this, other.BestQualityState, other.BestQuality.Value); + } + } + VisitedNodes += other.VisitedNodes; + } + public static SearchControl Start(TState state) { return new SearchControl(state); } } - public class WrappedSearchControl : ISearchControl + internal class WrappedSearchControl : ISearchControl where TState : IState where TQuality : struct, IQuality { @@ -278,9 +311,14 @@ public VisitResult VisitNode(TState state) } return control.VisitNode(state); } + + public void Merge(ISearchControl other) + { + control.Merge(other); + } } - public class WrappedSearchControl : ISearchControl + internal class WrappedSearchControl : ISearchControl where TState : class, IMutableState where TQuality : struct, IQuality { @@ -327,6 +365,11 @@ public VisitResult VisitNode(TState state) } return control.VisitNode(state); } + + public void Merge(ISearchControl other) + { + control.Merge(other); + } } public static class SearchControlExtensions diff --git a/src/TreesearchLib/TreesearchLib.csproj b/src/TreesearchLib/TreesearchLib.csproj index ae50ec7..0b35ce3 100644 --- a/src/TreesearchLib/TreesearchLib.csproj +++ b/src/TreesearchLib/TreesearchLib.csproj @@ -4,7 +4,7 @@ netstandard2.0;net472 true TreesearchLib.snk - 1.0.1 + 1.1.0 Sebastian Leitner, Andreas Beham TreesearchLib is a framework for modeling optimization problems that are to be solved by constructive heuristics. It includes a number of algorithms: exhaustive depth-first and breadth-first search, limited discrepancy search, the PILOT method, Beam search, Monotonic beam search, Rake search, and Monte Carlo Tree Search. @@ -13,7 +13,15 @@ Sebastian Leitner, Andreas Beham https://github.com/heal-research/TreesearchLib - Add additional extension methods for monotonic beam search and change some defaults. 
+      Add parallel implementations of the following algorithms and heuristics:
+      - Depth-first Search
+      - Breadth-first Search
+      - Beam Search
+      - Rake Search
+      - Rake and Beam Search
+      - Pilot Method
+
+      Add a new Scheduling example with three different objectives
     True
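Note: the parallel variants listed in these release notes rely on the `Merge` operation added to `ISearchControl`/`SearchControl` above: each worker searches with its own control, and results are merged back by adopting the better incumbent and accumulating visited-node counts. The following is a simplified standalone mirror of that merge logic; the type and member names are illustrative, not the library's.

```csharp
using System;

public class WorkerResultSketch
{
    public int? BestQuality;     // assume maximization: higher is better
    public object BestState;
    public long VisitedNodes;

    public void Merge(WorkerResultSketch other)
    {
        if (other.BestQuality.HasValue &&
            (!BestQuality.HasValue || other.BestQuality.Value > BestQuality.Value))
        {
            BestQuality = other.BestQuality;   // adopt the better incumbent
            BestState = other.BestState;       // already a clone in the library's case
        }
        VisitedNodes += other.VisitedNodes;    // node counts are always accumulated
    }

    public static void Main()
    {
        var main = new WorkerResultSketch { BestQuality = 10, VisitedNodes = 100 };
        var worker = new WorkerResultSketch { BestQuality = 12, BestState = "w", VisitedNodes = 80 };
        main.Merge(worker);
        Console.WriteLine($"quality {main.BestQuality}, visited {main.VisitedNodes}"); // quality 12, visited 180
    }
}
```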