
ggml : remove OpenCL #7735

Merged
merged 1 commit into master from gg/remove-opencl on Jun 4, 2024
Conversation

ggerganov (Owner)

Superseded by Vulkan

github-actions bot added labels Jun 4, 2024: build (Compilation issues), script (Script related), nix (Issues specific to consuming flake.nix, or generally concerned with ❄ Nix-based llama.cpp deployment), examples, python (python script changes), devops (improvements to build systems and github actions), ggml (changes relating to the ggml tensor library for machine learning), SYCL (https://en.wikipedia.org/wiki/SYCL - GPU programming language), Apple Metal (https://en.wikipedia.org/wiki/Metal_(API))
github-actions bot (Contributor) commented Jun 4, 2024

📈 llama.cpp server for bench-server-baseline on Standard_NC4as_T4_v3 for phi-2-q4_0: 527 iterations 🚀

Details (for performance-related PRs only)
  • Concurrent users: 8, duration: 10m
  • HTTP request: avg=8888.59ms p(95)=21528.33ms fails=, finish reason: stop=465 truncated=62
  • Prompt processing (pp): avg=102.42tk/s p(95)=443.71tk/s
  • Token generation (tg): avg=34.38tk/s p(95)=46.81tk/s
  • ggml-org/models/phi-2/ggml-model-q4_0.gguf parallel=8 ctx-size=16384 ngl=33 batch-size=2048 ubatch-size=256 pp=1024 pp+tg=2048 branch=gg/remove-opencl commit=a0510370afcea6c2430b2232126db6f966a7fedf

[chart: llamacpp:prompt_tokens_seconds, llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 527 iterations]
[chart: llamacpp:predicted_tokens_seconds, llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 527 iterations]
[chart: llamacpp:kv_cache_usage_ratio, llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 527 iterations]
[chart: llamacpp:requests_processing, llama.cpp bench-server-baseline on Standard_NC4as_T4_v3, duration=10m, 527 iterations]

martindevans added a commit to martindevans/LLamaSharp that referenced this pull request Jun 4, 2024
ggerganov merged commit 554c247 into master Jun 4, 2024
82 checks passed
ggerganov deleted the gg/remove-opencl branch June 4, 2024 18:23
mofosyne added the Review Complexity : Medium label Jun 5, 2024
MaggotHATE (Contributor) commented Jun 7, 2024

@ggerganov FYI, Vulkan still suffers from a substantial memory allocation issue, especially with large context sizes (ones still within the model's specification), and it has been reported multiple times. Removing CLBlast before this was fixed is a really bad move: it essentially locks users with 16GB of RAM out of 7B models at good quantization, and out of larger models entirely.

Judging by this PR, the footprint of CLBlast was minimal, so removing it doesn't make sense. As much as I want it to, the current implementation of Vulkan does not supersede CLBlast.

Additionally, you didn't even announce this change as you did previously with similar ones. I understand that technically it's not a breaking change, but it's an important feature, even if it was partly broken (mostly MoE) for a long time.
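For context on the memory pressure described above, here is a rough back-of-the-envelope sketch of KV-cache sizing (standalone C++). The geometry is an assumption for illustration, a generic LLaMA-7B-style layout with 32 layers, 32 KV heads, head dimension 128, and an f16 cache; it is not taken from this PR, and GQA models need considerably less:

#include <cstdio>

int main() {
    // Assumed LLaMA-7B-like geometry; real models vary (GQA uses fewer KV heads).
    const long n_layer   = 32;
    const long n_head_kv = 32;
    const long head_dim  = 128;
    const long bytes_f16 = 2;
    const long ctx_sizes[] = {2048, 4096, 8192, 16384};
    for (long n_ctx : ctx_sizes) {
        // K and V tensors, one entry per layer per context position
        const long bytes = 2 * n_layer * n_ctx * n_head_kv * head_dim * bytes_f16;
        printf("n_ctx=%5ld -> KV cache ~%.1f GiB\n",
               n_ctx, bytes / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}

Under these assumptions the cache alone reaches ~4 GiB at 8192 context, on top of the model weights, which is where a 16GB machine gets tight if the backend also over-allocates.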

shibe2 added a commit to shibe2/llama.cpp that referenced this pull request Jun 8, 2024
Partially revert "ggml: remove OpenCL (ggerganov#7735)"
Restore functionality, skip documentation

Abhishek8394 added a commit to Abhishek8394/llama.cpp that referenced this pull request Jun 14, 2024
Sur3 commented Jun 19, 2024

With CLBlast I was able to run small models on my integrated Skylake GT2 [HD Graphics 520], using the full 8GB of RAM of my laptop as VRAM. I will test the Vulkan backend and hope it is not a regression.

Sur3 commented Jun 21, 2024

Yeah, there is no way Vulkan is an adequate replacement for OpenCL: with CLBlast my model loads almost instantly, while with Vulkan I am stuck at the loading screen.

Sur3 commented Jun 21, 2024

Also, with Vulkan I run out of memory with models that load fine with OpenCL.

Vulkan takes forever and then runs out of memory:

ggml_vulkan: Found 1 Vulkan devices:
Vulkan0: Intel(R) HD Graphics 520 (SKL GT2) (Intel open-source Mesa driver) | uma: 1 | fp16: 1 | warp size: 32
llm_load_tensors: ggml ctx size =    0,32 MiB
ggml_vulkan: Device memory allocation of size 1516179456 failed.
ggml_vulkan: vk::Device::allocateMemory: ErrorOutOfDeviceMemory
llama_model_load: error loading model: unable to allocate backend buffer
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'dolphin-2.9.2-qwen2-7b-Q6_K.gguf'
main: error: unable to load model

OpenCL loads fine instantly:

llm_load_print_meta: model ftype      = Q6_K
llm_load_print_meta: model params     = 7,62 B
llm_load_print_meta: model size       = 5,82 GiB (6,56 BPW) 
llm_load_print_meta: general.name     = dolphin-2.9.2-qwen2-7b
llm_load_print_meta: BOS token        = 11 ','
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
llm_load_tensors: ggml ctx size =    0,32 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors:        CPU buffer size =   426,36 MiB
llm_load_tensors:     OpenCL buffer size =  5532,43 MiB

The only upside of the Vulkan backend is maybe that, when a model finally loads, it uses ~90% of my GPU compared to ~80% with OpenCL, but I haven't measured yet whether this actually results in a speedup or only consumes more resources.
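For anyone debugging reports like the one above, a minimal standalone sketch (plain C++ against the Vulkan C API, not llama.cpp code) that prints each device's memory heaps. On integrated GPUs the device-local heap is often only a fraction of system RAM, and drivers also cap the size of any single allocation (maxMemoryAllocationSize); either limit can explain a failed ~1.5 GiB vk::Device::allocateMemory call:

#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app = {};
    app.sType      = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo info = {};
    info.sType            = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.pApplicationInfo = &app;

    VkInstance inst;
    if (vkCreateInstance(&info, nullptr, &inst) != VK_SUCCESS) {
        fprintf(stderr, "failed to create Vulkan instance\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(inst, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(inst, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);
        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(dev, &mem);

        printf("%s\n", props.deviceName);
        for (uint32_t i = 0; i < mem.memoryHeapCount; ++i) {
            printf("  heap %u: %.2f GiB%s\n", i,
                   mem.memoryHeaps[i].size / (1024.0 * 1024.0 * 1024.0),
                   (mem.memoryHeaps[i].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
                       ? " (device-local)" : "");
        }
        // Note: a large heap does not guarantee a large single allocation;
        // VkPhysicalDeviceMaintenance3Properties::maxMemoryAllocationSize also applies.
    }

    vkDestroyInstance(inst, nullptr);
    return 0;
}

Build with something like g++ heaps.cpp -lvulkan (assuming the Vulkan headers and loader are installed); comparing the reported heap sizes against the failing 1516179456-byte request is a quick first check.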

jetro30087 commented Jun 22, 2024

I tried Vulkan again after ignoring it for a few months, but the latest build crashes shortly after loading. OpenCL just worked no matter what hardware I threw my installation on. I'm not sure how Vulkan can supersede OpenCL when I just can't trust it to work on a lot of users' hardware.

It would be better just to say there is no unified library and tell devs they are on their own for compatibility.

Sur3 commented Jun 27, 2024

Something is seriously wrong with the Vulkan code: when using it I get random output like *(($"$&"$%***$, but without Vulkan I get normal text from the same prompt.

vt-alt commented Jul 7, 2024

CLBlast was versatile, supporting any OpenCL implementation (NVIDIA, AMD, and even CPU, for example via PoCL) through the OpenCL ICD loader. So it's a pity the support was removed.
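To illustrate the versatility described here, a minimal sketch (C++ against the standard OpenCL C API, not code from this repository): the ICD loader enumerates whichever OpenCL platforms happen to be installed, whether an NVIDIA or AMD driver, Intel's runtime, or PoCL on the CPU, so the same binary runs against any of them:

#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    // Ask the ICD loader how many platforms (vendor runtimes) are installed.
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(0, nullptr, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        fprintf(stderr, "no OpenCL platforms found\n");
        return 1;
    }

    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        char name[256] = {0};
        clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(name), name, nullptr);

        cl_uint num_devices = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &num_devices);

        printf("platform: %s (%u device(s))\n", name, num_devices);
    }
    return 0;
}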

okias commented Aug 10, 2024

Seriously, Vulkan isn't a replacement for OpenCL. Can we get this code back?

okias added a commit to okias/llama.cpp that referenced this pull request Aug 11, 2024
Manually adjusted.

This reverts commit 554c247.
This reverts commit 257f8e4.

Signed-off-by: David Heidelberg <[email protected]>