From 2535a6d5b7a846ed1c0755f4e03a094ef353f79b Mon Sep 17 00:00:00 2001 From: nihui Date: Wed, 19 Feb 2025 18:48:29 +0800 Subject: [PATCH] update doc about vulkan-sdk (#5911) --- docs/faq.en.md | 569 +++++++++--------- docs/faq.md | 16 +- docs/how-to-build/how-to-build.md | 20 +- .../use-ncnn-with-own-project.md | 14 +- 4 files changed, 282 insertions(+), 337 deletions(-) diff --git a/docs/faq.en.md b/docs/faq.en.md index 44d0068263b..5248d26ec44 100644 --- a/docs/faq.en.md +++ b/docs/faq.en.md @@ -1,293 +1,276 @@ - - -# How to join the technical Community Groups with QQ ? - -- Open QQ -> click the group chat search-> search group number 637093648, enter the answer to the question: conv conv conv conv conv → join the group chat → ready to accept the Turing test(a joke) -- Open QQ -> search Pocky group: 677104663 (lots experts), the answer to the question - -# How to watch the author's on live in Bilibili? - -- nihui:[水竹院落](https://live.bilibili.com/1264617) - -# Compilation - -- ## How to download the full source code? - - git clone --recursive https://github.com/Tencent/ncnn/ - - or - - download [ncnn-xxxxx-full-source.zip](https://github.com/Tencent/ncnn/releases) - -- ## How to cross-compile?How to set the cmake toolchain? - - See https://github.com/Tencent/ncnn/wiki/how-to-build - -- ## The submodules were not downloaded! Please update submodules with "git submodule update --init" and try again - - As above, download the full source code. 
Or follow the prompts to execute: git submodule update --init - -- ## Could NOT find Protobuf (missing: Protobuf_INCLUDE_DIR) - - sudo apt-get install libprotobuf-dev protobuf-compiler - -- ## Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY) - - https://github.com/Tencent/ncnn/issues/1873 - -- ## Could not find a package configuration file provided by "OpenCV" with any of the following names: OpenCVConfig.cmake opencv-config.cmake - - sudo apt-get install libopencv-dev - - or customized compile and install ,with set(OpenCV_DIR {the dir OpenCVConfig.cmake exist}) - -- ## Could not find a package configuration file provided by "ncnn" with any of the following names: ncnnConfig.cmake ncnn-config.cmake - - set(ncnn_DIR { the dir ncnnConfig.cmake exist}) - -- ## Vulkan not found, - - - cmake requires version >= 3.10, otherwise there is no FindVulkan.cmake - - - android-api >= 24 - - - macos has to run the install script first - -- ## How to install vulkan sdk - - - See https://www.vulkan.org/tools#download-these-essential-development-tools - - But There was a frequent problem that the project need glslang lib in ncnn not official vulkan - -- ## xxx.lib not found(be specified by system/compiler) - - undefined reference to __kmpc_for_static_init_4 __kmpc_for_static_fini __kmpc_fork_call ... - - Need to link openmp - - undefined reference to vkEnumerateInstanceExtensionProperties vkGetInstanceProcAddr vkQueueSubmit ... - - need vulkan-1.lib - - undefined reference to glslang::InitializeProcess() glslang::TShader::TShader(EShLanguage) ... - - need glslang.lib OGLCompiler.lib SPIRV.lib OSDependent.lib - - undefined reference to AAssetManager_fromJava AAssetManager_open AAsset_seek ... 
- - Add android to find_library and target_like_libraries - - find_package(ncnn) - -- ## undefined reference to typeinfo for ncnn::Layer - - opencv rtti -> opencv-mobile - -- ## undefined reference to __cpu_model - - upgrade compiler / libgcc_s libgcc - -- ## unrecognized command line option "-mavx2" - - upgrade gcc - -- ## Why is the compiled ncnn-android library so large? - - See https://github.com/Tencent/ncnn/wiki/build-for-android.zh and see How to trim smaller ncnn - -- ## ncnnoptimize and custom layer - - ncnnoptimize first before adding a custom layer to avoid ncnnoptimize not being able to handle custom layer saves. - - -- ## rtti/exceptions Conflict - - The reason for the conflict is that the libraries used in the project are configured differently, so analyze whether you need to turn them on or off according to your actual situation. ncnn is ON by default, add the following two parameters when recompiling ncnn. - - ON: -DNCNN_DISABLE_RTTI=OFF -DNCNN_DISABLE_EXCEPTION=OFF - - OFF: -DNCNN_DISABLE_RTTI=ON -DNCNN_DISABLE_EXCEPTION=ON - - -- ## error: undefined symbol: ncnn::Extractor::extract(char const*, ncnn::Mat&) - - Possible scenarios. - - Try upgrading the NDK version of Android Studio - - -# How do I add the ncnn library to my project and how does the cmake method work? - -Compile ncnn,and make install. 
linux/windows should set/export ncnn_DIR points to the directory containing ncnnConfig.cmake under the install directory - -- ## android - -- ## ios - -- ## linux - -- ## windows - -- ## macos - -- ## arm linux - - -# Convert model issues - -- ## caffe - - `./caffe2ncnn caffe.prototxt caffe.caffemodel ncnn.param ncnn.bin` - -- ## mxnet - - ` ./mxnet2ncnn mxnet-symbol.json mxnet.params ncnn.param ncnn.bin` - -- ## darknet - - [https://github.com/xiangweizeng/darknet2ncnn](https://github.com/xiangweizeng/darknet2ncnn) - -- ## pytorch - onnx - - [use ncnn with pytorch or onnx](https://github.com/Tencent/ncnn/wiki/use-ncnn-with-pytorch-or-onnx) - -- ## tensorflow 1.x/2.x - keras - - [https://github.com/MarsTechHAN/keras2ncnn](https://github.com/MarsTechHAN/keras2ncnn) **[@MarsTechHAN](https://github.com/MarsTechHAN)** - -- ## tensorflow 2.x - mlir - - [Converting tensorflow2 models to ncnn via MLIR](https://zhuanlan.zhihu.com/p/152535430) **@[nihui](https://www.zhihu.com/people/nihui-2)** - -- ## Shape not supported yet! Gather not supported yet! Cast not supported yet! - - onnx-simplifier shape - -- ## convertmodel - - [https://convertmodel.com/](https://convertmodel.com/) **[@大老师](https://github.com/daquexian)** - -- ## netron - - [https://github.com/lutzroeder/netron](https://github.com/lutzroeder/netron) - -- ## How to generate a model with fixed shape? - - Input 0=w 1=h 2=c - -- ## why gpu can speedup - -- ## How to convert ncnnoptimize to fp16 model - - `ncnnoptimize model.param model.bin yolov5s-opt.param yolov5s-opt.bin 65536` - -- ## How to use ncnnoptimize checking the FLOPS / memory usage of your model - -- ## How to modify the model to support dynamics shape? - - Interp Reshape - -- ## How to convert a model into code embedded in a program? - - use ncnn2mem - -- ## How to encrypt the model? - - See https://zhuanlan.zhihu.com/p/268327784 - -- ## The ncnn model transferred under Linux, Windows/MacOS/Android/... 
Can I use it directly? - - Yes, for all platforms - -- ## How to remove post-processing and export onnx? - - Ref: - - Referring to an article by UP , step 3 is to remove the post-processing and then export the onnx, where removing the post-processing can be the result of removing the subsequent steps when testing within the project. - -- ## pytorch layers can't export to onnx? - - Mode 1: - - ONNX_ATEN_FALLBACK -Fully customizable op, first change to one that can export (e.g. concat slice), go to ncnn and then modify param - - Way 2. - - You can try this with PNNX, see the following article for a general description: - - 1. [Windows/Linux/macOS steps for compiling PNNX](https://zhuanlan.zhihu.com/p/431833958) - - 2. [Learn in 5 minutes! Converting TorchScript models to ncnn models with PNNX](https://zhuanlan.zhihu.com/p/427512763) - -# Using - -- ## vkEnumeratePhysicalDevices failed -3 - -- ## vkCreateInstance failed -9 - - Please upgrade your GPU driver if you meet this crash or error. - Here are the download sites for some brands of GPU drivers. We have provided some driver download pages here. - [Intel](https://downloadcenter.intel.com/product/80939/Graphics-Drivers), [AMD](https://www.amd.com/en/support), [Nvidia](https://) www.nvidia.com/Download/index.aspx) - -- ## ModuleNotFoundError: No module named 'ncnn.ncnn' - - python setup.py develop - -- ## fopen nanodet-m.param failed - - path should be working dir - - File not found or not readable. Make sure that XYZ.param/XYZ.bin is accessible. - -- ## find_blob_index_by_name data / output / ... failed - - layer name vs blob name - - param.bin use xxx.id.h enum - -- ## parse magic failed - -- ## param is too old, please regenerate - - The model maybe has problems - - Your model file is being the old format converted by an old caffe2ncnn tool. - - Checkout the latest ncnn code, build it and regenerate param and model binary files, and that should work. 
- - Make sure that your param file starts with the magic number 7767517. - - you may find more info on use-ncnn-with-alexnet - - When adding the softmax layer yourself, you need to add 1=1 - -- ## set_vulkan_compute failed, network use_vulkan_compute disabled - - Set net.opt.use_vulkan_compute = true before load_param / load_model; - -- ## How to execute multiple blob inputs, multiple blob outputs? - Multiple execute `ex.input()` and `ex.extract()` like following - ``` - ex.input("data1", in_1); - ex.input("data2", in_2); - ex.extract("output1", out_1); - ex.extract("output2", out_2); - ``` -- ## Multiple executions of Extractor extract double the calculation? - - No - -- ## How to see the elapsed time for every layer? - - cmake -DNCNN_BENCHMARK=ON .. - -- ## How to convert a cv::Mat CV_8UC3 BGR image - - from_pixels to_pixels - -- ## How to convert float data to ncnn::Mat - - First of all, you need to manage the memory you request yourself, at this point ncnn::Mat will not automatically free up the float data you pass over to it - ``` c++ - std::vector testData(60, 1.0); // use std::vector to manage memory requests and releases yourself - ncnn::Mat in1 = ncnn::Mat(60, (void*)testData.data()).reshape(4, 5, 3); // just pass the pointer to the float data as a void*, and even specify the dimension (up says it's best to use reshape to solve the channel gap) - float* a = new float[60]; // New a piece of memory yourself, you need to release it later - ncnn::Mat in2 = ncnn::Mat(60, (void*)a).reshape(4, 5, 3).clone(); // use the same method as above, clone() to transfer data owner - ``` + + +# How to join the technical Community Groups with QQ ? 
+
+- Open QQ -> click the group chat search -> search group number 637093648, enter the answer to the question: conv conv conv conv conv -> join the group chat -> get ready to take the Turing test (a joke)
+- Open QQ -> search the Pocky group: 677104663 (lots of experts), and answer the join question
+
+# How to watch the author's live streams on Bilibili?
+
+- nihui:[水竹院落](https://live.bilibili.com/1264617)
+
+# Compilation
+
+- ## How to download the full source code?
+
+  git clone --recursive https://github.com/Tencent/ncnn/
+
+  or
+
+  download [ncnn-xxxxx-full-source.zip](https://github.com/Tencent/ncnn/releases)
+
+- ## How to cross-compile? How to set the cmake toolchain?
+
+  See https://github.com/Tencent/ncnn/wiki/how-to-build
+
+- ## The submodules were not downloaded! Please update submodules with "git submodule update --init" and try again
+
+  As above, download the full source code. Or follow the prompt and execute: git submodule update --init
+
+- ## Could NOT find Protobuf (missing: Protobuf_INCLUDE_DIR)
+
+  sudo apt-get install libprotobuf-dev protobuf-compiler
+
+- ## Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY)
+
+  https://github.com/Tencent/ncnn/issues/1873
+
+- ## Could not find a package configuration file provided by "OpenCV" with any of the following names: OpenCVConfig.cmake opencv-config.cmake
+
+  sudo apt-get install libopencv-dev
+
+  or compile and install it yourself, then set(OpenCV_DIR {directory containing OpenCVConfig.cmake})
+
+- ## Could not find a package configuration file provided by "ncnn" with any of the following names: ncnnConfig.cmake ncnn-config.cmake
+
+  set(ncnn_DIR {directory containing ncnnConfig.cmake})
+
+- ## xxx.lib not found (needs to be specified for your system/compiler)
+
+  undefined reference to __kmpc_for_static_init_4 __kmpc_for_static_fini __kmpc_fork_call ...
+
+  need to link openmp
+
+  undefined reference to glslang::InitializeProcess() glslang::TShader::TShader(EShLanguage) ...
+
+  need glslang.lib glslang-default-resource-limits.lib
+
+  undefined reference to AAssetManager_fromJava AAssetManager_open AAsset_seek ...
+
+  add android to find_library and target_link_libraries
+
+  find_package(ncnn)
+
+- ## undefined reference to typeinfo for ncnn::Layer
+
+  opencv rtti -> opencv-mobile
+
+- ## undefined reference to __cpu_model
+
+  upgrade compiler / libgcc_s libgcc
+
+- ## unrecognized command line option "-mavx2"
+
+  upgrade gcc
+
+- ## Why is the compiled ncnn-android library so large?
+
+  See https://github.com/Tencent/ncnn/wiki/build-for-android.zh and the section on how to trim a smaller ncnn
+
+- ## ncnnoptimize and custom layer
+
+  Run ncnnoptimize before adding the custom layer, because ncnnoptimize cannot handle saving models that contain custom layers.
+
+
+- ## rtti/exceptions Conflict
+
+  The conflict arises when the libraries used in the project are configured differently, so decide whether to turn these features on or off according to your actual situation. ncnn enables both by default; add the following parameters when recompiling ncnn.
+  - ON: -DNCNN_DISABLE_RTTI=OFF -DNCNN_DISABLE_EXCEPTION=OFF
+  - OFF: -DNCNN_DISABLE_RTTI=ON -DNCNN_DISABLE_EXCEPTION=ON
+
+
+- ## error: undefined symbol: ncnn::Extractor::extract(char const*, ncnn::Mat&)
+
+  Possible fix:
+  - try upgrading the NDK version in Android Studio
+
+
+# How do I add the ncnn library to my project and how does the cmake method work?
+
+Compile ncnn, then make install.
linux/windows should set/export ncnn_DIR to point to the directory containing ncnnConfig.cmake under the install directory
+
+- ## android
+
+- ## ios
+
+- ## linux
+
+- ## windows
+
+- ## macos
+
+- ## arm linux
+
+
+# Convert model issues
+
+- ## caffe
+
+  `./caffe2ncnn caffe.prototxt caffe.caffemodel ncnn.param ncnn.bin`
+
+- ## mxnet
+
+  `./mxnet2ncnn mxnet-symbol.json mxnet.params ncnn.param ncnn.bin`
+
+- ## darknet
+
+  [https://github.com/xiangweizeng/darknet2ncnn](https://github.com/xiangweizeng/darknet2ncnn)
+
+- ## pytorch - onnx
+
+  [use ncnn with pytorch or onnx](https://github.com/Tencent/ncnn/wiki/use-ncnn-with-pytorch-or-onnx)
+
+- ## tensorflow 1.x/2.x - keras
+
+  [https://github.com/MarsTechHAN/keras2ncnn](https://github.com/MarsTechHAN/keras2ncnn) **[@MarsTechHAN](https://github.com/MarsTechHAN)**
+
+- ## tensorflow 2.x - mlir
+
+  [Converting tensorflow2 models to ncnn via MLIR](https://zhuanlan.zhihu.com/p/152535430) **@[nihui](https://www.zhihu.com/people/nihui-2)**
+
+- ## Shape not supported yet! Gather not supported yet! Cast not supported yet!
+
+  use onnx-simplifier to fold the shape-related ops
+
+- ## convertmodel
+
+  [https://convertmodel.com/](https://convertmodel.com/) **[@大老师](https://github.com/daquexian)**
+
+- ## netron
+
+  [https://github.com/lutzroeder/netron](https://github.com/lutzroeder/netron)
+
+- ## How to generate a model with fixed shape?
+
+  Input 0=w 1=h 2=c
+
+- ## Why can the GPU speed things up?
+
+- ## How to convert a model to fp16 with ncnnoptimize
+
+  `ncnnoptimize model.param model.bin model-opt.param model-opt.bin 65536`
+
+- ## How to use ncnnoptimize to check the FLOPs / memory usage of your model
+
+- ## How to modify the model to support dynamic shape?
+
+  Interp Reshape
+
+- ## How to convert a model into code embedded in a program?
+
+  use ncnn2mem
+
+- ## How to encrypt the model?
+
+  See https://zhuanlan.zhihu.com/p/268327784
+
+- ## The ncnn model converted under Linux, Windows/MacOS/Android/...
Can I use it directly?
+
+  Yes, the same ncnn model files work on all platforms.
+
+- ## How to remove post-processing and export onnx?
+
+  Referring to the author's article, step 3 is to remove the post-processing and then export the onnx; removing the post-processing means dropping the subsequent post-processing steps when testing within the project.
+
+- ## pytorch layers can't export to onnx?
+
+  Option 1:
+
+  ONNX_ATEN_FALLBACK
+  For a fully customized op, first change it to one that can be exported (e.g. concat, slice), convert to ncnn, and then modify the param
+
+  Option 2:
+
+  You can try this with PNNX, see the following articles for a general description:
+
+  1. [Windows/Linux/macOS steps for compiling PNNX](https://zhuanlan.zhihu.com/p/431833958)
+
+  2. [Learn in 5 minutes! Converting TorchScript models to ncnn models with PNNX](https://zhuanlan.zhihu.com/p/427512763)
+
+# Using
+
+- ## vkEnumeratePhysicalDevices failed -3
+
+- ## vkCreateInstance failed -9
+
+  Please upgrade your GPU driver if you meet this crash or error.
+  Driver download pages for some GPU vendors:
+  [Intel](https://downloadcenter.intel.com/product/80939/Graphics-Drivers), [AMD](https://www.amd.com/en/support), [Nvidia](https://www.nvidia.com/Download/index.aspx)
+
+- ## ModuleNotFoundError: No module named 'ncnn.ncnn'
+
+  python setup.py develop
+
+- ## fopen nanodet-m.param failed
+
+  The path is relative to the working directory.
+
+  File not found or not readable. Make sure that XYZ.param/XYZ.bin is accessible.
+
+- ## find_blob_index_by_name data / output / ... failed
+
+  layer name vs blob name
+
+  param.bin uses the xxx.id.h enum
+
+- ## parse magic failed
+
+- ## param is too old, please regenerate
+
+  The model file may be broken, or
+
+  your model file is in the old format converted by an old caffe2ncnn tool.
+
+  Check out the latest ncnn code, build it and regenerate the param and model binary files, and that should work.
+
+  Make sure that your param file starts with the magic number 7767517.
+
+  You may find more info in use-ncnn-with-alexnet.
+
+  When adding the softmax layer yourself, you need to add 1=1
+
+- ## set_vulkan_compute failed, network use_vulkan_compute disabled
+
+  Set net.opt.use_vulkan_compute = true before load_param / load_model;
+
+- ## How to execute multiple blob inputs, multiple blob outputs?
+  Call `ex.input()` and `ex.extract()` multiple times, like the following
+  ```
+  ex.input("data1", in_1);
+  ex.input("data2", in_2);
+  ex.extract("output1", out_1);
+  ex.extract("output2", out_2);
+  ```
+- ## Multiple executions of Extractor extract double the calculation?
+
+  No
+
+- ## How to see the elapsed time for every layer?
+
+  cmake -DNCNN_BENCHMARK=ON ..
+
+- ## How to convert a cv::Mat CV_8UC3 BGR image
+
+  from_pixels to_pixels
+
+- ## How to convert float data to ncnn::Mat
+
+  You must manage the memory you allocate yourself; ncnn::Mat will not automatically free the float data you pass to it
+  ``` c++
+  std::vector<float> testData(60, 1.0f); // use std::vector to manage the allocation and release yourself
+  ncnn::Mat in1 = ncnn::Mat(60, (void*)testData.data()).reshape(4, 5, 3); // pass the pointer to the float data as void*, then reshape to set the dimensions (reshape also handles the channel alignment gap)
+  float* a = new float[60]; // memory allocated with new must be released by you later
+  ncnn::Mat in2 = ncnn::Mat(60, (void*)a).reshape(4, 5, 3).clone(); // same as above, but clone() copies the data so the Mat owns it
+  ```
diff --git a/docs/faq.md b/docs/faq.md
index da3c4e0edc0..80bf819b0b2 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -45,29 +45,15 @@ set(ncnn_DIR {ncnnConfig.cmake所在目录})
 
-- ## 找不到 Vulkan,
-  - cmake版本 3.10,否则没有带 FindVulkan.cmake
-  - android-api >= 24
-  - macos 要先执行安装脚本
-
-- ## 如何安装 vulkan sdk
-
 - ## 找不到库(需要根据系统/编译器指定)
 
   undefined reference to __kmpc_for_static_init_4 __kmpc_for_static_fini
__kmpc_fork_call ... 需要链接openmp库 - undefined reference to vkEnumerateInstanceExtensionProperties vkGetInstanceProcAddr vkQueueSubmit ... - - 需要 vulkan-1.lib - undefined reference to glslang::InitializeProcess() glslang::TShader::TShader(EShLanguage) ... - 需要 glslang.lib OGLCompiler.lib SPIRV.lib OSDependent.lib + 需要 glslang.lib glslang-default-resource-limits.lib undefined reference to AAssetManager_fromJava AAssetManager_open AAsset_seek ... diff --git a/docs/how-to-build/how-to-build.md b/docs/how-to-build/how-to-build.md index bb69aba7800..363661d3f3e 100644 --- a/docs/how-to-build/how-to-build.md +++ b/docs/how-to-build/how-to-build.md @@ -40,33 +40,20 @@ Install required build dependencies: * g++ * cmake * protocol buffer (protobuf) headers files and protobuf compiler -* glslang * (optional) LLVM OpenMP header files # If building with Clang, and multithreaded CPU inference is desired -* (optional) vulkan header files and loader library # If building with Vulkan, without simplevk * (optional) opencv # For building examples Generally if you have Intel, AMD or Nvidia GPU from last 10 years, Vulkan can be easily used. On some systems there are no Vulkan drivers easily available at the moment (October 2020), so you might need to disable use of Vulkan on them. This applies to Raspberry Pi 3 (but there is experimental open source Vulkan driver in the works, which is not ready yet). Nvidia Tegra series devices (like Nvidia Jetson) should support Vulkan. Ensure you have most recent software installed for best experience. 
-On Debian 10+, Ubuntu 20.04+, or Raspberry Pi OS, you can install all required dependencies using: +On Debian, Ubuntu, or Raspberry Pi OS, you can install all required dependencies using: ```shell -sudo apt install build-essential git cmake libprotobuf-dev protobuf-compiler libomp-dev libvulkan-dev vulkan-tools libopencv-dev -``` -On earlier Debian or Ubuntu, you can install all required dependencies using: -```shell -sudo apt install build-essential git cmake libprotobuf-dev protobuf-compiler libomp-dev libvulkan-dev vulkan-utils libopencv-dev +sudo apt install build-essential git cmake libprotobuf-dev protobuf-compiler libomp-dev libopencv-dev ``` On Redhat or Centos, you can install all required dependencies using: ```shell -sudo yum install build-essential git cmake libprotobuf-dev protobuf-compiler libvulkan-dev vulkan-utils libopencv-dev -``` -To use Vulkan backend install Vulkan header files, a vulkan driver loader, GLSL to SPIR-V compiler and vulkaninfo tool. Preferably from your distribution repositories. Alternatively download and install full Vulkan SDK (about 200MB in size; it contains all header files, documentation and prebuilt loader, as well some extra tools and source code of everything) from https://vulkan.lunarg.com/sdk/home - -```shell -wget https://sdk.lunarg.com/sdk/download/1.2.189.0/linux/vulkansdk-linux-x86_64-1.2.189.0.tar.gz?Human=true -O vulkansdk-linux-x86_64-1.2.189.0.tar.gz -tar -xf vulkansdk-linux-x86_64-1.2.189.0.tar.gz -export VULKAN_SDK=$(pwd)/1.2.189.0/x86_64 +sudo yum install build-essential git cmake libprotobuf-dev protobuf-compiler libopencv-dev ``` To use Vulkan after building ncnn later, you will also need to have Vulkan driver for your GPU. For AMD and Intel GPUs these can be found in Mesa graphics driver, which usually is installed by default on all distros (i.e. `sudo apt install mesa-vulkan-drivers` on Debian/Ubuntu). 
For Nvidia GPUs the proprietary Nvidia driver must be downloaded and installed (some distros will allow easier installation in some way). After installing Vulkan driver, confirm Vulkan libraries and driver are working, by using `vulkaninfo` or `vulkaninfo | grep deviceType`, it should list GPU device type. If there are more than one GPU installed (including the case of integrated GPU and discrete GPU, commonly found in laptops), you might need to note the order of devices to use later on. @@ -205,7 +192,6 @@ cmake -A x64 -DCMAKE_INSTALL_PREFIX=%cd%/install -Dprotobuf_BUILD_TESTS=OFF -Dpr cmake --build . --config Release -j 2 cmake --build . --config Release --target install ``` -(optional) Download and install Vulkan SDK from https://vulkan.lunarg.com/sdk/home Build ncnn library (replace `` with a proper path): diff --git a/docs/how-to-use-and-FAQ/use-ncnn-with-own-project.md b/docs/how-to-use-and-FAQ/use-ncnn-with-own-project.md index 6b29506d5b4..03b1b1ccd4b 100644 --- a/docs/how-to-use-and-FAQ/use-ncnn-with-own-project.md +++ b/docs/how-to-use-and-FAQ/use-ncnn-with-own-project.md @@ -27,22 +27,12 @@ You may also manually specify ncnn library path and including directory. 
Note th For example, on Visual Studio debug mode with vulkan required, the lib paths are: ``` E:\github\ncnn\build\vs2019-x64\install\lib\ncnnd.lib -E:\lib\VulkanSDK\1.2.148.0\Lib\vulkan-1.lib -E:\github\ncnn\build\vs2019-x64\install\lib\SPIRVd.lib E:\github\ncnn\build\vs2019-x64\install\lib\glslangd.lib -E:\github\ncnn\build\vs2019-x64\install\lib\MachineIndependentd.lib -E:\github\ncnn\build\vs2019-x64\install\lib\OGLCompilerd.lib -E:\github\ncnn\build\vs2019-x64\install\lib\OSDependentd.lib -E:\github\ncnn\build\vs2019-x64\install\lib\GenericCodeGend.lib +E:\github\ncnn\build\vs2019-x64\install\lib\glslang-default-resource-limitsd.lib ``` And for its release mode, lib paths are: ``` E:\github\ncnn\build\vs2019-x64\install\lib\ncnn.lib -E:\lib\VulkanSDK\1.2.148.0\Lib\vulkan-1.lib -E:\github\ncnn\build\vs2019-x64\install\lib\SPIRV.lib E:\github\ncnn\build\vs2019-x64\install\lib\glslang.lib -E:\github\ncnn\build\vs2019-x64\install\lib\MachineIndependent.lib -E:\github\ncnn\build\vs2019-x64\install\lib\OGLCompiler.lib -E:\github\ncnn\build\vs2019-x64\install\lib\OSDependent.lib -E:\github\ncnn\build\vs2019-x64\install\lib\GenericCodeGen.lib +E:\github\ncnn\build\vs2019-x64\install\lib\glslang-default-resource-limits.lib ```
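
The manual library lists above can usually be avoided by letting CMake resolve ncnn's dependencies through its exported config. A minimal consumer CMakeLists.txt sketch (project and target names `myapp`/`main.cpp` are placeholders, assuming ncnn was installed with `make install` as described in the docs):

```cmake
cmake_minimum_required(VERSION 3.10)
project(myapp)

# Point CMake at the ncnn install if it is not in a default location,
# e.g. cmake -Dncnn_DIR=<install-prefix>/lib/cmake/ncnn ..
find_package(ncnn REQUIRED)

add_executable(myapp main.cpp)

# The exported "ncnn" target carries include directories and the
# glslang/openmp dependencies, so no per-lib paths are needed.
target_link_libraries(myapp ncnn)
```

With this approach the debug/release glslang library names are picked automatically per configuration, instead of being spelled out by hand.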