
Commit

Update setup.py with info for the PyPi website: installation via pip
Sserpenthraxus-nv committed Aug 10, 2018
1 parent 4d71916 commit b7b8bab
Showing 2 changed files with 46 additions and 29 deletions.
34 changes: 21 additions & 13 deletions readme.md
@@ -7,40 +7,48 @@ This project is a collection of Python scripts to help work with datasets for de
*Example of a dataset frame visualized using NVDU, showing axes and 3D cuboids for annotated objects.*

## Table of Contents
- [Nvidia Dataset Utilities](#nvidia-dataset-utilities)
- [Nvidia Dataset Utilities (NVDU)](#nvidia-dataset-utilities-nvdu)
- [Table of Contents](#table-of-contents)
- [Install](#install)
- [Install from repo](#install-from-repo)
- [Install from pip:](#install-from-pip)
- [Install from source code git repo:](#install-from-source-code-git-repo)
- [nvdu_ycb](#nvdu_ycb)
- [Usage](#usage)
- [nvdu_viz](#nvdu_viz)
- [Usage](#usage-1)
- [Examples](#examples)
- [Visualize a dataset generated by NDDS](#visualize-a-standard-nvdu-dataset-directory)
- [Visualize a set of images using different annotation data](#visualize-a-set-of-images-using-a-different-annotation-data)
- [Visualize a dataset generated by NDDS:](#visualize-a-dataset-generated-by-ndds)
- [Visualize a set of images using different annotation data:](#visualize-a-set-of-images-using-different-annotation-data)
- [Controls](#controls)
- [Visualization options](#visualization-options)
- [Other](#other)
- [Visualization options:](#visualization-options)
- [Other:](#other)

# Install
## Install from repo:
> **First, install git LFS (large file storage):** https://help.github.com/articles/installing-git-large-file-storage/
**LFS Clone the repo**
## Install from pip:
`pip install nvdu`
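
A quick way to confirm the install landed correctly is to query the package metadata and run one of the console scripts. This is a minimal sketch, assuming the distribution name `nvdu` and the `nvdu_ycb` entry point declared in the setup.py changes further down this page:

```
# Post-install sanity check (sketch). Assumes "pip install nvdu" succeeded and
# that setup.py registered the "nvdu" distribution and the nvdu_ycb console script.
import subprocess
import pkg_resources  # ships with setuptools

# Report the version recorded in the installed package metadata.
print(pkg_resources.get_distribution("nvdu").version)

# The console entry point should now be on PATH; "-h" only prints usage and exits.
subprocess.run(["nvdu_ycb", "-h"], check=True)
```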

## Install from source code git repo:
**Clone the repo**

_Using ssh path:_
```
git clone ssh://[email protected]:12051/NVIDIA/Dataset_Utilities.git
```
_Using https path:_
```
git lfs clone https://github.com/NVIDIA/Dataset_Utilities.git
git clone https://github.com/NVIDIA/Dataset_Utilities.git
```
**Go inside the cloned repo**
**Go inside the cloned repo's directory**
```
cd nvdu
cd Dataset_Utilities
```

**Install locally**

`pip install -e .`

# nvdu_ycb
_nvdu_ycb_ command helps download, extract, and align the YCB 3d models used in the FAT dataset.
The _nvdu_ycb_ command helps download, extract, and align the YCB 3D models (which are used in the FAT dataset: http://research.nvidia.com/publication/2018-06_Falling-Things).
## Usage
```
usage: nvdu_ycb [-h] [-s] [-l] [ycb_object_name]
```
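
For readers curious how a usage string like this maps onto code, here is a minimal argparse sketch that reproduces the same interface. It is not the project's implementation, and the meanings given for `-s` and `-l` are assumptions made purely for illustration:

```
# Sketch only: reproduces the interface "nvdu_ycb [-h] [-s] [-l] [ycb_object_name]".
# The help texts for -s and -l are assumptions, not taken from the NVDU source.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="nvdu_ycb")
    parser.add_argument("ycb_object_name", nargs="?", default=None,
                        help="name of a single YCB object to work on (optional)")
    parser.add_argument("-s", action="store_true",
                        help="(assumed) set up / extract the downloaded models")
    parser.add_argument("-l", action="store_true",
                        help="(assumed) list the supported YCB object names")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```

Running this sketch with `-h` prints a usage line equivalent to the one shown above.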
41 changes: 25 additions & 16 deletions setup.py
@@ -7,6 +7,15 @@
from os import path
import glob

__version_info__ = (1, 0, 0, 0)

# Utility function to read the README file.
# Used for the long_description. It's nice, because now 1) we have a top level
# README file and 2) it's easier to type in the README file than to put a raw
# string in below ...
def read(fname):
    return open(os.path.join(os.path.dirname(__file__), fname)).read()

def get_all_files(find_dir):
    all_files = []
    for check_path in os.listdir(find_dir):
@@ -19,32 +28,35 @@ def get_all_files(find_dir):

_ROOT = os.path.abspath(os.path.dirname(__file__))
all_config_files = get_all_files(path.join(_ROOT, path.join('nvdu', 'config')))
print("all_config_files: {}".format(all_config_files))

__version__ = '.'.join(map(str, __version_info__))

__version__ = "0.0.2"
setup(
name = "nvdu",
version = __version__,
description = "Nvidia Dataset Utilities scripts",
long_description = "A collection of Python scripts to help working with the DeepLearning projects at Nvidia easier",
url = "https://gitlab-master.nvidia.com/thangt/nvdu",
author = "Thang To",
author_email = "[email protected]",
license = "MIT",
description = "Nvidia Dataset Utilities",
long_description = read('readme.md'),
long_description_content_type = 'text/markdown',
url = "https://github.com/NVIDIA/Dataset_Utilities",
author = "NVIDIA Corporation",
author_email = "[email protected]",
maintainer = "Thang To",
maintainer_email = "[email protected]",
license = "Creative Commons Attribution-NonCommercial-ShareAlike 4.0. https://creativecommons.org/licenses/by-nc-sa/4.0/",
# See https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers = [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Topic :: Utilities",
"License :: MIT",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3.6",
"Topic :: Utilities",
"Topic :: Software Development :: Libraries :: Python Modules",
],
keywords = "nvdu, nvidia",
packages=find_packages(),
package_data = {
'nvdu_config': all_config_files,
},
# data_files=["nvdu/data/ycb/*"],
data_files=[('nvdu_config', all_config_files)],
install_requires = [
"numpy",
"opencv-python",
@@ -62,8 +74,5 @@ def get_all_files(find_dir):
"nvdu_ycb=nvdu.tools.nvdu_ycb:main",
]
},
# cmdclass = {
# "test":
# }
scripts=[],
)
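
Viewed as a whole, the new setup.py follows a standard setuptools pattern: build the version string from a tuple, reuse the README as the PyPI long description, and expose the tools as console scripts. Below is a condensed, self-contained sketch of that pattern; the names `example_pkg` and `example-tool` are placeholders, not anything from this repository.

```
# Condensed sketch of the setuptools pattern adopted in this commit.
# "example_pkg" / "example-tool" are placeholder names, not from the NVDU repo.
import os
from setuptools import setup, find_packages

__version_info__ = (1, 0, 0, 0)
# Join the tuple into the dotted version string, e.g. "1.0.0.0".
__version__ = '.'.join(map(str, __version_info__))

def read(fname):
    # Reuse the README for the PyPI long description instead of duplicating it.
    with open(os.path.join(os.path.dirname(__file__), fname)) as f:
        return f.read()

setup(
    name="example_pkg",
    version=__version__,
    long_description=read('readme.md'),
    long_description_content_type='text/markdown',  # lets PyPI render Markdown
    packages=find_packages(),
    entry_points={
        # Each entry installs a console command bound to the named function.
        'console_scripts': [
            "example-tool=example_pkg.cli:main",
        ]
    },
)
```

One related detail visible in the diff: the commit swaps `package_data` for `data_files`, which installs the config files relative to the installation prefix rather than inside the package directory itself.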
