Releases: nanoporetech/bonito

v0.8.1

21 May 17:29

v0.7.3

12 Dec 17:21

v0.7.2

31 Jul 15:16

v0.7.1

01 Jun 13:23

Highlights

Thanks to @chAwater for their collection of bug fixes in this release.

Installation

$ pip install ont-bonito

Note: For anything other than basecaller training or method development, please use dorado.

v0.7.0

03 Apr 13:08

Installation

Torch 2.0 (from pypi.org) is now built using CUDA 11.7 so the default installation of ont-bonito can be used for Turing/Ampere GPUs.

$ pip install ont-bonito

Note: For anything other than basecaller training or method development, please use dorado.

v0.6.2

13 Nov 23:28

  • da7fe39 upgrade to pod5 0.0.41.
  • c45905c add milliseconds to start_time + convert to UTC.
  • 199a3f0 adds duration as du tag to BAM output.
  • 717f414 fix bug in fast5 read id subset pre-processing.
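The start_time change (c45905c) amounts to normalising timestamps to UTC with millisecond resolution. A minimal sketch in plain Python, assuming ISO-8601 input; this is illustrative only, not bonito's actual code:

```python
from datetime import datetime, timezone

def to_utc_millis(ts: str) -> str:
    """Parse an ISO-8601 timestamp and render it in UTC with
    millisecond resolution (illustrative; the function name and the
    treatment of naive timestamps as UTC are assumptions)."""
    dt = datetime.fromisoformat(ts)
    if dt.tzinfo is None:
        # Assume naive timestamps are already UTC.
        dt = dt.replace(tzinfo=timezone.utc)
    dt = dt.astimezone(timezone.utc)
    return dt.isoformat(timespec="milliseconds")

print(to_utc_millis("2022-11-13T23:28:00.123456+01:00"))
# 2022-11-13T22:28:00.123+00:00
```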

v0.6.1

12 Sep 15:45

Bugfixes

v0.6.0

05 Sep 13:47

Bugfixes

  • fa56de1 skip over any fast5 files that cause runtime errors.
  • f0827d9 use stderr for all model download output to avoid issues with sequence output formats.
  • 3c8294b upgraded koi with py3.7 support.

Misc

  • Python 3.10 support added.
  • Read tags added for signal scaling midpoint, dispersion and version.
  • 9f7614d support for exporting models to dorado.
  • 90b6d19 add estimated total time to basecaller progress.
  • 8ba78ed export for guppy binary weights.
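The signal scaling midpoint and dispersion tags above typically correspond to a median / median-absolute-deviation normalisation of the raw signal. A hedged sketch of that calculation; the function name, the 1.4826 consistency factor, and the normalisation step are assumptions, not bonito's actual implementation:

```python
import statistics

def med_mad(signal, factor=1.4826):
    """Return (midpoint, dispersion): the median of the signal and its
    scaled median absolute deviation. Sketch only, not bonito's code."""
    med = statistics.median(signal)
    mad = factor * statistics.median(abs(x - med) for x in signal)
    return med, mad

# Normalise a toy signal with the derived midpoint/dispersion.
sig = [2.0, 4.0, 6.0, 8.0, 10.0]
mid, disp = med_mad(sig)
norm = [(x - mid) / disp for x in sig]
```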

Installation

$ pip install ont-bonito

By default pip will install torch, which is built against CUDA 10.2. For CUDA 11.3 builds run:

$ pip install --extra-index-url https://download.pytorch.org/whl/cu113 ont-bonito

Note: packaging has been reworked and the ont-bonito-cuda111 and ont-bonito-cuda113 packages are now retired. The CUDA version of torch is now handled exclusively via pip install --extra-index-url.

v0.5.3

19 May 16:38

Bugfixes

  • 4585b74 fix for handling stitching of short reads (read < chunksize).
  • 9a4f98a fix for overly confident qscores in repeat regions.
  • 3187198 scaling protection for short reads.
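For context, basecalling splits each read's signal into fixed-size overlapping chunks and stitches the results back together, and reads shorter than one chunk need the special handling the first fix above addresses. A toy sketch of that chunk/stitch round trip; the names, padding scheme, and overlap handling are assumptions, not bonito's API:

```python
def chunk(signal, chunksize, overlap):
    """Split a signal into fixed-size chunks with overlap.
    A read shorter than chunksize yields a single zero-padded chunk,
    the short-read case mentioned above. Sketch only."""
    if len(signal) <= chunksize:
        return [signal + [0] * (chunksize - len(signal))]
    stride = chunksize - overlap
    chunks = [signal[i:i + chunksize]
              for i in range(0, len(signal) - overlap, stride)]
    # Pad the final chunk to full length.
    chunks[-1] = chunks[-1] + [0] * (chunksize - len(chunks[-1]))
    return chunks

def stitch(chunks, overlap, total_len):
    """Reassemble chunks, trimming half the overlap at each internal
    boundary, then cut back to the original length."""
    if len(chunks) == 1:
        return chunks[0][:total_len]
    half = overlap // 2
    out = chunks[0][:len(chunks[0]) - half]
    for c in chunks[1:-1]:
        out += c[half:len(c) - half]
    out += chunks[-1][half:]
    return out[:total_len]
```

Round-tripping `stitch(chunk(sig, size, ov), ov, len(sig))` recovers the original signal for both long and short inputs.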

Misc

  • d57a658 training validation times improved.

Installation

The default version of PyTorch on PyPI supports Volta and below (SM70) and can be installed like so -

$ pip install ont-bonito

For newer GPUs (Turing, Ampere) please use -

$ pip install -f https://download.pytorch.org/whl/torch_stable.html ont-bonito-cuda113

v0.5.1

11 Feb 16:04

Highlights

  • There is no longer a requirement for a CUDA toolkit on the target system, which significantly improves the ease of installation.
  • BAM spec 0.0.2 (+move table, numbers of samples, trimming information).

Features

  • 241e622 record the move table into the SAM/BAM.
  • a6a3ed2 ont-koi replaces seqdist + cupy.

Bugfixes

  • c8417b7 handle datetimes with subsecond resolution.
  • 6f23467 fix the mappy preset.
  • 737d9a2 better management of mappy's memory usage.
  • 2bbd711 remora 0.1.2 - fixes bonito/remora hanging #216.
  • 6e91a9d sensible minimum scaling factor - fixes #209.

Misc

  • Upgrade to the latest Mappy.
  • Python 3.6 support dropped (EOL).

Installation

The default version of PyTorch on PyPI supports Volta and below (SM70) and can be installed like so -

$ pip install ont-bonito

For newer GPUs (Turing, Ampere) please use -

$ pip install -f https://download.pytorch.org/whl/torch_stable.html ont-bonito-cuda113