20 changes: 17 additions & 3 deletions doc/get_started/install_sorters.rst
@@ -284,7 +284,7 @@ working not only at peak times but at all times, recovering more spikes close to

pip install hdbscan
pip install spikeinterface
pip install numba (or conda install numba as recommended by conda authors)


Tridesclous2
@@ -294,11 +294,25 @@ This is an upgraded version of Tridesclous, natively written in SpikeInterface.


* Python
* Requires: HDBSCAN and Numba
* Requires: Numba
* Authors: Samuel Garcia
* Installation::

pip install hdbscan
pip install spikeinterface
pip install numba


Lupin
^^^^^

This is a representative spike sorting pipeline, natively written in SpikeInterface.


* Python
* Requires: Numba
* Authors: Samuel Garcia & Pierre Yger
* Installation::

pip install spikeinterface
pip install numba
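
After installing, a quick sanity check can confirm that the requirements are importable before running the internal sorters. This is a hypothetical helper, not part of SpikeInterface:

```python
import importlib.util

def check_requirements(modules=("spikeinterface", "numba")):
    """Return the required modules that cannot be imported."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

missing = check_requirements()
if missing:
    print("Missing requirements:", ", ".join(missing))
else:
    print("All internal-sorter requirements are installed.")
```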

2 changes: 1 addition & 1 deletion doc/how_to/analyze_neuropixels.rst
@@ -516,7 +516,7 @@ pipeline, in SpikeInterface this is dead-simple: one function.
- most sorters are wrapped from external tools (kilosort,
kilosort2.5, spykingcircus, mountainsort4 …) that often also need
other requirements (e.g., MATLAB, CUDA)
- some sorters are internally developed (spyekingcircus2)
- some sorters are internally developed (spykingcircus2, tridesclous2, lupin)
- external sorters can be run inside a container (docker, singularity)
WITHOUT pre-installation

36 changes: 34 additions & 2 deletions doc/modules/sorters.rst
@@ -546,8 +546,9 @@ Internal sorters
In 2022, we started the :py:mod:`spikeinterface.sortingcomponents` module to break a sorting pipeline into components.
These components can be assembled to create a new sorter. We already have three sorters to showcase this new module:

* :code:`spykingcircus2` (experimental, but ready to be tested)
* :code:`tridesclous2` (experimental, not ready to be used)
* :code:`spykingcircus2`
* :code:`tridesclous2`
* :code:`lupin`

There are some benefits of using these sorters:

* they directly handle SpikeInterface objects, so they do not need any data copy.
@@ -560,6 +561,37 @@ From the user's perspective, they behave exactly like the external sorters:

sorting = run_sorter(sorter_name="spykingcircus2", recording=recording, folder="/tmp/folder")

These sorters are based on the :py:mod:`spikeinterface.sortingcomponents` module, allowing fast and modular
implementations of the various algorithms often encountered in spike sorting.

SpyKING-CIRCUS 2
^^^^^^^^^^^^^^^^

This is an updated version of SpyKING-CIRCUS \cite{yger2018spike} based on the modular
components. In summary, this spike sorting pipeline uses (when motion is present) the DREDGE motion
correction algorithm before filtering and whitening the data. On these whitened data, the chain of
components used is: matched filtering for peak detection, iterative splits for clustering
(Iter-HDBSCAN), and orthogonal matching pursuit for template reconstruction (Circus-OMP).

Reviewer comment (Member): Citations should be [Yger]_ and added to the reference page: https://spikeinterface--4275.org.readthedocs.build/en/4275/references.html
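
The SpyKING-CIRCUS 2 chain can be summarized as ordered data. This is illustrative only: the stage and component names mirror the prose description, not the actual `spikeinterface.sortingcomponents` API identifiers.

```python
# Illustrative summary of the SpyKING-CIRCUS 2 pipeline described in the text.
# Names follow the prose, not real API identifiers.
SC2_PIPELINE = [
    ("motion_correction", "DREDGE"),  # only applied when motion is present
    ("preprocessing", "filter + whiten"),
    ("peak_detection", "matched filtering"),
    ("clustering", "iterative splits (Iter-HDBSCAN)"),
    ("template_reconstruction", "orthogonal matching pursuit (Circus-OMP)"),
]

for stage, component in SC2_PIPELINE:
    print(f"{stage:>24} -> {component}")
```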


TriDesClous 2
^^^^^^^^^^^^^

This is an updated version of TriDesClous based on the modular components. In summary,
the code uses (when motion is present) the DREDGE motion correction algorithm before filtering the
data. On these filtered data, the chain of components used is: locally exclusive for peak detection,
iterative splits for clustering (Iter-ISOPLIT), and a fast greedy partial deconvolution applied only
at peak times for template reconstruction (TDC-peeler).

Lupin
^^^^^

This is a representative spike sorting pipeline, natively written in SpikeInterface. In summary,
the code uses (when motion is present) the DREDGE motion correction algorithm before filtering and
whitening the data. On these whitened data, the chain of components used is: matched filtering for
peak detection, iterative splits for clustering (Iter-ISOPLIT), and augmented matching pursuit for
spike deconvolution (Wobble).
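
Seen side by side, the three internal sorters share most of their chain and differ mainly in preprocessing, clustering, and the final reconstruction step. The table below is illustrative data only, with names taken from the descriptions above rather than from the API:

```python
# Illustrative comparison of the three internal pipelines described in the text.
# All three optionally run DREDGE motion correction first.
PIPELINES = {
    "spykingcircus2": {
        "preprocessing": "filter + whiten",
        "peak_detection": "matched filtering",
        "clustering": "Iter-HDBSCAN",
        "reconstruction": "Circus-OMP",
    },
    "tridesclous2": {
        "preprocessing": "filter",
        "peak_detection": "locally exclusive",
        "clustering": "Iter-ISOPLIT",
        "reconstruction": "TDC-peeler",
    },
    "lupin": {
        "preprocessing": "filter + whiten",
        "peak_detection": "matched filtering",
        "clustering": "Iter-ISOPLIT",
        "reconstruction": "Wobble",
    },
}

for name, stages in PIPELINES.items():
    print(f"{name:>15}: " + " -> ".join(stages.values()))
```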


Read more in the :ref:`sorting-components-module` docs.

Contributing
28 changes: 28 additions & 0 deletions src/spikeinterface/sorters/internal/lupin.py
@@ -125,6 +125,34 @@ def _run_from_folder(cls, sorter_output_folder, params, verbose):

apply_cmr = num_chans >= 32

if verbose:
version = cls.get_sorter_version()
lupin_ascii_art = f"""
.::----::..
..:-+#%*+#%@@@@@@@+..:+#+.
.+@%%@@@@@@@@@@@@@@#: .=@=
-@@@@@@@@@@@@@@@@@@%-. .=@*
.@@@@@@@@@@@@@@@@@@@*. .-%#
%@@@@@@@@@@@@@@@@@@*. :#%
=%%@@@@@@@@@@@@@@%%+: .+@.
-%#@@@@@@@@@@@@@@%*== .-@.
:#%@@@@@@@@@@@@@@%#+# .=@.
.+%@@@@@@@@@@@@@@@%#@. .+@.
.+*#@@@@@@@@@@@@@@@%@. .+@:
.=%+@@@@@@@@@@@@@@@@*. =%-.
:#%##@@@@@@@@@@@@%%-. :%+.
..:=**%@@@@@@@@@@@@@@@:. .%%......
.+#@%%%@**@%@@@@@@@@@@@*....=%@%@@@@@%=..
.*@@@@@@@@@@@=#@@@@@@@@@@*=:.:.#@@@@@@%%-.
.-%%@@@@@@@@@@@@@#+%@+-::-+=:=::+@@@@@@##-.
.:%@@@@@@@@@@@@@@@@@%+%##*%%%**%@@@@@@##+.
.+@@@@@@@@@@@@@@@@#***++***@@@@@@@@%%%=.
.:*@@@@@@@@%=-..... ....:-*%%%%%+-.
..:-:..
LUPIN version {version}
"""
print(lupin_ascii_art)

# preprocessing
if params["apply_preprocessing"]:
if params["apply_motion_correction"]: