Starling Class (ST)
- class starling.starling.ST(adata, dist_option='T', singlet_prop=0.6, model_cell_size=True, cell_size_col_name='area', model_zplane_overlap=True, model_regularizer=1.0, learning_rate=0.001)[source]
Bases: LightningModule
The STARLING module
- Parameters:
  - adata (AnnData) – The sample to be analyzed, with clusters and annotations from starling.utility.init_clustering()
  - dist_option (str) – The distribution to use, one of 'T' for Student-T (df=2) or 'N' for Normal (Gaussian). Defaults to 'T'
  - singlet_prop (float) – The anticipated proportion of segmentation-error-free cells
  - model_cell_size (bool) – Whether STARLING should incorporate cell size in the model
  - cell_size_col_name (str) – The column name in AnnData (anndata.obs). Required only if model_cell_size is True, otherwise ignored.
  - model_zplane_overlap (bool) – If cell size is modelled, whether STARLING should model z-plane overlap
  - model_regularizer (float) – Regularizer term imposed on the synthetic doublet loss (BCE)
  - learning_rate (float) – Learning rate of the Adam optimizer for STARLING
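For orientation, a minimal construction sketch. The init_clustering arguments shown ("KM" and k=10) and the input path are illustrative assumptions, not fixed requirements:

```python
import anndata
from starling import starling, utility

# Hypothetical input; any AnnData with expression values in .X and a
# cell-size column in .obs (here "area") follows the same pattern.
adata = anndata.read_h5ad("sample.h5ad")

# Initial clusters/annotations required by ST ("KM" = k-means; k is illustrative).
adata = utility.init_clustering("KM", adata, k=10)

st = starling.ST(
    adata,
    dist_option="T",           # Student-T with df=2
    model_cell_size=True,      # incorporate cell size...
    cell_size_col_name="area"  # ...read from adata.obs["area"]
)
```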
- configure_optimizers()[source]
Configure the Adam optimizer.
- Return type:
  Adam
- Returns:
  The optimizer
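This hook is called by Lightning during train_and_fit() rather than by users directly. Conceptually it reduces to the following sketch, assuming the default learning_rate of 0.001:

```python
import torch

# What configure_optimizers() returns, in spirit: an Adam optimizer
# over the module's parameters at the configured learning rate.
optimizer = torch.optim.Adam(st.parameters(), lr=0.001)
```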
- forward(batch)[source]
The module’s forward pass
- Parameters:
  - batch (list[Tensor]) – A list of tensors
- Return type:
  tuple[Tensor, Tensor, Tensor]
- Returns:
  Negative log loss, binary cross-entropy loss, singlet probability
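Like configure_optimizers(), this method is normally driven by Lightning's training loop. A hand-rolled call would look roughly like the sketch below; the batch layout (a list of float tensors, here 32 cells by 20 markers) is purely an assumption for illustration, since real batches come from STARLING's internal data pipeline:

```python
import torch

# Hypothetical batch shapes; the true composition and ordering of the
# tensors is defined by STARLING's training dataloader.
batch = [torch.randn(32, 20), torch.randn(32, 20), torch.rand(32)]
nll, bce, p_singlet = st(batch)  # the three documented return tensors
```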
- result(threshold=0.5)[source]
Retrieve the results and add them to self.adata
- Parameters:
  - threshold (float) – Minimum threshold for singlet probability
- Return type:
  AnnData
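After training completes, a short retrieval sketch; the exact .obs columns added vary by STARLING version, so they are inspected rather than assumed, and the output path is hypothetical:

```python
adata_out = st.result(threshold=0.5)  # annotations are also stored on st.adata
print(adata_out.obs.columns)          # inspect the added singlet/doublet annotations
adata_out.write_h5ad("starling_result.h5ad")
```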
- train_and_fit(*, accelerator='auto', strategy='auto', devices='auto', num_nodes=1, precision=None, logger=True, callbacks=None, fast_dev_run=False, max_epochs=100, min_epochs=None, max_steps=-1, min_steps=None, max_time=None, limit_train_batches=None, limit_val_batches=None, limit_test_batches=None, limit_predict_batches=None, overfit_batches=0.0, val_check_interval=None, check_val_every_n_epoch=1, num_sanity_val_steps=None, log_every_n_steps=None, enable_checkpointing=None, enable_progress_bar=None, enable_model_summary=None, accumulate_grad_batches=1, gradient_clip_val=None, gradient_clip_algorithm=None, deterministic=True, benchmark=None, inference_mode=True, use_distributed_sampler=True, profiler=None, detect_anomaly=False, barebones=False, plugins=None, sync_batchnorm=False, reload_dataloaders_every_n_epochs=0, default_root_dir=None)[source]
Train the model using Lightning's Trainer. Parameter annotations (with defaults altered as needed) are taken from https://lightning.ai/docs/pytorch/stable/_modules/lightning/pytorch/trainer/trainer.html#Trainer.__init__
- Parameters:
  - accelerator (Union[str, Accelerator]) – Supports passing different accelerator types ("cpu", "gpu", "tpu", "ipu", "hpu", "mps", "auto") as well as custom accelerator instances. Defaults to "auto".
  - strategy (Union[str, Strategy]) – Supports different training strategies with aliases as well as custom strategies. Defaults to "auto".
  - devices (Union[List[int], str, int]) – The devices to use. Can be set to a positive number (int or str), a sequence of device indices (list or str), the value -1 to indicate all available devices should be used, or "auto" for automatic selection based on the chosen accelerator. Defaults to "auto".
  - num_nodes (int) – Number of GPU nodes for distributed training. Defaults to 1.
  - precision (Union[Literal[64, 32, 16], Literal['transformer-engine', 'transformer-engine-float16', '16-true', '16-mixed', 'bf16-true', 'bf16-mixed', '32-true', '64-true'], Literal['64', '32', '16', 'bf16'], None]) – Double precision (64, '64' or '64-true'), full precision (32, '32' or '32-true'), 16-bit mixed precision (16, '16', '16-mixed') or bfloat16 mixed precision ('bf16', 'bf16-mixed'). Can be used on CPU, GPU, TPUs, HPUs or IPUs. Defaults to '32-true'.
  - logger (Union[Logger, Iterable[Logger], bool, None]) – Logger (or iterable collection of loggers) for experiment tracking. A True value uses the default TensorBoardLogger if it is installed, otherwise CSVLogger. False will disable logging. If multiple loggers are provided, local files (checkpoints, profiler traces, etc.) are saved in the log_dir of the first logger. Defaults to True.
  - callbacks (Union[List[Callback], Callback, None]) – Add a callback or list of callbacks. Defaults to None.
  - fast_dev_run (Union[int, bool]) – Runs n batch(es) of train, val and test if set to n (int), or 1 batch if set to True, to find any bugs (i.e. a sort of unit test). Defaults to False.
  - max_epochs (Optional[int]) – Stop training once this number of epochs is reached. Defaults to 100. To enable infinite training, set max_epochs = -1.
  - min_epochs (Optional[int]) – Force training for at least this many epochs. Disabled by default (None).
  - max_steps (int) – Stop training after this number of steps. Disabled by default (-1). If max_steps = -1 and max_epochs = None, will default to max_epochs = 1000. To enable infinite training, set max_epochs to -1.
  - min_steps (Optional[int]) – Force training for at least this number of steps. Disabled by default (None).
  - max_time (Union[str, timedelta, Dict[str, int], None]) – Stop training after this amount of time has passed. Disabled by default (None). The time duration can be specified in the format DD:HH:MM:SS (days, hours, minutes, seconds), as a datetime.timedelta, or as a dictionary with keys that will be passed to datetime.timedelta.
  - limit_train_batches (Union[int, float, None]) – How much of the training dataset to check (float = fraction, int = num_batches). Defaults to 1.0.
  - limit_val_batches (Union[int, float, None]) – How much of the validation dataset to check (float = fraction, int = num_batches). Defaults to 1.0.
  - limit_test_batches (Union[int, float, None]) – How much of the test dataset to check (float = fraction, int = num_batches). Defaults to 1.0.
  - limit_predict_batches (Union[int, float, None]) – How much of the prediction dataset to check (float = fraction, int = num_batches). Defaults to 1.0.
  - overfit_batches (Union[int, float]) – Overfit a fraction of training/validation data (float) or a set number of batches (int). Defaults to 0.0.
  - val_check_interval (Union[int, float, None]) – How often to check the validation set. Pass a float in the range [0.0, 1.0] to check after a fraction of the training epoch. Pass an int to check after a fixed number of training batches. An int value can only be higher than the number of training batches when check_val_every_n_epoch=None, which validates after every N training batches across epochs or during iteration-based training. Defaults to 1.0.
  - check_val_every_n_epoch (Optional[int]) – Perform a validation loop after every N training epochs. If None, validation will be done solely based on the number of training batches, requiring val_check_interval to be an integer value. Defaults to 1.
  - num_sanity_val_steps (Optional[int]) – Sanity check runs n validation batches before starting the training routine. Set it to -1 to run all batches in all validation dataloaders. Defaults to 2.
  - log_every_n_steps (Optional[int]) – How often to log within steps. Defaults to 50.
  - enable_checkpointing (Optional[bool]) – If True, enable checkpointing. It will configure a default ModelCheckpoint callback if there is no user-defined ModelCheckpoint in lightning.pytorch.trainer.trainer.Trainer.callbacks. Defaults to True.
  - enable_progress_bar (Optional[bool]) – Whether to enable the progress bar by default. Defaults to True.
  - enable_model_summary (Optional[bool]) – Whether to enable model summarization by default. Defaults to True.
  - accumulate_grad_batches (int) – Accumulates gradients over k batches before stepping the optimizer. Defaults to 1.
  - gradient_clip_val (Union[int, float, None]) – The value at which to clip gradients. Passing gradient_clip_val=None disables gradient clipping. If using Automatic Mixed Precision (AMP), the gradients will be unscaled before clipping. Defaults to None.
  - gradient_clip_algorithm (Optional[str]) – The gradient clipping algorithm to use. Pass gradient_clip_algorithm="value" to clip by value, and gradient_clip_algorithm="norm" to clip by norm. By default it will be set to "norm".
  - deterministic (Union[bool, Literal['warn'], None]) – If True, sets whether PyTorch operations must use deterministic algorithms. Set to "warn" to use deterministic algorithms whenever possible, throwing warnings on operations that don't support deterministic mode. Lightning's own default is False; here it defaults to True.
  - benchmark (Optional[bool]) – The value (True or False) to set torch.backends.cudnn.benchmark to. The value for torch.backends.cudnn.benchmark set in the current session will be used (False if not manually set). If deterministic is set to True, this will default to False. Override to manually set a different value. Defaults to None.
  - inference_mode (bool) – Whether to use torch.inference_mode or torch.no_grad during evaluation (validate/test/predict).
  - use_distributed_sampler (bool) – Whether to wrap the DataLoader's sampler with torch.utils.data.DistributedSampler. If not specified this is toggled automatically for strategies that require it. By default, it will add shuffle=True for the train sampler and shuffle=False for validation/test/predict samplers. If you want to disable this logic, you can pass False and add your own distributed sampler in the dataloader hooks. If True and a distributed sampler was already added, Lightning will not replace the existing one. For iterable-style datasets, this is not done automatically.
  - profiler (Union[Profiler, str, None]) – To profile individual steps during training and assist in identifying bottlenecks. Defaults to None.
  - detect_anomaly (bool) – Enable anomaly detection for the autograd engine. Defaults to False.
  - barebones (bool) – Whether to run in "barebones mode", where all features that may impact raw speed are disabled. This is meant for analyzing the Trainer overhead and is discouraged during regular training runs.
  - plugins (Union[Precision, ClusterEnvironment, CheckpointIO, LayerSync, List[Union[Precision, ClusterEnvironment, CheckpointIO, LayerSync]], None]) – Plugins allow modification of core behavior like ddp and amp, and enable custom lightning plugins. Defaults to None.
  - sync_batchnorm (bool) – Synchronize batch norm layers between process groups/whole world. Defaults to False.
  - reload_dataloaders_every_n_epochs (int) – Set to a positive integer to reload dataloaders every n epochs. Defaults to 0.
  - default_root_dir (Union[str, Path, None]) – Default path for logs and weights when no logger/ckpt_callback is passed. Defaults to os.getcwd(). Can be a remote file path such as s3://mybucket/path or hdfs://path/.
- Return type:
  None
- Raises:
  - TypeError – If gradient_clip_val is not an int or float.
  - MisconfigurationException – If gradient_clip_algorithm is invalid.
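Putting the pieces together, a representative training call; the keyword values below are illustrative choices, not recommendations:

```python
st.train_and_fit(
    accelerator="auto",         # pick CPU/GPU/etc. automatically
    max_epochs=50,              # shorter run than the default 100
    log_every_n_steps=10,
    enable_checkpointing=False  # skip writing checkpoint files
)
result = st.result(threshold=0.5)  # retrieve annotations once trained
```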