API Reference
This section provides an auto-generated API reference for the ibbi package.
ibbi
Main initialization file for the ibbi package.
This file serves as the primary entry point for the ibbi library. It exposes the most important high-level functions and classes, making them directly accessible to the user under the ibbi namespace. This includes the core model creation factory (`create_model`), the main workflow classes (`Evaluator`, `Explainer`), and key utility functions for accessing datasets and managing the cache.
The goal of this top-level `__init__.py` is to provide a clean and intuitive API, simplifying the user experience by abstracting away the underlying module structure.
`ModelType = TypeVar('ModelType', YOLOSingleClassBeetleDetector, RTDETRSingleClassBeetleDetector, YOLOBeetleMultiClassDetector, RTDETRBeetleMultiClassDetector, GroundingDINOModel, YOLOWorldModel, UntrainedFeatureExtractor, HuggingFaceFeatureExtractor)` (module attribute)
A generic TypeVar for representing any of the model wrapper classes in the ibbi package.
This is used for type hinting in functions and methods that can accept or return any of the available model types, providing flexibility while maintaining static type safety.
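To illustrate how a constrained TypeVar preserves concrete types, here is a minimal, self-contained sketch; the two classes below are empty placeholders standing in for the real ibbi model wrappers:

```python
from typing import TypeVar

# Placeholder stand-ins for two of the ibbi model wrapper classes.
class YOLOSingleClassBeetleDetector: ...
class HuggingFaceFeatureExtractor: ...

ModelType = TypeVar(
    "ModelType", YOLOSingleClassBeetleDetector, HuggingFaceFeatureExtractor
)

def warm_up(model: ModelType) -> ModelType:
    # A function annotated with ModelType returns the same concrete class
    # it was given, so static checkers keep the specific model type.
    return model

detector = warm_up(YOLOSingleClassBeetleDetector())
```

Because the TypeVar is constrained rather than bound, a checker flags any argument that is not one of the listed wrapper classes, while callers still get back the exact type they passed in.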
Evaluator
A unified evaluator for assessing IBBI models on various tasks.
This class provides a streamlined interface for evaluating the performance of models on tasks such as classification, object detection, and embedding quality. It handles the boilerplate code for iterating through datasets, making predictions, and calculating a comprehensive suite of metrics.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model` | `ModelType` | An instantiated model from the `create_model` factory. | required |
Source code in src\ibbi\evaluate\__init__.py
__init__(model)
Initializes the Evaluator with a specific model.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model` | `ModelType` | The model to be evaluated. This should be an instance of one of the classes covered by the `ModelType` TypeVar. | required |
Source code in src\ibbi\evaluate\__init__.py
classification(dataset, predict_kwargs=None, **kwargs)
Runs a full classification performance analysis.
This method evaluates the model's ability to correctly classify objects in a dataset. It iterates through the provided dataset, makes predictions using the model, and then compares these predictions against the ground truth labels to compute a suite of classification metrics.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset` | | A dataset object that is iterable and contains items with 'image' and 'objects' keys. The 'objects' key should be a dictionary containing a 'category' key, which is a list of labels. | required |
`predict_kwargs` | `Optional[dict[str, Any]]` | A dictionary of keyword arguments passed to the model's `predict` method. | `None` |
`**kwargs` | | Additional keyword arguments passed through to the underlying metric computation. | `{}` |

Returns:

Name | Type | Description |
---|---|---|
`dict` | | A dictionary containing a comprehensive set of classification metrics, including accuracy, precision, recall, F1-score, and a confusion matrix. Returns an empty dictionary if the model is not suitable for classification or if the dataset is not properly formatted. |
Source code in src\ibbi\evaluate\__init__.py
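A minimal sketch of the item structure `classification` expects, built from the field names in the description above (the image value here is only a string placeholder for a real PIL image):

```python
# One dataset item: an 'image' plus an 'objects' dict whose 'category'
# list holds one ground-truth label per annotated object.
item = {
    "image": "<PIL.Image.Image placeholder>",
    "objects": {
        "bbox": [[12.0, 34.0, 56.0, 78.0]],  # also used by object_detection()
        "category": [2],
    },
}

dataset = [item]  # any iterable of such items fits the documented contract

# Flatten the ground-truth labels the way an evaluator would consume them.
labels = [label for row in dataset for label in row["objects"]["category"]]
```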
embeddings(dataset, use_umap=True, extract_kwargs=None, **kwargs)
Evaluates the quality of the model's feature embeddings.
This method extracts feature embeddings from the provided dataset using the model's `extract_features` method. It then uses the `EmbeddingEvaluator` to compute a variety of metrics that assess the quality of these embeddings, such as clustering performance and correlation with ground truth labels.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset` | | An iterable dataset where each item contains an 'image' key. | required |
`use_umap` | `bool` | Whether to use UMAP for dimensionality reduction before clustering. Defaults to True. | `True` |
`extract_kwargs` | `Optional[dict[str, Any]]` | A dictionary of keyword arguments passed to the model's `extract_features` method. | `None` |
`**kwargs` | | Additional keyword arguments passed through to the `EmbeddingEvaluator`. | `{}` |

Returns:

Name | Type | Description |
---|---|---|
`dict` | | A dictionary containing the results of the embedding evaluation, including internal and external cluster validation metrics, and optionally a Mantel test correlation. Returns an empty dictionary if no valid embeddings can be extracted. |
Source code in src\ibbi\evaluate\__init__.py
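The embedding metrics above ultimately rest on distances between feature vectors. As a pure-Python sketch (not part of the ibbi API), cosine similarity between two toy embeddings shows the intuition: vectors for similar inputs should point in similar directions:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine of the angle between two embedding vectors:
    # 1.0 for identical directions, near 0.0 for unrelated ones.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings: the first pair points the same way, the second does not.
same_species = cosine_similarity([0.9, 0.1, 0.0], [0.8, 0.2, 0.0])
different = cosine_similarity([0.9, 0.1, 0.0], [0.0, 0.1, 0.9])
```

Clustering-based metrics reward embeddings where `same_species`-style pairs score consistently higher than `different`-style pairs.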
object_detection(dataset, iou_thresholds=0.5, predict_kwargs=None)
Runs a mean Average Precision (mAP) object detection analysis.
This method assesses the model's ability to accurately localize objects within an image. It processes a dataset to extract both ground truth and predicted bounding boxes, then computes the mean Average Precision (mAP) at specified Intersection over Union (IoU) thresholds.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset` | | A dataset object that is iterable and contains items with 'image' and 'objects' keys. The 'objects' key should be a dictionary with 'bbox' and 'category' keys. | required |
`iou_thresholds` | `Union[float, list[float]]` | The IoU threshold(s) at which to compute mAP. Can be a single float or a list of floats. Defaults to 0.5. | `0.5` |
`predict_kwargs` | `Optional[dict[str, Any]]` | A dictionary of keyword arguments passed to the model's `predict` method. | `None` |

Returns:

Name | Type | Description |
---|---|---|
`dict` | | A dictionary containing object detection performance metrics, including mAP scores. Returns an empty dictionary if the model is not suitable for object detection or if the dataset is not properly formatted. |
Source code in src\ibbi\evaluate\__init__.py
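The IoU thresholds above refer to Intersection over Union, the standard overlap measure between a predicted and a ground-truth box. A minimal reference implementation (an illustrative sketch, not ibbi's internal code):

```python
def iou(box_a: tuple, box_b: tuple) -> float:
    # Boxes as (x1, y1, x2, y2). IoU = intersection area / union area.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

score = iou((0, 0, 10, 10), (5, 0, 15, 10))  # half-overlapping boxes -> 1/3
```

At the default threshold of 0.5, this half-overlapping pair (IoU = 1/3) would not count as a correct detection; passing a list such as `[0.5, 0.75]` scores the model at several strictness levels at once.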
Explainer
A wrapper for LIME and SHAP explainability methods.
This class provides a simple interface to generate model explanations using either LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations). It is designed to work with any model created using `ibbi.create_model`.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model` | `ModelType` | An instantiated model from `ibbi.create_model`. | required |
Source code in src\ibbi\explain\__init__.py
__init__(model)
A wrapper for LIME and SHAP explainability methods.
This class provides a simple interface to generate model explanations using either LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations). It is designed to work with any model created using `ibbi.create_model`.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model` | `ModelType` | An instantiated model from `ibbi.create_model`. | required |
Source code in src\ibbi\explain\__init__.py
with_lime(image, **kwargs)
Generates a LIME explanation for a single image.
LIME provides a local, intuitive explanation by showing which parts of an image contributed most to a specific prediction. This method is a wrapper around the `explain_with_lime` function.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`image` | `Image` | The single image to be explained. | required |
`**kwargs` | | Additional keyword arguments passed to the underlying `explain_with_lime` function. | `{}` |

Returns:

Type | Description |
---|---|
`tuple[lime_image.ImageExplanation, PIL.Image.Image]` | A tuple containing the LIME explanation object and the original image. The explanation object can be visualized using `plot_lime_explanation`. |
Source code in src\ibbi\explain\__init__.py
with_shap(explain_dataset, background_dataset, **kwargs)
Generates SHAP explanations for a set of images.
SHAP (SHapley Additive exPlanations) provides robust, theoretically grounded explanations by attributing a model's prediction to its input features. This method is a wrapper around the `explain_with_shap` function and requires a background dataset to integrate out features.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`explain_dataset` | `list` | A list of dictionaries, where each dictionary represents an image to be explained (e.g., `{"image": ...}`). | required |
`background_dataset` | `list` | A list of dictionaries representing a background dataset, used by SHAP to simulate feature absence. | required |
`**kwargs` | | Additional keyword arguments passed to the underlying `explain_with_shap` function. | `{}` |

Returns:

Type | Description |
---|---|
`shap.Explanation` | A SHAP Explanation object containing the SHAP values for each image and each class. This object can be visualized using `plot_shap_explanation`. |
Source code in src\ibbi\explain\__init__.py
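A sketch of the input shape `with_shap` expects: two plain lists of `{"image": ...}` dictionaries. The string values below are placeholders for real PIL images:

```python
# Images to explain, and a (typically larger) background set that SHAP
# samples from to simulate the absence of features.
explain_dataset = [{"image": "<beetle photo 1>"}, {"image": "<beetle photo 2>"}]
background_dataset = [{"image": f"<background photo {i}>"} for i in range(8)]
```

In practice the background list would come from `get_shap_background_dataset()`, which returns dictionaries in this same format.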
clean_cache()
Removes the entire ibbi cache directory.
This function will permanently delete all downloaded models and datasets associated with the ibbi package's cache. This can be useful for forcing a fresh download of all assets or for freeing up disk space.
Source code in src\ibbi\utils\cache.py
create_model(model_name, pretrained=False, **kwargs)
Creates a model from a name or a task-based alias.
This function is the main entry point for instantiating models within the ibbi package. It uses a model registry to look up and create a model instance based on the provided `model_name`. Users can either specify the exact name of a model or use a convenient, task-based alias (e.g., "species_classifier").
When `pretrained=True`, the function will download the model's weights from the Hugging Face Hub and cache them locally for future use.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`model_name` | `str` | The name or alias of the model to create. A list of available model names and aliases can be obtained using `list_models()`. | required |
`pretrained` | `bool` | If True, loads pretrained weights for the model. Defaults to False. | `False` |
`**kwargs` | `Any` | Additional keyword arguments that will be passed to the underlying model's factory function. This allows for advanced customization. | `{}` |

Returns:

Name | Type | Description |
---|---|---|
`ModelType` | `ModelType` | An instantiated model object ready for prediction or feature extraction. |

Raises:

Type | Description |
---|---|
`KeyError` | If the provided `model_name` is not found in the model registry. |
Source code in src\ibbi\__init__.py
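The registry lookup described above can be pictured as a small dict mapping names and task aliases to factory functions. This is an illustrative sketch only; the real registry, its entries, and its factories live inside ibbi and are not reproduced here:

```python
# Illustrative factory: stands in for a real model constructor.
def make_species_classifier(pretrained: bool = False, **kwargs):
    return {"name": "species_classifier", "pretrained": pretrained, **kwargs}

MODEL_REGISTRY = {"species_classifier": make_species_classifier}

def create_model(model_name: str, pretrained: bool = False, **kwargs):
    try:
        factory = MODEL_REGISTRY[model_name]
    except KeyError:
        # Mirrors the documented behaviour: unknown names raise KeyError.
        raise KeyError(f"Unknown model name: {model_name!r}")
    return factory(pretrained=pretrained, **kwargs)

model = create_model("species_classifier", pretrained=True)
```

Extra keyword arguments flow straight through `create_model` into the factory, which is what makes the documented "advanced customization" possible without changing the factory signature.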
get_cache_dir()
Gets the cache directory for the ibbi package.
This function determines the appropriate directory for storing cached files, such as downloaded model weights and datasets. It first checks for a custom path set by the `IBBI_CACHE_DIR` environment variable. If the variable is not set, it defaults to a standard user cache location (`~/.cache/ibbi`).
The function also ensures that the cache directory exists, creating it if it does not already.
Returns:

Name | Type | Description |
---|---|---|
`Path` | `Path` | A `Path` object pointing to the ibbi cache directory. |
Source code in src\ibbi\utils\cache.py
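The documented lookup order can be sketched as a small pure function (an assumption-labeled reimplementation for illustration; directory creation, which `get_cache_dir` also performs, is omitted):

```python
from pathlib import Path

def resolve_cache_dir(env: dict[str, str]) -> Path:
    # Documented precedence: a custom IBBI_CACHE_DIR wins,
    # otherwise fall back to ~/.cache/ibbi.
    custom = env.get("IBBI_CACHE_DIR")
    return Path(custom) if custom else Path.home() / ".cache" / "ibbi"

default_dir = resolve_cache_dir({})
custom_dir = resolve_cache_dir({"IBBI_CACHE_DIR": "/tmp/my_ibbi_cache"})
```

Passing the environment as a dict keeps the sketch testable; the real function reads `os.environ` directly.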
get_dataset(repo_id='IBBI-bio/ibbi_test_data', local_dir='ibbi_test_data', split='train', **kwargs)
Downloads and loads a dataset from the Hugging Face Hub.
This function facilitates the use of datasets hosted on the Hugging Face Hub by handling the download and caching process. It downloads the dataset to a local directory, and on subsequent calls, it will load the data directly from the local cache to save time and bandwidth.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`repo_id` | `str` | The repository ID of the dataset on the Hugging Face Hub. Defaults to "IBBI-bio/ibbi_test_data". | `'IBBI-bio/ibbi_test_data'` |
`local_dir` | `str` | The name of the local directory where the dataset will be stored. Defaults to "ibbi_test_data". | `'ibbi_test_data'` |
`split` | `str` | The name of the dataset split to load (e.g., "train", "test", "validation"). Defaults to "train". | `'train'` |
`**kwargs` | | Additional keyword arguments passed through to the underlying dataset-loading call. | `{}` |

Returns:

Name | Type | Description |
---|---|---|
`Dataset` | `Dataset` | The loaded dataset as a `Dataset` object. |

Raises:

Type | Description |
---|---|
`TypeError` | If the object loaded for the specified split is not of type `Dataset`. |
Source code in src\ibbi\utils\data.py
get_shap_background_dataset(image_size=(224, 224))
Downloads, unzips, and loads the default IBBI SHAP background dataset.
This function is specifically designed to fetch the background dataset required for the SHAP (SHapley Additive exPlanations) explainability method. It handles the download of a zip archive from the Hugging Face Hub, extracts its contents, and loads the images into memory. The data is stored in the package's central cache directory to avoid re-downloads.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`image_size` | `tuple[int, int]` | The target size (width, height) to which the background images will be resized. This should match the input size expected by the model being explained. Defaults to (224, 224). | `(224, 224)` |

Returns:

Type | Description |
---|---|
`list[dict]` | A list of dictionaries, where each dictionary has an "image" key with a resized PIL Image object. This format is ready to be used as the background dataset for SHAP explanations. |
Source code in src\ibbi\utils\data.py
list_models(as_df=False)
Displays or returns a summary of available models and their key information.
This function reads the model summary CSV file included with the package, which contains a comprehensive list of all available models, their tasks, and key performance metrics. It can either print this information to the console in a human-readable format or return it as a pandas DataFrame for programmatic access.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`as_df` | `bool` | If True, the function returns the model information as a pandas DataFrame. If False (the default), it prints the information directly to the console. | `False` |

Returns:

Type | Description |
---|---|
`pd.DataFrame or None` | If `as_df` is True, a pandas DataFrame of model information; otherwise None. |
Source code in src\ibbi\utils\info.py
plot_lime_explanation(explanation, image, top_k=1, alpha=0.6)
Plots a detailed LIME explanation with a red-to-green overlay.
This function visualizes the output of `explain_with_lime`. It overlays the original image with a heatmap where green areas indicate features that positively contributed to the prediction, and red areas indicate negative contributions.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`explanation` | `ImageExplanation` | The explanation object generated by `explain_with_lime`. | required |
`image` | `Image` | The original image that was explained. | required |
`top_k` | `int` | The number of top classes to display explanations for. Defaults to 1. | `1` |
`alpha` | `float` | The transparency of the color overlay. Defaults to 0.6. | `0.6` |
Source code in src\ibbi\explain\lime.py
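The `alpha` parameter controls standard alpha compositing: each overlay pixel is blended as `alpha * overlay + (1 - alpha) * original`. A one-pixel sketch of that arithmetic (illustrative only, not ibbi's plotting code):

```python
def blend(original: tuple, overlay: tuple, alpha: float = 0.6) -> tuple:
    # Per-channel alpha compositing, as used for heatmap overlays.
    return tuple(
        round(alpha * o + (1 - alpha) * p) for p, o in zip(original, overlay)
    )

# A grey pixel under a pure-green overlay at the default alpha.
pixel = blend((100, 100, 100), (0, 255, 0), alpha=0.6)
```

Raising `alpha` toward 1.0 makes the red/green heatmap dominate; lowering it lets more of the original beetle image show through.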
plot_shap_explanation(shap_explanation_for_single_image, model, top_k=5, text_prompt=None)
Plots SHAP explanations for a SINGLE image.
This function is designed to visualize the output of `explain_with_shap` for one image. It uses SHAP's built-in image plotting capabilities to show which parts of the image contributed to the model's predictions for the top-k classes.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`shap_explanation_for_single_image` | `Explanation` | A SHAP Explanation object for a single image. | required |
`model` | `ModelType` | The model that was explained. | required |
`top_k` | `int` | The number of top class explanations to plot. Defaults to 5. | `5` |
`text_prompt` | `Optional[str]` | The text prompt used for explaining a zero-shot model. Defaults to None. | `None` |
|
Source code in src\ibbi\explain\shap.py