Models Module¶
The bayes_hdc.models module provides classification and learning algorithms.
CentroidClassifier¶
- class bayes_hdc.models.CentroidClassifier(prototypes, num_classes, dimensions, vsa_model_name='map')[source]¶
Bases: object
Centroid-based classifier for HDC.
Stores one prototype hypervector per class. Classification finds the most similar prototype to the query.
- prototypes: Array¶
- static create(num_classes, dimensions=10000, vsa_model='map', initial_prototypes=None, key=None)[source]¶
Create a centroid classifier.
- Parameters:
- Return type:
- similarity(query)[source]¶
Compute similarity between query and all class prototypes.
- Parameters:
query (Array)
- Return type:
Array
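For a MAP-style model the natural choice of similarity is cosine similarity against each prototype row; the docstring above does not pin down the metric, so the following numpy sketch (function name illustrative) only shows the shape of the computation:

```python
import numpy as np

def cosine_similarity(query, prototypes):
    # Cosine similarity between one query HV and each class prototype row.
    q = query / np.linalg.norm(query)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return p @ q  # shape (num_classes,)

prototypes = np.array([[1.0, 0.0], [0.0, 1.0]])
sims = cosine_similarity(np.array([0.9, 0.1]), prototypes)
# sims[0] > sims[1]: the query lies closest to class 0's prototype
```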
- predict(queries)[source]¶
Predict class labels for queries.
- Parameters:
queries (Array) – Shape (batch_size, dimensions) or (dimensions,)
- Returns:
Predicted class indices
- Return type:
Array
- predict_proba(queries)[source]¶
Predict class probabilities using softmax of similarities.
- Parameters:
queries (Array)
- Return type:
Array
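Since predict_proba is described as a softmax over the similarity scores, the mapping from similarities to probabilities can be sketched as:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: shift by the max before exponentiating.
    e = np.exp(x - x.max())
    return e / e.sum()

sims = np.array([0.9, 0.1, -0.2])   # similarities to three class prototypes
probs = softmax(sims)
# probs sums to 1 and preserves the ranking of the raw similarities
```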
- fit(train_hvs, train_labels)[source]¶
Train classifier by computing class centroids.
- Parameters:
train_hvs (Array) – Training hypervectors of shape (n_samples, dimensions)
train_labels (Array) – Training labels of shape (n_samples,)
- Returns:
Trained CentroidClassifier (new instance)
- Return type:
CentroidClassifier
- update_online(sample_hv, label, learning_rate=0.1)[source]¶
Update classifier online with a single sample.
- Parameters:
- Return type:
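The exact update rule is not spelled out above; one plausible sketch is a convex blend of the old prototype and the new sample, weighted by learning_rate (the library's actual rule may differ, e.g. it may also re-normalise):

```python
import numpy as np

def update_online(prototypes, sample_hv, label, learning_rate=0.1):
    # Hypothetical rule: blend the sample into its class prototype.
    new = prototypes.copy()
    new[label] = (1.0 - learning_rate) * new[label] + learning_rate * sample_hv
    return new

protos = np.zeros((3, 4))
protos = update_online(protos, np.ones(4), label=1)
# only row 1 changed; it moved 10% of the way toward the sample
```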
- score(test_hvs, test_labels)[source]¶
Compute accuracy on test data.
- Parameters:
test_hvs (Array)
test_labels (Array)
- Return type:
Array
Example:
from bayes_hdc import MAP, CentroidClassifier
import jax
import jax.numpy as jnp

model = MAP.create(dimensions=10000)
key = jax.random.PRNGKey(42)
# Split the key so each random draw uses an independent subkey
k_train, k_labels, k_test, k_test_labels = jax.random.split(key, 4)

# Create classifier
classifier = CentroidClassifier.create(
    num_classes=10,
    dimensions=10000,
    vsa_model=model
)

# Train on random hypervectors (stand-ins for encoded data)
train_hvs = model.random(k_train, (100, 10000))
train_labels = jax.random.randint(k_labels, (100,), 0, 10)
classifier = classifier.fit(train_hvs, train_labels)

# Predict
test_hvs = model.random(k_test, (20, 10000))
predictions = classifier.predict(test_hvs)

# Evaluate
test_labels = jax.random.randint(k_test_labels, (20,), 0, 10)
accuracy = classifier.score(test_hvs, test_labels)
LVQClassifier¶
- class bayes_hdc.models.LVQClassifier(prototypes, num_classes, dimensions, vsa_model_name='map')[source]¶
Bases: object
Learning Vector Quantization classifier.
Prototypes are updated with a winner-take-all rule: the winning prototype moves toward the sample when the prediction is correct, and away from it when wrong.
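A classical LVQ1 step matching that description, sketched in numpy (the library's actual similarity metric and any normalisation step may differ):

```python
import numpy as np

def lvq1_step(prototypes, sample, true_label, lr=0.1):
    # Winner-take-all: find the nearest prototype by dot-product similarity,
    # then attract it toward the sample if it is the right class, repel if not.
    winner = int(np.argmax(prototypes @ sample))
    sign = 1.0 if winner == true_label else -1.0
    new = prototypes.copy()
    new[winner] = new[winner] + sign * lr * (sample - new[winner])
    return new

protos = np.array([[1.0, 0.0], [0.0, 1.0]])
moved = lvq1_step(protos, np.array([0.9, 0.1]), true_label=0)
# the winning class-0 prototype moved toward the sample
```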
- prototypes: Array¶
- static create(num_classes, dimensions=10000, vsa_model='map', key=None)[source]¶
- Parameters:
- Return type:
- predict(queries)[source]¶
Predict class labels by nearest prototype.
- Parameters:
queries (Array)
- Return type:
Array
- fit(train_hvs, train_labels, epochs=10, lr=0.1)[source]¶
Train with LVQ updates (winner-take-all, move toward/away).
- Parameters:
- Return type:
- score(test_hvs, test_labels)[source]¶
- Parameters:
test_hvs (Array)
test_labels (Array)
- Return type:
Array
RegularizedLSClassifier¶
- class bayes_hdc.models.RegularizedLSClassifier(weights, dimensions, num_classes, reg)[source]¶
Bases: object
Regularized Least Squares classifier in hypervector space.
Solves the ridge-regression objective \(\min_W \|XW - Y\|_F^2 + \lambda \|W\|_F^2\) in closed form. Automatically selects primal or dual form based on \(n\) vs. \(d\):
Primal (when \(n \geq d\)): \(W = (X^\top X + \lambda I_d)^{-1} X^\top Y\). Conditioning is governed by the \(d \times d\) feature-covariance matrix.
Dual (when \(n < d\), i.e. the typical HDC regime of high-dimensional hypervectors and modest training sets): \(W = X^\top (X X^\top + \lambda I_n)^{-1} Y\). Conditioning is governed by the \(n \times n\) Gram matrix, which is numerically well behaved when \(d \gg n\) and avoids the rank deficiency that undermines the primal form on small datasets.
The two forms are mathematically equivalent when both are well posed; only the conditioning differs.
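The underlying identity \((X^\top X + \lambda I_d)^{-1} X^\top = X^\top (X X^\top + \lambda I_n)^{-1}\) is easy to check numerically; a small numpy demonstration in the \(n < d\) regime:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, c, lam = 20, 50, 3, 1e-2      # n < d: the typical HDC regime
X = rng.normal(size=(n, d))         # training hypervectors
Y = rng.normal(size=(n, c))         # class targets (random stand-ins here)

# Primal form: solve a (d x d) system
W_primal = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
# Dual form: solve an (n x n) system
W_dual = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), Y)

assert np.allclose(W_primal, W_dual)  # identical up to floating-point error
```

With \(\lambda > 0\) both systems are invertible, so the assertion holds; only the cost and conditioning of the two solves differ.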
- weights: Array¶
- fit(train_hvs, train_labels)[source]¶
Fit by solving regularised least squares.
Uses whichever of the primal (d×d) or dual (n×n) formulation conditions better given the training-set size vs dimensionality.
- Parameters:
train_hvs (Array)
train_labels (Array)
- Return type:
- score(test_hvs, test_labels)[source]¶
- Parameters:
test_hvs (Array)
test_labels (Array)
- Return type:
Array
AdaptiveHDC¶
- class bayes_hdc.models.AdaptiveHDC(prototypes, num_updates, num_classes, dimensions, vsa_model_name='map')[source]¶
Bases: object
Adaptive HDC classifier with iterative prototype refinement.
- Parameters:
- prototypes: Array¶
- num_updates: Array¶
- static create(num_classes, dimensions=10000, vsa_model='map', key=None)[source]¶
- Parameters:
- Return type:
- fit(train_hvs, train_labels, epochs=1, learning_rate=0.1)[source]¶
Train with iterative prototype refinement.
Initialises each class prototype with the unit-normalised class mean, then makes epochs passes over the training set; on each misclassification the true-class prototype is moved toward the sample by learning_rate and re-normalised. This is a single-sided LVQ-style update: only the true-class prototype moves, and correctly classified samples leave the prototypes unchanged.
- Parameters:
- Return type:
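The refinement loop described above can be sketched in numpy (assumed details: dot-product winner selection and per-sample updates):

```python
import numpy as np

def refine(prototypes, hvs, labels, epochs=1, lr=0.1):
    # Single-sided update: only on a misclassification, pull the TRUE class's
    # prototype toward the offending sample, then re-normalise it.
    protos = prototypes.copy()
    for _ in range(epochs):
        for hv, y in zip(hvs, labels):
            pred = int(np.argmax(protos @ hv))
            if pred != y:
                protos[y] = protos[y] + lr * hv
                protos[y] /= np.linalg.norm(protos[y])
    return protos

protos = np.array([[1.0, 0.0], [0.0, 1.0]])
protos = refine(protos, np.array([[0.6, 0.8]]), np.array([0]), epochs=10)
# after refinement, the sample is won back by its true class 0
```

Repeated passes rotate the true-class prototype toward persistently misclassified samples until they are classified correctly, after which the update stops firing.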
- score(test_hvs, test_labels)[source]¶
Compute accuracy.
- Parameters:
test_hvs (Array)
test_labels (Array)
- Return type:
Array
Example:
from bayes_hdc import MAP, AdaptiveHDC
import jax

model = MAP.create(dimensions=10000)
key = jax.random.PRNGKey(0)
k_hvs, k_labels = jax.random.split(key)
train_hvs = model.random(k_hvs, (100, 10000))
train_labels = jax.random.randint(k_labels, (100,), 0, 10)

classifier = AdaptiveHDC.create(
    num_classes=10,
    dimensions=10000,
    vsa_model=model
)

# Iterative training
classifier = classifier.fit(
    train_hvs,
    train_labels,
    epochs=10,
    learning_rate=0.1
)