Weak Supervision Baselines¶
This section provides comprehensive documentation for weak supervision baseline algorithms implemented in the framework. Each algorithm includes detailed pseudocode, implementation notes, and evaluation results.
Note
All algorithms follow the universal baseline comparison framework structure with standardized training, evaluation, and hyperparameter tuning capabilities.
Current Algorithms¶
LoL (Losses over Labels)
A weak supervision method that learns from noisy labels by modeling label noise through a losses-over-labels approach.
Key Features:

- Label noise modeling
- Gradient-based optimization
- Multiple method variants (LoL, LoL_simple)
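The framework's exact loss construction lives in the algorithm module and its dedicated documentation page; as a rough illustration of the losses-over-labels idea (closest in spirit to the LoL_simple variant), the sketch below accumulates one cross-entropy term per weak labeler over the examples it covers, instead of first collapsing the votes into a single pseudo-label. The function name, tensor layout, and use of PyTorch are assumptions, not the framework's API.

```python
import torch
import torch.nn.functional as F

def lol_simple_loss(logits: torch.Tensor, weak_votes: torch.Tensor) -> torch.Tensor:
    """Hypothetical losses-over-labels objective.

    logits:     (n, k) float model outputs for n examples and k classes.
    weak_votes: (n, m) long tensor of class votes from m weak labelers, -1 = abstain.
    """
    total = logits.new_zeros(())
    active = 0
    for j in range(weak_votes.shape[1]):
        covered = weak_votes[:, j] >= 0            # examples this labeler votes on
        if covered.any():
            # one loss term per labeler, rather than one aggregated pseudo-label
            total = total + F.cross_entropy(logits[covered], weak_votes[covered, j])
            active += 1
    return total / max(active, 1)                  # average over labelers that fired
```

The full LoL variant additionally exploits gradient information from the weak labelers (the "gradient-based optimization" feature above); that part is omitted from this sketch.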
LPWS (Label Propagation with Weak Supervision)
Graph-based label propagation enhanced with weak supervision signals for semi-supervised learning tasks.
Key Features:

- Graph-based propagation
- Weak labeler integration
- Multiple propagation variants
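LPWS's propagation details are documented on its own page; the sketch below shows only the generic label propagation iteration such a method builds on, seeded with label distributions produced by the weak labelers. Variable names and the normalization choice are assumptions for illustration.

```python
import numpy as np

def propagate_weak_labels(W: np.ndarray, Y: np.ndarray,
                          alpha: float = 0.9, n_iters: int = 50) -> np.ndarray:
    """Hypothetical propagation step.

    W: (n, n) symmetric affinity (graph) matrix.
    Y: (n, k) initial label distributions from weak labelers (zero rows = uncovered).
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt                      # symmetric normalization
    F = Y.astype(float).copy()
    for _ in range(n_iters):
        F = alpha * (S @ F) + (1.0 - alpha) * Y          # propagate, keep weak signal
    return F / np.maximum(F.sum(axis=1, keepdims=True), 1e-12)  # row-normalize
```

Predicted labels for initially uncovered examples can then be read off as the argmax of the propagated rows.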
Usage Examples¶
All algorithms can be executed using the standard command-line interface:
# Evaluate an algorithm
python bin/lol.py --data youtube --mode eval --output results/lol_youtube
# Hyperparameter tuning
python bin/lol.py --data youtube --mode tune --output results/lol_tune --n-trials 50
# Using custom configuration
python bin/lol.py --data youtube --config config/lol_custom.toml --mode eval
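The flags in these commands suggest a shared argument surface across the `bin/*.py` scripts. The sketch below mirrors only the flags shown above; the real scripts may define additional options, and the defaults here are assumptions.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Argument surface implied by the usage examples (illustrative only)."""
    parser = argparse.ArgumentParser(description="Run a weak supervision baseline")
    parser.add_argument("--data", required=True, help="dataset name, e.g. youtube")
    parser.add_argument("--mode", choices=["eval", "tune"], default="eval",
                        help="evaluate with fixed settings or run hyperparameter tuning")
    parser.add_argument("--output", help="directory for results")
    parser.add_argument("--config", help="optional TOML file overriding the default config")
    parser.add_argument("--n-trials", type=int, default=50,
                        help="number of tuning trials (tune mode)")
    return parser
```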
Algorithm Comparison¶
| Algorithm | Type | Key Approach | Complexity |
| --- | --- | --- | --- |
| LoL | Noise Modeling | Losses over labels | \(O(nke)\), where n = samples, k = classes, e = epochs |
| LPWS | Graph Propagation | Label propagation + weak supervision | |
Misc.¶
Algorithm Documentation Structure
Each algorithm documentation follows a standardized academic structure:
1. Overview/Introduction - Algorithm motivation and high-level description
2. Pseudocode - Detailed algorithmic steps in standard format
3. Implementation Details - Framework-specific implementation notes
4. Evaluation Results - Performance metrics and experimental analysis
5. Presentation Slides - Beamer presentation about this work
6. References - Academic paper citations and links
Contributing New Algorithms
To add a new algorithm to the baseline collection:
1. Create Algorithm Module in `src/[algorithm_name]/`
2. Implement BaseTrainer Interface following the framework pattern (a sketch follows this list)
3. Add Executable Script in `bin/[algorithm_name].py`
4. Create Documentation using the template structure
5. Update This Index to include the new algorithm
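The BaseTrainer interface itself is defined by the framework and not reproduced in this index, so the skeleton below uses a stand-in base class; the method names (train, evaluate), the metrics dictionary, and the module name are assumptions meant only to show the general shape of step 2.

```python
from abc import ABC, abstractmethod

class BaseTrainer(ABC):
    """Stand-in for the framework's BaseTrainer; the real interface may differ."""

    @abstractmethod
    def train(self, dataset) -> None: ...

    @abstractmethod
    def evaluate(self, dataset) -> dict: ...

class MyAlgorithmTrainer(BaseTrainer):
    """Skeleton for a new module under src/my_algorithm/ (hypothetical name)."""

    def __init__(self, config: dict):
        self.config = config

    def train(self, dataset) -> None:
        # fit the model on the weakly labeled training split
        raise NotImplementedError

    def evaluate(self, dataset) -> dict:
        # return standardized metrics, e.g. {"accuracy": ..., "f1": ...}
        raise NotImplementedError
```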