============================
Weak Supervision Baselines
============================

This section provides comprehensive documentation for the weak supervision baseline algorithms implemented in the framework. Each algorithm includes detailed pseudocode, implementation notes, and evaluation results.

.. note::

   All algorithms follow the universal baseline comparison framework structure, with standardized training, evaluation, and hyperparameter tuning capabilities.

Current Algorithms
==================

.. grid:: 1 2 2 2
   :gutter: 4
   :margin: 0
   :padding: 3 4 0 0

   .. grid-item-card::
      :class-card: intro-card
      :shadow: md
      :link: lol
      :link-type: doc

      LoL (Losses over Labels)
      ^^^^^^^^^^^^^^^^^^^^^^^^

      A weak supervision method that learns from noisy labels by modeling label noise through a losses-over-labels approach.

      **Key Features:**

      * Label noise modeling
      * Gradient-based optimization
      * Multiple method variants (LoL, LoL_simple)

      +++
      :bdg-primary:`Algorithm` :bdg-secondary:`Pseudocode` :bdg-info:`Results`

   .. grid-item-card::
      :class-card: intro-card
      :shadow: md
      :link: lpws
      :link-type: doc

      LPWS (Label Propagation with Weak Supervision)
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

      Graph-based label propagation enhanced with weak supervision signals for semi-supervised learning tasks.

      **Key Features:**

      * Graph-based propagation
      * Weak labeler integration
      * Multiple propagation variants

      +++
      :bdg-primary:`Algorithm` :bdg-secondary:`Pseudocode` :bdg-info:`Results`

Usage Examples
==============

All algorithms can be executed through the standard command-line interface:

.. code-block:: bash

   # Evaluate an algorithm
   python bin/lol.py --data youtube --mode eval --output results/lol_youtube

   # Hyperparameter tuning
   python bin/lol.py --data youtube --mode tune --output results/lol_tune --n-trials 50

   # Use a custom configuration
   python bin/lol.py --data youtube --config config/lol_custom.toml --mode eval

.. toctree::
   :maxdepth: 1
   :caption: Algorithm Documentation:
   :hidden:

   lol
   lpws

Algorithm Comparison
====================

.. csv-table:: Algorithm Comparison Overview
   :header: "Algorithm", "Type", "Key Approach", "Complexity"
   :widths: 25, 25, 25, 25

   "LoL", "Noise Modeling", "Losses over labels", ":math:`O(nke)`, where *n* = samples, *k* = classes, *e* = epochs"
   "LPWS", "Graph Propagation", "Label propagation + weak supervision", ""

Misc.
=====

.. dropdown:: Algorithm Documentation Structure
   :icon: file-directory

   Each algorithm's documentation follows a standardized academic structure:

   1. **Overview/Introduction** - Algorithm motivation and high-level description
   2. **Pseudocode** - Detailed algorithmic steps in standard format
   3. **Implementation Details** - Framework-specific implementation notes
   4. **Evaluation Results** - Performance metrics and experimental analysis
   5. **Presentation Slides** - Beamer presentation about this work
   6. **References** - Citations of and links to the underlying academic papers

.. dropdown:: Contributing New Algorithms
   :icon: plus-circle

   To add a new algorithm to the baseline collection:

   1. **Create an Algorithm Module** in ``src/[algorithm_name]/``
   2. **Implement the BaseTrainer Interface** following the framework pattern
   3. **Add an Executable Script** in ``bin/[algorithm_name].py``
   4. **Create Documentation** using the template structure
   5. **Update This Index** to include the new algorithm
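The contribution steps above can be sketched in minimal form. Note that the ``BaseTrainer`` method names (``fit``/``evaluate``), the dataset shape, and the ``MajorityVoteTrainer`` example baseline below are illustrative assumptions, not the framework's confirmed API:

.. code-block:: python

   # Hypothetical sketch of a new baseline implementing an assumed
   # BaseTrainer interface; actual method names may differ in the framework.

   class BaseTrainer:
       """Stand-in for the framework's trainer interface (assumed)."""

       def fit(self, dataset):
           raise NotImplementedError

       def evaluate(self, dataset):
           raise NotImplementedError


   class MajorityVoteTrainer(BaseTrainer):
       """Toy baseline: predict each sample's majority weak label."""

       def fit(self, dataset):
           # Nothing to learn for a majority vote; return self for chaining.
           return self

       def evaluate(self, dataset):
           # dataset: iterable of (weak_label_votes, gold_label) pairs.
           correct = total = 0
           for votes, gold in dataset:
               # Majority vote over weak labelers; ties go to the smallest label.
               pred = max(sorted(set(votes)), key=votes.count)
               correct += int(pred == gold)
               total += 1
           return {"accuracy": correct / max(total, 1)}


   if __name__ == "__main__":
       data = [([1, 1, 0], 1), ([0, 0, 1], 0), ([1, 0, 0], 1)]
       print(MajorityVoteTrainer().fit(data).evaluate(data))

Such a trainer would then be wired up to an executable script under ``bin/`` so it picks up the shared ``--data``/``--mode``/``--output`` command-line handling shown in the usage examples.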