Module 02: Minimum-Cycle Prediction

Goal: for each node, predict the length of the shortest cycle passing through that node.
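
A minimal reference sketch of what this target means (not necessarily how this repo generates its labels), assuming networkx is available: for each neighbor u of a node v, drop the edge (v, u), find the shortest alternative path v -> u, and close it with the dropped edge to get a cycle through v.

import networkx as nx

def min_cycle_length_through(G: nx.Graph, v) -> float:
    """Length of the shortest cycle passing through node v (inf if v is on no cycle)."""
    best = float("inf")
    for u in list(G.neighbors(v)):
        G.remove_edge(v, u)
        try:
            # Shortest alternative route from v back to u, closed by the dropped edge.
            best = min(best, nx.shortest_path_length(G, source=v, target=u) + 1)
        except nx.NetworkXNoPath:
            pass
        finally:
            G.add_edge(v, u)
    return best

if __name__ == "__main__":
    G = nx.cycle_graph(5)  # pentagon 0-1-2-3-4
    G.add_edge(0, 2)       # chord creating triangle 0-1-2 and square 0-2-3-4
    print([min_cycle_length_through(G, v) for v in G.nodes])  # [3, 3, 3, 4, 4]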

Why This Module Is Important

Predicting the shortest cycle through a node is substantially harder than predicting degree and better reflects whether a model can capture nontrivial graph structure. This module is a stress test of representational expressivity: it checks whether the network can go beyond local counting and reason about cycle context.

It is also practically useful for identifying nodes in dense feedback-like regions of a graph. That makes it a good bridge task between pure benchmarking and meaningful structural analysis.

How It Was Trained

uv run python -m ai.min_cycle.train --model gcn --name v1 --epochs 5000
uv run python -m ai.min_cycle.train --model sage --name v1 --epochs 5000
uv run python -m ai.min_cycle.train --model gin --name v1 --epochs 5000
uv run python -m ai.min_cycle.train --model loopy --name r3_v1 --r 3 --epochs 5000
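
To replay all four runs in one go, a hypothetical wrapper (not part of the repo) could shell out to the same commands, assuming uv is on PATH:

import subprocess

RUNS = [
    ["--model", "gcn", "--name", "v1", "--epochs", "5000"],
    ["--model", "sage", "--name", "v1", "--epochs", "5000"],
    ["--model", "gin", "--name", "v1", "--epochs", "5000"],
    ["--model", "loopy", "--name", "r3_v1", "--r", "3", "--epochs", "5000"],
]

for extra in RUNS:
    cmd = ["uv", "run", "python", "-m", "ai.min_cycle.train", *extra]
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # stop the sweep if any run fails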

Saved Results

Source: ai/trained/min_cycle/*/info.json.

Model         Accuracy (%)     MAE     MSE   Best Epoch
loopy_r3_v1          85.14  0.2258  0.0510         1750
gcn_v1               43.35  1.0887  1.1852          100
gin_v1               38.10  1.4728  2.1691         3100
sage_v1              23.67  2.8893  8.3479           50
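
The table above can be regenerated from the saved runs. The sketch below assumes a particular info.json layout; the key names ("accuracy", "mae", "mse", "best_epoch") are guesses, not confirmed by this document.

import glob
import json
import os

rows = []
for path in sorted(glob.glob("ai/trained/min_cycle/*/info.json")):
    with open(path) as f:
        info = json.load(f)
    name = os.path.basename(os.path.dirname(path))  # run name, e.g. loopy_r3_v1
    rows.append((name, info["accuracy"], info["mae"], info["mse"], info["best_epoch"]))

# Print best-first by accuracy, in the same column order as the table.
rows.sort(key=lambda r: r[1], reverse=True)
for name, acc, mae, mse, epoch in rows:
    print(f"{name:12s} {acc:7.2f} {mae:7.4f} {mse:7.4f} {epoch:6d}")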

Why Models Behave Differently Here

Cycle tasks reward models that preserve richer structural signals. The results show loopy_r3_v1 clearly ahead (85.14% accuracy, low MAE), while the GCN, GIN, and SAGE baselines trail far behind. This is consistent with a cycle-aware inductive bias providing a direct advantage for shortest-cycle estimation.

Another important signal is the spread in best epochs: SAGE and GCN peak within the first 100 epochs, while GIN and the loopy model keep improving well past epoch 1700. That indicates architecture-specific optimization dynamics and suggests per-model early-stopping rules may outperform a single shared training schedule.
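
One way to act on that, sketched below with illustrative (not the repo's) patience budgets: give the fast-converging models a short patience and the slow ones a long one, and stop each run once validation loss stops improving.

PATIENCE = {"gcn": 200, "sage": 200, "gin": 1000, "loopy": 1000}  # assumed values

class EarlyStopper:
    def __init__(self, patience: int):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss: float) -> bool:
        """Return True when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopper(PATIENCE["gin"])
# inside the training loop: if stopper.step(val_loss): break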

Takeaway: this task is clearly harder than degree prediction, and Loopy (r = 3) handles it far better than the baselines, consistent with its cycle-focused inductive bias helping shortest-cycle estimation.