GNN Experimentation Platform

This project is an interactive environment for experimenting with graph neural networks and combinatorial graph problems. It combines a static frontend editor with a Flask backend and model registry.
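To make the architecture concrete, here is a minimal sketch of how a Flask backend could expose a model registry to the frontend editor. The registry contents, route names, and payload format are illustrative assumptions, not the project's actual API.

```python
# Minimal sketch of a Flask backend exposing a model registry.
# All names here (MODEL_REGISTRY, /models, /predict) are illustrative
# assumptions, not this project's actual routes or identifiers.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical registry mapping model names to callables that score a graph.
MODEL_REGISTRY = {
    "degree-gcn": lambda graph: {"prediction": "stub"},
}

@app.route("/models")
def list_models():
    # Let the frontend editor discover which models are available.
    return jsonify(sorted(MODEL_REGISTRY))

@app.route("/predict/<name>", methods=["POST"])
def predict(name):
    model = MODEL_REGISTRY.get(name)
    if model is None:
        return jsonify({"error": f"unknown model {name!r}"}), 404
    graph = request.get_json()  # e.g. {"edges": [[0, 1], [1, 2]]}
    return jsonify(model(graph))

if __name__ == "__main__":
    app.run(debug=True)
```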

Module 01: Degree Prediction

Predict each node's degree from graph structure. Includes the training setup, complete model metrics, and an interpretation of why some models hit exact matches while others drift.
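As a concrete picture of the regression target, the sketch below derives per-node degree labels from an edge list. The example graph and function name are illustrative, not data or code from the project.

```python
# Sketch: derive node-degree regression targets from an undirected edge list.
# The edge list below is made up for illustration.
from collections import Counter

def degree_targets(num_nodes, edges):
    """Return a list where entry i is the degree of node i."""
    counts = Counter()
    for u, v in edges:
        counts[u] += 1
        counts[v] += 1
    return [counts[i] for i in range(num_nodes)]

# A 4-node path plus one chord: degrees are [1, 3, 2, 2].
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
print(degree_targets(4, edges))  # [1, 3, 2, 2]
```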

Module 02: Minimum-Cycle Prediction

Predict the length of the shortest cycle through each node. Includes training details and an analysis of why cycle-sensitive models separate from baseline message-passing models.
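This label is more expensive to compute than degree. One straightforward way to obtain it, sketched below, is to remove each edge incident to a node and BFS back to the other endpoint; the shortest such round trip is the minimum cycle through that node. The graph and names are illustrative, not the project's label pipeline.

```python
# Sketch: compute the shortest-cycle-length target for each node via BFS.
# Assumes an unweighted, undirected graph given as an adjacency dict;
# the example graph is made up for illustration.
from collections import deque

def shortest_path(adj, src, dst, banned):
    """BFS distance from src to dst, ignoring the single edge `banned`."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return dist[u]
        for v in adj[u]:
            if {u, v} == banned or v in dist:
                continue
            dist[v] = dist[u] + 1
            queue.append(v)
    return None

def min_cycle_through(adj, v):
    """Length of the shortest cycle through v, or None if v lies on no cycle."""
    best = None
    for u in adj[v]:
        d = shortest_path(adj, v, u, banned={u, v})
        if d is not None and (best is None or d + 1 < best):
            best = d + 1
    return best

# Triangle 0-1-2 with a pendant node 3: min cycle through 0, 1, 2 is 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print([min_cycle_through(adj, v) for v in adj])  # [3, 3, 3, None]
```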

Module 03: Cage Generator

Generate candidate cage graphs with search and RL. Includes queue constraints, checkpoint coverage, and a discussion of how to interpret results when RL metadata is limited.
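For context, a (k, g)-cage is a smallest k-regular graph with girth g, so a natural hard constraint on generated candidates is regularity plus a girth bound. The sketch below shows one such validity check built on a BFS-based girth computation; the function names and example are illustrative assumptions, not the project's actual queue-constraint code.

```python
# Sketch: validity check for candidate cage graphs, i.e. k-regular graphs
# with girth at least g. Names and the example graph are illustrative.
from collections import deque

def girth(adj):
    """Shortest cycle length in the graph, or None if acyclic (BFS per node)."""
    best = None
    for src in adj:
        dist = {src: 0}
        parent = {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    queue.append(v)
                elif parent[u] != v:
                    # Non-tree edge closes a cycle through src of this length.
                    cycle = dist[u] + dist[v] + 1
                    if best is None or cycle < best:
                        best = cycle
    return best

def is_cage_candidate(adj, k, g):
    """Accept only k-regular graphs whose girth is at least g."""
    regular = all(len(nbrs) == k for nbrs in adj.values())
    gi = girth(adj)
    return regular and gi is not None and gi >= g

# K4 is 3-regular with girth 3, so it passes a (k=3, g=3) check.
k4 = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(is_cage_candidate(k4, k=3, g=3))  # True
```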

Why This Work Matters Overall

Graph-structured tasks appear in chemistry, planning, recommendation, and network analysis. Having one environment where multiple graph objectives are implemented side by side lets you compare what is easy for standard message passing and what needs stronger structural bias. This helps turn model selection into an evidence-based decision rather than trial and error.

The degree task acts as a low-level sanity check, the min-cycle task tests whether the model can capture cycle structure, and cage generation stresses constructive search under constraints. Together they form a progression from node regression to structure-aware reasoning and controlled generation.

Why Model Performance Differs

Not all tasks reward the same inductive bias. Degree prediction can be solved well by architectures that aggregate local neighbor statistics, so some models saturate quickly. Minimum-cycle prediction, by contrast, requires cycle-sensitive representations, and models with explicit cycle-related structure (such as Loopy variants) can gain a large advantage.
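A toy illustration of the first point: with constant input features, a single sum-aggregation step reproduces node degree exactly, so any architecture containing such a step can in principle solve the task in one layer. The graph below is made up for illustration.

```python
# Sketch: why degree prediction saturates under local aggregation.
# One sum-aggregation step over constant features of 1.0 yields each
# node's degree exactly; the example graph is illustrative.
def sum_aggregate(adj, features):
    """One round of message passing: each node sums its neighbors' features."""
    return {v: sum(features[u] for u in adj[v]) for v in adj}

adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
ones = {v: 1.0 for v in adj}
print(sum_aggregate(adj, ones))  # {0: 1.0, 1: 3.0, 2: 2.0, 3: 2.0} == degrees
```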

Training dynamics also matter: the best epoch can differ dramatically across models, indicating that each architecture has a different stability region. Comparing MAE, MSE, and exact-match accuracy together is important because each metric highlights a different failure mode, as the sketch below illustrates.
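The toy example below scores a model that is usually close but occasionally far off: MAE stays small, MSE is inflated by the one outlier, and exact-match counts only rounded hits. The predictions and targets are invented for illustration.

```python
# Sketch: compute MAE, MSE, and exact-match accuracy side by side.
# The predictions and targets below are made-up illustrations.
def metrics(preds, targets):
    n = len(preds)
    errors = [p - t for p, t in zip(preds, targets)]
    mae = sum(abs(e) for e in errors) / n          # average miss size
    mse = sum(e * e for e in errors) / n           # punishes large misses
    exact = sum(round(p) == t for p, t in zip(preds, targets)) / n  # hit rate
    return mae, mse, exact

# A model that is usually close but occasionally far off:
preds = [2.1, 3.0, 1.9, 7.5]
targets = [2, 3, 2, 4]
print(metrics(preds, targets))  # MAE 0.925, MSE ~3.07, exact-match 0.75
```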