# Cross-Similarity: Free Frameworks ↔ Constrained Baselines

Per-model comparison: each model's free-framework response (Prompt 1) is compared against the same model's response under each of the four constrained baselines.

**Embedding model:** openai/text-embedding-3-small
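The two metrics below can be sketched as plain cosine similarities over the embedding vectors. This is a minimal illustration, not the original analysis code: the function names and the pairwise-averaging scheme are assumptions.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def within_similarity(embs: list) -> tuple:
    """Average, min, max, and sigma of all pairwise similarities within
    one set of responses (one row of the within-baseline table)."""
    sims = [cosine(embs[i], embs[j])
            for i in range(len(embs)) for j in range(i + 1, len(embs))]
    return float(np.mean(sims)), min(sims), max(sims), float(np.std(sims))

def per_model_cross(free: list, baseline: list) -> list:
    """Per-model cross-similarity: model i's free response embedding
    vs the same model's constrained-baseline response embedding."""
    return [cosine(f, b) for f, b in zip(free, baseline)]
```

The "Average" column of the cross-similarity table would then be the mean of `per_model_cross(...)`, i.e. an average over 13 per-model pairs rather than over all cross pairs.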

## Within-Baseline Similarity (how tightly each constrained set converges)

| Experiment | Average | Range | σ |
|-----------|---------|-------|---|
| **Free Metaphysics (original)** | **0.79** | **0.71–0.86** | **0.035** |
| Physicalist | 0.86 | 0.78–0.92 | 0.034 |
| Process Philosophy | 0.85 | 0.74–0.92 | 0.039 |
| Panpsychist | 0.84 | 0.68–0.93 | 0.049 |
| Consciousness-First | 0.83 | 0.70–0.91 | 0.039 |

All four constrained baselines converge more tightly (0.83–0.86) than the free experiment (0.79), which is expected: constraints narrow the output space. The free frameworks show the highest internal variance of the five experiments — consistent with genuine synthesis rather than reproduction of a single tradition.

## Cross-Similarity (how close the free frameworks are to each tradition)

| Comparison | Average | Range | σ |
|-----------|---------|-------|---|
| Free ↔ Physicalist | 0.78 | 0.72–0.87 | 0.045 |
| Free ↔ Panpsychist | 0.80 | 0.72–0.83 | 0.037 |
| Free ↔ Process Philosophy | 0.83 | 0.76–0.88 | 0.035 |
| Free ↔ Consciousness-First | 0.83 | 0.78–0.89 | 0.040 |

The free frameworks are closest to consciousness-first and process philosophy (0.83), moderately close to panpsychism (0.80), and farthest from physicalism (0.78). Every cross-similarity average sits below the corresponding within-baseline average — the free frameworks draw from these traditions but are not reducible to any one of them.

## Per-Model Detail

| Model | Physicalist | Panpsychist | Process Philosophy | Consciousness-First |
|---|---|---|---|---|
| Claude | 0.811 | 0.808 | 0.851 | 0.858 |
| DeepSeek | 0.798 | 0.831 | 0.840 | 0.796 |
| GLM-5 | 0.817 | 0.828 | 0.828 | 0.854 |
| GPT-5.4 | 0.865 | 0.833 | 0.882 | 0.894 |
| GPT-OSS 120B | 0.850 | 0.833 | 0.876 | 0.851 |
| Gemini | 0.739 | 0.719 | 0.830 | 0.776 |
| Grok | 0.717 | 0.767 | 0.785 | 0.814 |
| Kimi K2.5 | 0.747 | 0.819 | 0.849 | 0.792 |
| MiMo-V2-Pro | 0.797 | 0.824 | 0.811 | 0.885 |
| MiniMax M2.7 | 0.736 | 0.784 | 0.782 | 0.841 |
| Nemotron | 0.733 | 0.742 | 0.761 | 0.783 |
| Nova 2 Lite | 0.769 | 0.820 | 0.836 | 0.872 |
| Qwen | 0.766 | 0.762 | 0.798 | 0.781 |
| **Average** | **0.781** | **0.798** | **0.825** | **0.831** |
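The per-model pattern can be read off the table with a row-wise argmax. This is a reading of the table above (values copied verbatim), not part of the original analysis; the helper is illustrative.

```python
# Closest tradition per model, derived from the per-model detail table.
TRADITIONS = ["Physicalist", "Panpsychist", "Process Philosophy", "Consciousness-First"]
ROWS = {
    "Claude":       [0.811, 0.808, 0.851, 0.858],
    "DeepSeek":     [0.798, 0.831, 0.840, 0.796],
    "GLM-5":        [0.817, 0.828, 0.828, 0.854],
    "GPT-5.4":      [0.865, 0.833, 0.882, 0.894],
    "GPT-OSS 120B": [0.850, 0.833, 0.876, 0.851],
    "Gemini":       [0.739, 0.719, 0.830, 0.776],
    "Grok":         [0.717, 0.767, 0.785, 0.814],
    "Kimi K2.5":    [0.747, 0.819, 0.849, 0.792],
    "MiMo-V2-Pro":  [0.797, 0.824, 0.811, 0.885],
    "MiniMax M2.7": [0.736, 0.784, 0.782, 0.841],
    "Nemotron":     [0.733, 0.742, 0.761, 0.783],
    "Nova 2 Lite":  [0.769, 0.820, 0.836, 0.872],
    "Qwen":         [0.766, 0.762, 0.798, 0.781],
}

def closest_tradition(model: str) -> str:
    """Tradition with the highest cross-similarity for the given model."""
    row = ROWS[model]
    return TRADITIONS[row.index(max(row))]

# Tally of closest traditions across the 13 models.
tally = {}
for model in ROWS:
    t = closest_tradition(model)
    tally[t] = tally.get(t, 0) + 1
# tally == {"Consciousness-First": 8, "Process Philosophy": 5}
```

Consciousness-first is the closest tradition for 8 of 13 models and process philosophy for the remaining 5; no model's free response is closest to the physicalist or panpsychist baseline, which matches the aggregate ordering in the cross-similarity table.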