Hi @KaimingHe and the team, congratulations on the release of VARC! 👋
It is incredibly exciting to see your work confirming a hypothesis I've been exploring for the past few months: ARC can indeed be framed as a vision problem and solved via image-to-image translation.
I would like to share some corroborating evidence from my own independent research that aligns perfectly with your findings and offers a broader perspective on the inherent reasoning capabilities of neural networks.
1. Independent Validation on ARC & Geometric Reasoning (Image-to-Image)
About 3 months ago, I conducted experiments using Swin-UNet on subsets of ARC-AGI-1/2 and geometric construction tasks (e.g., finding inscribed circles).
- Method: I used manually extracted logic to procedurally generate large-scale, noiseless datasets (~150k samples per task); a minimal sketch of this generation step follows the list below.
- Result: The model achieved near 100% validation accuracy with extremely fast convergence on almost all tested tasks.
- Takeaway: This strongly supports your conclusion that with the right visual priors and data formulation, Vision Transformers are sufficient for abstract reasoning tasks previously thought to require symbolic or language-based approaches.
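For concreteness, here is a minimal sketch of the data-generation step. The rule ("mirror the grid, then shift every color") is a hypothetical ARC-style task chosen for illustration, and the one-hot (C, H, W) encoding is just one plausible image-to-image input format; neither is claimed to be the exact setup from my experiments.

```python
# Toy procedural generator for noiseless ARC-style image-to-image pairs.
# The rule below is hypothetical; my actual tasks used manually extracted
# logic from real ARC problems.
import numpy as np

NUM_COLORS = 10  # ARC grids use a 10-color palette

def make_pair(rng, size=12):
    """One noiseless (input, output) grid pair: horizontally mirror the
    grid and shift every color by +1 (mod NUM_COLORS)."""
    x = rng.integers(0, NUM_COLORS, size=(size, size))
    y = (np.fliplr(x) + 1) % NUM_COLORS
    return x, y

def to_onehot(grid):
    """Encode a color grid as a (C, H, W) one-hot tensor, a natural input
    format for an image-to-image model such as Swin-UNet."""
    return np.eye(NUM_COLORS, dtype=np.float32)[grid].transpose(2, 0, 1)

rng = np.random.default_rng(0)
# Scale the count to ~150k per task to match the dataset size above.
pairs = [tuple(map(to_onehot, make_pair(rng))) for _ in range(1_000)]
```

Because the generator is deterministic logic plus uniform sampling, every pair is exactly consistent with the rule, which is the "noiseless" property the takeaway above relies on.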
2. Generalization: Neural Networks as Exact Logic Engines
My research extends this observation beyond vision models. I found that this "perfect reasoning" capability is likely a fundamental property of connectionist systems.
- I successfully trained simple MLPs to solve complex LeetCode algorithmic problems (encoded as binary strings) with zero error; a toy version is sketched after this list.
- Key Insight: As long as the data is noiseless and structurally consistent, neural networks act as "Software 2.0", capable of distilling precise logic from data without "hallucination," regardless of whether the architecture is a ViT or an MLP.
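As an illustration of the binary-string setup, here is a self-contained toy version. The task (reversing a bit string, in the spirit of LeetCode 190 "Reverse Bits"), the bit width, and the hyperparameters are illustrative assumptions, not my exact configuration.

```python
# Toy MLP trained on a LeetCode-style task encoded as fixed-width bit vectors.
import torch
import torch.nn as nn

N_BITS = 16

def sample_batch(batch_size):
    """Noiseless supervision: random input bits and their exact reversal."""
    x = torch.randint(0, 2, (batch_size, N_BITS)).float()
    y = torch.flip(x, dims=[1])
    return x, y

model = nn.Sequential(
    nn.Linear(N_BITS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_BITS),  # one logit per output bit
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # independent per-bit classification

for _ in range(2000):
    x, y = sample_batch(256)
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Check for (near-)zero error on fresh noiseless samples.
x, y = sample_batch(1024)
pred = (model(x) > 0).float()
print("bit accuracy:", (pred == y).float().mean().item())
```

The point of the sketch is the data property, not the architecture: with noiseless, structurally consistent supervision, even a small MLP like this can drive the error to zero on such a task.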
3. Limitations
Consistent with general observations in the field, I also found that for tasks requiring deep multi-step backtracking or strict constraints (e.g., Sudoku or logic puzzles), single-pass inference struggles. For these, intermediate representation supervision (similar to CoT) remains necessary.
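To make "intermediate representation supervision" concrete, here is a toy sketch: instead of mapping a problem directly to its final answer, the model is trained to predict the next intermediate state of an iterative procedure (here, one bubble-sort pass, a stand-in example) and is unrolled step by step at inference time. This illustrates the general idea only; it is not my Sudoku setup.

```python
# Toy intermediate-state supervision: learn one step of an iterative
# procedure, then unroll it at inference (a CoT-like decomposition).
import torch
import torch.nn as nn
import torch.nn.functional as F

SEQ_LEN, VOCAB = 8, 10

def one_pass(seq):
    """Teacher: one left-to-right bubble-sort pass (one 'reasoning step')."""
    seq = seq.clone()
    for i in range(SEQ_LEN - 1):
        hi = seq[:, i] > seq[:, i + 1]
        seq[hi, i], seq[hi, i + 1] = seq[hi, i + 1], seq[hi, i]
    return seq

def enc(seq):
    """One-hot encode a batch of integer sequences into flat vectors."""
    return F.one_hot(seq, VOCAB).float().flatten(1)

model = nn.Sequential(
    nn.Linear(SEQ_LEN * VOCAB, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, SEQ_LEN * VOCAB),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(3000):
    s = torch.randint(0, VOCAB, (256, SEQ_LEN))
    t = one_pass(s)  # supervise only the *next* state, not the final answer
    logits = model(enc(s)).view(-1, SEQ_LEN, VOCAB)
    loss = F.cross_entropy(logits.reshape(-1, VOCAB), t.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference: unroll the single-step model; SEQ_LEN passes suffice to sort.
s = torch.randint(0, VOCAB, (1, SEQ_LEN))
for _ in range(SEQ_LEN):
    s = model(enc(s)).view(1, SEQ_LEN, VOCAB).argmax(-1)
print(s)  # approximately sorted if the single step was learned exactly
```

Each unrolled step plays the role of a CoT token: the network never has to represent the whole multi-step computation within a single forward pass.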
Full experiments and code (Neural Sculpting Paradigm):
(Note: The repository is currently under active refinement.)
https://github.com/ball-lightning6/neural-sculpting-paradigm
Congratulations again on this inspiring work!