**Methods:**
A Neural Physics Engine is developed using a geometry-aware Graph Neural Network. This network is trained on the high-fidelity dataset generated in the previous topic to predict full-field nodal deformations based on sparse contact primitives provided by a rigid-body simulator. The model acts as a fast proxy solver that injects soft-body physics into a rigid simulation loop.
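The idea of a GNN proxy that maps sparse contact primitives to full-field nodal deformations can be sketched in a few lines. This is a minimal, untrained numpy illustration under loose assumptions: inverse-distance contact encoding, a single round of message passing over mesh edges, and toy weight matrices standing in for learned parameters. It is not the paper's architecture, only the data flow it describes.

```python
import numpy as np

def predict_deformation(node_pos, edges, contact_points, contact_forces,
                        W_msg, W_node):
    """Toy geometry-aware message-passing proxy (illustrative, not the trained model).

    node_pos:       (N, 3) mesh node positions
    edges:          (E, 2) index pairs of connected mesh nodes
    contact_points: (C, 3) sparse contact locations from the rigid-body simulator
    contact_forces: (C, 3) contact force vectors at those locations
    W_msg, W_node:  (3, 3) stand-ins for learned GNN weights
    """
    N = node_pos.shape[0]
    # Encode sparse contacts into per-node features via inverse-distance
    # weighting, so geometry (distance to contact) conditions the prediction.
    feats = np.zeros((N, 3))
    for p, f in zip(contact_points, contact_forces):
        w = 1.0 / (1.0 + np.linalg.norm(node_pos - p, axis=1))  # (N,)
        feats += w[:, None] * f
    # One round of message passing along mesh edges, with the relative node
    # position as a geometric edge feature.
    msgs = np.zeros_like(feats)
    for i, j in edges:
        rel = node_pos[j] - node_pos[i]
        msgs[i] += np.tanh((feats[j] + rel) @ W_msg)
        msgs[j] += np.tanh((feats[i] - rel) @ W_msg)
    # Decode full-field nodal displacements.
    return (feats + msgs) @ W_node  # (N, 3) predicted deformation
```

In the real pipeline this forward pass would replace an FE solve inside the rigid-body simulation loop, which is where the speedup over traditional finite element solvers comes from.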
**Main Takeaway:**
The Neural Physics Engine achieves sub-millimeter accuracy in predicting deformation and runs significantly faster than traditional finite element solvers, enabling real-time simulation of contact-rich tasks. This capability allows for the zero-shot transfer of manipulation policies, such as peg-in-hole insertion, from simulation to the real world.
This work addresses the data scarcity issue in vision-based tactile sensing, where high-resolution visual data exists but lacks corresponding physical ground truth such as force and deformation fields. Existing simulators often prioritize visual realism over mechanical accuracy, limiting their utility for physically grounded learning.
**Demo Video:**
<video width="100%" controls>
  <source src="/Videos/topic3.mp4" type="video/mp4">
  Your browser does not support the video tag.
</video>
**Methods:**
A bidirectional data pipeline is established using a finite element model of a sensor that is rigorously calibrated to real-world indentation data. Two neural networks are trained on paired datasets: a perception network that infers dense physical states from real tactile images, and a rendering network that synthesizes photorealistic images from simulated physical states.
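The bidirectional pipeline pairs two mappings in opposite directions: perception (real tactile image → physical state) and rendering (simulated physical state → synthetic image). The following numpy sketch shows only that data flow; the linear maps, dimensions, and function names are hypothetical placeholders for the two trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: D-dim physical state (force/deformation summary),
# P-pixel flattened tactile image.
D, P = 4, 16
A = rng.normal(size=(P, D)) * 0.1  # stand-in for the perception network
B = rng.normal(size=(D, P)) * 0.1  # stand-in for the rendering network

def perceive(image):
    """Infer a dense physical state from a (real or synthetic) tactile image."""
    return image @ A  # (P,) -> (D,)

def render(state):
    """Synthesize a photorealistic tactile image from a simulated state."""
    return state @ B  # (D,) -> (P,)

# Closed loop: calibrated-FEM states are rendered into physically grounded
# synthetic images, and images are auto-annotated with perceived states.
sim_state = rng.normal(size=D)          # from the calibrated FE model
synthetic_image = render(sim_state)     # synthetic training sample
auto_label = perceive(synthetic_image)  # automatic physical annotation
```

Because each network supervises data for the other, the loop can keep enlarging the paired dataset without manual physical labeling, which is the bottleneck the paragraph above describes.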
**Main Takeaway:**
The framework creates a closed loop between the visual and physical domains, enabling the automatic annotation of real-world tactile images with physical data and the generation of large-scale, physically grounded synthetic datasets. This resolves the labeling bottleneck for tactile perception tasks.