I’m currently working on a research project that explores AI-driven 3D generation for material and pattern design: specifically, how to generate 3D woven structures directly from text or image inputs. This is my master’s project at the Bartlett School of Architecture in London.
So far I’ve been using the public Shap-E inference code, which works well for testing prompts, but as far as I can tell the training pipeline and the dataset used to train Shap-E haven’t been released publicly.
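For context, my prompt testing is essentially a condensed version of the text-to-3D example notebook in the public repo; the prompt and output file names below are just placeholders:

```python
import torch

from shap_e.diffusion.sample import sample_latents
from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
from shap_e.models.download import load_model, load_config
from shap_e.util.notebooks import decode_latent_mesh

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Pretrained checkpoints from the public repo: the text-conditional
# latent diffusion model plus the transmitter that decodes latents.
xm = load_model("transmitter", device=device)
model = load_model("text300M", device=device)
diffusion = diffusion_from_config(load_config("diffusion"))

prompt = "a plain-weave fabric swatch"  # placeholder prompt

latents = sample_latents(
    batch_size=1,
    model=model,
    diffusion=diffusion,
    guidance_scale=15.0,
    model_kwargs=dict(texts=[prompt]),
    progress=True,
    clip_denoised=True,
    use_fp16=True,
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)

# Decode each latent to a triangle mesh and write an OBJ that
# downstream tools (Rhino/Grasshopper) can import.
for i, latent in enumerate(latents):
    mesh = decode_latent_mesh(xm, latent).tri_mesh()
    with open(f"mesh_{i}.obj", "w") as f:
        mesh.write_obj(f)
```

This works fine for sampling, but there’s no public entry point for fine-tuning the text300M checkpoint, which is what motivates my questions.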
I wanted to ask:
• Is there any way to gain experimental or research access to the trainable version of Shap-E, or to collaborate with teams who have it internally?
• Alternatively, does OpenAI plan to release a fine-tuning or dataset interface for Shap-E in the future?
My current workflow combines the following (the Shap-E-to-Grasshopper hand-off is sketched after the list):
- Pix2Pix (trained for image-to-diagram translation),
- Grasshopper 3D (for parametric geometry reconstruction), and
- Shap-E for text-to-3D mesh generation.
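The hand-off between the Shap-E and Grasshopper stages is file-based. A minimal sketch of that step, assuming trimesh for mesh cleanup (my choice of library, not part of either tool) and placeholder file names:

```python
import trimesh

# Placeholder file names: mesh_0.obj is the OBJ exported by the
# Shap-E step above; cleaned.obj is what the Grasshopper definition
# references (e.g. via an import/mesh component pointed at the path).
mesh = trimesh.load("mesh_0.obj", force="mesh")

# Shap-E meshes tend to be dense and noisy; a light Laplacian smooth
# usually helps before parametric reconstruction in Grasshopper.
trimesh.smoothing.filter_laplacian(mesh, lamb=0.5, iterations=5)

print(f"watertight: {mesh.is_watertight}, faces: {len(mesh.faces)}")
mesh.export("cleaned.obj")
```

From there, the Grasshopper definition re-references the cleaned OBJ and rebuilds the weave geometry parametrically.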
I’d love to connect with anyone from the community who’s exploring similar text-to-3D diffusion models, fine-tuning pipelines, or dataset creation for material/structural design.