These underlying AI tools are great, but sometimes it's using them the right way that triggers the breakthrough.
!!! That is truly incredible. Whoa.
Here’s some theorizing: because the image dragged in was generated by the same NN, the ‘features’ (its internal representations of ‘things’) are a perfect match; when it sees them, its ‘circuits’ (how it thinks) light up like a Christmas tree.
Just had a brainwave for a project I’m working on, similar kind of nested/layered/inception stuff.
When it started spitting out its response I was floored.
Was struggling for a good way to generate a dataset for an experiment, and as usual forgot to just ask GPT-4. It didn’t just have a suggestion:
Absolutely, that makes perfect sense. You’re looking to create a dataset of G-code files representing a range of geometric primitives, and you need natural, casual language prompts that would be used to generate these shapes. This approach will help in training a model to understand and interpret non-technical, everyday language to create precise G-code.
Here’s how I can assist you:
- Create the G-code files for each of the specified shapes with variations.
- Provide casual, non-technical language prompts that could be used to generate these G-code files.
Let’s start by creating a list of such prompts for each shape category:
- “Draw a straight line of 100mm.”
- “Make a line that slants upwards and is 50mm long.”
- “Create a horizontal line of 150mm length.”
- “Form a gentle curve with a wide arc.”
- “I need a sharp curve, almost like a tight corner.”
- “Can you do a swooping curve that’s pretty steep?”
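As a sketch of what those (prompt, G-code) pairs could look like in practice, here’s a tiny generator for the straight-line category. The angles, feed rate, and prompt templates are my own illustrative choices, and it assumes a simple 2D setup with absolute positioning, not anything GPT-4 actually produced:

```python
import math
import random

def line_gcode(length_mm, angle_deg):
    """Emit minimal G-code for a straight line from the origin.
    Assumes millimetre units (G21) and absolute positioning (G90)."""
    x = length_mm * math.cos(math.radians(angle_deg))
    y = length_mm * math.sin(math.radians(angle_deg))
    return "\n".join([
        "G21 ; units: mm",
        "G90 ; absolute positioning",
        "G0 X0 Y0 ; rapid to origin",
        f"G1 X{x:.2f} Y{y:.2f} F1200 ; draw the line",
    ])

def make_line_pairs(n, seed=0):
    """Generate n (casual prompt, G-code) training pairs for lines.
    Each template is tied to an angle so prompt and geometry agree."""
    rng = random.Random(seed)
    templates = [
        ("Draw a straight line of {L}mm.", 0),
        ("Create a horizontal line of {L}mm length.", 0),
        ("Make a line that slants upwards and is {L}mm long.", 30),
    ]
    pairs = []
    for _ in range(n):
        length = rng.choice(range(10, 201, 10))
        template, angle = rng.choice(templates)
        pairs.append({
            "prompt": template.format(L=length),
            "gcode": line_gcode(length, angle),
        })
    return pairs
```

Each row pairs an everyday phrasing with precise machine instructions, which is exactly the mapping the fine-tune is meant to learn.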
So… an LLM figuring out a plan and creating a synthetic dataset to fine-tune an LLM.
By the same principle, if the first dataset isn’t great, run it through GPT-4 and things like what happened in that sketch start kicking off. Because it’s close enough, it knows exactly what to do and can work miracles. “Please fix this dataset, expand it, and make it more suited to the task.”
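A minimal sketch of how that repair request might be packaged up. The system/user message format follows the common chat-API convention; the wording and helper names are mine, and you’d swap in whatever LLM client you actually use to send it:

```python
import json

REPAIR_INSTRUCTION = (
    "Please fix this dataset, expand it, and make it more suited to the task: "
    "casual natural-language prompts paired with the G-code that fulfils them."
)

def build_repair_messages(dataset_rows):
    """Package a rough dataset into a chat payload asking the model
    to clean and expand it. Returns the messages list only; sending
    it is left to whichever client/API you're using."""
    return [
        {"role": "system",
         "content": "You generate and repair (prompt, G-code) training data."},
        {"role": "user",
         "content": REPAIR_INSTRUCTION + "\n\n" + json.dumps(dataset_rows, indent=2)},
    ]
```

The whole “miracle” is just that the dataset rides along in the prompt, so the model sees concrete examples of what “more suited to the task” means.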
[The experiment is pretty out there; GPT-4 has a tough time reasoning in 3D space, but I’ve had success when guiding it very carefully, so we’ll see! There’s a chance its spatial reasoning will get good by seeing all the primitives over and over. Probably add another dataset that isn’t purely G-code, more like thinking about objects and translating/manipulating them kind of stuff.]