An architectural perspective for the OpenAI community and governance committees

Hello OpenAI community,

My name is Elmidor Rodriguez, founder of Elmidor Group. We are building educational and automation-focused initiatives around AI systems, workflow design, and practical adoption of LLMs, especially for professionals and organizations that are not deeply technical.

As part of this work, I’m preparing a training program aimed at helping people understand GPT beyond basic prompting — how to think of it as a system-level assistant rather than just a chat interface.

One key idea I’m exploring (and teaching) is that not all AI models serve the same role or follow the same philosophy.

From my current experience and research:

  • Models like Gemini excel as information and retrieval engines — research, multimodal analysis, and integration with large information ecosystems.

  • GPT, in many real-world deployments, is increasingly used as a workflow orchestrator — structuring decisions, enforcing rules, coordinating tools, and managing multi-step processes.

This is not a comparison of “which model is better,” but an attempt to help learners understand how different AI systems can complement each other inside real workflows.
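To make the "workflow orchestrator" idea concrete for learners, here is a minimal sketch of the control loop I have in mind. The model call is deliberately stubbed out (`fake_model` below is a hypothetical stand-in, not a real API); the point is the surrounding structure: the model proposes the next action, the host enforces rules before acting, and each tool result feeds the next step.

```python
# Minimal sketch of an LLM-as-orchestrator loop.
# `fake_model` is a hypothetical stand-in for a real model call.

TOOLS = {
    "lookup": lambda q: f"definition of {q}",
    "summarize": lambda text: text[:20] + "...",
}

ALLOWED = {"lookup", "summarize"}  # rule enforcement: an explicit whitelist

def fake_model(state):
    """Stand-in for an LLM call: returns the next action as (tool, arg)."""
    if "definition" not in state["notes"]:
        return ("lookup", state["topic"])
    return ("summarize", state["notes"])

def orchestrate(topic, max_steps=4):
    state = {"topic": topic, "notes": ""}
    for _ in range(max_steps):
        tool, arg = fake_model(state)
        if tool not in ALLOWED:        # enforce rules before acting
            raise ValueError(f"tool {tool!r} not permitted")
        state["notes"] = TOOLS[tool](arg)  # coordinate: feed result forward
        if tool == "summarize":        # workflow stop condition
            return state["notes"]
    return state["notes"]
```

In a real deployment the stub would be a model invocation and the tools would be real integrations, but the teaching point survives the simplification: the model decides, the system constrains and executes.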

Before finalizing this training within Elmidor Group, I’d truly appreciate insights from this community:

  • Does this distinction between information engines and workflow orchestrators match your real-world experience?

  • Are there important nuances or risks I should highlight for learners?

  • What common misunderstandings about GPT do you see among beginners that you think are worth addressing?

Our mission with Elmidor Group is educational and collaborative, and we’re open to feedback, dialogue, and potential collaboration with practitioners who care about responsible and practical AI adoption.

Thank you for any perspective you’re willing to share.

— Elmidor

This topic was automatically closed after 22 hours. New replies are no longer allowed.