Getting it done with AI takes longer?

Does Anyone Else Have This Dilemma, or Is It Just Me?

I’m interested in finding a like-minded individual to collaborate on researching and achieving tangible results with AI integration. Here’s what I’ve observed:

With the proliferation of AI tools, I’ve found myself with more free time as these tools can take over many tasks. As a Business Analyst, my goal is to automate as much of my work as possible through AI. However, I’ve encountered a dilemma: tasks such as preparing a Business Requirements Document (BRD) or making revisions often take longer using AI than doing them manually.

I’m curious if others have faced a similar experience. My aim is to research this phenomenon to develop a streamlined template or procedure that optimizes AI usage for specific tasks. From my project experiences, I’ve noticed a common misconception that simply having data is sufficient without understanding the underlying process. I want to ensure that AI automation is effective, particularly for underserved areas.

Even though only a small number of people are unaware of tools like ChatGPT, it’s intriguing to observe this knowledge gap. I propose creating a research paper on letting users create datasets simply by speaking, reflecting how people believe datasets should be made.

I’m aware that this topic involves many aspects and emotions, but I invite you to reach out and share your thoughts. Let’s explore how we can harness this potential for positive outcomes.

I hope the chatgpt-team has something to say about this, or could share whether it has been observed.


It depends on the task, but many tasks are still suitable for plain old computers or even manual effort.

I think of AI as a potential gap-filler between what machines currently do and what humans are currently doing.

As AI takes over more mundane things, we humans will have more time for the higher cognitive aspects. But AI isn’t making everything easier, at least not yet, and even if it eventually does, the next level of knowledge will be even harder.

So, I’m working on a project that uses the ChatGPT API to generate a report. It’s doable because the input is always the same, and the output can be relatively standardized.

I think things like reports need structure to get the best results out of AI-assisted generation.
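
In rough Python, a fixed-structure flow might look like the sketch below. It’s only a sketch: the model name, the section list, and the `generate_section` helper are placeholder assumptions on my part, not a fixed design.

```python
# A minimal sketch of structured report generation with the OpenAI Python
# library. The model name and section names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SECTIONS = ["Summary", "Findings", "Recommendations"]  # hypothetical structure

def generate_section(section: str, source_text: str) -> str:
    """Ask the model to write one fixed section of the report."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model works here
        messages=[
            {"role": "system",
             "content": "You write one section of a business report at a time."},
            {"role": "user",
             "content": f"Write the '{section}' section based on:\n{source_text}"},
        ],
    )
    return response.choices[0].message.content

def generate_report(source_text: str) -> str:
    # Because the input format is always the same, each section prompt
    # stays fixed and only the source text varies between runs.
    return "\n\n".join(
        f"## {s}\n{generate_section(s, source_text)}" for s in SECTIONS
    )
```

Because the structure is pinned down before any generation happens, the output stays comparable from run to run, which is what makes approval and revision tractable.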

The template-assisted generation is more or less how the project is likely to end up. Roughly:

1. Create a document (probably a .doc file) containing tags.
2. Manually identify those tags, then provide a UI for writing the prompt for each tag (an initial version of each prompt can even be generated by AI from the tag name; see the sketch after this list).
3. Feed in a document or series of documents to base the new document on, and presto chango, you end up with a new “Report”.
4. Add an approval UI for each generated element, with a conversation available to adjust as needed, either manually or automatically.
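
As a rough sketch of steps 1–3: the `{{tag_name}}` syntax and the helper names below are just illustrative assumptions, not the actual format.

```python
# A sketch of the tag-driven flow described above, assuming tags are
# written as {{tag_name}} in the template document.
import re

TAG_PATTERN = re.compile(r"\{\{(\w+)\}\}")

def find_tags(template: str) -> list[str]:
    """Step 1: identify the tags embedded in the template, in order."""
    return list(dict.fromkeys(TAG_PATTERN.findall(template)))

def initial_prompt_for(tag: str) -> str:
    """Step 2: seed a prompt from the tag name; a UI (or an AI call)
    would then let a human refine it."""
    readable = tag.replace("_", " ")
    return f"Write the {readable} for this report based on the source documents."

def fill_template(template: str, approved_sections: dict[str, str]) -> str:
    """Step 3: splice the approved, per-tag generations back into the template."""
    return TAG_PATTERN.sub(lambda m: approved_sections[m.group(1)], template)

# Example: a tiny template with two tags.
template = "Report\n\n{{executive_summary}}\n\n{{risk_analysis}}\n"
for tag in find_tags(template):
    print(tag, "->", initial_prompt_for(tag))
```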

If the report involves, say, calculations, then my best thought at the moment is to ask the model to write code that does those calculations; once that code has been approved, it should be stored and executed, rather than relying on a straight-up prompt each time.
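
A minimal sketch of that “approve once, then execute” idea: the JSON store and the `calculate(inputs)` entry point are assumptions, and real use would want proper sandboxing rather than a bare `exec()`.

```python
# Persist reviewed calculation code so the prompt is never re-run;
# later runs execute the stored code deterministically.
import json
from pathlib import Path

STORE = Path("approved_calculations.json")

def save_approved(name: str, code: str) -> None:
    """Store calculation code after a human has reviewed it."""
    store = json.loads(STORE.read_text()) if STORE.exists() else {}
    store[name] = code
    STORE.write_text(json.dumps(store, indent=2))

def run_approved(name: str, inputs: dict) -> object:
    """Execute the stored code instead of prompting the model again."""
    code = json.loads(STORE.read_text())[name]
    namespace: dict = {}
    exec(code, namespace)  # assumption: the code defines calculate(inputs)
    return namespace["calculate"](inputs)

# Example: code the model generated once and a human approved.
save_approved("total_cost", "def calculate(inputs):\n"
              "    return inputs['unit_price'] * inputs['quantity']\n")
print(run_approved("total_cost", {"unit_price": 9.5, "quantity": 4}))
```

The point of storing the code is determinism: a prompt can give slightly different numbers each run, while approved code gives the same answer every time.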

Package this so that it can create new endpoints. I’m kind of leaning towards email endpoints as opposed to API endpoints, so that it can be slotted into larger workflows as a new “Remote Email Employee” that handles said report, or a set of reports, based on emails received.
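
A rough sketch of that email-endpoint side using the standard library’s `imaplib`; the host, the `report:` subject convention, and the `handle_report_request` hand-off are all placeholder assumptions.

```python
# Poll a mailbox and route report requests by subject line.
import email
import imaplib

def poll_inbox(host: str, user: str, password: str) -> None:
    """Check for unseen messages and dispatch any report requests."""
    mail = imaplib.IMAP4_SSL(host)
    mail.login(user, password)
    mail.select("INBOX")
    _, data = mail.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = mail.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        subject = msg["Subject"] or ""
        if subject.lower().startswith("report:"):
            # Hand off to the template-driven generator sketched earlier,
            # then reply to the sender with the finished document.
            handle_report_request(msg)
    mail.logout()

def handle_report_request(msg: email.message.Message) -> None:
    print("Would generate report for:", msg["Subject"])
```

Run on a schedule (cron or similar), this behaves like an employee who checks their inbox: the rest of the workflow never needs to know an AI is on the other end.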