AI Copilots Everywhere; GPT's Trillion-Dollar Economy

The fuse for the trillion-dollar AI economy was lit last year with the launch of GPT apps. GPT-4 is simply an affirmation that this new economy is both significant and very near; perhaps already well in play.

If you missed Microsoft's Office 365 Copilot event this week, you might want to carve out an hour to watch the video. Be sure to watch it end-to-end with focused attention, because what you're about to see is similar to the moment in 1993 when Marc Andreessen's Mosaic (later Netscape), the first widely adopted web browser, was unveiled.

Microsoft has now validated, at an enterprise level, just how powerfully LLMs can change everything.

Today, Microsoft is to LLMs what Netscape was to the world wide web.

While I have no strong affinity for Microsoft or their products, having personally abandoned Windows in 2003, the vision they paint (The Future of Work with AI) is quite remarkable. It demonstrates that the rumours were true: Microsoft has been quietly working with OpenAI and GPT models since early 2020, which explains how it can exhibit this much forethought and vision of future work with paired AI copilots.

In 60 minutes, Microsoft wholly dismembered Google Workspace, which now looks as if it were built by a team of five-year-olds who missed the Mensa cut. Heads will likely roll at Google, and it will catch up over time. In the meantime, though, the tens of millions of businesses and hundreds of millions of users who jettisoned Microsoft Office to feast on a low-cost collaborative alternative will be grazing in a pasture that is arid, brown, and unable to nourish our appetite for advanced productivity in the AI economy.

Copilots Everywhere

More important, micro-copilots as well. The nature of AI and its implementation approaches vary; copilots can start small and grow in complexity. I've already run some interesting experiments in various products, including note-taking apps like Mem. The possibilities for enhancing work performance are almost infinite. At Stream It, we use GPT with Coda to perform summarization and entity extraction for CyberLandr support.
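A minimal sketch of that kind of entity extraction, with a hypothetical `call_gpt` function standing in for the actual OpenAI API call (the function names and prompt wording are illustrative, not the Stream It implementation):

```python
import json

def build_extraction_prompt(ticket_text: str) -> str:
    # Ask the model for structured JSON so the result is machine-readable.
    return (
        "Extract the customer name, product, and issue from this support "
        "message. Reply with JSON using keys: customer, product, issue.\n\n"
        f"Message: {ticket_text}"
    )

def extract_entities(ticket_text: str, call_gpt) -> dict:
    # call_gpt is a stand-in for a real completion API call.
    raw = call_gpt(build_extraction_prompt(ticket_text))
    return json.loads(raw)

# Stubbed model response illustrates the round trip without a network call.
stub = lambda prompt: '{"customer": "Dana", "product": "CyberLandr", "issue": "door latch"}'
print(extract_entities("Hi, I'm Dana...", stub)["issue"])  # → door latch
```

In a real deployment the stub would be replaced by an authenticated API call, and the JSON parse would be wrapped in validation, since models occasionally return malformed output.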

Copilots will soon exist everywhere, and if you aren’t using them or building them, you’ll miss the opportunity to participate on the ground floor of an emerging trillion-dollar economy. Worse, you’ll watch from the sidelines as your competitors make more deals, provide more advanced solutions, and outsell you on every level.

Google Workspace and LLMs

I've been a huge fan of Workspace since 2010. I have a historical client base of businesses that have created advanced automation solutions using Firebase, Google Apps Script, and other Google Cloud features and SDKs.

Google is struggling to get ahead of the AI movement surrounding OpenAI and its broad array of GPT APIs. Until it can offer something tangible from its own stable of AI research experiments, GPT will remain an attractive and highly useful platform for building many of the examples Microsoft demonstrated this week.

Google’s rich development environment has allowed me to build impressive GPT features intersecting all Workspace document types.

  • Given a slide deck → write talking points for the presentation.
  • Given AI-generated talking points, create and insert an appropriate image for each slide.
  • Given a large collection of documents in Google Drive, build a search bot capable of locating and summarizing documents based on a simple natural language query.
  • With a corpus of Gmail messages, categorize and report metrics about the conversations.
  • With a new spreadsheet, use the column headings to generate 100 sample rows of data into the sheet.
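As a sketch of the last bullet, the completion prompt can be assembled directly from the sheet's header row. The function name and prompt wording below are my own illustration, not Google's or OpenAI's API:

```python
def sample_rows_prompt(headings, n_rows=100):
    # Build a completion prompt asking for CSV rows that match the headers.
    cols = ", ".join(headings)
    return (
        f"Generate {n_rows} realistic sample rows of CSV data "
        f"for a spreadsheet with these columns: {cols}. "
        "Return only the rows, one per line, no header."
    )

print(sample_rows_prompt(["Name", "Email", "Signup Date"], n_rows=5))
```

The response would then be split on newlines and written back into the sheet, one row per line, via the Sheets API or Apps Script.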

The possibilities for integrating GPT features into Google Workspace workflows and documents are limited only by your imagination.

Acquire, Enhance, Do Something with It

GPT and LLMs are ideal for collecting information, enhancing it, and then using it. Ideally, the time required to perform these processes is compressed to create hyper-value for workforces. If instrumented well, the benefits impact two key dimensions of work.

  1. Compress the time needed to gather and enhance information
  2. Expand the output

Not only does AI help us do more in less time; it helps us create vastly more information about our work.
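The acquire, enhance, do-something-with-it loop can be sketched as a trivial pipeline. Every stage here is a placeholder of my own invention: a real implementation would plug in a data source, an LLM call, and an output channel:

```python
def acquire(source):
    # Stand-in for gathering raw information (mail, docs, tickets, ...).
    return source()

def enhance(text, model):
    # Stand-in for an LLM pass: summarize, extract, classify, enrich.
    return model(text)

def act(enriched, sink):
    # Stand-in for doing something with it: a report, message, dashboard.
    return sink(enriched)

result = act(
    enhance(acquire(lambda: "raw meeting notes"),
            model=lambda t: f"summary of: {t}"),
    sink=lambda s: s.upper(),
)
print(result)  # → SUMMARY OF: RAW MEETING NOTES
```

The value comes from how much wall-clock time each stage removes; the structure itself is deliberately mundane.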

There's a slice of automation and hyper-productive work that intersects with browser plugins as well as OS-level apps. Bardeen, for example, is dancing around a very powerful idea: the ability to design no/low-code recipes (playbooks) that act as copilots. It still lacks a number of features needed to reach seamless copilot integration and workflow benefits, but it is well poised to provide paired AI assistants to workers. Text Blaze and even Coda have equal appeal as copilot building blocks.

The Reach of Copilots

Some consultants eat their own dog food; they run their entire business on the same solution patterns they advocate for their clients. Most, however, don’t.

With AI and LLMs, it will be a very different climate; purveyors of AI solutions must use these future work patterns or risk disruption. Imagine a web design firm competing for business in 1998 without a modern-looking website. Imagine if Expedia executives used travel agents while promoting the disruption of the travel industry. Imagine if Apple execs stayed with Blackberry post-2007.

The depth and impact of AI are comparable to the most deep-rooted disruptors we have witnessed in our lifetimes. This is not a superficial advancement that provides a new sheen to dull application surfaces.

I’m old; my career should have ended a decade ago. Delightfully though, integration and automation demands have kept me somewhat relevant. Oddly, my near-fifty-year experience appears to be the prologue to a new, emerging, trillion-dollar adventure.

I quietly started learning everything I could about GPT, LLMs, embedding architectures, vector databases, and search index architectures four years ago. I’m almost certain I will be working with copilots to build copilots until at least 2036.


Yes, it’s honestly quite scary knowing that almost everything will be influenced by AI, if it’s not already at that point.

I have really been enjoying using GPT-4 side-by-side for coding, mainly for theoretical questions and simple debugging. However, by no means would I ever deploy it as Microsoft seems to intend. Numerous times it has given me incorrect code, resulting in a fractal of "I'm sorry, my previous code was incorrect, let me fix it". To be fair, though, I am using TypeScript with ReactJS, which is very unforgiving.

After watching the Discord example in the livestream, I tried it myself but with a customized chat using WebSockets. It was wonderful for the majority, but where it failed, it failed so badly that debugging took longer than actually writing the code. It had no dependencies either; it was just an attempt to set up a single source-of-truth function for server and client events.

Don't get me wrong, it did a spectacular job for the majority, but the project had so many loose ends that fixing them took more time than coding it myself: unnecessary hooks and interfaces, parameters broken by type mismatches, and confusion across separate files. It would create a file, but then start implementing its functions in another file.

Again, I would never use it to create my code unless it was a beautifully simple, pure function, but the livestream made it seem fully capable. A solid 90% of the code it created was fine, but the other 10% was so jumbled it had a domino effect on the rest of the code.

As for the rest of the industries: wow. Yes. I am embarrassed to admit it, but I have ChatGPT open almost all the time. It's going to be interesting to see. My question is: how will the job market cope with it?


Yeah, there’s going to be scepticism, and that’s healthy.

Your experience is likely gated by a lack of prompt engineering. ChatGPT is not a "copilot".

In Microsoft's case, they do not show the actual underlying prompts that are instrumented into the interface with GPT models. In some cases, I surmise that five different calls into OpenAI APIs were necessary to make their copilot super accurate and highly productive.
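To illustrate that layering, a copilot might run several dependent calls rather than one. The stage names and prompt templates below are pure conjecture on my part, not Microsoft's actual pipeline:

```python
def run_copilot(user_request, call_gpt):
    # Each stage's output feeds the next; a real copilot would also
    # inject document context, user data, and formatting rules.
    stages = [
        "Classify this request: {}",
        "Extract the key facts from: {}",
        "Draft a response using: {}",
        "Refine for tone and brevity: {}",
        "Format for the host application: {}",
    ]
    result = user_request
    for template in stages:
        result = call_gpt(template.format(result))
    return result

# An echo stub shows how the prompts accumulate across the five calls.
trace = []
echo = lambda p: (trace.append(p), p)[1]
run_copilot("summarize Q3 numbers", echo)
print(len(trace))  # → 5
```

None of this chaining is visible in the demo UI, which is exactly why judging the product by raw ChatGPT output is misleading.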

Your use of ChatGPT (directly) is woefully disconnected from the reality of the coding experience. Have you ever used GitHub's Copilot with VSCode? The outcomes are far different when the underlying GPT model is prompted with your actual codebase at its disposal to aid inference.

Of course. It’s purely speculative as I haven’t been able to try it yet, and can only use what I have. I wouldn’t doubt that they have a very customized pipeline to handle it.

Keep in mind, my comment is based on the livestream of using ChatGPT, not Copilot.

It was a fresh project, as stated in my comment. There was no “actual codebase” besides a templated React App with Typescript. I was testing to see how it was capable of initializing a strong non-opinionated framework for a websocket-based chat application.

I've used Codex in the Playground for something I had no idea about. It spit out some bloated code, 100 lines, and I pared it down to the 10 lines I actually needed. It would probably have taken the same time to do it from scratch, so not really a time savings. But I'm sure the models will improve over time.


Yes, completely agree.

I love using ChatGPT alongside my programming for any sort of “why”, “what”, and simple “how” questions.

However, using it to write code, especially if I am not knowledgeable enough to just write it myself, is a recipe for disaster and loss of control. The name in itself hints towards this, and I'm glad that they are clear about it being a "copilot".


The other thing I’m not sure these “copilots” can do is create good overall design patterns amongst interactions of services. This is more of a higher level thing that has nothing to do with code, but IMO is more important than the code.


I agree. If you do not give the AI system information about the larger scope of the code you're writing, it tries to help you by making inferences with a lot of assumptions. It lacks context, and without context, it will perform poorly.

In a VSCode project, I get the sense that CoPilot is intimately aware of all parts of the code. It knows all the variables, their scope, and my development style. It is aware of every function, class, and environment variable. With that perspective, it needs to make fewer assumptions than ChatGPT needs to make. Ergo - magical productivity.

… especially if I am not knowledgeable enough to just write it myself

Doesn’t this make CoPilot’s point? GPT is not perfect, but neither are we. In fact, we have not seen the 200 patterns that GPT has seen to create a GUID (for example). And to be clear - GitHub describes it as a “Paired Programmer”, a partner whose intent is to help, not replace.

That is very impressive. I speak with ignorance when it comes to copilot as I’ve never used it. Most likely some of the salt spilling over from my cGPT experiment. Very good points - I have nothing to add to them.

It all honestly blows me away, and I know it’s all just going to continually be improved.

@curt.kennedy Completely agree. A good program is built before any code is written. Now that I think about it, it would be very interesting to see how it handles and translates a pseudocode project. Off to the drawing board.


It definitely could be a good paired programmer. But the human ultimately has to "approve" the code before using it, which is where the problem is. I see so many people on this forum get code from ChatGPT and then create a new thread about the fact that it doesn't work. But hey, they "approved" it without knowing what it even does.

I had a co-worker friend who worked in Silicon Valley for years during the dot-com boom. "Software Engineers" were hired, and sadly they couldn't write a single line of code that worked. But companies were desperate.

I see this today too, where people with no discipline, no troubleshooting, decide to use no-code platforms like Zapier to power their entire system. And then they get frustrated that it doesn’t work.

You can make no-code work, but you still have to be disciplined and follow basic design patterns. This is what people lack when they don't try, or don't care, and just want something to work without putting their own work or thought into the project.

I don't see AI filling this gap anytime soon. But it is great for folks who don't have code or architecture experience, who need to fill that code gap, and who also have a strong interest in the overall theory of what is going on in the design pattern (the architecture). Unfortunately, this population is small. The next population is experienced programmers with so much boilerplate to implement that these "copilot" tools are of actual use to them.

Another thing I want to point out: I have worked on projects that were coded in a high-level language like MATLAB and then ported automatically to an FPGA (VHDL). What you saw was extremely inefficient code. The devs spent more time removing the bloat in the code to meet their timing requirements. So computer-generated code can be extremely poorly written for the higher-performance cases, but could be viable in low-performance situations (assuming it works).


Yeah, I suspect it will do well. I often write comments (with pseudocode outlining) and then let Copilot generate the code. It's pretty effective. I wrote none of this function, of course.

This is a simplistic example, but Copilot can also do things with existing or generated code, given follow-up instructions such as "Make it more robust…" or "Chunk it…" (the accompanying code screenshots are not reproduced here).


Really good points.

It’s a double-edged sword, no doubt.
Power to the correct wielder. A missing limb for someone else.

In that example code above, I'm seeing the "make robust" step create a blind try/catch block. Blind try/catch blocks are something I would personally avoid.

Also, the code itself is unreadable. Why does it make sense to multiply a random number by a hex value and then convert this to a string? This is not canonical. Sorry. So maintainability is out here too, especially with the total lack of comments.
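For contrast, the readable, canonical way to get a unique identifier is a standard-library call rather than random-hex arithmetic (shown in Python here, though the generated snippet under discussion was presumably JavaScript):

```python
import re
import uuid

# One self-documenting line; uuid4 is randomly generated per RFC 4122.
identifier = str(uuid.uuid4())
print(identifier)

# The canonical 8-4-4-4-12 lowercase hex format is guaranteed by the library.
assert re.fullmatch(r"[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}", identifier)
```

A future maintainer can look up `uuid.uuid4` in the documentation; the multiply-by-hex version has to be reverse-engineered every time.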


Everything you say resonates with me.

Coding, as you say, is goodish. But I can now spend the same amount of time fixing and debugging.

That said, it has started me on coding tasks where I had no idea where to begin (I've been 10+ years away from when development was my "proper" job).

Like building a node.js consumer on a google pub/sub queue with a service account for authentication.

It helped me decide on that architecture, install Visual Studio, and install Node. Then it wrote the code and helped me deploy it. None of which I previously had experience with.


I’d be lying if I said that my projects aren’t influenced by ChatGPT.

I believe it ultimately comes down to the user, their ability to prompt, and their understanding of the limitations. Which is why I don’t like the message of “build an app […] just by describing what you need through multiple steps of conversation.”

GPT is not perfect, but humans are flawed as well. And let's be clear: these examples are super simple and intended only to help frame a general understanding, given that you and @RonaldGRuckus use only consumer-facing GPT features. I recommend you give it a try; they offer a free trial period, as I recall.

The question I believe deserves consideration is whether an AI paired assistant is likely to improve things or make them worse. So far, the data says it's better (see "Research: quantifying GitHub Copilot's impact on developer productivity and happiness" on The GitHub Blog). But each developer needs to really assess that in their own work style.

We can debate how specific copilots may work well or poorly, but my assertions in my post are about copilots in general. I believe they will begin to be proven effective in almost every aspect of business, communications, marketing, sales, research, management, and product development. I think they’re going to change pretty much everything.

I think that’s a fair objection. Uninformed readers will make assumptions that may be a stretch for today’s AI models to handle. But isn’t that what a copilot is all about? Shaping the prompts and layering API calls under the covers to achieve that which is very difficult to achieve in a free-form UI?

The first "copilot" was ostensibly the original Microsoft wizard, first introduced in Microsoft Publisher in 1991. Wizards became part of the operating system with Windows 95. The most commonly used wizard at the time was the Internet Connection Wizard, renamed the "New Connection Wizard" in later versions of Windows. But the concept was the same: try to help users compress time and complexity.

LLMs are transcendent; they raise the bar significantly because they can infer meaning through natural language, and they can generate content with meaningful, though not perfect, precision. If you're old enough, you may recall that the Internet Connection Wizard had flaws as well. :wink:

It really depends. Optimizing a third-party prompt, from what I've seen, requires some assumptions that can drift away from the original intent. There needs to be a certain level of understanding to write a prompt on a subject without it being essentially noise, or wrong.

There are so many nuances in building an application that it would take almost the same amount of time to format it for Copilot as to just write it myself. As shown in your example, Copilot just "does". There's a layer of thought that's needed which is still absent.

In terms of programming, you almost need to already know the architecture, or the end result, to prompt it correctly. Not all programs need an elaborately thought-out design process, obviously. As a learning tool it is wonderful; I absolutely love GPT's potential for global education.

However, anyone starting out who picks it up thinking they can "talk their way to an app" is grossly misinformed.

Again, I completely love using it, I am just disappointed in their statement.

I'm sure there are cases where the net gain is zero, even negative. The data, however, shows overwhelming evidence that Copilot users are experiencing significant net gains.

No, for sure. Again, I don’t use Copilot but I use GPT and it helps me tremendously.

What I am trying to say is that in almost no current setting would talking your way to building an app be a good idea. It's just setting the stage for disappointment.

I’m totally aware it’s named “Copilot” for a reason. It’s just slightly frustrating seeing claims of it being capable of essentially writing out an app through words.