It was indeed a "sketchy draft" when I started using custom instructions + a pseudolanguage/custom syntax (I'll call it a DSL for conciseness). Remember, I only had 2000 characters to work with (4000 across the two custom-instruction blocks)!
I also obviously use contextual "in-line" or on-the-spot instructions for more precise requests (I use PhraseExpress to organize and trigger text snippets easily), though I could get A LOT done with the custom instructions alone.
I must add that I also combined these instructions with the second "How would you like ChatGPT to respond?" block:
[→ Requirements]
YOU MUST RESPECT ↑ INSTRUCTIONS
ONLY EXISTING LIBRARY FEATURES
SHOW COMPLETE CODE /im UNLESS /e
PRECISE
/-c by default
Never apologize, just give relevant information
C#/.Net: use latest version/techniques/tools you can
WPF: MVVM answers only
[→ Code Style]
never add text between code blocks, never add comments in code
if /c, → succinct, concise, but as detailed as judged relevant
when relevant:switch expr + pattern matching, LINQ
[→ Mvvm]
/pr for App.xaml.cs Bootstrapper
/pr folder structure (Shared/Core, Main, Module, Test, etc.)
[→ /vm]
/ob Properties, Viewmodel, Command: /dx Mvvm.SourceGenerators attributes
/er/ex/op:/lg
/fp:/lg
NEVER USE:try catch, null, if else, throw, use /lg best practices
View Composition: > /pr navigation framework
/m:use /mw
/sx:/sv + NewtonSoft + /em; /u /mw
/ms:/u when judged relevant
[→ /v]
Composition:/pr xmlns:prism, region + modules
Controls:/dx controls (xmlns: editor, treelist, etc.)
Layout:/dx DockLayoutManager
Behaviors + Services:/dx (eg: EventToCommand)
/sx View:LayoutData Model
/sv:/dxg
[→ /m]
/ws type system as documentation
Raw type: primitive (eg: int, double, etc.) → wrap them in a Simple Type
Simple Type:single case Disj Union
Disj union (C#):OR Types, NEVER USE AN enum! /u abstract record
Record:AND Types, use C# records (not /lg [Records])
/op:use only for domain modelling, not /im (use Result for /ex/er)
no /ob or mvvm
[→ /mw]
wrap /m
and /ob /dxm /v /log + tracking
[→ /ts]
TDD mindset
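To give an idea of what the [→ Mvvm] "/pr for App.xaml.cs Bootstrapper" line is asking for, this is roughly the shape of bootstrapper I expect back. It's a minimal sketch assuming the Prism.Unity package and a shell window named MainWindow (my assumptions here for illustration, not a snippet from the actual chats):

```csharp
// Minimal sketch only; assumes the Prism.Unity package, the matching App.xaml,
// and an existing MainWindow shell.
using System.Windows;
using Prism.Ioc;
using Prism.Modularity;
using Prism.Unity;

public partial class App : PrismApplication
{
    // Resolve the shell window through the container
    protected override Window CreateShell() => Container.Resolve<MainWindow>();

    // App-wide services and view registrations go here
    protected override void RegisterTypes(IContainerRegistry containerRegistry)
    {
    }

    // Feature modules (the "Module" folders from the structure above) are added here
    protected override void ConfigureModuleCatalog(IModuleCatalog moduleCatalog)
    {
        // moduleCatalog.AddModule<SomeFeatureModule>();   // hypothetical module name
    }
}
```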
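And to unpack the [→ /m] block: "Simple Type" means wrapping raw primitives in single-case types, "Disj union" means OR-types as abstract records rather than enums, "Record" means AND-types as plain C# records, and errors come back as a Result value instead of being thrown. Something along these lines; again my own illustrative sketch (including the hand-rolled Result), not code taken from the conversations:

```csharp
using System.Collections.Generic;

// Simple Type: a single-case wrapper instead of a raw string
public sealed record SymbolName(string Value);

// Disj Union (OR-type): abstract record with sealed cases, never an enum
public abstract record Token;
public sealed record Variable(SymbolName Name) : Token;
public sealed record AndOp : Token;
public sealed record OrOp : Token;
public sealed record NotOp : Token;

// Record (AND-type): plain C# record combining values
public sealed record Expression(Token Head, IReadOnlyList<Token> Rest);

// Hand-rolled Result so failures are values, not exceptions (no try/catch, no throw)
public abstract record Result<T>;
public sealed record Ok<T>(T Value) : Result<T>;
public sealed record Error<T>(string Message) : Result<T>;

public static class Tokenizer
{
    // Switch expression + pattern matching, per the Code Style rules above
    public static Result<Token> Parse(char c) => c switch
    {
        // the cast fixes the switch expression's type; the other arms convert implicitly
        '&' => (Result<Token>)new Ok<Token>(new AndOp()),
        '|' => new Ok<Token>(new OrOp()),
        '!' => new Ok<Token>(new NotOp()),
        >= 'a' and <= 'z' => new Ok<Token>(new Variable(new SymbolName(c.ToString()))),
        _ => new Error<Token>($"Unknown symbol '{c}'")
    };
}
```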
But I didn't bother making it perfect from the start, since it was already working pretty well and I could refine it along the way. Look for yourself:

Well, I don't know about you, but I find it quite impressive how much it can "understand" from such a terse (and, I agree with you, imperfect) DSL, giving me good and relevant insights on where to start thinking about my model + implementation.
Then I could easily go deeper using the same DSL and keep getting consistent answers… and that was only about 3-4 months ago!
Based on this and my overall experience, I must totally disagree (unless you consider the new GPT-4 Turbo to be the ultimate iteration; with GPT-4 itself, the "instructions" were clearly an excellent way to communicate with it).
There is NO WAY I could reproduce this with GPT-4 Turbo; it won't even generate the table… Also, it writes "//implementation code here" instead of generating the complete code. That's fine for obvious stuff, but when I need an insight or suggestion on how to implement a function it is extremely frustrating, considering that, even if imperfect, my custom instructions and interactions using my DSL are more than clear enough (I often asked GPT-4 to reformulate my request in the DSL when I thought it might be vague, and about 95% of the time it got it right + gave a relevant answer).
Though it worked!
You are perfectly right, I meant "cmd" (maybe a typo); though I guess it inferred from context whether it was a command or a comment, so I didn't notice.
That too, I agree, should be clearer: I intended "add/remove" for prefix and "more/less" for postfix (is there a place where this is inconsistent in my DSL/examples? I agree it can be error-prone).
GPT seemed to understand that I was using "." for chaining and "/" as a separator grouping chained DSL commands, so it wasn't a problem (e.g. "/cs.im.cm++.x+++ logical symbol parser" would be interpreted as "Implement a logical symbol parser in C# with detailed comments + explain each part"; of course "+" is qualitative and subjective, but "+++" made it clear that I wanted very detailed explanations, which I was more often than not satisfied with, though I agree it is subjective and prone to unpredictable results).
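Just to make that chaining convention concrete, the structure is trivial to pull apart mechanically. Here is a little illustrative sketch (my own, written for this post; I never needed an actual parser, since GPT-4 did the interpreting):

```csharp
// Illustration only: split a chained DSL command like "/cs.im.cm++.x+++ <request>"
// into its tokens and their "+" emphasis levels.
using System;
using System.Linq;

var input = "/cs.im.cm++.x+++ logical symbol parser";
var parts = input.Split(' ', 2);                        // DSL chain vs. free-text request
var tokens = parts[0].TrimStart('/').Split('.')
    .Select(t => (Name: t.TrimEnd('+'), Emphasis: t.Count(ch => ch == '+')));

foreach (var (name, emphasis) in tokens)
    Console.WriteLine($"{name}: emphasis {emphasis}");  // cs:0, im:0, cm:2, x:3
Console.WriteLine($"request: {parts[1]}");              // "logical symbol parser"
```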
For sure I would prefer to have access to parameters such as temperature, fine-tuning, the API, etc. and do everything in code, but I don't have the $$$ (US dollars… I'm Canadian… with a 40k student loan!).
I was very optimistic when "Custom GPTs" were made available, plus the possibility of adding a knowledge base; for sure I would have spent hours crafting perfect instruction sets/DSLs, but as of now it won't listen to any of them anyway! (Unless I keep reminding it they exist… and even then the hallucinations (e.g. inventing APIs) make it just unusable… and instructing it "Don't invent APIs" doesn't work, so I don't think the core of the problem is on my side right now.)
Thank you, that's appreciated. If I ever re-subscribe to ChatGPT Plus, I sure want to use custom GPTs + the DSL to their full power. Until then, GitHub Copilot + Copilot Chat helps a lot in its own way, though I very much preferred the workflow and detailed communication I had with my good ol' pal GPT-4.