Prompt + markdown contents in instructions; assistant doesn't see half of the markdown

I’ve been working on this for two days! Any help appreciated!

I’m using the Assistants API to answer questions about a list of 200+ tasks. The tasks are bullets falling under one of several headings. Both headings and bullets are in markdown.

So the instructions to the assistant call are a prompt (included below) followed by the markdown, which I insert. I do NOT use file retrieval, which has too many issues and is not appropriate for my application. The prompt + markdown fits within the token limit for gpt-4, which is the model I’m using.
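For concreteness, here is a minimal sketch of how the instructions string might be assembled, assuming the prompt text below is held as a template with a `{tasksfile}` placeholder where the markdown goes (the function and variable names here are hypothetical, not from the original post):

```python
def build_instructions(prompt_template: str, tasks_md: str) -> str:
    """Assemble the final instructions string: substitute the tasks
    markdown into the prompt template, wrapped in the "~~~" delimiters
    that the prompt refers to."""
    return prompt_template.replace("{tasksfile}", f"~~~\n{tasks_md}\n~~~")


# The result would then be passed as the `instructions` parameter of the
# assistant, e.g. with the openai>=1.x Python SDK (sketch, not verified
# against a specific SDK version):
#
# assistant = client.beta.assistants.create(
#     model="gpt-4",
#     instructions=build_instructions(PROMPT, tasks_md),
# )
```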

When I ask the assistant a question, it can accurately discuss the first two sections, which contain around 40-50 tasks. However, it usually doesn’t acknowledge that I have the other sections, even though I can see the full instructions text just before the assistant call is made. I know the assistant call is getting all the data. Occasionally it will include the other sections, but with a handful of completely made-up tasks (some of which are funny!).

I use the Assistants API for several other bits, which work fine (but each has much less context). What could be going on here?

Prompt

You are a helpful assistant that answers questions about the user’s tasks. Between the two sets of “~~~” below is a markdown file which contains all the tasks, and other information, needed to answer questions.

There are up to 7 sections in this file. Each section has a heading. This heading starts with a “#”, followed by the section name, then the number of tasks in that section in parentheses.

The first section is “General”, which provides some general information but contains no tasks. The rest of the sections may contain tasks. Those sections may be any of the following: “Current” contains tasks the user is currently working on. “Rest of today” and “Rest of week” are as stated. “Pending” contains tasks which should be done this week but are currently unscheduled. “Plan” contains tasks which are planned for the future but unscheduled. “Future” contains tasks which are planned for the future and are scheduled.

Following the section heading are 0 or more tasks. Each task is one line. The line starts with a “-” and ends in a {metadata} string. This metadata will include the current day-of-week (DOW) number, possibly with a suffix. A DOW of 0 is current, 1 is Sunday, 2 is Monday … 8 is the following Sunday. Ignore DOWs that start with 9 or “-”, or that are null. The metadata will also include one or more Categories, which should help in answering questions. The metadata may also include a StartDate if applicable. Confirm that the number of tasks counted for a section is equal to the number of tasks in parentheses in the section heading.

You must consult the entire file, including all tasks, without exception, for each and every question. Then you must double-check your answer before answering the user’s question.

{tasksfile}
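The prompt asks the model to verify that each section’s task count matches the number in parentheses in its heading. That same check can be done client-side before the call is ever made, which also confirms the markdown being sent is well-formed. A minimal sketch, assuming the heading and task-line format described above (the function name is hypothetical):

```python
import re

def check_section_counts(tasks_md: str) -> list[str]:
    """Verify that each section heading's declared task count
    (the number in parentheses) matches the number of '-' task
    lines under it. Returns mismatch messages; empty means OK."""
    problems: list[str] = []
    section, declared, counted = None, 0, 0

    def close_section():
        # Record a mismatch for the section we just finished, if any.
        if section is not None and counted != declared:
            problems.append(f"{section}: heading says {declared}, found {counted}")

    for line in tasks_md.splitlines():
        # Heading format per the prompt: "# Name (N)"
        m = re.match(r"#\s*(.+?)\s*\((\d+)\)\s*$", line)
        if m:
            close_section()
            section, declared, counted = m.group(1), int(m.group(2)), 0
        elif line.lstrip().startswith("- "):
            counted += 1
    close_section()
    return problems
```

Running this on the markdown before inserting it into the prompt catches count mismatches early, instead of relying on the model to notice them.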

Are you resetting session history between questions?

I never could get this to work properly, so I moved it over to Anthropic, where it works great. That was a couple of months ago. To answer the question: yes, I was clearing context.

In AI, months are like “dog years”: 1 month = 7 years. So hopefully it has been resolved for OpenAI. I’m not going to Anthropic. Thanks.