5 Comments
Peter W.

And yes, a 100-page prompt is a strong indicator that whoever wrote it had a serious problem defining the task. If a subject is so complex that it can't be defined in a couple of sentences (a few pages at most), breaking the task down into steps that each fulfill the S.M.A.R.T. criteria (specific, measurable, attainable, etc.) might help.

Otherwise, we end up with "42" as the answer.

Peter W.

Question about this line in the post: "...with AI-generated fake citations...". How well do the commonly used AIs perform as proofreaders? Maybe I'm being naive here, but why not have a different AI check the output of the first for errors such as wrong or entirely made-up citations? For example, if the document was generated with help from ChatGPT, have Claude proofread it, and vice versa. Has anyone here tried that?
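
Roughly what I have in mind, as a minimal sketch (the model name, prompt wording, and file name are just placeholders I made up, and the second model can only flag citations that look suspicious to it, not actually verify them against the literature):

```python
# Sketch: have a second model (here Claude, via the Anthropic Python SDK)
# review a document drafted with another AI and flag citations that look
# wrong or fabricated. Requires ANTHROPIC_API_KEY in the environment.
import anthropic

REVIEW_PROMPT = """You are proofreading a document written with the help of
another AI. List every citation (author, title, venue, year) that you suspect
is inaccurate or entirely made up, and explain why. If a citation looks
plausible but you cannot verify it, say so explicitly.

Document:
{document}
"""

def cross_check_citations(document: str) -> str:
    client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder; use whichever model you have access to
        max_tokens=1024,
        messages=[{"role": "user", "content": REVIEW_PROMPT.format(document=document)}],
    )
    return response.content[0].text

if __name__ == "__main__":
    with open("draft.md") as f:  # hypothetical file produced by the first AI
        print(cross_check_citations(f.read()))
```

Of course this only tells you which citations the second model doubts; the flagged ones would still need to be checked against the actual sources.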

AK

Brilliant article and so true.

Bronson Elliott

😂

Zhi Hui Tai

“Personally, I would never use generative AI for tax advice in the first place” - Curious, why is that?
