11 Comments
Kavita:

Why is this surprising? It just shows how generally consulting firms work.

Kamil Banc:

Not mine :)

Kavita:

Is your consulting firm a 300,000 people global giant?

Kamil Banc:

Why should it be? Sounds antiquated 😎

Ed Kool:

As an experienced programmer in COBOL and PL/I, I'd say prompting is nothing more than modular programming. Instructions are verbal commands for any LLM. Want to know more? You can hire me via edrobot@semanta.

Peter W.:

And yes, a 100-page prompt is a strong indicator that whoever wrote it had a serious problem defining the task. If a subject is so complex that it can't be defined in a couple of sentences (a few pages at most), maybe it helps to break the task down into steps that each fulfill the S.M.A.R.T. criteria (specific, measurable, attainable, etc.).

Otherwise, we end up with "42" as the answer.

Peter W.:

Question about this line in the post: "...with AI-generated fake citations...". How well do the commonly used AIs perform as proofreaders? Maybe I'm being naive here, but why not have a different AI check the output of the first for errors such as wrong or entirely made-up citations? For example, if the document was generated with help from ChatGPT, have Claude proofread it, and vice versa. Has anyone here tried that?

AK:

Brilliant article and so true.

Bronson Elliott:

😂

Zhi Hui Tai:

"Personally, I would never use generative AI for tax advice in the first place" — curious, why is that?
