The Big Four spent millions on AI systems that need expert supervision (and still fail). Meanwhile, small teams build AI tools that regular people can use safely.
Being an experienced programmer in COBOL and PL/1, I see prompting as nothing more than modular programming. Instructions are verbal commands for any LLM. Want to know more? You can hire me via edrobot@semanta.nl.
And yes, a 100-page prompt is a strong indicator that whoever wrote it had a serious problem defining the task. If a subject is so complex that it can't be defined in a couple of sentences (a few pages at most), it may help to break the task down into steps that each fulfill the S.M.A.R.T. criteria (specific, measurable, attainable, etc.). Otherwise, we end up with "42" as the answer.
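For what it's worth, here is roughly what that decomposition looks like in code. This is only a sketch: `call_llm` is a hypothetical stand-in for whatever model API you use, and the three steps are illustrative, not a recipe.

```python
# Sketch of "prompting as modular programming": one small, verifiable
# step per function instead of one 100-page prompt.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's SDK."""
    raise NotImplementedError("wire up your model API here")

def extract_facts(document: str) -> str:
    # Specific: one narrow instruction, easy to check by eye.
    return call_llm(
        f"List every factual claim in the text below, one per line.\n\n{document}"
    )

def draft_summary(facts: str) -> str:
    # Measurable: output length and structure are constrained.
    return call_llm(
        f"Write a 5-sentence summary using only these claims:\n\n{facts}"
    )

def check_summary(facts: str, summary: str) -> str:
    # Attainable: the checker only verifies, it does not rewrite.
    return call_llm(
        "Does every sentence in the summary follow from the claims? "
        f"Answer PASS or FAIL with reasons.\n\nClaims:\n{facts}\n\nSummary:\n{summary}"
    )

def summarize(document: str) -> str:
    facts = extract_facts(document)
    summary = draft_summary(facts)
    verdict = check_summary(facts, summary)
    if verdict.startswith("FAIL"):
        raise ValueError(f"summary rejected: {verdict}")
    return summary
```

Each step can be tested, swapped out, or rerun on its own, exactly like a subroutine.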
Question about this line in the post: "...with AI-generated fake citations...". How well do the commonly used AIs perform as proofreaders? Maybe I'm being naive here, but why not have a different AI check the output of the first for errors such as wrong or entirely made-up citations? For example, if the document was generated with help from ChatGPT, have Claude proofread it, and vice versa. Has anyone here tried that?
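Here is a sketch of what that cross-check could look like with the two public Python SDKs. The model names and prompts are illustrative, it assumes API keys for both providers in your environment, and note the caveat in the comments: the second model can only flag citations it doesn't recognize, not truly verify them against the literature.

```python
# Sketch of the cross-model check proposed above: draft with one model,
# then have a different model flag citations it can't vouch for.
# Caveat: without a lookup against a real bibliographic database, the
# auditor can only mark citations as SUSPECT, not prove they exist.

from openai import OpenAI
from anthropic import Anthropic

def draft_with_chatgpt(task: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": task}],
    )
    return resp.choices[0].message.content

def audit_with_claude(document: str) -> str:
    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model choice
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "List every citation in this document. For each, answer "
                "VERIFIED only if you are confident the source exists; "
                "otherwise answer SUSPECT with a short reason.\n\n" + document
            ),
        }],
    )
    return msg.content[0].text

draft = draft_with_chatgpt("Summarize recent tax-law changes, with citations.")
print(audit_with_claude(draft))
```

A SUSPECT flag is a prompt for a human to check the source, not a verdict.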
Why is this surprising? It just shows how consulting firms generally work.
Not mine :)
Is your consulting firm a 300,000-person global giant?
Why should it be? Sounds antiquated 😎
Edrobot@semanta.nl
Brilliant article and so true.
😂
"Personally, I would never use generative AI for tax advice in the first place." Curious, why is that?