Many search engines nowadays come up with an AI generated preface summing up the results before showing the hits.
Of course, we are neither a search engine nor a code-writing service, but a community tool helping others become better programmers.
I initially said it's a horrible idea, because the way harangzsolt is proposing it would lead to too many problems (quite apart from the implementation problems).
But let's try:
So I took the liberty of feeding a current question into Duck.AI using ChatGPT 4o:
> Why is "any" slow in this case?
Answer:
It reads interestingly - as AI output always does - but requires deeper analysis for disguised nonsense. (Disclaimer: I didn't do that.)
Definitely nothing an amateur could handle, but an expert can draw inspiration from it.
For instance, I was intrigued by the idea of $1 being slow because Perl can't optimize its numification (point 2).
So I asked for clarification, guessing this was about dual-values.
> why is numification of read-only slower
Answer:
Again interesting, but I'm not convinced. The claim (point 2) that "read-only variables do not benefit from caching" can't be reproduced, because $1 is indeed also a dual-value:
:~$ perl -MDevel::Peek -E'"42"=~/(\d+)/; say $1; Dump $1; $a=$1+1; Dump $1'
42
SV = PVMG(0x5d60b28468a0) at 0x5d60b2868a50
  REFCNT = 1
  FLAGS = (GMG,SMG,POK,pPOK)
  IV = 0
  NV = 0
  PV = 0x5d60b283f5f0 "42"\0
  CUR = 2
  LEN = 16
  MAGIC = 0x5d60b286f0e0
    MG_VIRTUAL = &PL_vtbl_sv
    MG_TYPE = PERL_MAGIC_sv(\0)
    MG_OBJ = 0x5d60b2868a38
    MG_LEN = 1
SV = PVMG(0x5d60b28468a0) at 0x5d60b2868a50
  REFCNT = 1
  FLAGS = (GMG,SMG,IOK,POK,pIOK,pPOK)
  IV = 42          # <--- Caching
  NV = 0
  PV = 0x5d60b283f5f0 "42"\0
  CUR = 2
  LEN = 16
  MAGIC = 0x5d60b286f0e0
    MG_VIRTUAL = &PL_vtbl_sv
    MG_TYPE = PERL_MAGIC_sv(\0)
    MG_OBJ = 0x5d60b2868a38
    MG_LEN = 1
:~$
LLM output can inspire good ideas in our context, but it requires an expert to deal with. The wordy answers are often full of hidden pitfalls, and contrary to a human being, the LLM doesn't even try to cross-check what it (hear-)says.
So, I don't know. But using normal questions as prompts is obviously a horrible idea.
Hardly. This is most likely already happening, at least with AnoMonk posts.
Very likely, at least in requiring us to raise the quality standards for answers or to demand proofs of concept (POCs).
Cheers Rolf
(addicted to the Perl Programming Language :)
see Wikisyntax for the Monastery
This should probably better be a meditation in its own thread.
This test is far from methodical. I ran this conversation with Duck.AI after the original question had already received many answers, hence those replies might already have influenced the LLM.
In reply to Re^7: AI in the workplace (... in the Monastery)
by LanX
in thread AI in the workplace
by talexb