in reply to Re^5: AI in the workplace
in thread AI in the workplace
It's not "just what I say"; it's the culmination of the votes cast within the last 12 months by people using the site. The point being: you now claim it's a good idea, while your own previous posts are critical of AI providing answers to such questions. Consistently inconsistent, and demonstrably exhibiting the very facets you criticize.
Replies are listed 'Best First'.
Re^7: AI in the workplace (... in the Monastery)
by LanX (Saint) on Jul 28, 2025 at 10:14 UTC
Many search engines nowadays prepend an AI-generated summary to the results before showing the hits. Of course we are not a search engine, nor a code-writing service, but a community tool helping others become better programmers.

I initially said it's a horrible idea, because the way harangzsolt proposes it would lead to too many problems (apart from implementation problems). But let's try. So I took the liberty of feeding a current question into Duck.AI using ChatGPT 4o:

> Why is "any" slow in this case?

The answer reads interesting - like always with AI - but requires deeper analysis for disguised nonsense. (Disclaimer: I didn't do that analysis.) Definitely nothing an amateur could handle, but an expert can draw inspiration from it. For instance, I was intrigued by the idea of `$1` being slow because its numification can't be optimized (point 2). So I asked for clarification, guessing this is about dual-values:

> why is numification of read-only slower

Answer:
## 🧩 Understanding Numification of Read-Only Variables in Perl

Numification refers to the process of converting a string or other data type into a numeric value in Perl. When dealing with read-only variables like `$1` and `$2`, which are special variables that hold the results of a regex match, the performance implications of numification can be significant. Here's why:

### 🔍 Key Reasons for Slower Numification of Read-Only Variables

#### 1. *Repeated Conversion Overhead*

When you use `$1` and `$2` directly in comparisons, Perl must convert these values to numeric form each time they are accessed. This involves:

#### 2. *Lack of Caching*

Unlike lexically scoped variables (like `$c` and `$r` in your "ugly_cr" method), which can be assigned once and accessed multiple times without further conversion, read-only variables do not benefit from caching. Each access to `$1` and `$2` results in a fresh conversion, leading to inefficiencies.

#### 3. *Context Sensitivity*

Perl is context-sensitive, meaning that the same variable can behave differently depending on how it is used (numeric vs. string context). When using `$1` and `$2`, Perl must always be aware of the context in which they are being used, adding additional overhead to the process.

### 🧠 Conclusion

In summary, the numification of read-only variables like `$1` and `$2` is slower due to:

Again interesting, but I'm not convinced. The claim (point 2) that "read-only variables do not benefit from caching" can't be reproduced, because `$1` is indeed also a dual-value.
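Whether `$1` caches a numeric slot can be inspected directly. This is my own minimal sketch (not from the thread), using the core module Devel::Peek; the flags shown (IOK/pIOK next to the string value) indicate a dual-value, and the exact output depends on your perl version:

```perl
use strict;
use warnings;
use Devel::Peek;

"abc 42 def" =~ /(\d+)/ or die "no match";

Dump $1;          # before any numeric use: string value only?
my $n = $1 + 0;   # force numification of $1
Dump $1;          # after: check whether a cached IV slot/flag appears
```

Because `$1` is a magic variable, each `Dump` itself triggers a fetch, so the flags seen here have to be read with that caveat in mind.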
Is probably another caching mechanism meant? Nope - ChatGPT confirms that dual-values are meant:
MY CONCLUSION (so far): LLM output can inspire good ideas in our context, but it requires an expert to deal with. The wordy answers are often full of hidden pitfalls, and contrary to a human, the LLM doesn't even try to cross-check what it (hear-)says.

Cheers Rolf
IN HINDSIGHT: This should probably rather be a meditation in its own thread.
Disclaimer: This test is far from methodical. I ran this conversation with Duck.AI after the original question had already received many answers. Hence the replies might already have been training data influencing the LLM.
by ikegami (Patriarch) on Jul 28, 2025 at 18:30 UTC
Perl does have to numify them again, but not for the reason given.
As you can see, `$1` gets numified. But every time you read from it, it gets repopulated, since it's a magic variable; this wipes the previous values. In the context of analyzing the AI's answer, it's worth noting that I missed the repeated numification in my own answer - I stopped too soon.
That can't be true, since ugly_cr is way faster than ugly. The actual culprit is the overhead added by capturing. In the context of analyzing the AI's answer, it's worth noting that the response is self-contradicting: according to the AI, assigning to the lexicals makes the cr version faster by doing the numification only once, yet it also makes the cr version slower because of the added assignment.
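The claimed capturing overhead can be measured with the core Benchmark module. This is my own sketch, not the thread's original benchmark (the ugly/ugly_cr subs from the underlying question are not reproduced here):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $s = join ' ', 1 .. 1000;

# Same match with and without a capturing group: the difference
# approximates the cost of maintaining the capture buffers for $1.
cmpthese( -1, {
    no_capture => sub { my $n = 0; $n++ while $s =~ /\d+/g;   },
    capture    => sub { my $n = 0; $n++ while $s =~ /(\d+)/g; },
} );
```

On typical perls the no_capture variant comes out ahead, but by how much varies with perl version and pattern, so run it on your own build rather than trusting any single number.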
by LanX (Saint) on Jul 28, 2025 at 20:20 UTC
In fact, it doesn't look to me like repeated numification (or magic fetching) was happening.
Edit: hm ... probably the IV values don't matter if the appropriate I flags aren't set.
Cheers Rolf
by ikegami (Patriarch) on Jul 28, 2025 at 20:25 UTC