
If you've discovered something amazing about Perl that you just need to share with everyone, this is the right place.

This section is also used for non-question discussions about Perl, and for any discussions that are not specifically programming related. For example, if you want to share or discuss opinions on hacker culture, the job market, or Perl 6 development, this is the place. (Note, however, that discussions about the PerlMonks web site belong in PerlMonks Discussion.)

Meditations is sometimes used as a sounding-board — a place to post initial drafts of perl tutorials, code modules, book reviews, articles, quizzes, etc. — so that the author can benefit from the collective insight of the monks before publishing the finished item to its proper place (be it Tutorials, Cool Uses for Perl, Reviews, or whatever). If you do this, it is generally considered appropriate to prefix your node title with "RFC:" (for "request for comments").

User Meditations
IntelliJ IDEA for Perl !! Try it, use it, BE HAPPY !!
5 direct replies — Read more / Contribute
by ait
on Oct 14, 2021 at 13:05

    Hi there, ye fellow monks!

    Just writing to report that I've been trying out and actually using IntelliJ IDEA Community for Perl on a daily basis and it works AMAZINGLY WELL!!!

    I am so happy with it that if JetBrains charged to commercially support it, I would actually upgrade to IJ Pro. It's THAT good, IMHO.

    A HUGE shoutout and THANK YOU to Alexandr Evstigneev!! (Apparently hurricup here on PM).
    Here's the plugin page:

    Also, a thank you to JetBrains for such a great IDE (I have used it in my day job every day for years now).

    Features I like so far:

    • Good and smart syntax highlighting
    • Refactor/renaming stuff
    • Autocompletion
    • Running a script inside the IDE console
    • Debugging with point and click breakpoints, watchers, evaluations (basically all you have in Java)
    • Jumping to implementation (i.e. Goto -> declaration/implementation actually works and drills into modules, etc.)
    • Find in path is a great IJ feature and works nicely with Perl code
    • Specific support for: mason, TT2, Mojolicious, and embedded Perl (I haven't tested all of these)

    BTW, it may seem ironic that a Java IDE is probably the best tool for Perl in 2021, and painful that we couldn't get our act together with Padre, but hell, anything that helps Perl adoption is good in my book. Yeah, even if it's written in Java. Besides, many people currently making a living in the JDK ecosystem could quickly try and adopt things like Mojo or Dancer if those integrate seamlessly into their existing workflow, so it's a win-win for everyone.


Research into occasional participation in Perl/Raku development
2 direct replies — Read more / Contribute
by talexb
on Sep 30, 2021 at 21:11

    If you have any experience with helping out with Perl/Raku, there's a survey that could use your help. I took part a few weeks back, and had a really good conversation with the research assistant.

      Dr. Ann Barcomb (her website) of the University of Calgary is conducting research to understand episodic, or occasional, participation in the Perl/Raku community, in collaboration with The Perl Foundation. The results of her research will be provided as a TPF report and will assist the community in improving practices for managing episodic participation. Please consider assisting them by taking the survey about your participation in the Perl/Raku community. The link to the survey is here.


    Alex / talexb / Toronto

    Thanks PJ. We owe you so much. Groklaw -- RIP -- 2003 to 2013.

Performance of hash and array inserts
4 direct replies — Read more / Contribute
by bliako
on Sep 29, 2021 at 07:45

    After reading A short meditation about hash search performance (prompted by Hash Search is VERY slow), I wanted to investigate the performance of inserting into a hash and an array as the number of existing elements increases. The methodology is to Benchmark::timeit() a number of insertions into an empty hash or an empty array; for an array, insertion means "push at the end". My aim was to see whether the time per insert is constant, i.e. whether the total time increases linearly with the number of items inserted. I believe this is confirmed. Inserts of N elements into a fresh hash are timed and recorded for different N. I did not time just a single insert because of the suspicion that existing size may affect each new insert (i.e. it may be faster to insert into an empty container), so I inserted N items and timed the total.

    Linear regression is used to build the relationship T = aX+b where T is the total time to insert X items into a fresh container. The error of the regression will be indicative of how well the hypothesis of constant insert time holds.

    I interpret the results as follows: inserting into a hash or array takes constant time, independent of the container's current size: e.g. if insertion of 1000 items into a fresh hash takes 1 unit of time, then 2000 items take 2 units of time. Same with the array. Caveat: the hash keys provided by random_key() were extremely random. The confidence in this conclusion is stronger for arrays than for the hash (the linear regression error is lower). Inserting into an array is faster than inserting into a hash: about twice as fast on my computer/OS. Perl version used: 5.32.1, on Linux Fedora kernel 5.13.19.

    Caveat #2 The methodology is simplistic so please add on if you have better inspiration.

    Edit (5 minutes after posting): The ASCII plot looks volatile, but in reality the numbers show a nice linear relationship, which is confirmed by the low error of the least-squares fit. Also, "linear regression" and "least squares" mean the same thing in this post.

    There have been some more edits within 20 minutes of posting. E.g. to clarify that array insert means push.
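    The methodology above can be sketched as follows. This is not the actual script from the post: it uses simple "key$N" keys rather than the original random_key() helper, and Time::HiRes rather than Benchmark::timeit(), but the structure (time N inserts into a fresh hash for increasing N, then fit T = a*X + b by least squares) is the same.

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Time N insertions into a fresh hash. Keys here are only mildly
# varied; the original experiment used a highly random random_key().
sub time_hash_inserts {
    my ($n) = @_;
    my %h;
    my $t0 = [gettimeofday];
    $h{"key$_"} = $_ for 1 .. $n;
    return tv_interval($t0);
}

# Ordinary least-squares fit of y = a*x + b.
sub linear_fit {
    my ($xs, $ys) = @_;
    my $n = @$xs;
    my ($sx, $sy, $sxx, $sxy) = (0, 0, 0, 0);
    for my $i (0 .. $n - 1) {
        $sx  += $xs->[$i];
        $sy  += $ys->[$i];
        $sxx += $xs->[$i] ** 2;
        $sxy += $xs->[$i] * $ys->[$i];
    }
    my $a = ($n * $sxy - $sx * $sy) / ($n * $sxx - $sx ** 2);
    my $b = ($sy - $a * $sx) / $n;
    return ($a, $b);
}

my (@x, @y);
for (my $n = 10_000; $n <= 30_000; $n += 1_000) {
    push @x, $n;
    push @y, time_hash_inserts($n);
}
my ($a, $b) = linear_fit(\@x, \@y);
printf "Hash inserts: T = %g * X + %g\n", $a, $b;
```

    A small residual around the fitted line is what supports the constant-time-per-insert hypothesis.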

    Items (N)   Array total (s)     Hash total (s)
    10000       1.00004911422729    2.02198815345764
    11000       1.09900379180908    2.21846413612366
    12000       1.20203304290771    2.43116092681885
    13000       1.32625269889832    2.62970495223999
    14000       1.42015385627747    2.83560991287231
    15000       1.50754404067993    3.04865097999573
    16000       1.60211801528931    3.23556900024414
    17000       1.70919871330261    3.44259905815125
    18000       1.80735301971436    3.65537810325623
    19000       1.90634799003601    3.85966873168945
    20000       1.99797105789185    4.09639000892639
    21000       2.09790301322937    4.35650420188904
    22000       2.20532703399658    4.53428721427917
    23000       2.30040597915649    4.74114084243774
    24000       2.40066504478455    5.05467200279236
    25000       2.50436305999756    5.18268871307373
    26000       2.59865713119507    5.41770219802856
    27000       2.70806193351746    5.68814396858215
    28000       2.79667997360229    5.90737128257751
    29000       2.89690399169922    6.0479040145874
    30000       2.99919009208679    6.26542806625366

    Array inserts: Y = 9.94971160764818e-05 * X + 0.0141616115322365, with error 0.00145769586141692
    Hash inserts:  Y = 0.000215204607047044 * X + -0.176900404356259, with error 0.0075377646200501
    (Y: total time in seconds to insert X elements)
    [ASCII plot omitted: total insertion time (seconds) vs. number of items (10000 to 30000) for array ('A') and hash ('B') inserts. Both series are visibly linear, with the hash line rising roughly twice as steeply as the array line.]

    bw, bliako

Organizational Culture (Part VII): Science
4 direct replies — Read more / Contribute
by eyepopslikeamosquito
on Sep 14, 2021 at 00:02

    Publish or Perish

    I remember my dad lamenting his scientific publish or perish workplace culture, lightheartedly explaining to me one day how promotions were awarded:

    1. Each applicant's publications are placed on a weighing scale
    2. The promotion is awarded to the candidate with the heaviest weight

    Using that simple method, I doubt that Albert Einstein would've gained a promotion in 1905, due to the (meagre) weight of his Annus Mirabilis papers:

    • On a Heuristic Point of View Concerning the Production and Transformation of Light External (Photoelectric effect). 16 pages.
    • On the Movement of Small Particles Suspended in Stationary Liquids Required by the Molecular-Kinetic Theory of Heat External (Brownian motion). 11 pages.
    • On the Electrodynamics of Moving Bodies External (Special Theory of Relativity). 30 pages.
    • Does the Inertia of a Body Depend Upon its Energy Content? (E=mc**2). 3 pages.

    In fact, Albert did not score a promotion in 1905, as detailed in his 1905 Performance Appraisal:

    This is a patent office, Albert. Your job is to transform written patent applications into clear and precise language, and to study applications and pick out the new ideas of an invention. These are the priorities. Where does it say that your priorities are rewriting the rules of the Universe, unifying space and time, unifying radiation and matter, or demonstrating the existence of atoms?

    Regrettably, I had to put you down as "poor" for "works well with others" and "shares credit appropriately". You had no co-authors on your five papers, and your citations were quite skimpy: no citations at all in your June and September paper, only one citation in your April paper, and not much better on the others. You wrote that your special theory of relativity came to you after a discussion with your friend Michele Besso. But you didn't even acknowledge him in your June paper. This is an area for improvement.

    You seem to lack a flair for self-promotion. Lucky for us, our PR department stepped in and changed your L/c**2 equation into the much more marketable E = mc**2.

    Based on his performance as a patent clerk, I cannot recommend Albert for a promotion at this time.

    -- from Einstein's Patent Clerk Third Class Performance Appraisal of 1905

    Curiously, Einstein was passed over for promotion for three long years, until he "fully mastered machine technology", remaining a lowly Patent Clerk Third Class at the Swiss Patent Office until 1 April 1906, when he was finally promoted to Technical Expert Second Class.

    See also: Performance Appraisals from the Agile Imposition series.

    The Sad Story of Giordano Bruno

    In 1600 Giordano Bruno was found guilty of heresy by the Roman Inquisition. He was then humiliatingly paraded naked through the streets of Rome (with his tongue lashed to prevent him speaking) and finally burned at the stake, with his ashes thrown in the Tiber river. His crime? Declaring that the stars are distant suns surrounded by their own planets.

    I'm still amazed at how far ahead of his time Bruno was, his bold conjecture only recently verified by exoplanet detections by the Kepler space telescope. If there is a heaven, I hope Bruno smiles every time Kepler finds a new exoplanet and I look forward to paying my respects to him at his statue, located at the site of his execution.

    Galileo was only placed under house arrest for the lesser crime of suggesting that the Earth and planets revolve around the Sun (rather than the other way around).

    Scientific Culture

    The process in the scientific method involves making conjectures (hypotheses), deriving predictions from them as logical consequences, and then carrying out experiments or empirical observations based on those predictions. A hypothesis is a conjecture, based on knowledge obtained while seeking answers to the question. The hypothesis might be very specific, or it might be broad. Scientists then test hypotheses by conducting experiments or studies. A scientific hypothesis must be falsifiable, implying that it is possible to identify a possible outcome of an experiment or observation that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.

    -- from Scientific method (wikipedia)

    Thankfully, Science has come a long way since the days of Galileo and Bruno. Today, the scientific community expects:

    • Rigorous scrutiny. In science, all ideas (especially the important ones) must stand up to rigorous scrutiny. The culture of science does not value dogma.
    • Honesty, integrity, and objectivity.
    • Credit where credit is due. Scientific research articles always provide a list of citations, crediting other scientists for ideas, techniques, and studies that were built upon by the current research. The number of citations a paper receives can help indicate how influential it was, since important research influences how other scientists think about a topic and will be cited many times in other papers.
    • Adherence to ethical guidelines.

    Update (see erix response below): the four Mertonian norms (often abbreviated as the CUDO-norms) are:

    • Communism: all scientists should have common ownership of scientific goods (intellectual property), to promote collective collaboration; secrecy is the opposite of this norm.
    • Universalism: scientific validity is independent of the sociopolitical status/personal attributes of its participants.
    • Disinterestedness: scientific institutions act for the benefit of a common scientific enterprise, rather than for the personal gain of individuals within them.
    • Organized skepticism: scientific claims should be exposed to critical scrutiny before being accepted: both in methodology and institutional codes of conduct.

    Because it undermines science, scientists take misconduct very seriously. In response to misconduct, the scientific community may withhold esteem, job offers, and funding, effectively preventing the offender from participating in science. There are also strict rules around scientific publications, as detailed at publishing process and responsible referencing:

    • After erasing all signs of your identity, your paper will be sent to 2–3 other scholars who have done research in the same field for review; they will not know who you are or who the other readers or reviewers are.
    • The reviewers will spend a few months reading your submission, double-checking your methods and/or your claims and citations from other sources; they will then send it back to the manuscript editor with commentary and one of the following votes: accept for publication; revise and resubmit (the most common outcome); or reject for publication. The reviewers may further write some commentary, for example noting a need to redo the experiment using slightly different equipment, or asking you to read through certain books or articles others have published and incorporate them into your work.
    • The reviewers have to judge the work based on what you have written and what you use as evidence, not who you are or whether you have a fancy degree.

    Though the traditional Scientific method has served us well, it seems we need new methodologies to tackle the urgent problems facing us today.

    Crisis Disciplines

    On an evolutionarily miniscule timescale, cultural and technological processes transformed our species’ ecology. These changes that have transpired over this period have come about largely to solve issues at the scale of families, cities, and nations; only recently have cultural products begun to focus on solutions to worldwide problems and wellbeing. Yet we lack the ability to predict how the technologies we adopt today will impact global patterns of beliefs and behavior tomorrow ... social interactions and external feedback make it difficult, if not impossible, to reason about cross-scale dynamics through argument alone (i.e., these are complex adaptive systems).

    Humanity faces global and existential threats including climate change, ecosystem degradation, and the prospect of nuclear war. We likewise face a number of other challenges that impact our wellbeing, including racism, disease, famine, and economic inequality. Our success at facing these challenges depends on our global social dynamics in a modern and technologically connected world. Given our evolved tendencies combined with the impact of technology and population growth, there is no reason to believe that human social dynamics will be sustainable or conducive to wellbeing if left unmanaged.

    Other crisis disciplines thrive on a close integration of observational, theoretical, and empirical approaches. Global climate models inform, and are informed by, experiments in the laboratory and the field. Mathematics describing disease dynamics suggest treatment paradigms in medicine, which can be tested and validated.

    A consolidated transdisciplinary approach to understanding and managing human collective behavior will be a monumental challenge, yet it is a necessary one. Given that algorithms and companies are already altering our global patterns of behavior for financial reasons, there is no safe hands-off approach.

    -- from Stewardship of global collective behavior (cited by erix)

    As argued convincingly above, the traditional scientific method, requiring laborious peer review for example, is too slow for crisis disciplines and better alternatives must therefore be urgently sought. On a more positive note, we've been forced to do this sort of thing before - the Manhattan Project and rapid vaccine development during our current COVID-19 pandemic spring to mind.

    Other Articles in This Series


    Historical References

    Updated 26-Sep-2021: Added paragraph on Mertonian Norms.

Similarities of Perl and Python?
8 direct replies — Read more / Contribute
by Bod
on Aug 30, 2021 at 06:07

    Learned and wise Monks...

    Within my business we use the usual front-end web technologies of HTML/CSS/native Javascript/AJAX, and the back-end is entirely Perl, with a very small amount of Java for two Android apps.

    Here in the UK there is currently a government scheme to get young people into work. The Kickstart Scheme pays the young person minimum wage for 25 hours per week for 6 months. We have applied to take on two young people under the scheme. Most applicants have at most played with React, but I am interviewing a graduate tomorrow. On his CV he has C, Python and R, done as part of his Physics degree, plus Python and shell scripts on a Raspberry Pi as a hobby.

    I know absolutely nothing about R.
    From my very limited knowledge of Python, I believe it is broadly in the same group of languages as Perl. Therefore, the skills this applicant already has in Python should be relatively easily transferable to Perl.
    Is that a fair assessment?

    Any suggestions for me bearing in mind the applicant has had no real workplace experience?
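    To illustrate the overlap, here is a small, purely illustrative snippet: Perl's hashes and sort map almost one-to-one onto Python's dicts and sorted(), which is the kind of thing that makes the skills transferable (the Python equivalents in the comments are from memory and are an illustration, not part of the original post).

```perl
use strict;
use warnings;

# Count word frequencies, then sort words by count, descending.
my %count;
$count{$_}++ for qw(a b a c a b);
#   Python: for w in words: count[w] = count.get(w, 0) + 1

my @sorted = sort { $count{$b} <=> $count{$a} } keys %count;
#   Python: sorted(count, key=count.get, reverse=True)

print "@sorted\n";    # a b c
```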

RFC: Mock::Data::Regex
No replies — Read more | Post response
on Aug 27, 2021 at 04:13
    A few weeks ago I asked about iterating the characters of a character class, as part of a yak-shaving exercise for my not-yet-released Mock::Data module. Well, I went further and built a regex-reverser: it generates random strings that match a user-supplied regex. I'd like some feedback on what people think of the API.
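    For context, a minimal brute-force sketch of the character-class half of the idea might look like this. This is NOT Mock::Data::Regex's actual API; it just illustrates enumerating the characters a class matches and building a random string from them.

```perl
use strict;
use warnings;

# Enumerate the characters a character class matches, by brute
# force over a codepoint range (default: ASCII).
sub chars_matching {
    my ($re, $max) = @_;
    $max //= 0x7F;
    return grep { $_ =~ $re } map { chr } 0 .. $max;
}

# Build a random string of $len characters, each drawn from the
# set of characters matching the class.
sub random_string {
    my ($re, $len) = @_;
    my @pool = chars_matching($re);
    die "character class matches nothing" unless @pool;
    return join '', map { $pool[ rand @pool ] } 1 .. $len;
}

my $s = random_string(qr/[a-f0-9]/, 8);
print "$s\n";    # eight random hex-ish characters
```

    Reversing a full regex (alternations, quantifiers, anchors) is of course much harder than this; that is what makes the module interesting.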
R.I.P. Charlie Watts
No replies — Read more | Post response
by kcott
on Aug 25, 2021 at 16:00

    Charlie Watts died today at the age of 80. A sad event, but obviously a good innings.

    At first, I thought this probably had little relevance here. Then I thought of the many hours that I had coding with the Rolling Stones playing in the background: Charlie brought what they said was the engine to the Stones; he also brought that same engine to my coding.

    He was a great man and had a great life: R.I.P.

    — Ken

Organizational Culture (Part VI): Sociology
2 direct replies — Read more / Contribute
by eyepopslikeamosquito
on Aug 11, 2021 at 05:09

    An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning: another instance of the fact that the future lies with the youth.

    -- Planck's Principle

    A person who has not made his great contribution to science before the age of thirty will never do so

    -- Albert Einstein

    Remembering some of my physicist friends lightheartedly complaining about being over the hill at their thirtieth birthday party, I was intrigued to see if programming language designers were similarly youthful.

    Some Successful Programming Language Designers

    Designer             Language     Age
    John Backus          FORTRAN      30
    Dennis Ritchie       C            30
    Bjarne Stroustrup    C++          30
    Yukihiro Matsumoto   Ruby         30
    John McCarthy        LISP         32
    Brendan Eich         JavaScript   33
    Larry Wall           Perl         33
    Guido van Rossum     Python       34
    James Gosling        Java         39
    Anders Hejlsberg     C#           39

    As you can see, many successful programming languages were designed by men in their thirties. Perhaps programming language designers tend to be older than physicists because designing a new programming language requires more wisdom, experience and social networking skills, rather than a blinding flash of inspiration (combined with exceptional calculation faculties) often associated with physics breakthroughs.

    Curiously, these successful programming language designers seemed to have precious little experience in actual programming language design! They just boldly created a language to meet what they perceived as an unmet need in their workplace -- typically without asking permission to do so.

    I had this idea that given how much time we had available for Amoeba, I could actually build a whole new language, design and implement it from scratch, and then use it to implement our suite of tools and still be ahead of the game compared to a situation where we would have just clunked on writing the things we wanted to write in C

    -- Guido van Rossum

    Larry similarly invented Perl as a way to make report processing easier at work and to more efficiently solve Unix problems that were being inefficiently solved in a motley mix of C, sed, awk and shell ... while Ritchie invented the C programming language to implement the Unix operating system after Bell Labs withdrew from the ill-fated Multics project. Later, one of Ritchie's workmates, Bjarne Stroustrup, added ideas from Simula to C, creating C++ as a language to make writing his simulations more enjoyable than C while running faster than Simula.

    Perhaps this lone hacker language designer trend was simply a natural reaction to the vacuum created by the failure of design by committee experienced by Algol-68, PL/I and Ada.

    This paper (based on a keynote address presented at the SIGACT/SIGPLAN Symposium on Principles of Programming Languages, Boston, October 1-3, 1973) presents the view that a programming language is a tool which should assist the programmer in the most difficult aspects of his art, namely program design, documentation, and debugging. It discusses the objective criteria for evaluating a language design, and illustrates them by application to language features of both high level languages and machine code programming.

    -- Hints on programming language design by Sir Charles Antony Richard Hoare (1973)

    Though Hoare's classic paper makes interesting reading, it's hopelessly outdated, sorely in need of a modern equivalent. I couldn't find one though (citations welcome), the nearest I could find being the Socio-PLT works of Leo Meyerovich (see Sociology References below).

    Programming Language Sociology and Evolution

    Another reason for the unpleasantly large size of modern language is the need for stability. I wrote C++ code 20 years ago that still runs today and I'm confident that it will still compile and run 20 years from now. People who build large infrastructure projects need such stability. However, to remain modern and to meet new challenges, a language must grow (either in language features or in foundation libraries), but if you remove anything, you break code. Thus, languages that are built with serious concern for their users (such as C++ and C) tend to accrete features over the decades, tend to become bloated. The alternative is beautiful languages for which you have to rewrite your code every five years.

    -- Bjarne Stroustrup

    Stroustrup hits the nail on the head: if you have serious concern for your users, as both Perl and C++ do, you must not break backwards compatibility.

    Related is Simon Peyton Jones’s unofficial motto for Haskell "avoid success at all costs" because becoming too successful harms the ability to modify Haskell ... an evolutionary dead end. I might add that Haskell seems to have been successful in its mission to avoid success. :)

    It seems to me that Perl and Raku have (finally, much too late) got it right: Perl does not break backwards compatibility (out of serious concern for its users), while Raku is free to innovate and improve without the shackles of backwards compatibility ... though as it becomes more widely adopted it will presumably need to pay more attention to backwards compatibility.

    Improving Programming Language Adoption

    Social factors seem to have a far greater impact on language adoption than technical features.

    From Sociology we can learn of many ideas that might be tried to improve language adoption:

    • Identify opinion leaders. Apparently, this approach proved effective in promoting safe sex to fight HIV/AIDS in the 1980s. Given how heavily Google invested in C++, Java and Python, I was surprised by the growth of Golang; perhaps this can be partly explained by Rob Pike being an effective opinion leader at Google. Similarly, now that Guido has moved from Google to Microsoft, will we see an increase in Python adoption at Microsoft and a decrease at Google?
    • Observability. This worked well to increase Coverity static code analysis adoption simply by running the customer's code base through Coverity and having it find hundreds of long-standing bugs the customer was not previously aware of.
    • Data mining and AI analysis (e.g. of github repos, SO, social media). Looking for niches to fill, for example.
    • More important than language features are the libraries and community supporting various domains - this explains why BioPerl is so popular and why Python is so popular in the AI/Machine Learning domain.
    • See the Sociology References and this response below for other ideas that might be tried to improve Programming Language Adoption.

    Other Articles in This Series

    Sociology References

    Other References

    Updated: corrections to history of B and C programming languages by Ritchie and Thompson. Expanded Other References section.

Organizational Culture (Part V): Behavior
1 direct reply — Read more / Contribute
by eyepopslikeamosquito
on Jul 16, 2021 at 06:34
Prefer Pure Perl Core Modules
9 direct replies — Read more / Contribute
by Leitz
on Jul 13, 2021 at 10:03

    "I prefer to use pure Perl core modules instead of depending on the CPAN."

    Saying that on IRC usually causes a flurry of negative comments. I agree that the CPAN is resource-intensive, and I've put stuff there myself. If you use lots of code from the CPAN, I'm not going to make fun of you. However, I would ask that you give me the same respect. Here's why I choose this path.

    1. Compatibility: Pure Perl modules are portable; the target node doesn't need a set of compiler tools. While XS-based modules might improve performance, not all nodes have compiler tools. Some nodes are precluded from having compilers by resource limits or security mandates. Depending on a module that may not be installable creates a production risk.

    2. Idempotence: Well-engineered software can be installed and removed cleanly. When I worked for a telco, we would install, remove, and then reinstall software during QA. If it failed at any of those steps, it failed. Period. There is no "cpan" command to uninstall a module. The "cpanm" -U option for "uninstall" is marked "EXPERIMENTAL". The few times I've tried to use it to uninstall modules that were installed via "cpan", it could not find the modules, even when I told the command where the module was. If a module cannot be installed and removed cleanly then it does not belong on a production system.

    3. Upgradeability: The "cpan" -u option (upgrade) comes with the warning: "Blindly doing this can really break things, so keep a backup." This conflicts with the goal of keeping software current to reduce security vulnerabilities and bugs. If a system's software cannot be cleanly upgraded, it should not be in production.

    4. Security: (see Addendum below) One of Perl's common object-oriented modules is Moose. Installing Moose adds roughly 900 modules to the node. Who is security-checking all those dependencies? Who wants to explain each and every one of those modules to a security auditor? In truth, how many of us could explain the risks and benefits of all nine hundred dependencies? And are we being paid to check someone else's code, or are we paid to keep a production system running?

    5. Immiscibility: Most Linux distributions require some version of Perl for operation. This sounds good for Perl, until you realize that the versions are often very out of date. If you want to use a semi-recent Perl you usually have to compile your own and install it somewhere. You also have to install any CPAN modules separately, which means your backups now take longer and you have more to sift through when trying to free up space. And, of course, anyone who wants to use your code has to concoct the same environment.


    My personal solution is to use pure Perl core modules, or pure Perl CPAN modules that do not have a large dependency list. "Large", in this sense, is judged by "am I willing to deal with these dependencies manually?" At some point in time I hope to be Perl-smart enough to help improve CPAN, but I'm not there yet.
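    One practical way to apply this policy is to check whether a module ships with the running perl before depending on it. A small sketch using Module::CoreList (itself a core module):

```perl
use strict;
use warnings;
use Module::CoreList;

# Returns true if the module ships with the running version of perl
# (checked against the core module list, not what happens to be
# installed locally).
sub is_core_module {
    my ($mod) = @_;
    return Module::CoreList::is_core($mod) ? 1 : 0;
}

printf "%-12s %s\n", $_, is_core_module($_) ? 'core' : 'CPAN-only'
    for qw(File::Temp List::Util Moose);
```

    File::Temp and List::Util report as core, while Moose (and its dependency tree) would be a CPAN install.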


    An earlier version of this page referenced YAML::Tiny in the security section. Investigating, based on hippo's comment (below) about "suggested_options" ('suggests_policy'), I removed YAML::Tiny as a culprit. YAML::Tiny has no non-core module dependencies.

    Chronicler: The Domici War

    General Ne'er-do-well

Cliques solution pertinent to my use case
3 direct replies — Read more / Contribute
by Sanjay
on Jul 07, 2021 at 11:57

    This refers to the hardness of finding cliques in a graph when the number of nodes becomes high. My use case is that I am trying to form cliques within a cluster where the sum of relationships is maximum.

    Background & definition (anthropomorphized for ease of understanding):

    Cluster: A number of persons where each person is related to at least one other person. The strength of the relationship is a number, i.e. Si-j is the strength of the relationship between person i & person j (symmetric: Si-j = Sj-i). All combinations need not be present - if person x & person y have no relationship then Sx-y does not exist. There is no relationship with self - Si-i does not exist either.

    Problem: Given a list of relationships Si-j: i from 1 to N, for each i some j from 1 to N, i != j (a graph whose edges give the strength of each relationship), find those groups (cliques) where each member is connected to every other member and the sum of the relationship strengths is maximum, e.g. if the persons are as follows:

    Person-1  Person-2  Relationship-strength
       A         B               92
       A         C                7
       B         C                2
       C         D               88

    Then the groups formed would be (A,B) & (C,D) for a sum of relationships of 180 (92 + 88).

    Any other combination would have a lesser sum, e.g. (A,B,C) & (D) would have a sum of relationships of 101 (92 + 7 + 2).
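    The scoring used in this example can be sketched in a few lines of Perl: given the edge weights, a partition's score is the sum of weights over pairs that land in the same group. This is only the objective function, not the solver (a real solver must also enforce that every group is a clique); the data is the four-person example above:

```perl
use strict;
use warnings;

# Edge weights from the example, keyed by the sorted pair of names.
my %S = ( 'A,B' => 92, 'A,C' => 7, 'B,C' => 2, 'C,D' => 88 );

# Sum the strengths of all within-group pairs of a partition.
sub score {
    my @groups = @_;
    my $total  = 0;
    for my $g (@groups) {
        my @m = sort @$g;
        for my $i ( 0 .. $#m - 1 ) {
            for my $j ( $i + 1 .. $#m ) {
                $total += $S{"$m[$i],$m[$j]"} // 0;
            }
        }
    }
    return $total;
}

print score( [qw(A B)],   [qw(C D)] ), "\n";   # 180
print score( [qw(A B C)], ['D'] ),     "\n";   # 101
```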

    How I did it: Convert the list of Si-j's into an integer linear programming problem and call the free solver lp_solve. Got good results - out of about 40,000 problem sets (clusters), only about 20 timed out, on an ancient 3rd-gen i7 laptop with 4GB RAM. About another 10 were "very" large (more than 2,000 edges). This gives me confidence that a commercial solver and a larger timeout period may give even better results.

    Linear programming problem formulation left out here as it is more an algorithm issue rather than a Perl issue. We can discuss if interested. Off forum perhaps?

    Hope this is interesting. And, perchance, if it is useful for anyone then it would be the icing on the cake!

Organizational Culture (Part IV): Perl Culture
No replies — Read more | Post response
by eyepopslikeamosquito
on Jul 05, 2021 at 09:25
Perl OOP (POOP) - a use case for Util::H2O (hash to object)
1 direct reply — Read more / Contribute
by perlfan
on Jun 30, 2021 at 13:15
    Intuitively, I've been strongly drawn to Util::H2O, and I finally have a solid example that has shown me explicitly what I had been feeling implicitly. My goal is to be laconic this time.

    Recently, I was writing a commandline tool in Perl, one of my favorite things to do. I needed Getopt::Long. I also was being nagged internally to use Util::H2O. Why? I simply wanted "real" accessors for the options hashref that I really like using with GetOptions. I ended up with something like this:

    use strict;
    use warnings;
    use Getopt::Long qw/GetOptions/;
    use Util::H2O qw/h2o/;    # <~ here

    my $opts = {};

    GetOptions( $opts, qw/
        option1=s
        option2=i
        option3
        option4!
        option5+
    / );

    h2o $opts;                # <~ here

    #...

    exit;

    __END__
    ## now we get the following, but only for the options actually passed (updated)
    # $opts->option1
    # $opts->option2
    # $opts->option3
    # $opts->option4
    # $opts->option5
    Almost magically, with the addition of a single line (well, 2 if you count the use Util::H2O qw/h2o/; line), I had accessors for $opts. I was pleased, to say the least. Note, I have not tried this with callbacks, but I also don't use them much; I suspect it works just fine. (update) Also note, in this case one gets accessors only for the options actually passed (or more precisely, only for the keys that exist in $opts). I suspect this could be leveraged for more interesting and perlish ways of checking which options were actually passed.

    And now I can start to quantify where and when I'd like to use Util::H2O - not as a way to pile class-declaration boilerplate at the top of my script (which it does support), but to perlishly (and iteratively) add accessors to hash references - which my code tends to be absolutely full of. I've used it to clean up existing code in some other places, and in new code where I know full well I am going to be slinging hash references all over the place.
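    For anyone curious what h2o is doing under the hood, the mechanism can be approximated in a few lines of core Perl: generate a fresh package with one closure accessor per existing key, and bless the hashref into it. This is a hypothetical toy illustration of the idea, not Util::H2O's actual implementation (which also handles nesting, key locking, and cleanup):

```perl
use strict;
use warnings;

# Toy sketch of h2o: one read/write accessor per existing key.
my $pkg_counter = 0;
sub toy_h2o {
    my ($href) = @_;
    my $pkg = 'ToyH2O::_' . ++$pkg_counter;    # unique package per hash
    for my $key ( keys %$href ) {
        no strict 'refs';
        *{"${pkg}::$key"} = sub {
            my $self = shift;
            $self->{$key} = shift if @_;       # setter when given an argument
            return $self->{$key};
        };
    }
    return bless $href, $pkg;
}

my $opts = toy_h2o( { option1 => 'foo', option2 => 42 } );
print $opts->option1, "\n";   # foo
print $opts->option2, "\n";   # 42
```

    As in the real module, only keys that already exist get accessors, which is exactly the property noted above for GetOptions output.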

    In summary - check out Util::H2O. If you're like me, you'll find that the little benefits POOP offers can be had in this natural way. It also occurs to me that this topic would be an ideal entry for the Perl Advent Calendar - so maybe I'll work out some more examples and make a submission later this year.

Benchmark: Storable, Sereal and JSON
3 direct replies — Read more / Contribute
by stevieb
on Jun 30, 2021 at 11:38

    After I get a distribution very stable, I go back and try to make it better, faster or more efficient. In IPC::Shareable, I've been benchmarking various serialization techniques to see which one works the fastest. I knew that Sereal was quite a bit faster than Storable, but there are some gotchas with it that I couldn't work around. This morning I tested with JSON, and to my surprise, it blew both out of the water!


    Benchmark: timing 5000000 iterations of json, sereal, store...
          json: 17 wallclock secs (17.53 usr + 0.00 sys = 17.53 CPU) @ 285225.33/s (n=5000000)
        sereal: 22 wallclock secs (21.78 usr + 0.00 sys = 21.78 CPU) @ 229568.41/s (n=5000000)
         store: 49 wallclock secs (49.55 usr + 0.01 sys = 49.56 CPU) @ 100887.81/s (n=5000000)
                  Rate  store sereal   json
    store   102312/s     --   -56%   -64%
    sereal  233863/s   129%     --   -18%
    json    286862/s   180%    23%     --

    Benchmark code:

    use warnings;
    use strict;

    use Benchmark qw(:all);
    use JSON qw(-convert_blessed_universally);
    use Sereal qw(encode_sereal decode_sereal looks_like_sereal);
    use Storable qw(freeze thaw);

    if (@ARGV < 1){
        print "\n Need test count argument...\n\n";
        exit;
    }

    timethese($ARGV[0],
        {
            sereal => \&serial,
            store  => \&storable,
            json   => \&json,
        },
    );

    cmpthese($ARGV[0],
        {
            sereal => \&serial,
            store  => \&storable,
            json   => \&json,
        },
    );

    sub _data {
        my %h = (
            a => 1,
            b => 2,
            c => [qw(1 2 3)],
            d => {z => 26, y => 25},
        );
        return \%h;
    }

    sub json {
        my $data = _data();
        my $json = encode_json $data;
        my $perl = decode_json $json;
    }

    sub serial {
        my $data = _data();
        my $enc  = encode_sereal($data);
        my $dec  = decode_sereal($enc);
    }

    sub storable {
        my $data  = _data();
        my $ice   = freeze($data);
        my $water = thaw($ice);
    }

    I would never have expected that. Next up, a new serialization option for IPC::Shareable!
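    One caveat before swapping serializers: JSON round-trips plain data, but not everything Storable can handle (code refs, self-referential structures, and blessed objects need extra care). A core-only sketch checking round-trip fidelity on the benchmark's test data, using JSON::PP (in core since 5.14) in place of the XS JSON module used above:

```perl
use strict;
use warnings;
use JSON::PP qw(encode_json decode_json);
use Storable qw(freeze thaw);

# Same shape as the benchmark's _data().
my %h = (
    a => 1,
    b => 2,
    c => [qw(1 2 3)],
    d => { z => 26, y => 25 },
);

# Round-trip through both serializers.
my $via_json  = decode_json( encode_json(\%h) );
my $via_store = thaw( freeze(\%h) );

# Both should reproduce the nested structure exactly.
print $via_json->{d}{z},  "\n";   # 26
print $via_store->{c}[2], "\n";   # 3
```

    For the plain hashes and arrays IPC::Shareable typically carries, the two are interchangeable; it's richer structures where Storable still earns its keep.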

Perl tools for making code better
3 direct replies — Read more / Contribute
by Leitz
on Jun 29, 2021 at 10:02

    I have bodies of code to maintain and improve. Using tests is a given, and TDD is a wonderful thing. I am still learning Perl culture, and would appreciate guidance on tools and processes.

    My parameters are hopefully simple. I prefer clear and verbose code so I can understand it the next time I read it. When producing tools I prefer to stay as close to Core/StdLib Perl as possible. When working *with* tools, more options are open. For example, using a Perl::Critic plug-in while developing code is fine, shipping those add-ons is something I would avoid.

    My current plan is a cycle of:

    1. Write Tests
    2. Kwalitee
    3. Perl::Tidy
    4. Perl::Critic

    Repeating as necessary, or until I have to ship the code. Are there other tools to add to the mix? Better processes?
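    For step 1 of the cycle above, the core toolchain already covers a lot: Test::More for the tests themselves and prove(1) as the runner. A minimal sketch of the write-tests-first step, with a hypothetical function under test:

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical function under test: trim leading/trailing whitespace.
sub trim {
    my ($s) = @_;
    $s =~ s/^\s+|\s+$//g;
    return $s;
}

is trim('  hello  '), 'hello', 'trims both ends';
is trim('no-ws'),     'no-ws', 'leaves clean strings alone';
is trim("\t\n"),      '',      'whitespace-only becomes empty';

done_testing();
```

    Saved as t/trim.t and run with prove, this stays entirely within core Perl, matching the stated preference for Core/StdLib tooling.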

    Chronicler: The Domici War

    General Ne'er-do-well
