4 comments
Comment from: Yuval Kogman Visitor

Comment from: robin Member

@Yuval: We of course need to spread the word about this in all the right channels. Then we need to assure corporate users that it is safe to submit reports generated from their secret code.
Making average users aware of the tool at all is the hard part, as you say. Information about the tool should of course follow the normal Perl documentation conventions (hopefully with a basic outline in the README). The tool should be easily installable on several platforms, ideally packaged as a standalone PAR (with instructions on how to build it, for the paranoid). Someone needs to write good articles on risk management and change management that mention the tool, so that project managers who normally don't see any of our articles start to think "are we using those methods to mitigate our risk factors?"
But really, first we need to make the tools. Then we need to make it easy to install, run and understand them. And finally we need to market them.
We cannot hope to change their walled garden ways, but at least we can try to inform them, and hope that they will understand the benefit of analyzing their code. Adam Kennedy mentioned that they had 300k lines of code. What if this tool could be run on that codebase and tell them that they had 10 occurrences of pseudo-hashes, 25 occurrences of @_[x], and so on (I'm not suggesting they do). It would be a tremendous help for them in tracking down obsolete constructs before they even start running their test suite. I'm just suggesting that we make things easier, not necessarily perfect.
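To show the kind of report I have in mind, here is a crude sketch using only core Perl. A real tool would parse the code with PPI rather than pattern-match it; the regexes and the sample snippet below are illustrative only.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Count regex matches: a list assignment evaluated in scalar context
# yields the number of elements on its right-hand side, i.e. the
# number of matches found by /g.
sub count_matches {
    my ($source, $re) = @_;
    my $count = () = $source =~ /$re/g;
    return $count;
}

# Invented sample code containing two obsolete constructs.
my $code = <<'END';
$[ = 1;                       # deprecated: changes the array base
my ($x, $y) = (@_[0], @_[1]); # one-element slices of @_
END

my %hits = (
    'assignments to $['     => count_matches($code, qr/\$\[\s*=/),
    'one-element @_ slices' => count_matches($code, qr/\@_\[\s*\d+\s*\]/),
);
printf "%-22s %d\n", $_, $hits{$_} for sort keys %hits;
```

Even something this crude would give a project manager a number to put in a report, which is the whole point.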
Comment from: Adam Kennedy Visitor

I don't know if I've mentioned it anywhere, but if you look at Python 3.0 this is exactly what they do.
WRT our code, running these kinds of automated tools is exactly what we do.
We've run Perl::Critic over our code a few times to cherry pick some specific policies we'd like to deal with.
We've also used Ovid's SQL injection detector a few times to look for (extremely) old code that did unsafe things.
Apart from Perl::Critic, take a look at Perl::MinimumVersion for another example.
You only need two things.
1. A collection of PPI search expressions.
2. A set of replacement expressions for the subset that you can automatically (and safely) upgrade.
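A minimal sketch of item 1, assuming PPI is installed; the one-line sample document is invented for illustration.

```perl
use strict;
use warnings;
use PPI;

# One "search expression": find every use of the deprecated $[ magic
# variable. PPI classifies punctuation variables as PPI::Token::Magic,
# so the search is a class test plus a content test.
my $doc = PPI::Document->new(\'$[ = 1; print "hi";')
    or die "parse failed";

my $hits = $doc->find(sub {
    my (undef, $elem) = @_;
    return $elem->isa('PPI::Token::Magic') && $elem->content eq '$[';
}) || [];    # find() returns false when there are no matches

printf "found %d use(s) of \$[\n", scalar @$hits;
```

Each deprecation gets one such wanted-function; the collection is just a list of these.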
WRT doing this in the large, I've already been experimenting with something like this.
I have a 7.5 GB MD5-indexed PPI::Cache of documents, and a metrics/detection plugin API.
One initial experiment was to locate all existing uses of the soon-to-be-deprecated $[ variable.
What was interesting was that a bunch of them were typos. So for any future automated replace code, you'd also want to look at the surrounding code to isolate known-good replace situations, and then bail out on the ones you can't be sure of.
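To make that concrete, here is a hedged core-Perl sketch of a "known-good replace" pass: upgrade the one-element slice @_[N] to the scalar lookup $_[N] only when the subscript is a single integer literal, and report everything else for a human. The function name and sample input are invented.

```perl
use strict;
use warnings;

# Rewrite @_[N] to $_[N] only when the subscript is a lone integer
# literal; any other subscript is a real slice (or unclear), so it is
# left untouched and collected for human review.
sub upgrade_arg_slices {
    my ($line) = @_;
    my @unsure;
    $line =~ s{\@_\[([^\]]+)\]}{
        my $idx = $1;
        $idx =~ /\A\s*\d+\s*\z/ ? "\$_[$idx]"
                                : do { push @unsure, $idx; "\@_[$idx]" }
    }ge;
    return ($line, \@unsure);
}

my ($fixed, $unsure) = upgrade_arg_slices('my ($a, $b) = (@_[0], @_[1, 2]);');
print "$fixed\n";                       # the real slice @_[1, 2] was left alone
print "unsure subscripts: @$unsure\n";
```

The same shape works for the PPI-based version: apply the replacement when the surrounding tokens match a known-good pattern, bail out otherwise.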
But this whole problem is quite tractable, especially for the smaller deprecations.
I think what the doomsayers among us are trying to argue is that just because users don't talk to you doesn't mean they aren't important.
The kind of automated tool you suggest would be hugely valuable. It lets us fix the DarkPAN without having to see it at all.

Apart from saying "Yay ra! This is a great idea!", I want to give the old AOL "me too" to Alias' last point.
This sort of tool can allow DarkPAN users, who may not be able to talk to us because of stupid legal concerns, to test their own code and lessen their fear of upgrading.
I see several subprojects.
First is Perl::Critic policies targeted at modernizing source code. This can provide a solution to many, but definitely not all, compatibility problems.
Second is a Perl::Critic result database with features for anonymity.
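A sketch of the anonymity part, using only core modules. The salt, field layout, and policy name are all invented for illustration; this is one possible design, not a spec.

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Hypothetical anonymizer for outgoing critic reports: file paths are
# replaced with salted one-way hashes, so the central database can
# still correlate repeat submissions from one site without ever
# learning real names. The salt stays on-site and is never submitted.
my $salt = 'per-site-secret';    # invented value

sub anonymize {
    my (@rows) = @_;             # each row: [ path, policy, count ]
    return map { [ md5_hex($salt . $_->[0]), $_->[1], $_->[2] ] } @rows;
}

my @anon = anonymize(
    [ 'lib/Secret/Billing.pm', 'Variables::ProhibitMatchVars', 3 ],
);
print join(' ', @{ $anon[0] }), "\n";
```

With something like this, a corporate user can submit useful aggregate numbers without their legal department ever seeing a file name leave the building.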
These technical bits are not that hard, but still projects.
But the hard subproject is getting community participation.
I don't see how we can get better participation from the DarkPAN community. There are an unknown number of walled gardens where people assume that their way is the right one, and probably countless more environments where people use Perl but are unaware of the Perl community. How do you reach these people?