Here is my list, which contains URLs for file downloads:
//servername.com/version/panasonic1,1_1.1.1
//servername.com/version/panasonic3,1_6.7.1
//servername.com/version/panasonic3,2_6.8
//servername.com/version/panasonic2,6_3.0.2
//servername.com/version/panasonic3,1_7.1.3
//servername.com/version/panasonic2,6_3.0.4
This list was acquired using curl and saved to a text file. The file is used as input for wget, and the files are downloaded into a specified directory. I have worked out the curl and wget syntax and have working lines of code.
However, I don't need to download all of the files listed in the text file, as some are older versions of the software for particular models of hardware.
In the list above, models are identified by model,revision (x,x), and the software versions take the form major.minor.revision (x.x.x) or sometimes major.minor (x.x).
From my reading of the boards and my limited knowledge of awk, I need to isolate the (model,revision) patterns:
awk 'BEGIN{FS="/p"; OFS="_"}
and use an if statement to print those which are unique. Where other lines match the same model, I would then compare by the version numbers, which means I'll need to redefine FS and OFS to isolate the second pattern:
{FS=OFS="_"}
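For what it's worth, the two patterns can also be isolated in a single pass with awk's split() function, without redefining FS mid-script. This is just a sketch; "urls.txt" is an assumed name for the file produced by the curl step:

```shell
# Sample input as saved by the curl step ("urls.txt" is an assumed name)
cat > urls.txt <<'EOF'
//servername.com/version/panasonic1,1_1.1.1
//servername.com/version/panasonic3,1_6.7.1
EOF

# Isolate both patterns in one pass with split(); no FS change needed
awk '{
    split($0, a, "_")           # a[1] = URL up to model,revision; a[2] = version
    n = split(a[1], b, "/")     # b[n] = "panasonicX,Y" (last path component)
    print b[n], a[2]            # e.g. "panasonic1,1 1.1.1"
}' urls.txt
```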
The comparison would then print only the lines that carry the latest software version. From the example lines above, that would be:
//servername.com/version/panasonic1,1_1.1.1
//servername.com/version/panasonic3,2_6.8
//servername.com/version/panasonic3,1_7.1.3
//servername.com/version/panasonic2,6_3.0.4
I think awk is up to the task but my knowledge of it is not. Any ideas would be much appreciated!
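In case it helps frame an answer, here is one possible sketch of the whole filter. It assumes GNU sort (for the -V version-number ordering) and that the curl output is saved as "urls.txt": sort by the version field after the "_", then let awk keep the last (i.e. highest) line seen for each model,revision key.

```shell
# Sample list as saved by the curl step ("urls.txt" is an assumed name)
cat > urls.txt <<'EOF'
//servername.com/version/panasonic1,1_1.1.1
//servername.com/version/panasonic3,1_6.7.1
//servername.com/version/panasonic3,2_6.8
//servername.com/version/panasonic2,6_3.0.2
//servername.com/version/panasonic3,1_7.1.3
//servername.com/version/panasonic2,6_3.0.4
EOF

# Sort by the version field (everything after "_") in version order,
# then keep the last (highest) line per model,revision key.
sort -t_ -k2,2V urls.txt |
awk -F_ '{ latest[$1] = $0 }                  # $1 = URL up to "panasonicX,Y"
     END { for (k in latest) print latest[k] }' > latest.txt

cat latest.txt    # feed this file to wget instead of urls.txt
```

Note that awk's for-in loop emits the keys in arbitrary order, so latest.txt may need a final sort if order matters, and -V is a GNU sort extension (also present in recent BSD sorts).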