On Tue, Aug 12, 2003 at 02:36:17PM -0400, Rock wrote:
>
> I have a data file that is nearly 14,000 lines long. It is a variable
> length line. I have need to update this file twice a day with updated
> information relative to the first field in this file. It is space
> delimited and sorted on the first field. I need to replace the line
> with information in an update log which has the same key (first field).
> The update file could be as long as a hundred lines.
>
> Is there a straight forward way of doing this?
>
> I was going to do a merge and then sort and then uniq to remove the
> duplicates but I was not able to assure that the update file entry was
> the first line read in a uniq operation. My tests proved that I could
> not control that the latest entry would be the first in the file.
Perl, yes, and here's how I'd do it:
Read the entire update file into memory, keyed on each line's first field.
Open the main file and scan it line by line. At each line, compare its
"key" (first field) to the update entries in memory. On a match, write
out the update line; otherwise, write out the original line. Send the
output to a temporary file, close all the files, then delete the
original file and rename the temporary file to the original name.
The actual code is left as an exercise for the reader. ;-}
Paul
-----------------------------------------------------------------------
This list is provided as an unmoderated internet service by Networked
Knowledge Systems (NKS). Views and opinions expressed in messages
posted are those of the author and do not necessarily reflect the
official policy or position of NKS or any of its employees.
This archive was generated by hypermail 2.1.3 : Fri Aug 01 2014 - 16:38:16 EDT