Hi Folks,
Attached is some code I’ve cobbled together for analysing a large amount of data (about 12 million records) in a series of 8 text files, ranging from ~120,000 to 4,250,000 records each. The largest file is about 78 MB. Basically, the code reads in a block of data, typically around 500 records, processes it, clears the worksheet, and starts over with the next set of records.
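For anyone who’d rather not open the attachment straight away, the main loop is roughly along these lines (a simplified sketch only; the file path, sheet name and the ProcessBlock routine are placeholders, not the actual names used in the attached code):

Sub ProcessFiles()
    ' Simplified outline only - names and paths are placeholders.
    Const BLOCK_SIZE As Long = 500
    Dim ws As Worksheet
    Dim fileNum As Integer
    Dim lineText As String
    Dim rowNum As Long

    Set ws = ThisWorkbook.Worksheets("Scratch")
    fileNum = FreeFile
    Open "C:\Data\File1.txt" For Input As #fileNum

    rowNum = 0
    Do While Not EOF(fileNum)
        Line Input #fileNum, lineText
        rowNum = rowNum + 1
        ws.Cells(rowNum, 1).Value = lineText

        ' Once a block is loaded, process it, then clear the sheet
        ' and start again with the next block of records.
        If rowNum = BLOCK_SIZE Then
            ProcessBlock ws, rowNum
            ws.Cells.ClearContents
            rowNum = 0
        End If
    Loop

    ' Handle any final partial block.
    If rowNum > 0 Then
        ProcessBlock ws, rowNum
        ws.Cells.ClearContents
    End If

    Close #fileNum
End Sub

Sub ProcessBlock(ws As Worksheet, recordCount As Long)
    ' Stand-in for the actual per-block analysis in the attachment.
End Sub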
The problem is that the time taken to process the files (on my system: AMD Athlon 2500, Win 2K and 512 MB RAM) increases by around 22 seconds per 100,000 records; e.g. the first 100,000 takes about 26 seconds, the second takes 48 seconds, the third takes 70 seconds, and so on.
I’d be grateful if someone could take a look at the code and suggest some efficiencies, especially with a view to eliminating the increasing cycle times.
Cheers,
Paul Edstein
[Fmr MS MVP - Word]