This is based on source code downloaded on 27 May 2013. I had a file with more than 100,000 records and no index, and reading it took about 3 hours. Debugging the source code, I found that the ExcelBinaryReader.moveToNextRecordNoIndex method skips all 100,000 ROW records every time it reads a single record, so performance degrades quadratically as the file grows. In my file's structure, all blocks with ID = ROW come first, followed by all cell-related blocks; I don't know whether this layout holds for every file without an index. I modified the code to skip all ROW records once during sheet initialization and then just read the cells, after which the same large file completed in 40 seconds.
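To make the complexity difference concrete, here is a toy sketch (not the actual ExcelBinaryReader code; record IDs are simplified to strings and the record layout is the one described above, all ROW blocks followed by all cell blocks). It contrasts re-skipping every ROW record per read, which is quadratic, with skipping the ROW block once at initialization, which is linear:

```python
def make_records(n):
    # Layout described in the report: all ROW records first, then all cells.
    return ["ROW"] * n + ["CELL"] * n

def read_cells_rescan(records):
    # Old behaviour (as observed): each cell read walks past every
    # preceding record again, including all ROW records, before it
    # reaches the next unread cell.
    cells, examined = [], 0
    first_cell = records.index("CELL")
    for i in range(first_cell, len(records)):
        examined += i + 1   # records touched to reach cell i from the start
        cells.append(records[i])
    return cells, examined

def read_cells_skip_once(records):
    # Fix described above: skip the ROW block once during sheet
    # initialization, then read the cell records sequentially.
    cells, examined, pos = [], 0, 0
    while pos < len(records) and records[pos] == "ROW":
        pos += 1
        examined += 1
    while pos < len(records):
        cells.append(records[pos])
        pos += 1
        examined += 1
    return cells, examined

recs = make_records(1000)
c1, e1 = read_cells_rescan(recs)
c2, e2 = read_cells_skip_once(recs)
print(e1, e2)  # the rescan count grows with n**2, the skip-once count with n
```

With 1,000 rows and 1,000 cells the rescan variant touches about 1.5 million records while the skip-once variant touches exactly 2,000, which matches the 3-hours-versus-40-seconds difference reported above in kind if not in exact scale.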
Comments: migrated issue to github