jschell wrote:
> I'm fairly certain that there's still considerable I/O overhead to writing byte-by-byte vs. buffered, though I haven't actually tested it that I can recall.

> That kind of stuff happens when you read one byte at a time. Each time you call a read method, it basically amounts to one hard-drive access. A hard-drive access is really slow, 10-20 ms (due to the mechanical parts in your HDD).

Not on any modern OS, at least not at the level of high-level language calls: the OS will buffer reads. So, excluding perhaps some real-time Javas, that will not happen.
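Even so, buffering at the Java level is cheap insurance. A minimal sketch of the point under discussion, assuming made-up class and file names: each `read()` below hits the `BufferedInputStream`'s in-memory buffer, and the underlying `FileInputStream` is only asked for data in large chunks rather than once per byte.

```java
import java.io.*;

public class BufferedReadDemo {
    // Read a file one byte at a time through a BufferedInputStream.
    // The buffer absorbs the per-byte read() calls; the underlying
    // stream (and ultimately the OS) is hit in large chunks.
    static long countBytes(File f) throws IOException {
        long count = 0;
        try (InputStream in = new BufferedInputStream(new FileInputStream(f))) {
            while (in.read() != -1) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        // Create a throwaway 8192-byte file just for the demo.
        File tmp = File.createTempFile("demo", ".bin");
        tmp.deleteOnExit();
        try (OutputStream out = new FileOutputStream(tmp)) {
            out.write(new byte[8192]);
        }
        System.out.println(countBytes(tmp)); // prints 8192
    }
}
```

Dropping the `BufferedInputStream` wrapper gives the same result but pushes every `read()` through to the `FileInputStream`, which is where the per-call overhead lives even when the OS page cache saves you from an actual disk seek.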
> I'm fairly certain that there's still considerable I/O overhead to writing byte-by-byte vs. buffered, though I haven't actually tested it that I can recall.

Not sure what you are referring to there, though.
> For any decent hash, the probability of two different versions of a given file hashing to the same value is exceedingly tiny, so if it's for, say, a daily backup of something where losing a day is expensive but not crippling, it might be a fair tradeoff.

OK. But if it were me, I would prefer to compare a timestamp and size, both of which are very likely to change for a backup, and then transfer the file even if it is large.
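The timestamp-and-size check is a one-liner compared to hashing the whole file. A minimal sketch, assuming hypothetical names (`looksChanged`, and previously recorded values passed in by the caller):

```java
import java.io.File;

public class ChangeCheck {
    // Heuristic "did this file change?" test using last-modified time
    // and size, as suggested above. Much cheaper than hashing the file's
    // contents, but it can miss an edit that preserves both attributes.
    static boolean looksChanged(File f, long knownModified, long knownSize) {
        return f.lastModified() != knownModified || f.length() != knownSize;
    }
}
```

The trade-off is the mirror image of the hash approach: no false transfers are avoided by content, but you never have to read the file at all to decide.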
Ozie02 wrote:
> Message digest.

About hashing: does it mean to use a hash table, or to use what I have found, which is the MessageDigest class that converts chunks of memory into a digest (shown as a string)? For example, the link below, which used an MD5 hash.
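To make the distinction concrete: a hash table is a data structure for lookups, while `java.security.MessageDigest` computes a cryptographic digest of raw bytes. A minimal sketch (the class `Md5Demo` and method `md5Hex` are made-up names for illustration) that hashes a byte array and renders the 16-byte MD5 digest as a hex string, which is what examples like the linked one typically do:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Demo {
    // Hash a chunk of memory with MessageDigest and format the
    // resulting 16-byte MD5 digest as a lowercase hex string.
    static String md5Hex(byte[] data) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```

For a large file you would call `md.update(buffer, 0, bytesRead)` in a read loop (ideally a buffered one, per the earlier discussion) and call `md.digest()` once at the end, rather than loading the whole file into memory.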