I'd say that's not the best strategy unless you have a better reason than the one you mentioned. You're generally better off mirroring your archivelogs to separate disks. I'd rather schedule incremental backups and an archivelog backup during off-peak hours, and use block change tracking (BCT) if you have Enterprise Edition. Recovery from incremental backups will also be faster.
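A rough sketch of the kind of schedule meant here (the tracking-file path and the frequencies are just examples, not a prescription):

```
-- In SQL*Plus: enable block change tracking (Enterprise Edition only),
-- so level-1 backups only read changed blocks. Example path:
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u02/oradata/bct/change_tracking.f';

-- In RMAN, scheduled off-peak:
BACKUP INCREMENTAL LEVEL 0 DATABASE;   -- e.g. weekly full baseline
BACKUP INCREMENTAL LEVEL 1 DATABASE;   -- e.g. nightly changes only
BACKUP ARCHIVELOG ALL DELETE INPUT;    -- sweep archivelogs to backup
```

With BCT enabled, the level-1 backup skips unchanged blocks instead of scanning every datafile, which is what makes frequent incrementals cheap.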
Can it be done every 1 minute? 5 minutes? 10 minutes? Do you need fast infrastructure for this?
It can be done, but it's not a good strategy.
Do you need fast infrastructure? You will need a really fast one if your database generates many archivelogs per minute.
You might want to consider Data Guard: having a standby database is far better than your current strategy.
Data Guard has a licensing requirement and isn't free. It's a good solution for high availability, but from what I understand it is not a substitute for protecting against data loss via backups and archivelogs.
How come? How is recovery using backups better than Data Guard, or vice versa?
Both are HA solutions that minimize data loss, each with its own advantages and disadvantages.
In a DG setup your backups can be used for both the primary and the standby, regardless of whether you run the actual backup on the primary or the standby.
Regarding the licensing: that requirement applies to Active Data Guard.
If you just want to prevent the loss of your DB due to a volume disaster, then make sure the archivelogs go to another physical volume (e.g. the FRA).
Once you do that, you're protected from this disaster scenario.
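A minimal sketch of pointing the FRA (and an optional second archive destination) at separate volumes; the paths and size are illustrative assumptions:

```
-- Put the FRA on a separate physical volume from the datafiles:
ALTER SYSTEM SET db_recovery_file_dest_size = 200G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u03/fra' SCOPE=BOTH;

-- Archivelogs then default to USE_DB_RECOVERY_FILE_DEST.
-- Optionally mirror them to a second volume as well:
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/u04/arch' SCOPE=BOTH;
```

With two destinations on different physical volumes, losing the datafile volume (or one archive volume) still leaves a usable copy of the archivelogs for recovery.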
My view is that archivelogs are not an HA matter but a recovery matter, point-in-time recovery (PITR) in particular. So data loss in connection with archivelogs is not about HA, but about being able to restore and recover data when necessary. Data Guard provides a replica, but it does not replace backups.