Duplication of data, as database developers know, is a problem to be solved at the table level while designing a database. From the point of view of a DBA, however, replication of entire databases is a very good way of securing your data: even in the face of a major security breach, your system will not be offline for long.
Keeping data encrypted or behind many levels of authentication is one step.
Keeping the data store room well-guarded like a fortress is another step.
Another step is to keep at least two copies of the database, synchronized as closely as possible, in physically distant locations. The network between them should obviously be high-speed, but preferably one that connects and disconnects constantly, over spans of seconds. A closely guarded algorithm (the same one, or a matching pair, at both ends) could generate IP addresses that change at runtime. Preferably, avoid plain TCP/IP; a proprietary protocol offers that much more security.
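The rotating-address idea above can be sketched in miniature. Here is a minimal, hypothetical illustration in which both sites derive the same short-lived address from a shared secret and the current time window, so the address never has to be transmitted. The secret, window length, and address range are all assumptions for the sake of the example, not a hardened design.

```python
import hashlib
import hmac
import time

SHARED_SECRET = b"replace-with-a-closely-guarded-key"  # hypothetical shared key
WINDOW_SECONDS = 30  # the address changes every 30 seconds

def current_endpoint(secret, now=None):
    """Derive an IPv4-style address for the current time window.

    Both sites run the same function with the same secret, so they
    agree on the address without ever sending it over the wire."""
    if now is None:
        now = time.time()
    window = int(now // WINDOW_SECONDS)
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).digest()
    # Map the first three digest bytes into the private 10.0.0.0/8 range.
    return "10.%d.%d.%d" % (digest[0], digest[1], digest[2])

# Both ends compute the same address within the same 30-second window:
a = current_endpoint(SHARED_SECRET, now=1_000_000.0)
b = current_endpoint(SHARED_SECRET, now=1_000_010.0)  # same window
assert a == b
```

This is essentially the same trick that time-based one-time passwords use, applied to addresses instead of login codes; an attacker who does not hold the secret cannot predict where the replication link will appear next.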
Surely these are common practices in the implementation of real-time systems that deal with large amounts of data or large numbers of transactions.
To prevent sabotage, critical database activities can be performed jointly by people operating out of different locations. Only if person A at location X and person B at location Y simultaneously enter their correct passwords at run time and answer a series of questions will they (or one of them) be allowed to access a particular important function of the database.
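The joint-authorization rule above can be sketched as a small check: a critical operation unlocks only when two operators at two different sites authenticate within the same short window. The operator names, password hashes, and window length here are all hypothetical illustrations, not a real credential store.

```python
import hashlib

# Hypothetical credential store: operator -> (site, password hash)
CREDENTIALS = {
    "alice": ("site-X", hashlib.sha256(b"alice-pass").hexdigest()),
    "bob":   ("site-Y", hashlib.sha256(b"bob-pass").hexdigest()),
}

JOINT_WINDOW_SECONDS = 10  # both approvals must land inside this window

def authorize_joint(approvals):
    """approvals: list of (operator, password, timestamp) tuples.

    Returns True only if two operators from two distinct sites
    authenticated within JOINT_WINDOW_SECONDS of each other."""
    valid = []
    for user, password, ts in approvals:
        record = CREDENTIALS.get(user)
        if record and record[1] == hashlib.sha256(password.encode()).hexdigest():
            valid.append((record[0], ts))  # keep (site, time) of valid logins
    # Look for any pair of valid logins from different sites, close in time.
    for i in range(len(valid)):
        for j in range(i + 1, len(valid)):
            site_i, t_i = valid[i]
            site_j, t_j = valid[j]
            if site_i != site_j and abs(t_i - t_j) <= JOINT_WINDOW_SECONDS:
                return True
    return False

# Two operators, different sites, three seconds apart: allowed.
assert authorize_joint([("alice", "alice-pass", 100.0), ("bob", "bob-pass", 103.0)])
# One operator alone, or approvals too far apart in time: denied.
assert not authorize_joint([("alice", "alice-pass", 100.0)])
assert not authorize_joint([("alice", "alice-pass", 100.0), ("bob", "bob-pass", 200.0)])
```

In a real system the "series of questions" from the text would be a further challenge step after this check passes, and the password hashing would use a salted, slow scheme rather than bare SHA-256.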
This kind of check mechanism could be built into the database itself (call it, say, Simultaneous Remote Secure Authentication, or SRSA). The database then becomes a Secure Distributed RDBMS, or SDRDBMS.