Synchronized, Highly Available, and Resilient CDSs

Modern information systems include load balancing and clustering capabilities to achieve resiliency, higher availability, scalability, and security. However, to date, Cross Domain Solutions (CDSs) have been unable to take advantage of these capabilities. This is due, in large part, to the tension between the visibility and control that Load Balancers (LBs) need to operate effectively and a CDS's need to hide information (to prevent exfiltration) and to explicitly restrict outside control (to prevent compromise by attackers).

Cross-domain information sharing could benefit significantly from CDSs that are resilient, available, scalable, and secure. This would be a vast improvement over today's cross-domain environments, which are statically and manually configured and exhibit a general lack of resilience and scalability.

The objective of this effort is to design and develop Synchronized, Highly Available, and Resilient CDSs (SHARC) to provide the advantages of load balancing and clustering to CDSs in a way that maintains the effective protection of cross-domain information flows. SHARC will produce an easy-to-manage, highly robust, and auto-adjusting capability as shown in Figure 1. The SHARC prototype transparently directs traffic to available CDSs via load balancers (LBs), allowing continued cross-domain information sharing even in the face of load spikes and failures. SHARC automates the management of CDSs and LBs, enabling the creation of CDS clusters and the dynamic addition of new CDSs to adapt to increased load. The SHARC prototype will also be designed to work well with certification and accreditation processes.

SHARC is based on the strategic combination of three functional units that together manage the CDSs and LBs to increase CDS availability:

  1. Leveraging commodity load balancers by providing CDS state information. LB integration involves connecting CDS sensors, e.g., sensors using the Guard Remote Management Protocol (GRMP) or Simple Network Management Protocol (SNMP), with LB actuators, e.g., an actuator using the F5 load balancer control interface [1], to gather information and update configurations for multiple LB types, e.g., DNS-based or inline.
  2. Maintaining configuration synchronization between multiple CDSs within a cluster. Our approach uses pre-approved policies and CDS remote management protocols to synchronize CDS state and dynamically manage the policies on each CDS.
  3. Detecting and recovering from CDS failures. To operate through transient and persistent CDS failures, e.g., caused by unexpected new data and/or misconfigurations, we propose creating adaptation rules that update the configuration of LBs and CDSs in response to the current situation. The rules are encapsulated in cluster-wide adaptation bundles that can be pre-approved, securely persisted, and executed automatically or in consultation with administrators.
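The sensor-to-actuator flow in item 1 can be sketched as follows. This is a minimal illustration, not SHARC's implementation: the `CdsStatus` record and its fields stand in for whatever health data a GRMP or SNMP sensor would actually report, and the selection function represents the decision fed to an LB actuator.

```python
from dataclasses import dataclass

# Hypothetical CDS health record; the field names are illustrative,
# not part of GRMP or SNMP.
@dataclass
class CdsStatus:
    name: str
    reachable: bool     # did the last sensor poll succeed?
    queue_depth: int    # pending messages awaiting inspection

def select_pool_members(statuses, max_queue=100):
    """Pick the CDSs healthy enough to receive traffic.

    A CDS is eligible when it answered its last sensor poll and its
    message queue is below a configurable threshold. The resulting
    list would be pushed to the LB (e.g., as an updated server pool).
    """
    return [s.name for s in statuses if s.reachable and s.queue_depth < max_queue]

# Example: one healthy CDS, one overloaded, one unresponsive.
statuses = [
    CdsStatus("cds-a", True, 12),
    CdsStatus("cds-b", True, 250),   # overloaded
    CdsStatus("cds-c", False, 0),    # sensor poll failed
]
print(select_pool_members(statuses))  # ['cds-a']
```

In a real deployment the threshold and pool-update call would differ per LB type (DNS-based vs. inline), but the sensor-in, actuator-out shape stays the same.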
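The configuration synchronization in item 2 amounts to detecting which cluster members have drifted from the pre-approved policy and re-pushing it to them. A content hash is one simple way to compare policy versions; the sketch below assumes policies are plain text, which is an illustrative simplification.

```python
import hashlib

def policy_digest(policy_text: str) -> str:
    # Content hash used to compare policy versions across the cluster.
    return hashlib.sha256(policy_text.encode("utf-8")).hexdigest()

def find_out_of_sync(cluster: dict, approved_policy: str) -> list:
    """Return the names of CDSs whose active policy differs from the
    pre-approved one; these are the targets for a remote-management
    push to restore synchronization."""
    want = policy_digest(approved_policy)
    return [name for name, text in cluster.items()
            if policy_digest(text) != want]

approved = "allow: pdf,xml; deny: exe"
cluster = {"cds-a": approved, "cds-b": "allow: pdf; deny: exe"}
print(find_out_of_sync(cluster, approved))  # ['cds-b']
```

Comparing digests rather than full policies keeps the periodic sync check cheap even as policies grow.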
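The adaptation rules in item 3 pair a condition over observed cluster state with a recovery action. The rule set and state fields below are invented for illustration; in SHARC such rules would live in pre-approved, cluster-wide adaptation bundles and the actions would drive LB and CDS reconfiguration rather than return labels.

```python
# Minimal adaptation-rule sketch: (condition over state, action label).
RULES = [
    (lambda s: s["failed"] > 0 and s["spares"] > 0,
     "activate-spare-cds"),
    (lambda s: s["load"] > 0.8 * s["capacity"],
     "add-cds-to-pool"),
    (lambda s: s["failed"] == s["total"],
     "alert-administrator"),
]

def evaluate(state):
    """Return the actions whose conditions match the current cluster
    state. Matched actions could run automatically or be queued for
    administrator approval, as the text describes."""
    return [action for cond, action in RULES if cond(state)]

# One CDS failed with spares available, and the cluster is near capacity.
state = {"failed": 1, "spares": 2, "total": 4, "load": 90, "capacity": 100}
print(evaluate(state))  # ['activate-spare-cds', 'add-cds-to-pool']
```

Keeping rules declarative like this is what makes pre-approval practical: the full set of possible adaptations can be reviewed before deployment.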