
Usage Reporting and Forensics in tiCrypt Audit

· 12 min read
Thomas Samant

Why Audit Is Not Optional

Compliance frameworks like CMMC 2.0 Level 2 do not treat audit logging as a best practice. They treat it as a requirement. The Audit and Accountability (AU) domain of CMMC, mapped directly from NIST SP 800-171 Revision 2, defines nine controls that govern how systems must create, retain, protect, correlate, and report on audit records. These controls exist for a reason: without a trustworthy audit trail, there is no forensics, no accountability, and no way to prove that CUI was actually protected.

tiCrypt was designed from the start with the assumption that every action must be recorded and that records must be resistant to tampering. This article explains how tiCrypt's audit system works, what it captures, how it supports forensic investigation, and how it maps to the CMMC AU controls that organizations are assessed against.

What tiCrypt Records

tiCrypt's audit schema defines over 150 distinct event types, each with structured attributes that capture the full context of an operation. These events span every layer of the platform:

  • Session lifecycle: Session requests, challenges, creation, invalidation, deletion, temporary sessions, subsession creation, permission additions, and session downgrades. Every login, logout, and privilege change is recorded with the user identity, source IP, timestamps, actions granted, expiration, and MFA token usage.
  • File operations: File creation, deletion, chunk-level reads and writes, encryption key additions and retrievals, ownership changes, and project tagging. Each file event captures the session, file ID, and the specific key type and owner involved in the cryptographic operation.
  • Directory operations: Entry additions, removals, renames, directory creation and deletion, root assignments, and project tagging. The full path context (directory ID, entry name, entry type, and target) is preserved.
  • Virtual machine lifecycle: VM creation, registration, shutdown, deletion, proxied connection registration, authorized user changes, configuration management (create, verify, run, kill, update), and inter-VM messaging. VM events capture the VM Hardware Setup template, host registration details, and the user who initiated the action.
  • Drive and storage: Drive creation, attachment, detachment, deletion, key management, ownership transfers, and project tagging. Storage-level reads and writes record transfer IDs, byte counts, and object identifiers.
  • Group and team management: Group and team creation, deletion, member additions and removals, permission changes, quota assignments, and ownership transfers.
  • Project governance: Project creation, modification, deletion, membership management (including expiration tracking), security level assignments, security requirement definitions, and user certification tracking.
  • Key escrow: The complete escrow lifecycle, from escrow user and group creation through public key registration, escrowed key generation, recovery key set creation, and key retrieval. Every escrow operation records the authentication type, authorizer identity, and cryptographic parameters.
  • Forms and data ingress: Form creation, token-based and session-based submissions, attachment reads, key management, and form metadata changes.
  • Infrastructure: VM Hardware Setup management, libvirt host provisioning, realm restarts, image lifecycle, reservations, large uploads, and external server/SFTP configurations.
  • System administration: Global settings and info changes, external server definitions, MFA token creation, notification delivery, service restarts, maintenance operations (clearing old accounts, keys, and drives), and device manager events.

Every event records a success or failure indication, the acting session or authentication context, and an error code when applicable. Version fields in the schema ensure backward compatibility as the audit format evolves across tiCrypt releases.

info

There is no notion of a non-audited action within the tiCrypt environment. If a user, administrator, or system process performs an operation, it produces an audit record.

How the Audit Trail Is Protected

An audit trail is only useful if it can be trusted. If an attacker or rogue administrator can modify or delete log entries, the trail becomes unreliable as evidence. tiCrypt addresses this with a cryptographic chaining mechanism.

Each audit record is hashed using SHA-256, and each hash incorporates the hash of the previous record. This creates a chain where any modification to a historical entry invalidates all subsequent hashes. The mechanism is structurally identical to the hash chaining used in blockchain systems: the integrity of the entire chain can be verified by recomputing hashes forward from any checkpoint.
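The chaining principle can be sketched in a few lines of Python. The record serialization, genesis value, and hash-over-JSON layout below are illustrative assumptions, not tiCrypt's actual record format:

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """SHA-256 of the previous hash concatenated with the serialized record."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list, genesis: str = "0" * 64) -> list:
    """Return the chained hash for each record in sequence."""
    hashes, prev = [], genesis
    for rec in records:
        prev = chain_hash(prev, rec)
        hashes.append(prev)
    return hashes

def first_break(records: list, hashes: list, genesis: str = "0" * 64) -> int:
    """Recompute the chain; return the index of the first mismatch, or -1 if intact."""
    prev = genesis
    for i, rec in enumerate(records):
        prev = chain_hash(prev, rec)
        if prev != hashes[i]:
            return i
    return -1
```

Because each hash folds in its predecessor, altering any stored record changes every hash after it, which is why tampering is detectable by recomputing forward from any checkpoint.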

This design has several practical consequences:

  • Tamper evidence: Modifying, inserting, or deleting a log entry breaks the hash chain. The inconsistency is detectable automatically.
  • Non-repudiation: Because every action is tied to a cryptographic identity (session, user, or authentication context) and protected by the hash chain, users cannot plausibly deny actions attributed to them.
  • Independent verification: Because tiCrypt Audit runs as a separate system from the tiCrypt backend, and because multiple independent audit installations can consume the same log stream, the log can be consumed and verified independently of the backend.

warning

The audit system is architecturally separated from the tiCrypt backend. The tiaudit-logger service receives log entries pushed from the backend and writes them to ClickHouse. The backend does not have write access to the audit database, and the audit system does not have access to the backend. This separation ensures that compromising one does not grant the ability to tamper with the other.

Architecture of tiCrypt Audit

tiCrypt Audit consists of three independent components:

  1. tiaudit: The main service that hosts the audit interface and reporting engine. Administrators interact with this to run queries, generate reports, and configure alerts.
  2. tiaudit-logger: A background service that listens for new log entries pushed from the tiCrypt backend, parses them according to the audit schema, validates the hash chain, and inserts structured records into ClickHouse.
  3. tiaudit-log-uploader: A utility for backfilling historical logs during initial deployment or recovery.

ClickHouse serves as the backing database. It is a column-oriented analytics engine optimized for the kind of queries audit work demands: filtering millions of records by time range, user, event type, or object ID. In practice, such queries complete in one to two seconds even across large datasets.

tip

Multiple independent tiCrypt Audit installations can serve a single tiCrypt backend. Deploying audit on a separate server from the backend is a recommended practice that strengthens both availability and tamper resistance.

Usage Reporting

tiCrypt Audit's reporting engine provides three layers of access to audit data, designed for different roles and use cases:

Reports

Pre-built compliance reports can be generated with a single action. Each report is parameterized by time range and optionally scoped to specific objects (users, teams, projects, VMs, or files). Output is in .xlsx format with charting for visual analysis. These reports are designed to satisfy the documentation requirements of CMMC assessments, NIST 800-171 self-assessments, and ITAR reviews.

Reports answer questions like:

  • Which users accessed which files during a given period?
  • What VM sessions were active and what actions were taken?
  • Which administrative changes were made to team memberships or project configurations?
  • What encryption keys were created, retrieved, or revoked?

Queries

Built-in parameterized queries provide targeted interrogation of the audit database. Queries are filterable by teams, users, query type, time range, and result status (success or failure). These are the primary tool for day-to-day monitoring and investigation.

Direct SQL Access

For ad-hoc analysis beyond what pre-built queries and reports cover, tiCrypt Audit exposes direct SQL access to the ClickHouse database. This allows administrators and auditors to write custom queries against the full audit schema, useful for forensic investigations or building custom dashboards.
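As a sketch of what such ad-hoc analysis might look like, the helper below composes a parameterized ClickHouse query in Python. The table and column names (`audit_events`, `user_id`, and so on) are hypothetical placeholders, not tiCrypt Audit's actual schema; the `FR`/`FG` event codes are the file-read and key-retrieval record types described later in this article:

```python
from datetime import datetime

def file_access_query(user_id: str, start: datetime, end: datetime):
    """Compose a parameterized query for one user's file-access events.

    The parameter style (%(name)s) matches common ClickHouse Python clients;
    the schema names are illustrative assumptions.
    """
    sql = (
        "SELECT timestamp, session_id, file_id, event_type "
        "FROM audit_events "
        "WHERE event_type IN ('FR', 'FG') "
        "AND user_id = %(user_id)s "
        "AND timestamp BETWEEN %(start)s AND %(end)s "
        "ORDER BY timestamp"
    )
    params = {"user_id": user_id, "start": start, "end": end}
    return sql, params
```

Keeping the filter values out of the SQL string and in a parameter dictionary avoids injection issues when auditors build queries from user-supplied input.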

Programmatic Access

The tiCrypt Audit REST API provides token-based access to all reporting and query functions. This enables integration with external SIEM platforms, automated compliance workflows, and custom tooling.
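A minimal sketch of calling such an API from Python follows. The endpoint path, report name, and request body here are hypothetical, invented for illustration; consult the tiCrypt Audit API documentation for the real routes and parameters:

```python
import json
import urllib.request

def build_report_request(base_url: str, token: str, report: str, params: dict) -> urllib.request.Request:
    """Build (but do not send) a token-authenticated report request.

    The /api/reports/... path is an assumed placeholder, not a documented route.
    """
    body = json.dumps(params).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/reports/{report}",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",   # token-based access
            "Content-Type": "application/json",
        },
    )
```

Sending the request (e.g., via `urllib.request.urlopen`) would then return the generated report for downstream SIEM ingestion or archiving.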

Forensic Investigation

When an incident occurs, whether it is an unauthorized access attempt, a data exfiltration concern, or a compliance inquiry, the audit trail must support rapid, targeted investigation. tiCrypt's audit system is built for this.

Reconstructing user activity: Every session event records the user identity, source IP, timestamps, granted actions, and MFA tokens used. By querying session events for a specific user or IP range, an investigator can reconstruct the complete sequence of sessions, what permissions were granted, and when sessions were terminated or downgraded.

Tracing data access: File key retrieval events (record type FG) capture the session, file ID, key type, key owner, and an intent field that records why the key was accessed. Combined with chunk-level read events (FR), an investigator can determine exactly which files a user accessed, when, and at what granularity.

Detecting privilege escalation: Group member additions (GA), permission modifications (GB), and role changes (UM) are all recorded with the acting session. Any attempt to grant unauthorized access produces a traceable record.

Identifying infrastructure changes: VM creation, host provisioning, VM Hardware Setup template modifications, and external server configurations are all recorded. Changes to the infrastructure that hosts CUI are as auditable as changes to the data itself.

Hash chain verification: If there is any suspicion of log tampering, the SHA-256 hash chain can be verified programmatically. A break in the chain pinpoints the exact record where tampering occurred.

Alerts vs Logging

tiCrypt Audit draws a clear distinction between alerts and logging. They serve different purposes and operate on different timescales.

Alerts are triggered by system events that require immediate attention. They are not records to be reviewed later; they are notifications that something has happened that a security team needs to act on now. Examples of alert-generating events include:

  • User added — a new account has been created in the system
  • User role change — a user's permissions or role has been modified
  • Hash chain interruption — the SHA-256 audit chain has been broken, indicating possible tampering
  • File downloaded — a file has been retrieved from the secure environment
  • Account deactivation — a user account has been disabled
  • XSS attack detected — a cross-site scripting attempt has been identified
  • Account locked — a user account has been locked due to failed authentication or administrative action

These alerts are designed to be pushed into an organization's existing security infrastructure. tiCrypt Audit includes a dedicated Splunk driver for direct integration, as well as support for email (SMTP) and console output. Organizations using other SIEM platforms such as LogRhythm can receive alerts through their standard ingestion pipelines.

Logging, by contrast, is the complete, tamper-evident record of every operation in the system. The secure logs protected by the SHA-256 hash chain are not for real-time response. They exist for historical analysis, forensic investigation, and compliance reporting. When an auditor asks "what happened in this project over the last six months," the answer comes from the logs. When a security team needs to reconstruct the full sequence of events surrounding an incident, the logs provide the authoritative timeline.

The distinction matters operationally. Alerts tell you something is happening. Logs tell you everything that happened. A well-configured tiCrypt deployment uses both: alerts routed to the SOC for immediate triage, and logs retained indefinitely for the investigative and compliance work that follows.

tip

Organizations undergoing CMMC assessment should ensure that alert integration with their SIEM is documented in their System Security Plan. Assessors evaluating AU.L2-3.3.4 (audit failure alerting) and AU.L2-3.3.5 (audit correlation) will look for evidence that alerts are actively monitored, not just available.

Mapping to CMMC AU Controls

The nine Audit and Accountability controls in CMMC Level 2 map directly to tiCrypt capabilities:

| CMMC Control | Requirement | tiCrypt Implementation |
|---|---|---|
| AU.L2-3.3.1 | Create and retain system audit logs | 150+ event types with structured attributes; lifetime retention (logs are never discarded) |
| AU.L2-3.3.2 | Trace actions to individual users | Every event is tied to a session or authentication context with cryptographic identity; no shared or anonymous actions |
| AU.L2-3.3.3 | Review and update logged events | Pre-built queries, custom SQL, and configurable alert rules; schema versioning supports evolving event definitions |
| AU.L2-3.3.4 | Alert on audit process failure | Alert drivers (email, Splunk, console) notify on hash chain breaks, logging failures, and anomalous conditions |
| AU.L2-3.3.5 | Correlate audit records | ClickHouse enables cross-event correlation by user, session, time range, object ID, or any combination, with queries completing in seconds |
| AU.L2-3.3.6 | Reduction and reporting | One-click compliance reports in .xlsx; parameterized queries with filtering; REST API for programmatic access |
| AU.L2-3.3.7 | Authoritative time source | Millisecond-precision timestamps on all events; system time synchronized to NTP |
| AU.L2-3.3.8 | Protect audit information | SHA-256 hash chain provides tamper evidence; architectural separation of audit and backend systems; role-based access to audit data |
| AU.L2-3.3.9 | Limit audit management | Only designated administrators can configure logging, manage alerts, or access the audit interface; role-based access enforced at the application level |

note

CMMC Level 2 does not prescribe a specific log retention period, but DFARS 252.204-7012 requires preserving cyber incident data for at least 90 days from incident report submission. tiCrypt exceeds this by retaining all audit logs for the lifetime of the installation. No records are ever discarded.

Mechanism Over Policy

Many compliance approaches rely on written policies and manual procedures: "administrators shall review logs weekly," "users shall not share credentials," "systems shall be configured to log events." These policies are necessary for governance, but they are only as reliable as the people who follow them.

tiCrypt takes a different approach. The audit system is not something an administrator enables or configures per-policy. It is a structural property of the platform. Every operation produces an audit record because the backend emits log entries as part of its normal processing. There is no flag to disable logging. There is no way to perform an action without producing a record. The hash chain ensures that records, once written, cannot be silently altered.

This is the distinction between mechanism and policy. Policy says "you should log events." Mechanism means events are logged whether or not anyone remembers to configure it. For organizations undergoing CMMC assessment, this distinction matters: assessors are looking for evidence that controls are implemented, not just documented.

Audit in tiCrypt is not a feature you turn on. It is a property of how the system operates.

Audit as Infrastructure

Audit logging is often treated as an afterthought, something bolted on to satisfy a compliance checkbox. tiCrypt treats it as foundational infrastructure. The combination of comprehensive event capture, cryptographic chain protection, architectural separation, and high-performance analytics creates an audit system that serves both compliance and real forensic investigation.

For organizations handling CUI under CMMC 2.0, the audit system provides direct, demonstrable coverage of all nine AU controls. For administrators and security teams, it provides the tools to answer hard questions quickly: who did what, when, to which data, and can we prove it.

The New tiCrypt Network Architecture Based on OpenVSwitch

· 12 min read
Alin Dobra
CEO & Co-founder

Motivation

The "traditional" LibVirt networking is based on Linux bridges. This architecture is simple yet effective for providing networking connectivity to VMs. If the VMs run on a single server, this architecture is sufficient. However, if the VMs run on multiple servers, the Linux bridge architecture becomes more complex and less efficient. Specifically, in the case of tiCrypt, it creates the following issues:

  • Host network isolation: The Linux bridge network is confined to the host it is defined on. The network can be extended using routing, but this creates significant complexity.
  • IP management complexity: IP assignment becomes very difficult since each host must have its own IP range.
  • Control of external access: tiCrypt needs to control external access to the VMs, and this is more difficult to achieve with Linux bridges since firewall rules must be defined on each host.
  • External proxied access: tiCrypt needs to provide external proxied access to the VMs. This is accomplished by mapping port ranges on the host to port 22 on each possible VM IP. Such mapping rules pollute the firewall rules on the hosts.
  • VM migration: The Linux bridge architecture does not support VM migration. This is a planned feature for tiCrypt.
  • Proxy performance: The Linux bridge solution forces the use of "software proxying" for external access to VMs. This is much slower than a firewall-based solution that requires a unified network architecture across hosts.
  • Rigid network integration: When using the Linux bridge architecture, LibVirt supports only a few setups (nat, route, and open). This makes it difficult to deal with custom firewall rules on hosts and backend.

OpenVSwitch Solution

OpenVSwitch provides virtual switching capabilities, similar to "real" switches, but implemented in software. It is a high-performance solution integrated into the Linux kernel. OpenVSwitch supports advanced features such as VLANs, QoS, and network virtualization. It also provides a unified network architecture across hosts by extending the Layer 2 network through software switching.

The specific benefits for tiCrypt include:

  • Unified network architecture: OpenVSwitch provides a unified network architecture across hosts, allowing for easier management and control of the network. Specifically, it allows all the VMs to be on the same network, regardless of which host they are running on. This allows VM-to-VM communication, a feature needed by Clustering and Batch Jobs (via Slurm).
  • Unified IP address management: Since all the VMs are on the same Layer 2 network, IP address management is simplified. Specifically, by running a DHCP server on the tiCrypt backend, all the VM IPs are automatically managed centrally.
  • Single exit point: The network and firewall rules can be configured to only allow the backend as an exit point thus enhancing security and auditing.
  • Support for per-VM allow lists: This allows a more refined control of the external access for the VMs. Specifically, instead of allowing all the VMs to access an external server, only specific VMs can be allowed.
  • Network isolation and control: By creating multiple virtual networks based on different OpenVSwitch bridges, tiCrypt can provide network isolation and control for different types of VMs. The secure VMs will use a different network than the data-in and service VMs. This is further enhanced by the use of VLANs, supported by OpenVSwitch, that extend the network isolation through the real switches.
  • Improved performance: Extending the network all the way to the backend allows a firewall-based proxying solution that leverages the NFTables support in the Linux kernel. This is much faster than the software proxying solution required by the Linux bridge architecture.
  • No LibVirt interference: When the openvswitch network type is used, LibVirt only creates the OpenVSwitch port and does not otherwise interfere with the network configuration. This allows tiCrypt to have full control over the network configuration and firewall rules on the hosts and backend.

General Approach to Building Networks with OpenVSwitch in tiCrypt

The basic architecture we are interested in consists of:

  • A backend server or Virtual Machine (VM) that runs the tiCrypt backend services and acts as a gateway for the VMs.
  • Multiple hosts that run the VMs. The VM networking is provisioned on the hosts and must integrate with the backend network.
  • The network must be isolated and have only the backend as an exit point.

The general approach to build such a network with OpenVSwitch is as follows (exemplified by the secure network):

  1. Assign a dedicated VLAN ID for the network (e.g., 1081 for the secure network).
    • This VLAN ID must be assigned at the level of the organization (at least for complex setups that use multiple switches).
    • The real switches that connect the backend and VM hosts must be configured to allow the VLAN traffic (e.g., by configuring the ports as trunk ports).
  2. Assign a dedicated IP range for the network (e.g., 192.168.128.0/17 for the secure network).
    • This IP range is private and only used by the private network. It is not routable on the internet and is only used for communication between the VMs and the backend. It does not need to be unique across different networks, since the networks are isolated, but it must not overlap with the IP range of the host network or the backend network.
  3. Create an OpenVSwitch bridge on each host (e.g., br-secure). This bridge connects the VMs to the network and extends the network to all the hosts and the backend.
    • The bridge must be configured to use the VLAN ID assigned for the network (e.g., by adding a VLAN interface to the bridge).
    • The bridge must be defined on a network (virtual or real) interface that is connected to the correct switch and VLAN. This is best accomplished by creating virtual network interfaces with VLAN IDs on a common physical interface such as bond0.
  4. Create a network interface on the backend and connect it to the same VLAN (e.g., by creating a VLAN interface on the backend and connecting it to the real switch).
  5. Configure the backend to act as a gateway for the network. This involves:
    • Assigning the first IP address in the network range to the backend (e.g., 192.168.128.1 for the secure network).
    • Running a DHCP server on the backend to assign IP addresses to the VMs. The DHCP server must be configured to assign addresses from the network range and to set the backend IP as the default gateway for the VMs. The preferred tool is dnsmasq, since it is lightweight and provides both DHCP and DNS services. DNS control is very important for the secure network.
    • Configuring firewall rules on the backend to control external access (based on network type).
info

The network architecture is flat based on Layer 2 switching. Moreover, ARP/RARP is used to "discover" the correct IPs and send the traffic to the correct destination.

note

Each network type will need a full and independent setup as described above. There is no intersection point between the different network types since they are isolated. The only common point is the backend which will have multiple interfaces, one for each network type.

warning

No routing rules should be used with this architecture since it is based on Layer 2 switching. All the traffic must be switched at Layer 2 and the backend must be the only exit point for the VMs. Masquerading rules will be used to allow external traffic (as required by the network type). These networks must be completely isolated from any other networks.
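As an illustration of the bridge-creation step on a VM host, a small helper can generate the iproute2 and ovs-vsctl commands for one network. This is a simplified sketch: the interface and bridge names are the examples used in this article, and a real deployment should be configured by ticrypt-setup rather than by hand:

```python
def host_network_cmds(bridge: str, phys_if: str, vlan: int) -> list:
    """Generate the commands to provision one tiCrypt network on a VM host.

    A VLAN sub-interface on the physical uplink carries the tagged traffic;
    the OpenVSwitch bridge attaches to it and hosts the VM ports.
    """
    vif = f"{phys_if}.{vlan}"
    return [
        f"ip link add link {phys_if} name {vif} type vlan id {vlan}",  # VLAN sub-interface
        f"ip link set {vif} up",
        f"ovs-vsctl add-br {bridge}",          # the per-network OVS bridge
        f"ovs-vsctl add-port {bridge} {vif}",  # uplink toward the real switch and backend
    ]
```

For the secure network of the example, `host_network_cmds("br-secure", "bond0", 1081)` yields the bridge on `bond0.1081`; note that, per the warning above, no IP address and no routing rules are assigned on the host side.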

Specific Implementation in tiCrypt

Secure VMs are the primary workload in tiCrypt. They operate on a fully isolated network with strict firewall rules, controlled DNS, and per-VM allow lists governing external access. All data stored on their drives is encrypted, and network traffic is tightly audited through the backend gateway.

The following diagram illustrates the network architecture of tiCrypt based on OpenVSwitch (all IP ranges and VLAN IDs are examples):

Some specifics are:

  • Three separate networks are created using independent components:
    • secure: This network is used for the secure VMs (both interactive and batch).
      • OpenVSwitch bridge br-secure on interface enp1/bond0.1081
      • VLAN ID 1081
      • IP range 192.168.128.0/17
      • Gateway (backend IP): 192.168.128.1
      • DHCP range: 192.168.129.1 to 192.168.255.254
      • Highly controlled masquerading rules (based on ticrypt-nft and ticrypt-firewall services running on the backend) to allow only specific external access based on the VM allow list.
      • DNS strictly controlled to only servers in the allowed list (and tiCrypt backend).
    • service: This network is used for the service VMs.
      • OpenVSwitch bridge br-service on interface enp2/bond0.1082
      • VLAN ID 1082
      • IP range 192.168.122.0/24
      • Gateway (backend IP): 192.168.122.1
      • DHCP range: 192.168.122.3 to 192.168.122.254
      • Masquerading rules allow all external access, and DNS is not controlled, since these VMs are not considered secure.
    • datain: This network is used for the data-in VMs.
      • OpenVSwitch bridge br-datain on interface enp3/bond0.1083
      • VLAN ID 1083
      • IP range 192.168.123.0/24
      • Gateway (backend IP): 192.168.123.1
      • DHCP range: 192.168.123.3 to 192.168.123.254
      • Masquerading rules and DNS handling are similar to the service network, since these VMs are not considered secure.
  • All the external traffic from VMs is routed (via gateway definition) to the backend.
    • This allows strict control of the VM interaction with the external entities.
    • This also allows the VM hosts to be isolated from general external access since VM traffic is not exiting directly from the hosts but is routed through the backend.
  • The only entry point for accessing VMs is the backend.
    • This hides the VMs in a private network and allows full control of how the VMs can be reached. For the secure network, this allows the use of Linux firewall-based proxying, which is much faster than the software proxying required by the Linux bridge architecture.
    • Attack surface is reduced and logging is simplified.
  • The entire network setup can be controlled via firewall rules on the backend alone.
    • This massively simplifies management and reduces interference with other firewall rules deployed on the backend or VM hosts.
    • All the tiCrypt related firewall rules can be confined to the ticrypt NFT table that can independently be managed by the ticrypt-firewall service running on the backend.
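The example address plan above can be sanity-checked mechanically with the standard library. The checks below reflect this particular plan: since the backend holds an interface on every network, the example uses disjoint ranges, and each gateway must fall inside its own range:

```python
import ipaddress

# The three example networks from this article (IPs and VLAN IDs are examples).
NETWORKS = {
    "secure":  {"cidr": "192.168.128.0/17", "gateway": "192.168.128.1", "vlan": 1081},
    "service": {"cidr": "192.168.122.0/24", "gateway": "192.168.122.1", "vlan": 1082},
    "datain":  {"cidr": "192.168.123.0/24", "gateway": "192.168.123.1", "vlan": 1083},
}

def check_plan(networks: dict) -> None:
    """Verify each gateway lies in its range and the example ranges are disjoint."""
    nets = {}
    for name, spec in networks.items():
        net = ipaddress.ip_network(spec["cidr"])
        if ipaddress.ip_address(spec["gateway"]) not in net:
            raise ValueError(f"gateway outside range: {name}")
        nets[name] = net
    names = list(nets)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if nets[a].overlaps(nets[b]):
                raise ValueError(f"overlapping ranges: {a}, {b}")
```

Running `check_plan(NETWORKS)` silently confirms the plan; a gateway outside its range or two overlapping ranges raises a `ValueError` naming the offending network.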
tip

It is possible to define three independent networks with three different switches and avoid the use of VLANs, but this is usually wasteful. The preferred solution is to use a high-performance network bond (bond0) and create virtual interfaces with VLAN IDs on top of it. This allows the use of a single physical interface for all the networks while still providing isolation and control.

info

The interface bond0 can be shared with other networks that are required, for example, for management or storage. The only requirement is that the real switch ports connected to bond0 must be configured to allow the VLAN traffic for the tiCrypt networks. The network isolation provided by the VLANs through virtual interfaces, OpenVSwitch bridges, and real switches is sufficient to allow the tiCrypt networks to coexist with other networks on the same physical interface.

Puppet and NetworkManager Integration Considerations

Since the OpenVSwitch-based solution uses a separate NFT table (ticrypt) and separate virtual network interfaces (e.g., ens1), Puppet should be configured to "ignore" both the ticrypt NFT table and the virtual interfaces.

warning

For VM hosts, the virtual network interfaces should be created but not assigned any IPs. This allows the OpenVSwitch bridges to use these interfaces without interference from Puppet. For the backend, the virtual network interfaces should be created and assigned the gateway IPs for each network (e.g., 192.168.128.1 for secure network). This allows the backend to act as a gateway for the VMs while still allowing Puppet to manage the IP configuration of the backend.

tip

It is best to let ticrypt-setup, the Ansible-based tiCrypt setup tool, configure both the backend and VM hosts. The number and complexity of the tasks are significant; any part missing or misconfigured can result in an unusable system.

Conclusion and Future Work

The OpenVSwitch-based network architecture, already deployed in several production systems, provides a much more controlled and higher-performance solution for networking in tiCrypt. Past the initial setup cost (re-configuring the network and tiCrypt services), the new architecture is far more flexible and scalable. Going forward, the OpenVSwitch-based architecture will be the only supported network architecture for tiCrypt; the Linux bridge-based architecture will be deprecated and eventually removed.

The use of OpenVSwitch opens up new possibilities for the tiCrypt network architecture. Some of the future work includes:

  • VM migration: The unified network architecture provided by OpenVSwitch allows for VM migration between hosts without any network reconfiguration. This is a planned feature for tiCrypt and it will be implemented using the live migration capabilities of LibVirt and OpenVSwitch.
  • Network monitoring and management: OpenVSwitch provides a rich set of tools for monitoring and managing the network. This can be used to provide better visibility into the network traffic and to troubleshoot network issues.
  • Stricter network isolation: OpenVSwitch supports OpenFlow and other advanced features that can be used to provide stricter network isolation and control. This can be used to further enhance the security of the secure network. Specifically, OpenFlow rules can be used in the future to limit the VM-to-VM and VM-to-backend communication in the secure network to only what is required for the specific use case.

How tiCrypt Isolates Virtual Machines at the Network Level

· 7 min read
Thomas Samant

Secure virtual machines in tiCrypt run in near-complete isolation from each other and from the surrounding environment. This isolation is the foundation of tiCrypt's security model. Every network pathway into or out of a VM is tightly controlled, authenticated, and encrypted — with no exceptions.

This post explains the mechanisms that make this possible: proxy-mediated communication, application port tunneling, VM-level network isolation, and controlled access to external licensing servers.


Key Components

Several tiCrypt components participate in VM network security. Understanding their roles makes the rest of the post easier to follow:

  • ticrypt-connect — The application running on the user's device.
  • ticrypt-frontend — The browser-based interface, served by ticrypt-connect.
  • ticrypt-backend — The set of tiCrypt services running on the backend server.
  • ticrypt-rest — The REST-based entry point into ticrypt-backend.
  • ticrypt-proxy — Mediates communication between components by replicating traffic between two separate connections.
  • ticrypt-allowedlist — Controls access to external licensing servers.
  • ticrypt-vmc — The VM controller, responsible for all security mechanisms within the secure VM.

How VM Communication Works

tiCrypt VMs do not accept direct connections. All ports are blocked except port 22, and even that port is not running SSH — it is controlled entirely by ticrypt-vmc for traffic tunneling. There is no mechanism to contact a secure VM through direct access.

Instead, all communication is mediated by ticrypt-proxy using the following sequence:

  1. The ticrypt-vmc maintains a persistent WebSocket connection to ticrypt-proxy.
  2. When a user wants to connect, ticrypt-frontend opens a matching WebSocket through ticrypt-proxy.
  3. ticrypt-proxy replicates traffic between the two WebSocket connections.
  4. ticrypt-vmc immediately creates a new WebSocket for future connections.
  5. Both the frontend and the VM controller exchange digital signature proofs of identity and negotiate a shared secret via Diffie-Hellman key exchange.
  6. If the digital signature fails or the user is not authorized, the connection is immediately closed. Likewise, if the VM's validation fails, ticrypt-frontend closes the connection.
  7. All subsequent messages in both directions are encrypted with the negotiated key.

After the initial handshake, all traffic is hidden from ticrypt-proxy and every other part of the infrastructure. Commands, terminal data, keystrokes — everything travels through this encrypted channel.

The only traffic exchanged in the clear is the initial authentication and key-negotiation message. Any other unencrypted message or operation is rejected outright.

Communication does not rely on listening on standard ports and can only be mediated by ticrypt-proxy. The VM owner's public key cannot change once learned by ticrypt-vmc, meaning that hijacking the communication would require compromising the user's private key.
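The signed key exchange in steps 5–7 can be sketched as follows. This is an illustrative model only — the use of X25519, Ed25519, HKDF, and AES-GCM from the Python `cryptography` package is an assumption for demonstration, not tiCrypt's actual wire protocol or cipher choices:

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

class Endpoint:
    """One side of the handshake (conceptually, the frontend or ticrypt-vmc)."""
    def __init__(self):
        self.identity = Ed25519PrivateKey.generate()   # long-term signing key
        self.eph = X25519PrivateKey.generate()         # per-session DH key

    def hello(self):
        # Send the ephemeral DH public key, signed as proof of identity.
        pub = self.eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        return pub, self.identity.sign(pub)

    def finish(self, peer_pub, peer_sig, peer_identity):
        # Step 6: a failed signature check raises; the connection is closed.
        peer_identity.verify(peer_sig, peer_pub)
        shared = self.eph.exchange(X25519PublicKey.from_public_bytes(peer_pub))
        # Step 7: derive the symmetric key used for all later messages.
        return HKDF(hashes.SHA256(), 32, None, b"ticrypt-demo").derive(shared)

vmc, frontend = Endpoint(), Endpoint()
v_pub, v_sig = vmc.hello()
f_pub, f_sig = frontend.hello()
k1 = vmc.finish(f_pub, f_sig, frontend.identity.public_key())
k2 = frontend.finish(v_pub, v_sig, vmc.identity.public_key())
assert k1 == k2  # both ends now share the channel key

# Subsequent traffic is encrypted end to end with the negotiated key.
nonce = os.urandom(12)
ct = AESGCM(k1).encrypt(nonce, b"keystrokes", None)
assert AESGCM(k2).decrypt(nonce, ct, None) == b"keystrokes"
```

Because the shared secret is derived only at the two endpoints, any relay in the middle sees ciphertext it cannot decrypt.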

Most VM functionality operates exclusively through this proxy-mediated channel. The only exception is application port tunneling.


Application Port Tunneling

To support rich application deployment inside VMs, tiCrypt provides a generic mechanism for tunneling TCP traffic on specific ports from the VM to the user's device. The mechanism is highly controlled, but in principle it can make any network application accessible — for example, RDP over port 3389. Multiple ports are typically forwarded simultaneously to support broader functionality.

Application tunneling works similarly to SSH reverse port forwarding, but uses TLS instead of SSH. Access to VMs remains limited to port 22 — which, as described above, runs no SSH service. ticrypt-connect mediates the forwarding, with ticrypt-vmc listening on port 22.

Setting Up a Tunnel

Initiating a forwarding tunnel involves several coordinated steps:

  1. The ticrypt-frontend, using an authenticated session, tells ticrypt-rest to create a pathway to a specific VM.
  2. The request is validated and ticrypt-proxy is informed.
  3. ticrypt-proxy sets up a listening endpoint on one of the designated ports (typically 6000–6100). The endpoint is strictly scoped and only accepts connections from the IP address that made the request.
  4. ticrypt-frontend receives the allocated port.
  5. ticrypt-frontend asks ticrypt-connect to generate a TLS certificate for authentication and encryption.
  6. ticrypt-frontend tells ticrypt-vmc to accept the application forwarding and provides the TLS certificate.
  7. ticrypt-vmc replies with a list of ports that need to be tunneled.
  8. ticrypt-frontend instructs ticrypt-connect to start the connection using the certificate.
  9. Upon connection, ticrypt-vmc verifies the digital signature and initiates TLS-mediated tunneling.
  10. Traffic to and from the local port on the user's device is tunneled and recreated inside the VM, enabling application access.

A special case exists for SFTP over port 2022, if the feature is enabled in ticrypt-vmc. This can be used to transfer large amounts of data from the user's device to the VM.

The Communication Pathway

Once established, the tunnel follows this path:

  1. ticrypt-connect accesses the allocated port controlled by ticrypt-proxy. The connection is pinned to the originating IP address.
  2. ticrypt-proxy forwards the request to the VM host endpoint.
  3. The VM host forwards traffic to port 22 of the correct VM.
  4. ticrypt-vmc listens on port 22 and runs the TLS protocol with port tunneling.

Port 22 is specifically chosen to prevent an SSH server from being deployed in its place. The traffic on port 22 is the TLS protocol under ticrypt-vmc control — not the SSH protocol.

The entire pathway is secured with TLS encryption and authenticated with digital signatures. At no point can any intermediate component intercept the communication without breaking TLS.
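The replication step — ticrypt-proxy copying bytes between two connections without interpreting them — can be sketched with asyncio. This is a conceptual model only; the real proxy adds WebSockets, TLS, signature verification, and IP pinning on top of this pattern:

```python
import asyncio

async def pipe(reader, writer):
    # Copy bytes one way until the source closes, then close the sink.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def relay(client_reader, client_writer, vm_host, vm_port):
    # Open the upstream leg (e.g., the VM host forwarding to port 22)
    # and replicate traffic between the two connections in both directions.
    vm_reader, vm_writer = await asyncio.open_connection(vm_host, vm_port)
    await asyncio.gather(pipe(client_reader, vm_writer),
                         pipe(vm_reader, client_writer))
```

Because the relay only replicates bytes, the encrypted payload negotiated end to end between the frontend and ticrypt-vmc stays opaque to it — which is exactly the property described above.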


VM Network Isolation

To ensure security, only port 22 is open inbound on secure VMs. All outbound traffic is restricted to communication with ticrypt-rest, unless an exception is granted through the ticrypt-allowedlist mechanism.

The isolation is enforced through multiple layers:

  • Internal blocking — All traffic to ports other than 22 is blocked internally by the VM itself, isolating any other access. Application access is provided exclusively through the port tunneling mechanism described above.
  • Host-level firewall — All traffic to the VM's IP on ports other than 22 is blocked by firewall rules on the VM host.
  • No external routing — The VM host does not route external traffic to VMs, with the sole exception of port 22.
  • Outbound blocking — All outbound traffic from VMs is blocked unless the ticrypt-allowedlist mechanism is in use.

The mechanism that permits access on port 22 while blocking everything else does not route traffic at all. Instead, it forwards a specific port range on the VM host to port 22 within the IP ranges dedicated to secure VMs. No server outside that specific VM host can reach any other VM port.

Outbound traffic from secure VMs is blocked because it could otherwise be used to exfiltrate data.


Licensing Server Access

Most commercial software requires access to external licensing servers. These servers typically reside within the organization, but occasionally they are external. tiCrypt provides a strictly controlled mechanism for enabling this access without compromising VM isolation.

The ticrypt-allowedlist component mediates access by manipulating two things:

  • Firewall and port forwarding rules on the server running ticrypt-backend.
  • DNS replies to requests originating from VMs.

The mechanism is precise: unless a specific mapping is configured using ipsets and firewall rules, all outgoing traffic from VMs remains blocked. The ticrypt-allowedlist component works in conjunction with ticrypt-frontend to provide a convenient interface for enabling and disabling access. This capability requires SuperAdmin privileges.


Defense in Depth

No single mechanism secures a tiCrypt VM. Instead, multiple independent layers — proxy-mediated communication, TLS-authenticated tunneling, internal and host-level firewalls, outbound traffic blocking, and controlled DNS — work together to create an environment where VMs are effectively unreachable except through authenticated, encrypted, and auditable channels. Each layer is designed so that even if one were bypassed, the remaining layers would continue to protect the VM and its data.

Understanding tiCrypt Infrastructure: Components, Connectivity, and Deployment Options

· 7 min read
Thomas Samant

Planning a tiCrypt deployment starts with understanding the infrastructure that powers it. This guide walks through the core components, how they connect, and the deployment architectures available — from a lightweight demo system to a full-scale production environment with batch processing.

Note: This guide covers infrastructure planning and setup. The tiCrypt installation and software deployment process is covered separately.


Core Components

tiCrypt is built from a set of modular components, each with a distinct role. Here's what powers the platform.

tiCrypt Backend

The backend is the heart of the system, composed of 11 services. The most critical ones include:

  • ticrypt-rest — The HTTPS entry point for the entire system. All other services depend on it. It runs behind Nginx as a reverse-proxied virtual domain.
  • ticrypt-auth — Handles authentication, authorization, and serves as the global coordinator across all backend services.
  • ticrypt-vm — Manages the full virtual machine lifecycle, including advanced features like SLURM integration for batch processing.
  • ticrypt-logger — Maintains a tamper-resistant, blockchain-structured relational log of all system activity, designed for processing by tiCrypt Audit.
  • ticrypt-proxy — Creates secure tunnels between users and their VMs, enabling RDP sessions, application access, and other connectivity.

tiCrypt Audit

tiCrypt Audit is a dedicated system for processing logs, generating reports and alerts, and running ad hoc queries. It is designed around three principles:

  • Isolation — Audit does not require direct access to the tiCrypt backend. The backend pushes live logs to Audit over port 25000, but the reverse path does not exist. This means security teams can use Audit without gaining access to any other part of the system.
  • Full History — Audit retains logs for the lifetime of the deployment. The complete system history can be reconstructed at any point in the future.
  • High Performance — Built on ClickHouse with specialized data-loading techniques, most ad hoc queries return in under a second. Individual reports export in 2–10 seconds, and generating thousands of reports takes only minutes.

Data Ingress

tiCrypt provides two mechanisms for securely acquiring data from external sources:

  • ticrypt-sftp — SFTP-based data ingestion. Requires an HTTPS endpoint and an SFTP port (22 or 2022).
  • ticrypt-mailbox — Web-based data ingestion. Requires an HTTPS endpoint.

Both services share the same underlying architecture and are intentionally deployed outside the secure infrastructure perimeter. This allows external collaborators to submit data without accessing the secure system. However, both require a network path to the tiCrypt backend REST interface — an important consideration if the backend sits behind a VPN.

Virtual Machine Hosting

tiCrypt manages one or more VM hosts with varying configurations of memory, CPU cores, and GPUs. The hardware does not need to be uniform across hosts.

VM hosts run secure, tiCrypt-managed virtual machines that interact with the backend and, in a tightly controlled manner, with each other. Direct internet access is not required — only connectivity to the backend server.

VM performance depends on three factors: host hardware, the distributed filesystem, and network speed. For production environments, high-performance storage and fast networking are essential.

Batch Processing with SLURM

tiCrypt supports batch processing through SLURM integration via a dedicated component called tiCrypt-host-manager, which coordinates between SLURM and the tiCrypt backend.

SLURM hosts require the same filesystem access and backend connectivity as standard VM hosts. While it's possible to run both interactive VMs and SLURM workloads on the same host, separating them simplifies the setup.


Setup Requirements

Connectivity

| Connection | Requirement |
| --- | --- |
| ticrypt-vm → VM Hosts | SSH access for VM lifecycle management |
| ticrypt-proxy → VM Hosts | Access to ports 5900–6256 |
| ticrypt-logger → tiCrypt Audit | Access to port 25000 |
| ticrypt-sftp / ticrypt-mailbox → ticrypt-rest | Access to the HTTPS frontend |
| ticrypt-rest, Audit, sftp, mailbox | Each requires its own Nginx frontend for HTTPS |
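Before installation, the required network paths can be smoke-tested with a simple TCP check. This is an illustrative helper, not part of tiCrypt; the hostnames below are placeholders for your environment:

```python
import socket

def check_paths(required, timeout=3.0):
    # Attempt a TCP connection for each (host, port, description) triple.
    results = {}
    for host, port, desc in required:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[desc] = True
        except OSError:
            results[desc] = False
    return results

# Placeholder hostnames — substitute your own.
REQUIRED = [
    ("vmhost1.example.edu", 22,    "ticrypt-vm -> VM hosts (SSH)"),
    ("vmhost1.example.edu", 5900,  "ticrypt-proxy -> VM hosts"),
    ("audit.example.edu",   25000, "ticrypt-logger -> tiCrypt Audit"),
    ("backend.example.edu", 443,   "sftp/mailbox -> ticrypt-rest (HTTPS)"),
]
# for desc, ok in check_paths(REQUIRED).items():
#     print(f"{desc:45s} {'open' if ok else 'BLOCKED'}")
```

Running such a check from each source machine before deployment catches firewall and routing problems early.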

DNS and Certificates

Each HTTPS-enabled service requires a dedicated virtual domain and its own TLS certificate. Multi-domain certificates are not recommended, as they are considered less secure. A suggested naming convention:

| Service | Subdomain Example |
| --- | --- |
| ticrypt-rest | backend.my_system.my_domain.edu |
| tiCrypt Audit | audit.my_system.my_domain.edu |
| ticrypt-sftp | sftp.my_system.my_domain.edu |
| ticrypt-mailbox | mailbox.my_system.my_domain.edu |

Port Access

All HTTPS traffic is served through Nginx with virtual domains and reverse-proxied to local ports (typically 8080–8084).

| Service | Port(s) | Notes |
| --- | --- | --- |
| ticrypt-rest | HTTPS → 8080 | Port 443 open to users |
| ticrypt-proxy | 6000–6100 | Same visibility as port 443 |
| tiCrypt Audit | 25000 (logs), HTTPS → 8081 | Port 443 open to admins |
| ticrypt-sftp | 2022 (SFTP), HTTPS → 8082 | Port 443 open to the world |
| ticrypt-mailbox | HTTPS → 8083 | Port 443 open to the world |
| SSH | 22 | Management access and Libvirt on VM hosts |
| VM Hosts | 5900–6256 | Port forwarding from the backend |

Storage

| Service | Mount Point | Minimum Size |
| --- | --- | --- |
| ticrypt-rest | /storage/vault | 100 GB+ |
| VM Hosts / ticrypt-vm | /storage/libvirt | 1 TB+ |
| tiCrypt Audit | /var/clickhouse | 10 GB+ |

Storage needs scale with usage. Large deployments can reach 10 TB+ for the vault and 1 PB+ for VM disk images.


Deployment Architectures

tiCrypt scales from a single-server demo to a multi-node production cluster. Below are the most common configurations.

Single Server (Demo/Test)

Everything — backend services, Audit, data ingress, and VM hosting — runs on one machine. This is suitable for demos and testing only, not production use.

Minimum specs: 32 cores (with virtualization extensions), 128 GB RAM, 1 TB storage.

Small Production System

A three-node setup that separates concerns for reliability and access control:

| Role | Specs | Access |
| --- | --- | --- |
| ticrypt-sftp + ticrypt-mailbox (VM) | 4 cores, 16 GB RAM, 100 GB storage | World-facing |
| tiCrypt Audit (VM) | 2+ cores, 16 GB+ RAM, 100 GB+ storage | Admin/security teams |
| Backend + VM hosting (server) | 64+ cores, 512 GB RAM, 10 TB+ storage | Internal |

Storage is locally attached to the backend server.

Production System with Interactive VMs

This architecture separates the backend from dedicated VM hosts for better scalability:

| Role | Specs | Access |
| --- | --- | --- |
| ticrypt-sftp + ticrypt-mailbox (VM) | 4 cores, 16 GB RAM, 100 GB storage | World-facing |
| tiCrypt Audit (VM) | 8+ cores, 64 GB+ RAM, 1 TB+ storage | Admin/security teams |
| Backend (server or VM) | 32 cores, 128 GB RAM | Internal |
| VM hosts (vm1, vm2, …) | Varies by workload | Internal |

Production System with Interactive VMs and Batch Processing

The most comprehensive deployment adds SLURM nodes alongside interactive VM hosts:

| Role | Notes |
| --- | --- |
| ticrypt-sftp + ticrypt-mailbox | World-facing VM |
| tiCrypt Audit | Admin/security-access VM |
| Backend | Dedicated server or VM |
| VM hosts (vm1, vm2, …) | Libvirt for interactive VMs |
| SLURM hosts (slurm1, slurm2, …) | SLURM + Libvirt for batch VMs |

This configuration scales to a large number of SLURM nodes. Special interactive VMs manage the formation and security of private SLURM clusters on top of the global SLURM scheduler — which is why direct, high-performance connectivity between VM hosts and SLURM hosts is required.

Small SLURM Demo System

A variation of the single-server setup with added SLURM capacity:

  • One server runs all tiCrypt components plus interactive VM hosting.
  • Two or more additional SLURM hosts handle batch processing.

Flexible by Design

tiCrypt's modular architecture means there is no single "correct" deployment. A research group running a handful of interactive VMs on a single server and a large institution operating hundreds of SLURM batch nodes across a dedicated cluster are both valid configurations. The same components simply scale and redistribute across available infrastructure. Storage backends, VM host hardware, and network topology can all vary to match what your environment already provides. As requirements evolve, components like additional VM hosts or SLURM nodes can be introduced without redesigning the existing setup.

Why tiCrypt Uses MFA — But Never Trusts It

· 5 min read
Thomas Samant

Security isn't just about having the right tools. It's about how you use them.

Multi-Factor Authentication has become a cornerstone of modern cybersecurity. Whether you're chasing CMMC compliance, meeting NIST standards, or simply trying to keep bad actors out, MFA is table stakes. Duo, Shibboleth, NetID — these tools are everywhere, and for good reason: they work.

So why does tiCrypt refuse to trust them?

The Problem With "Trusting" MFA

Most platforms treat MFA as the final word on identity. Pass the second factor, get access. Simple.

The trouble is that this creates a single point of failure hiding behind a false sense of security. If an attacker finds a way to spoof or hijack a session on the backend — after MFA has already done its job — the authentication is effectively bypassed. The lock on the front door doesn't matter if someone can walk in through the wall.

tiCrypt is built on a different philosophy: Zero Trust, implemented completely. That means we don't grant implicit confidence to any single system — not even MFA.

Decoupling Authentication From the Platform

Instead of weaving MFA directly into our backend, tiCrypt treats identity providers as independent "proof-providers." They vouch for a user. They do not get keys to the kingdom.

Here's how it works in practice:

  1. A small, minimal-footprint web page protected by Shibboleth and Duo handles the initial login. Upon successful authentication, a PHP script uses a private signing key to create a digitally signed message containing the user's identity and a timestamp.
  2. The corresponding public key is shared with tiCrypt via configuration — nothing more. The two systems never talk to each other directly.
  3. tiCrypt's backend informs the frontend of the MFA factors it needs satisfied before a session can begin, then uses that public key to cryptographically verify the signed message.

The result: even if the authentication server were compromised, an attacker would gain nothing useful. There is no direct path to the encrypted data on tiCrypt.
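The proof-provider pattern can be sketched in a few lines. The message shape, signature scheme (Ed25519), and freshness window below are assumptions for illustration — tiCrypt's actual signed-message format is not specified here:

```python
import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

MAX_AGE_SECONDS = 300  # assumed freshness window for the timestamp

def make_proof(signing_key, email):
    # Conceptually runs on the Shibboleth/Duo-protected page after MFA succeeds.
    message = json.dumps({"email": email, "ts": time.time()}).encode()
    return message, signing_key.sign(message)

def verify_proof(provider_public_key, message, signature, login_email):
    try:
        provider_public_key.verify(signature, message)  # cryptographic check
    except InvalidSignature:
        return False
    claim = json.loads(message)
    if time.time() - claim["ts"] > MAX_AGE_SECONDS:     # reject stale proofs
        return False
    return claim["email"] == login_email                # exact identity match

provider_key = Ed25519PrivateKey.generate()   # lives on the auth server
configured_pub = provider_key.public_key()    # shared with tiCrypt via config

msg, sig = make_proof(provider_key, "alice@university.edu")
assert verify_proof(configured_pub, msg, sig, "alice@university.edu")
# A single-character identity difference denies the session.
assert not verify_proof(configured_pub, msg, sig, "alicia@university.edu")
```

Note that the verifier needs only the public key from configuration — the two systems never exchange secrets or talk to each other directly.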

Split Credentials: Your Key, Your Control

Because external MFA factors are used, tiCrypt adds another layer of protection: split credentials.

Every user's private key is encrypted with AES-256. Decrypting it requires three components working together:

  • The encrypted private key (stored in the user's key file)
  • An initialization vector (IV) — securely and randomly generated
  • A cryptographic salt — also securely and randomly generated

Without MFA, the IV and salt live in the user's key file. With MFA enabled, they are stored on the server and protected by the MFA factors, effectively splitting the secret across two independent systems.

This has a powerful implication: even if a user's private key file is lost or stolen, it is useless without the server-side IV and salt. An attacker would be left trying to brute-force AES-256 directly — a task so computationally expensive it makes cracking an RSA-2048 key look affordable by comparison.

Split credentials also mean that when a user changes their password, all older copies of their key file are instantly invalidated. The server regenerates the IV and salt with every password change, so stale or stolen key files simply stop working.
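The split can be illustrated with a hedged sketch. The KDF, cipher mode, and parameters below are assumptions for demonstration, not tiCrypt's actual key file format — the only point carried over from the text is that the key file alone is useless without the server-held IV and salt:

```python
import os
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

ITERATIONS = 200_000  # assumed work factor, not tiCrypt's actual setting

def seal_private_key(private_key_bytes, password):
    # Fresh salt and IV per seal; regenerating them invalidates old files.
    salt, iv = os.urandom(16), os.urandom(16)
    kek = PBKDF2HMAC(hashes.SHA256(), 32, salt, ITERATIONS).derive(password)
    enc = Cipher(algorithms.AES(kek), modes.CTR(iv)).encryptor()  # AES-256
    key_file = enc.update(private_key_bytes) + enc.finalize()
    return key_file, (iv, salt)  # key file to the user; (iv, salt) to the server

def open_private_key(key_file, password, iv, salt):
    # Requires BOTH the key file and the server-held IV/salt.
    kek = PBKDF2HMAC(hashes.SHA256(), 32, salt, ITERATIONS).derive(password)
    dec = Cipher(algorithms.AES(kek), modes.CTR(iv)).decryptor()
    return dec.update(key_file) + dec.finalize()

kf, (iv, salt) = seal_private_key(b"-----USER PRIVATE KEY-----", b"passphrase")
assert open_private_key(kf, b"passphrase", iv, salt) == b"-----USER PRIVATE KEY-----"

# Password change: the server regenerates IV and salt, so the stale
# key file no longer decrypts even with the old passphrase.
kf2, (iv2, salt2) = seal_private_key(b"-----USER PRIVATE KEY-----", b"new pass")
assert open_private_key(kf, b"passphrase", iv2, salt2) != b"-----USER PRIVATE KEY-----"
```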

MFA Is a Challenge, Not the Authentication

This is where tiCrypt's approach diverges most sharply from conventional thinking. In most systems, passing MFA is the authentication — it's the gate that grants access. In tiCrypt, MFA is simply one of several challenges that must be satisfied before a session is established. It is never the thing doing the actual authenticating.

The real authentication in tiCrypt is always performed by the user's private key. When a session is negotiated, the server issues a randomly generated 32-byte challenge. The user's client must return a digitally signed response using their private key — a signature the server can verify against the corresponding public key. No private key, no session. Full stop.

MFA fits into this flow as an additional proof that must be presented alongside the signed challenge, not instead of it. The signed MFA certificate — produced by the Shibboleth/Duo login — is bundled with the challenge response and verified independently at the tiCrypt gateway. Critically, the email identity in the primary login and the email signed by the MFA provider must be an exact match. A single character difference results in a denied session, closing the door on subtle identity-substitution attacks.

Because the private key is the root of trust, tiCrypt stores no passwords — hashed or otherwise. The server freely shares users' public keys, just as SSL/TLS does, because there is nothing sensitive to expose. A full database breach via SQL injection would yield an attacker nothing actionable. The key cannot be reconstructed from anything the server holds alone.
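The challenge-response at the heart of this flow fits in a few lines. The signature scheme (Ed25519) is an assumption for illustration; what matters is the shape — a random 32-byte server challenge signed by the user's private key:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

user_key = Ed25519PrivateKey.generate()   # never leaves the user's device
server_copy = user_key.public_key()       # the server may share this freely

challenge = os.urandom(32)                # server: random 32-byte challenge
response = user_key.sign(challenge)       # client: signs with the private key

try:
    server_copy.verify(response, challenge)
    session_granted = True                # signature checks out
except InvalidSignature:
    session_granted = False               # no private key, no session
assert session_granted
```

Because each challenge is freshly random, a captured response is useless for replay against a future challenge.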

What This Means for Your Institution

| Traditional MFA Integration | tiCrypt's Approach |
| --- | --- |
| MFA baked into the backend | MFA treated as an external proof-provider |
| Compromise of auth = compromise of data | Compromise of auth server ≠ access to data |
| Session tokens can be intercepted or reused | Digital signatures and timestamps prevent replay attacks |
| One layer of trust | Multiple independent cryptographic layers |

Security That Earns Its Confidence

MFA is a powerful tool, but tools are only as good as the architecture around them. By refusing to grant MFA unconditional trust — and instead using cryptographic verification, split credentials, and strict identity matching — tiCrypt ensures that your institution's most sensitive research data stays protected even when individual components are tested.

In a true Zero Trust environment, nothing gets a free pass. Not even the second factor.

Want to learn more about tiCrypt's security architecture? Read our Security White Paper →

Getting Data Into the Enclave: tiCrypt's Ingress Methods Explained

· 6 min read
Thomas Samant

tiCrypt's security model is designed to protect data once it's inside the secure enclave. But in practice, the first question administrators and researchers ask is more immediate: how does data get in?

tiCrypt supports several ingress methods, each built for a different set of constraints — dataset size, whether the sender has tiCrypt credentials, where the data needs to land, and who owns the process. This post breaks down each option and when to use it.


Choosing the Right Method

Before diving into specifics, it helps to consider four factors:

Volume and cadence — Is this a one-time migration, a recurring data drop, or an iterative refresh during active research?

Sender identity — Is the sender a tiCrypt user, or an external collaborator without credentials?

Landing zone — Should data arrive in a user's Vault for staging, or land directly in a VM?

Operational ownership — Is the researcher handling ingress themselves, or is an administrator managing the intake?

Each ingress method maps to a different combination of these factors.


User-Initiated Methods

Local Upload (to Vault)

The most straightforward option. A logged-in tiCrypt user uploads files from their local workstation directly into their Vault.

Best for: Small to moderate datasets, ad hoc inputs, and interactive workflows — scripts, configuration files, reference data, and incremental additions that will be used inside a secure VM.

Keep in mind: This requires an active session, so it's not suitable for unattended or automated ingestion. Upload speed depends on the user's network connection, making it impractical for multi-terabyte migrations. Data lands in the Vault and needs to be moved to specific VMs as needed.

Inboxes (External Collaborator Drop)

Inboxes solve a common problem: getting data from someone who doesn't have a tiCrypt account. An Inbox is a designated directory within a user's Vault, exposed through an access point that external collaborators can use to submit data.

Inbox access points are configurable with maximum upload size, an expiration date, sender-facing instructions, and a choice of transfer method (URL or SFTP).

URL Upload

Generates a web link that the external collaborator opens in a standard browser. No account, no software — just a link and a file picker.

Best for: Time-boxed collection windows, low-barrier intake from collaborators who need the simplest possible workflow, and situations where issuing credentials to external parties is not desirable.

Keep in mind: Browser uploads work well for small to moderate files but aren't ideal for large or automated transfers. Once uploaded, tiCrypt users review and move the data as needed.

SFTP Upload

Provides sender-specific SFTP credentials (delivered through the Inbox URL) that collaborators use with any standard SFTP client.

Best for: Structured, repeatable transfers and larger datasets where browser uploads fall short. Also more compatible with automation and scripted workflows.

Keep in mind: Senders need an SFTP client or must integrate credentials into scripts, which adds complexity compared to the browser option. Like URL uploads, data lands in the Inbox and must be moved into VMs for analysis.

Inboxes are often the preferred option for third-party intake because they combine lifecycle controls (size limits, expiration) with the ability to accept data without distributing tiCrypt credentials.

Direct SFTP to VM

For authenticated tiCrypt users who need data inside a VM immediately, Direct SFTP to VM skips the Vault staging step entirely. The user generates SFTP credentials that push data straight into the target VM.

This requires the tiCrypt application to be running on the sending machine and an active VM connection to establish the secure pathway.

Best for: Workflows that demand rapid iteration — frequent incremental updates during active analysis, or any scenario where staging data through the Vault adds unnecessary delay.

Keep in mind: Both an active login and an established VM connection are required, so this method cannot run unattended. VM co-owners and managers have full drive access; basic users are limited to their home folder and any directories they've been explicitly granted access to.


Administrator-Initiated Methods

External Drive Builder

For large-scale migrations, the External Drive Builder offers a prepare-seal-deliver workflow. An administrator (or authorized operator) creates a virtual drive outside of tiCrypt, populates it with data, and then seals it using a manifest file. Sealing encrypts the drive and binds decryption and mounting rights to the user specified in the manifest. The sealed drive is then moved into the configured drive pool and made available within tiCrypt.

Best for: Large dataset migrations (5 TB+), project data seeding, and any scenario where data needs to be packaged, verified, and delivered as a complete, ready-to-use drive.

Keep in mind: This method involves a multi-step operational chain (prepare, seal, move, attach), which makes it more deliberate than other options — by design. The drive can be built entirely outside the tiCrypt environment before being introduced.

NFS Mounts Within Specific VMs

Administrators can configure NFS access within specific VMs, allowing data to be copied from institutional or project storage into secured storage inside the enclave (for example, encrypted drives attached to VMs).

Best for: Bridging existing institutional storage into tiCrypt. Particularly useful when external storage remains the source of truth and teams perform controlled copy-in operations as needed, or when transitioning established data pipelines into an enclave workflow.

Keep in mind: Scope control matters. Administrators should define clear access and duration criteria for each mount, with governance focused on the mount lifecycle, access minimization, and regular operational review.


Quick Reference

| Scenario | Recommended Method |
| --- | --- |
| Small, ad hoc researcher uploads | Local Upload |
| External collaborators without tiCrypt accounts | Inboxes (URL for simplicity, SFTP for repeatable workflows) |
| Active research sessions needing compute-ready placement | Direct SFTP to VM |
| Large-scale onboarding and migrations (5 TB+) | External Drive Builder |
| Institutional storage bridge with controlled copy-in | NFS Mounts Within Specific VMs |
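The quick reference can be read as a small decision function over the four factors from the start of this post. This toy helper just mirrors the guidance above — it is not a tiCrypt API, and real intake decisions involve more nuance:

```python
def suggest_ingress(size_tb, sender_has_account, land_in_vm, admin_managed):
    """Return a suggested ingress method from the four planning factors."""
    if admin_managed:
        # Admin-owned intake: sealed drives for bulk, NFS for bridging.
        return "External Drive Builder" if size_tb >= 5 else "NFS mount within a VM"
    if not sender_has_account:
        return "Inbox (URL or SFTP)"     # external collaborator drop
    if land_in_vm:
        return "Direct SFTP to VM"       # compute-ready placement
    return "Local upload to Vault"       # small, ad hoc researcher uploads

assert suggest_ingress(0.01, True, False, False) == "Local upload to Vault"
assert suggest_ingress(0.5, False, False, False) == "Inbox (URL or SFTP)"
assert suggest_ingress(10, True, True, True) == "External Drive Builder"
```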

Every Path Leads to the Enclave

tiCrypt's ingress methods cover a wide range of scenarios, from a researcher dragging a file into their Vault to an administrator sealing a multi-terabyte drive for migration. The common thread is that every method is designed to bring data into the secure enclave without compromising the boundary that protects it. The right choice depends on who is sending the data, how much of it there is, and where it needs to go.

Management at Scale with tiCrypt

· 9 min read
Betuel Gag
Lead Documentation Specialist

Managing a handful of users across a few projects is straightforward. Managing hundreds — or thousands — across dozens of projects, each with its own compliance requirements, VM infrastructure, and access controls, is a different challenge entirely. tiCrypt is built for the latter.

This post walks through the features that make large-scale management practical: bulk user operations, system-wide controls, VM administration, and the tools that tie it all together.


Global Management

At scale, performing actions one user or one project at a time is not sustainable. tiCrypt's Management section is designed around bulk operations that reduce repetitive work and minimize the risk of human error.

Announcements and Communication

When coordinating across large teams, clear communication channels matter. tiCrypt provides several ways to reach users at scale:

Global Announcements allow Project Managers and Admins to send secured messages to all users or admins within the system. This is especially useful before deploying large projects or rolling out changes that affect multiple teams. See Make an Announcement in a Project from Management for setup instructions.

Bulk Email offers a quick way to reach project members outside the platform. Admins can copy or download all project member email addresses with a single click, making it easy to send communications through external channels. See Bulk Email a User from the Vault.

Global Login Messages let you display a system-wide notice on the login screen — ideal for planned maintenance windows, outage notifications, or major project updates. Messages support custom colors, symbols, and display frequency settings. See Display a Global Login Message.

Global Terms of Service prompts can surface important policy updates or operational notices (e.g., "The system will be offline for 14 days for scheduled maintenance") that every user must acknowledge. See Implement Terms of Service into the System.

User Profiles and Role Management

Organizing a large user base manually is tedious and error-prone. User Profiles solve this by letting you define reusable personas — bundles of roles and permissions — that can be applied to users in bulk.

For example, in a project with over 1,000 users, you might create profiles based on management requirements, compliance tiers, or access levels. Once defined, these profiles can be assigned to multiple users at once, ensuring consistent permissions across the board.

Use with care. Misconfigured profiles can unintentionally block user actions. Always review the permissions a profile grants before applying it broadly.

See Create a User Profile and Apply User Profiles.
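Conceptually, a profile is a named bundle of roles applied in one bulk operation rather than per-user edits. The data model and role names below are hypothetical, not tiCrypt's real objects:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    email: str
    roles: set = field(default_factory=set)

def apply_profile(users, profile_roles):
    # One bulk operation instead of a thousand per-user edits. Review
    # what profile_roles grants before applying it broadly.
    for user in users:
        user.roles |= profile_roles
    return users

team = [User(f"user{i}@lab.example.edu") for i in range(1000)]
apply_profile(team, {"vault:read", "vm:launch", "project:member"})
assert all("vm:launch" in u.roles for u in team)
```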

Beyond profiles, tiCrypt supports several other bulk role and status operations:

Certifications

Projects with classified or tagged data often require users to hold specific certifications before gaining access to certain security levels.

Add multiple certifications at once to certify a group of users for a security requirement within a given security level. See Certify User(s) with a Certification for a Security Requirement.

Bulk-expire certifications when requirements change. This revokes access for all affected users in a single action. See Mark a User Certification as Expired.

Project Membership

Adding users to projects is one of the most common administrative tasks, and tiCrypt makes it efficient at any scale.

Bulk Deletion

Super-admins can delete most objects in bulk from the Management section. The exception is cryptographically enhanced objects (Groups, VMs, Drives, etc.), which can only be deleted by their owner.

Bulk deletion applies to users, sub-admin rights, user profiles, teams, projects, and user certifications. See the relevant documentation for each object type.

Data Export

Admins and project managers can export data from the Management and Virtual Machines sections in JSON or CSV format. Export options are available for most tiCrypt objects, with the choice to export all items, only visible items, or a specific selection. See Export a System Service in CSV Format and Export a System Service in JSON Format.

Escrow Operations

tiCrypt supports bulk operations for escrow user management.


Virtual Machine Management at Scale

Managing VMs individually becomes impractical as infrastructure grows. tiCrypt provides bulk VM operations across hosts, projects, and user access.

Host-Level Operations

Hardware Setup Management

Hardware setups define the templates and configurations available to VMs. tiCrypt supports several bulk operations for managing them.

Running VM Operations

VM User Profiles

Just as system-level User Profiles organize users across projects, VM User Profiles organize permissions within the virtual machine environment. These profiles decouple VM-level roles from system-level roles, enabling flexible access control:

  • A system super-admin can be a standard VM user if their VM profile is configured that way.
  • A standard system user can hold a VM manager role within a specific machine.

Each user can hold one VM profile per virtual machine, and profiles can be assigned to multiple users at once. See Add User Profiles in a Virtual Machine and What is the Purpose of VM Profiles?.

Access Directories

For large VM groups, access directories control which users can reach shared directories. Four access levels are available:

  • Everybody — All VM users.
  • Nobody — Only the VM owner.
  • Managers — Only users with a manager role in the VM.
  • Custom — Specific users designated by the VM owner or managers.

See Create an Access Directory for a Virtual Machine Group.

Drive Operations

The Terminals

When managing complex workflows across many VMs, the Terminals feature provides a consolidated view of all running VMs. It allows you to monitor and interact with multiple machines simultaneously — a valuable tool when orchestrating large-scale operations. See Access the Terminals.


Designed for Scale

tiCrypt's management tools are built around a simple principle: any action you can perform on one object, you should be able to perform on many. From user onboarding and certification management to VM lifecycle operations and data export, bulk actions are native to the platform — not an afterthought. The result is a system that remains manageable whether you're running a small team or a large, multi-project deployment.

Running R in Offline Secure tiCrypt VMs

· 3 min read
Thomas Samant
Thomas Samant

R is a staple of statistical computing and graphics across research, healthcare, finance, and government. But in a tiCrypt environment, secure VMs operate offline by design. There's no direct internet access, which means no connection to CRAN or other package repositories.

That doesn't mean you're stuck without libraries. tiCrypt supports two approaches to R package management in offline VMs: local installation from transferred files, and installation from a CRAN mirror on an NFS mount.


Option 1: Install Packages Locally

This approach works in any tiCrypt deployment. You download packages on an internet-connected machine, transfer them into the VM, and install from local files.

Step 1: Download the packages. On a machine with internet access, download the R package files (typically .tar.gz) from CRAN or another repository. Be sure to grab any dependencies as well.

Step 2: Transfer to the VM. Move the downloaded files into your tiCrypt VM using one of the approved secure transfer methods (e.g., Vault upload, SFTP, or drive attachment).

Step 3: Install from local files. In your R console, run:

install.packages("path_to_file", repos = NULL, type = "source")

Replace path_to_file with the actual path to the downloaded package file.


Option 2: Install from an NFS-Mounted CRAN Mirror

If your deployment includes a CRAN repository mirrored on an NFS share accessible to the VM, you can skip the file transfer step entirely.

Step 1: Set the repository. Point R's default package repository at the NFS mount where the CRAN mirror resides (install.packages expects a repository URL, hence the file:// scheme):

options(repos = c(CRAN = "file:///path_to_cran_mirror"))

Replace path_to_cran_mirror with the actual mount path.

Step 2: Install packages. Install directly from the mirror:

install.packages("PACKAGE_NAME", type = "source")

Replace PACKAGE_NAME with the name of the package you want to install.


Deployments with a Pre-Configured CRAN Mirror

Some tiCrypt deployments come with RStudio available directly from the applications toolbar and a CRAN mirror already mounted via NFS. In these environments, the setup is even simpler. The R startup message may also include package installation instructions as a quick reference.

Launch RStudio from the applications toolbar.

Point R at the pre-configured CRAN mirror. Your administrator can provide the exact mount path, but a typical example looks like:

options(repos = c(CRAN = "file:///mnt/modules/cran"))

Install packages using either of the following approaches:

If the default repository has already been set (for example, via the site-wide Rprofile):

install.packages("PACKAGE_NAME", type = "source")

Or specify the contriburl directly:

install.packages("PACKAGE_NAME", contriburl = "file:///mnt/modules/cran2/src/contrib", type = "source")

Offline Doesn't Mean Limited

tiCrypt's offline VM model exists to prevent data exfiltration and maintain a strong security boundary. Package management works within that model. Whether you're transferring files manually or pulling from a local CRAN mirror, the full R ecosystem remains available to researchers working inside the enclave.

Interplay between the filesystem and tiCrypt

· 7 min read
Thomas Samant
Thomas Samant

tiCrypt Vault Storage

The tiCrypt Vault offers a file-system-like facility that allows files and directories to be created and used. All metadata (file properties, directory entries, access information, and decryption keys) is stored in the MongoDB database used by the tiCrypt-file storage service.

In the tiCrypt Vault, file content is broken into 8 MB chunks, each encrypted independently with the file key. On disk, each chunk occupies 8 MB + 64 bytes (except the last, possibly incomplete chunk); the extra 64 bytes hold the IV (initialization vector) for AES encryption. The chunks of each file are numbered from 0 onwards and stored in a directory structure derived from the file ID, visible only to the tiCrypt backend (and preferably not to the VM hosts). The storage location can be configured in the configuration file of the tiCrypt-storage service.
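As a rough illustration of this layout, the on-disk footprint of a vault file can be estimated from the chunk size. The arithmetic below is a sketch only: it assumes 8 MB means 8 × 1024 × 1024 bytes and that the final partial chunk also carries its own 64-byte IV, neither of which is confirmed constants of tiCrypt.

```shell
# Sketch: estimate the on-disk size of a vault file from the chunk layout.
# Assumptions (not confirmed by tiCrypt docs): 8 MB = 8*1024*1024 bytes,
# and the final partial chunk also carries a 64-byte IV.
file_size=20000000                 # plaintext size in bytes (example value)
chunk=$((8 * 1024 * 1024))         # payload bytes per full chunk
full=$(( file_size / chunk ))      # number of complete chunks
rest=$(( file_size % chunk ))      # bytes left for the final partial chunk
on_disk=$(( full * (chunk + 64) ))
if [ "$rest" -gt 0 ]; then
  on_disk=$(( on_disk + rest + 64 ))
fi
echo "$on_disk"                    # total on-disk bytes across all chunk files
```

The overhead is tiny: 64 bytes per 8 MB chunk, well under a hundredth of a percent.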

tiCrypt is not opinionated about the file system used for this storage or its integration with other systems, but for compliance reasons it is recommended that access be restricted to the tiCrypt backend only.

Without the decryption keys, which only users can recover, the content of the chunk files is entirely indecipherable. It is therefore safe to back up these files using any method (including non-encrypted backups, cloud storage, etc.). The strong encryption, coupled with the fact that not even administrators hold the keys, ensures that from a compliance point of view this content can be replicated outside the secure environment.

tiCrypt Encrypted Drives in Libvirt

tiCrypt virtual machines use a boot disk image that is not encrypted and one or more encrypted drives. Both disk images and encrypted drives are stored as files in the underlying distributed file system available to all Libvirt host servers. The specific mechanism uses the notion of Libvirt disk pools. Independent disk pools can be defined for disk images, encrypted drives, ISOs, etc. Each pool is located in a different directory within the distributed file system.

Libvirt (and tiCrypt, by extension) is agnostic to the choice of file system where the various disk pools are defined. A good practice is to place the different disk pools on a shared distributed file system (preferably in the same location) and to mount that file system on all VM hosts. Any file system can be used, including NFS, BeeGFS, Lustre, etc.
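As a sketch, a directory-backed Libvirt storage pool for encrypted drives could be defined with XML along these lines (the pool name and path are hypothetical; tiCrypt configures its own pools):

```xml
<!-- Hypothetical directory-backed pool; the name and path are illustrative only -->
<pool type='dir'>
  <name>ticrypt-drives</name>
  <target>
    <!-- Directory on the shared file system mounted on all VM hosts -->
    <path>/mnt/shared/ticrypt/drives</path>
  </target>
</pool>
```

A definition like this would be loaded with virsh pool-define and activated with virsh pool-start; separate pools for disk images, ISOs, etc. follow the same pattern with different target paths.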

As part of the virtualization mechanism, Libvirt makes the files corresponding to drives stored on the host file system appear as devices to the OS running within the VM. Any writes to the virtual device get translated into changes in the underlying file. The situation is somewhat more complex when snapshots are used, since multiple files on disk together form the virtual device.

Encrypted Drive Creation

Upon drive creation, tiCrypt instructs Libvirt to create the drive, which results in a file being created in the corresponding drive pool of the underlying file system. Two drive formats are supported: raw, with extension .raw, and QCOW2, with extension .qcow2.

For a raw drive, a file as large as the indicated drive size is created, with its content initialized to zeros (corresponding to a blank drive). Writes to the virtual drive result in writes to the corresponding file at the same position (e.g., if block 10244 of the virtual drive is written, block 10244 of the raw file changes as well).

For a QCOW2 drive, only changed blocks are written; the file format is considerably more complex and supports advanced features such as copy-on-write. The file starts small (a few megabytes) when the drive is new and grows as more data is written to the disk.
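The allocation behavior of a blank raw drive can be illustrated with ordinary file tools. The sketch below (plain GNU coreutils on Linux, not tiCrypt commands) creates a sparse file analogous to a freshly created raw drive: the apparent size matches the drive size, while almost no disk space is actually allocated until blocks are written.

```shell
# Create a 1 GiB sparse file, analogous to a freshly created blank raw drive.
truncate -s 1G blank-drive.raw
apparent=$(stat -c %s blank-drive.raw)                # size readers see (bytes)
allocated=$(( $(stat -c %b blank-drive.raw) * 512 ))  # space actually used (bytes)
echo "apparent=$apparent allocated=$allocated"
rm -f blank-drive.raw
```

A QCOW2 file achieves a similar effect (small at first, growing with writes), but through its own block-mapping format rather than file-system sparseness.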

The qemu-img tool can be used to convert between the two formats, although tiCrypt normally sets up drives without needing it.

A newly created tiCrypt disk is blank. No formatting of the drive or any other preparation has been performed. The drive will be formatted the first time it is attached to a tiCrypt virtual machine. The main reasons for this are:

  • The encryption/decryption key for the encrypted drive is kept secret from the infrastructure. This includes the tiCrypt backend, Libvirt, and underlying VM host.
  • The choice of file system is delegated to the operating system inside the VM and the tiCrypt VM Controller. Libvirt is not aware of, nor does it need to know, the actual file system on the drive. For Linux-formatted drives, inspecting the files backing the drives reveals nothing: not even whether the drive is formatted at all, let alone the type of file system or its content.

Encrypted Drive Formatting

As far as Libvirt is concerned, only low-level disk reads and writes exist. Whatever operation the operating system performs is translated into read/write operations on the virtual disk; these, in turn, become read/write operations on the underlying file in the disk pool.

In Windows, a standard NTFS file system is created, but BitLocker is turned on immediately, before the drive is made available to the user. This ensures that all files created subsequently are encrypted. BitLocker uses so-called "full volume encryption": all new data is encrypted, including metadata. An external tool scanning the backing file could determine that the drive is NTFS-formatted and read any non-encrypted content; since tiCrypt turns on encryption immediately, minimal information is visible.

In Linux, the LUKS full-disk encryption mechanism is used. It places an encryption block layer between the raw drive (here, the virtual drive) and the file system (usually EXT4). This way, absolutely all information on the disk is encrypted. An external tool can only tell which disk blocks have been written to (are non-zero) but can derive no information about their content.

tiCrypt Non-secure Drives

Two non-secure drives are supported in tiCrypt: ISOs and read-only NFS shares.

Attaching ISOs

ISOs are made available using read-only CD-ROM devices. As such, they are always safe to mount in a secure tiCrypt VM. Linux and Windows can readily mount such ISOs and make them available as "drives."

ISOs are particularly useful if the NFS shares described below are not used. For example, Python or R packages could be made available as ISOs so that various VMs can install the required packages locally.

Attaching NFS file systems

By allowing, through firewall rules, access to a local NFS server, various tiCrypt VMs can mount a common file system for the purpose of accessing public (non-secure) data, packages, software, etc.

From a security point of view, the NFS server should export the data as read-only. tiCrypt secure VMs should never be allowed to mount a read-write NFS share, since data could then be exfiltrated, defeating the extensive protections tiCrypt puts in place against data exfiltration. It would also unquestionably make the tiCrypt deployment non-compliant.
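On a Linux NFS server, a read-only export enforcing this rule could look like the following /etc/exports fragment (the export path and client subnet are hypothetical):

```
# /etc/exports -- hypothetical read-only export for tiCrypt VM hosts
# 'ro' enforces read-only at the server; 'root_squash' maps root to an
# unprivileged user; 'no_subtree_check' is the usual recommended default.
/srv/cran-mirror  10.0.0.0/24(ro,root_squash,no_subtree_check)
```

Enforcing ro on the server side means that even a misconfigured mount inside a VM cannot turn the share into a write path out of the enclave.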

A further restriction concerns the location of the NFS server: it must be controlled by the tiCrypt system administrators and be part of the tiCrypt system envelope; for example, one of the servers in the tiCrypt infrastructure can take this role. This restriction stems from compliance considerations: the security envelope extends to all parts of the system, so a remote NFS server would become part of the secure environment and be subject to all of its security restrictions.

A practical recommendation is to create a local NFS server inside the security envelope and a regular