STEK Share Plugin

This plugin coordinates the STEK (Session Ticket Encryption Key) among ATS instances running in a group. As the ID-based session resumption rate continues to decrease, this new plugin will replace the SSL Session Reuse plugin.

How It Works

This plugin implements the Raft consensus algorithm <https://raft.github.io/> to elect a leader. The leader periodically creates a new STEK and shares it with all the other ATS boxes in the group. When the plugin starts up, it automatically joins the cluster of all the other ATS boxes in the group, which then elects a leader. The plugin uses the TSSslTicketKeyUpdate API to update ATS with the latest two STEKs it has received.
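
For illustration only, here is a minimal sketch of what that update step could look like. It assumes each key follows ATS's 48-byte session ticket key layout (16-byte key name, 16-byte HMAC secret, 16-byte AES key); the struct and function names are hypothetical and not the plugin's actual code.

::

#include <cstring>
#include <ts/ts.h>

// Hypothetical 48-byte ticket key layout, mirroring ATS's session ticket key format.
struct TicketKey {
  unsigned char key_name[16];
  unsigned char hmac_secret[16];
  unsigned char aes_key[16];
};

// Push the newest key (used to encrypt new tickets) and the previous key
// (kept so outstanding tickets can still be decrypted) into ATS.
static void
update_ats_with_steks(const TicketKey &newest, const TicketKey &previous)
{
  char buf[2 * sizeof(TicketKey)];
  std::memcpy(buf, &newest, sizeof(TicketKey));
  std::memcpy(buf + sizeof(TicketKey), &previous, sizeof(TicketKey));
  if (TSSslTicketKeyUpdate(buf, static_cast<int>(sizeof(buf))) != TS_SUCCESS) {
    TSError("[stek_share] TSSslTicketKeyUpdate failed");
  }
}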

All communication is encrypted. All the ATS boxes participating in STEK sharing must have access to the cert/key pair.

Note that since this plugin only updates the STEK every few hours, all Raft-related state is kept in memory. Some of the code is borrowed from the examples in the NuRaft library <https://github.com/eBay/NuRaft>, which this plugin uses.

Building

This plugin uses the NuRaft library <https://github.com/eBay/NuRaft> for leader election and communication. The NuRaft library must be installed for this plugin to build. Its location can be specified with the --with-nuraft argument to configure.

This plugin also uses the YAML-CPP library <https://github.com/jbeder/yaml-cpp> for reading the configuration file. The YAML-CPP library must be installed for this plugin to build. Its location can be specified with the --with-yaml-cpp argument to configure.

As this is one of the experimental plugins, the --enable-experimental-plugins option must also be given to configure to build it.
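
Putting these together, a configure invocation along the following lines should work; the library prefixes are placeholders and can be omitted if the libraries are installed in standard locations.

::

./configure --enable-experimental-plugins --with-nuraft=/path/to/nuraft --with-yaml-cpp=/path/to/yaml-cpp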

Config File

STEK Share is a global plugin. Its configuration file is written in YAML and is given as an argument to the plugin in plugin.config.

::

stek_share.so etc/trafficserver/example_server_conf.yaml

Available options:

  • server_id - A unique ID for the server.

  • address - Hostname or IP address of the server.

  • port - Port number for communication.

  • asio_thread_pool_size - [Optional] Thread pool size for the ASIO library <http://think-async.com/Asio/>. Default size is 4.

  • heart_beat_interval - [Optional] Heartbeat interval of the Raft leader; must be less than "election_timeout_lower_bound". Default value is 100 ms.

  • election_timeout_lower_bound - [Optional] Lower bound of Raft leader election timeout. Default value is 200 ms.

  • election_timeout_upper_bound - [Optional] Upper bound of Raft leader election timeout. Default value is 400 ms.

  • reserved_log_items - [Optional] The maximum number of log entries preserved ahead of the last snapshot. Default value is 5.

  • snapshot_distance - [Optional] The number of log appends for each snapshot. Default value is 5.

  • client_req_timeout - [Optional] Client request timeout. Default value is 3000 ms.

  • key_update_interval - The interval between STEK updates, in seconds.

  • server_list_file - Path to a file listing all the servers that are supposed to be in the Raft cluster.

  • root_cert_file - Path to the root CA file.

  • server_cert_file - Path to the cert file.

  • server_key_file - Path to the key file.

  • cert_verify_str - SSL verification string, for example “/C=US/ST=IL/O=Yahoo/OU=Edge/CN=localhost”

Example Config File

server_id: 1
address: 127.0.0.1
port: 10001
asio_thread_pool_size: 4
heart_beat_interval: 100
election_timeout_lower_bound: 200
election_timeout_upper_bound: 400
reserved_log_items: 5
snapshot_distance: 5
client_req_timeout: 3000 # this is in milliseconds
key_update_interval: 3600 # this is in seconds
server_list_file: /abs/path/to/server_list.yaml
root_cert_file: /abs/path/to/ca.pem
server_cert_file: /abs/path/to/server.pem
server_key_file: /abs/path/to/server.key
cert_verify_str: /C=US/ST=IL/O=Yahoo/OU=Edge/CN=localhost

Server List File

The server list file mentioned above is also in YAML. Each entry contains the following fields:

  • server_id - ID of the server.

  • address - Hostname or IP address of the server.

  • port - Port number of the server.

Example Server List File

- server_id: 1
  address: 127.0.0.1
  port: 10001
- server_id: 2
  address: 127.0.0.1
  port: 10002
- server_id: 3
  address: 127.0.0.1
  port: 10003
- server_id: 4
  address: 127.0.0.1
  port: 10004
- server_id: 5
  address: 127.0.0.1
  port: 10005