Watchdog Server User Guide
Overview
This guide explains how the watchdog server works in simple terms. It focuses on how the server checks a controller's license and provides the correct settings based on which department it belongs to. It also includes a new section on the key expiry feature and instructions on accessing the Grafana dashboard for data analytics.
Workflow
Step 1: Sending Information from the Controller
The controller gathers details about itself, such as:
- Unique ID (GUID)
- Hostname (name of the computer)
- Version (software version)
- IP addresses
- MAC address (unique identifier for network interfaces)
- Operating system (OS)
- Label (a tag to identify the machine)
This information is sent to the /api/check-license/ endpoint on the watchdog server.
Step 2: Checking the License
The server receives the information and checks if the license is valid. It ensures all required details are present and that the GUID is formatted correctly. If everything checks out and the GUID is not already in use, it registers the GUID. If the number of active devices exceeds the allowed limit, the request is denied.
Step 3: Matching Information to Configuration Settings
The server reads the configuration and mapping files. It uses the details from the controller to find the best matching department based on criteria like label, IP address, hostname, MAC address, and operating system.
Step 4: Selecting the Configuration for the Department
Once the right department is identified, the corresponding settings from the browsermon-watchdog.conf file are selected. These settings are customized based on the controller's operating system (e.g., the log directory path is set differently for Windows and Linux).
Step 5: Sending the Configuration to the Controller
The new configuration from browsermon-watchdog.conf is sent to the controller along with the valid-license message.
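The exchange in Steps 1-5 can be pictured with a short sketch. This is illustrative only: the server address, the use of the requests library, and the shape of the response are assumptions, not the server's exact schema.

import requests  # assumed HTTP client

# Details the controller gathers about itself (Step 1).
controller_info = {
    "guid": "123e4567-e89b-12d3-a456-426614174000",
    "hostname": "newyork-hr1",
    "version": "1.0.0",
    "ip_addresses": ["123.11.219.5"],
    "mac_address": "23:ab:123:45:67:89",
    "os": "linux",
    "label": "hr",
}

# Send the details to the watchdog server's check-license endpoint.
response = requests.post(
    "https://watchdog.example.com/api/check-license/",  # hypothetical server address
    json=controller_info,
    timeout=10,
)
response.raise_for_status()

# The server validates the license, matches a department, and replies with
# the department's settings from browsermon-watchdog.conf (Steps 2-5).
print(response.json())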
Key Expiry Feature
Overview
The key expiry feature allows administrators to set an expiry date for the keys used by controllers. This ensures controllers must periodically check in with the watchdog server to renew their keys, enhancing security and control over access.
Workflow
Step 1: Generating and Assigning Keys
When a controller initially registers with the watchdog server or when a key expires, the server generates a new key with an associated expiry date. This key is then assigned to the controller.
Step 2: Checking Expiry
Each time the controller communicates with the watchdog server, the server checks the expiry date of the assigned key.
Step 3: Logging Remaining Days
Upon successful communication, the watchdog server logs the number of days remaining until the key expires, so administrators can monitor upcoming renewals.
Step 4: Key Expiry
Once the expiry date is reached, the key is considered expired. The controller will no longer be able to receive configurations or updates from the watchdog server until a new key is generated and assigned.
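A minimal sketch of this check, assuming the expiry date is available as a timezone-aware datetime (how the server actually stores key state is not shown here):

from datetime import datetime, timezone

def remaining_days(expiry_date: datetime) -> int:
    # Days left until the key expires (negative once it has expired).
    return (expiry_date - datetime.now(timezone.utc)).days

def check_key(expiry_date: datetime) -> bool:
    days_left = remaining_days(expiry_date)
    if days_left < 0:
        # Step 4: the key has expired; the controller receives no
        # configuration until a new key is generated and assigned.
        return False
    # Step 3: on successful communication, log the remaining days.
    print(f"Key expires in {days_left} day(s)")
    return True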
Implementation Details
Configuration
Administrators can configure the expiry duration for keys in the watchdog.conf file.
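For illustration only, such a setting might look like the excerpt below; the option name key_expiry is hypothetical, so check your watchdog.conf for the actual key name and format.

# watchdog.conf (hypothetical excerpt)
# key_expiry = 30d    # keys issued by the server would expire after 30 days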
Logging
The remaining days until key expiry are logged in the watchdog server's log files. Administrators can monitor these logs to ensure timely renewal of keys.
Grafana Dashboard
The watchdog server integrates with Grafana to provide a comprehensive interface for viewing history and data analytics.
Accessing the Grafana Dashboard
Users can view the Grafana dashboard by navigating to http://localhost:1514 in their web browser. This dashboard presents the history and analytics data collected by the watchdog server in an intuitive and visual format.
Benefits of the Grafana Dashboard
- Data Visualization: The Grafana dashboard offers various charts, graphs, and tables to help users visualize historical data and trends.
- Customizable Views: Users can customize the dashboard to display the most relevant metrics and information for their needs.
- Real-Time Monitoring: The dashboard allows for real-time monitoring of data, providing up-to-date insights into the performance and status of the watchdog server.
Configuration Files
Example mapping.conf File
This file contains mappings for different departments, specifying conditions such as the hostname pattern, label, operating system, IP address range, and MAC address pattern. It can be changed at runtime.
The path for changing this file at runtime is /opt/watchdog/watchdog/mapping.conf.
# mappings.conf
# File to define groups for the browsermon controllers
# based on any criteria (guid, hostname, mac, version, ip, os, label)
[Staff]
host=austin-*
label=staff
os=windows
[HR]
host=newyork-*
os=linux
label=hr
address=123.11.219.0/24
mac=23:ab:123:*
[Accounts]
host=sunnyvale-*
os=linux
[CEO]
host=chicago-ceo*
label=ceo
os=windows
Example browsermon-watchdog.conf File
This file defines the settings for each department, including the browser type, mode, schedule window, log directory, log mode, rotation interval, and Kafka mode. It can be changed at runtime.
The path for changing this file at runtime is /opt/watchdog/watchdog/browsermon-watchdog.conf.
[HR]
browser=firefox
mode=scheduled
schedule_window=1m
logdir=/opt/browsermon/logs
logmode=json
rotation=1h
kafka_mode=false
eti_mode=false
cache_ttl=30d
cache_max_size=1000
[Accounts]
browser=chrome
mode=scheduled
schedule_window=1m
logdir=/opt/browsermon/logs
logmode=csv
rotation=1h
kafka_mode=true
eti_mode=false
cache_ttl=30d
cache_max_size=1000
[CEO]
browser=firefox
mode=scheduled
schedule_window=1m
logdir=C:\\browsermon\\history
logmode=csv
rotation=1h
kafka_mode=true
eti_mode=false
cache_ttl=30d
cache_max_size=1000
Customizing Information for Different Departments
Users can customize the information sent from the controller to get different settings based on their department. The information should include fields like guid, hostname, version, ip_addresses, mac_address, os, and label.
Example Information for HR Department
{
"guid": "123e4567-e89b-12d3-a456-426614174000",
"hostname": "newyork-hr1",
"version": "1.0.0",
"ip_addresses": ["123.11.219.5"],
"mac_address": "23:ab:123:45:67:89",
"os": "linux",
"label": "hr"
}
Example Information for CEO Department
{
"guid": "987e6543-e21b-12d3-a456-426614174999",
"hostname": "chicago-ceo1",
"version": "1.0.0",
"ip_addresses": ["192.168.100.100"],
"mac_address": "12:34:56:78:90:ab",
"os": "windows",
"label": "ceo"
}
EUNOMATIX Threat Intel (ETI)
- ETI Mode:
  - When eti_mode = true, the data collector service:
    - Runs every midnight.
    - Collects new threat intel from URLhaus and PhishTank.
    - Stores the collected data in a new Elasticsearch index running on port 9200.
- Real-Time Malicious URL Identification:
  - The Browsermon controller:
    - Interacts with the ETI service to assess threats in real time.
    - Leverages ETI intelligence for threat assessment.
- Local URL Cache (see the sketch after this list):
  - Browsermon maintains a local cache to:
    - Minimize redundant ETI queries.
    - Optimize performance and reduce unnecessary requests.
  - Cache configuration:
    - TTL (Time-To-Live): default 30 days; customizable in minutes, hours, or days.
    - Maximum Size: default 1000 URLs; customizable between 100 and 5000 to control memory usage and prevent excessive growth.
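A minimal sketch of such a TTL- and size-bounded cache, using the defaults above (the real Browsermon cache internals are not shown here; this is an illustration of the idea):

import time
from collections import OrderedDict

class URLCache:
    def __init__(self, ttl_seconds=30 * 24 * 3600, max_size=1000):
        # Defaults mirror the documented values: 30-day TTL, 1000 URLs.
        self.ttl = ttl_seconds
        self.max_size = max_size
        self._entries = OrderedDict()  # url -> (verdict, stored_at)

    def get(self, url):
        entry = self._entries.get(url)
        if entry is None:
            return None  # not cached: the controller must query ETI
        verdict, stored_at = entry
        if time.time() - stored_at > self.ttl:
            del self._entries[url]  # expired: force a fresh ETI lookup
            return None
        return verdict

    def put(self, url, verdict):
        # Insert or refresh the entry, then evict the oldest one if over capacity.
        self._entries[url] = (verdict, time.time())
        self._entries.move_to_end(url)
        if len(self._entries) > self.max_size:
            self._entries.popitem(last=False)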
How the Matching Works
- Label Matching: This is the highest priority. If the controller's label field matches the label specified in the mapping.conf file, the department associated with that label is selected immediately.
- Hostname Matching: If no label match is found, the system checks whether the controller's hostname matches the pattern defined for a department. A wildcard (*) can be used in the hostname pattern to allow partial matching.
- IP Address Matching: If neither label nor hostname matches, the IP address is compared. The controller's IP addresses are checked to see whether any of them fall within the subnet ranges specified for a department.
- MAC Address Matching: If the IP address does not match, the system tries to match the controller's MAC address using a pattern that can include wildcards.
- Operating System (OS) Matching: As the last option, the controller's OS is compared against the OS specified for the department. The system performs a partial match if a wildcard (*) is used, so different versions of the same OS can be grouped together.
The matching process is NOT case-sensitive.
The process follows a first-match-wins strategy. As soon as a match is found, the department associated with that match is selected, and the remaining conditions are ignored.
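The priority order described above can be sketched as follows. This is a simplified illustration of the first-match-wins strategy, not the server's actual implementation; the mappings argument mirrors the structure of mapping.conf.

from fnmatch import fnmatch
from ipaddress import ip_address, ip_network

def match_department(info, mappings):
    # mappings: department name -> criteria dict, e.g.
    # {"HR": {"label": "hr", "host": "newyork-*", "address": "123.11.219.0/24"}}
    checks = [
        ("label", lambda rule: info.get("label", "").lower() == rule.lower()),
        ("host", lambda rule: fnmatch(info.get("hostname", "").lower(), rule.lower())),
        ("address", lambda rule: any(ip_address(ip) in ip_network(rule)
                                     for ip in info.get("ip_addresses", []))),
        ("mac", lambda rule: fnmatch(info.get("mac_address", "").lower(), rule.lower())),
        ("os", lambda rule: fnmatch(info.get("os", "").lower(), rule.lower())),
    ]
    # Criteria are tried in priority order; the first department that
    # satisfies the current criterion wins and the rest are ignored.
    for key, matches in checks:
        for department, rules in mappings.items():
            if key in rules and matches(rules[key]):
                return department
    return None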
Example Workflow of Matching
Here’s a simple example to illustrate how the matching process works:
- Controller Information:
  - Label: "hr"
  - Hostname: "newyork-hr1"
  - IP Addresses: ["123.11.219.5"]
  - MAC Address: "23:ab:123:45:67:89"
  - OS: "linux"
- Matching Process:
  - Label Matching: The system checks the label "hr". It finds that the HR department has a matching label, so it immediately selects the HR department.
  - Hostname, IP, MAC, and OS: Since the label matched, the system does not proceed to check the other conditions (hostname, IP, MAC, or OS).
The HR department is selected based on the label match, even though other conditions might also match.
Conclusion
By customizing the information based on the department's criteria, users can ensure that the watchdog server provides the most suitable settings for each controller. This enhances the flexibility and efficiency of managing different configurations across various departments within a company while also ensuring secure access through the key expiry feature. Additionally, the Grafana dashboard provides a powerful tool for visualizing and analyzing the collected data, further enhancing the capabilities of the watchdog server.
Watchdog Deployment Guide (with Kafka and Elasticsearch)
Introduction
This guide explains how to install and configure Watchdog using the watchdog-installer Python script. Watchdog can optionally integrate with Kafka (for data ingestion) and Elasticsearch (for data storage and searching).
The installer supports:
1. Interactive prompts for Docker registry authentication (optional).
2. Enabling/disabling Kafka mode and/or Elasticsearch mode.
3. Automatic creation of necessary directories under /opt/watchdog.
4. File-by-file copy of important Watchdog files (prompts only for /opt/watchdog/watchdog/ overwrites).
5. Automatic generation of a .env file in your current directory, containing the environment variables Docker Compose will need.
6. A final Docker Compose deployment that launches the selected services.
Prerequisites
- Root/Sudo Access: The installer must be run as root (or with sudo). It manages system directories (e.g., /opt/watchdog) and sets ownership of data directories.
- Docker and Docker Compose:
  - Docker installed and running (docker ps should work).
  - Docker Compose plugin or Docker Compose CLI installed.
  - Optionally, Docker registry credentials if you plan to pull images from a private Docker registry.
- Local Files/Directories:
  - A local deps/ directory that contains:
    - deps/connect-jars/ (Kafka connector JARs)
    - deps/watchdog/ (Watchdog source files)
    - deps/init-kafka-connect.sh (initialization script)
  - Docker Compose YAML files in the same directory from which you run the installer:
    - docker-compose.base.yml (required)
    - docker-compose.kafka.yml (if enabling Kafka)
    - docker-compose.elastic.yml (if enabling Elasticsearch)
  - Optional config files (if needed for custom setups):
    - elasticsearch.yml (if elastic_mode=true and you want to override the default ES config)
    - Any custom .conf files for Watchdog (placed in deps/watchdog before running the script)
Installation Steps
1. Clone or place the watchdog-installer script in the same directory where your docker-compose.*.yml files exist (it writes a .env file locally and references the compose files in the current directory).
2. Ensure the script is executable (e.g., chmod +x watchdog-installer). If you're using the Python file directly, you can simply run python watchdog-installer install without chmod +x.
3. Run the installer as root, for example: sudo python watchdog-installer install.
4. The script will:
   - Prompt you for Docker registry authentication (optional).
   - Prompt whether to enable Kafka/Elasticsearch modes.
   - If Kafka mode is enabled, prompt for a KAFKA_EXTERNAL_IP.
   - If Elasticsearch mode is enabled, prompt for host, port, passwords, etc.
   - Create /opt/watchdog, /opt/watchdog/kafka_data, and /opt/watchdog/elasticsearch_data as needed.
   - Copy files from deps/ into /opt/watchdog:
     - connect-jars and init-kafka-connect.sh are forced overwrites (no prompt).
     - The watchdog directory is copied file-by-file with a prompt for each existing file.
   - Generate a .env file in your current directory (where Docker Compose can see it).
   - Finally, run docker compose up -d using docker-compose.base.yml, plus the Kafka and/or Elastic Compose files if those modes were selected.
5. Verify installation:
   - Check running containers (e.g., with docker ps).
   - If Kafka was enabled: kafka, zookeeper, and kafka-connect containers should be running.
   - If Elasticsearch was enabled: an elasticsearch container (and possibly kibana) should be running (depending on your compose files).
Environment Variables and the .env File
The script automatically writes environment variables to a .env file in the current working directory. Docker Compose will automatically load them. If Kafka/Elasticsearch is enabled, you'll see lines like:
KAFKA_EXTERNAL_IP=your.machine.ip
ELASTIC_HOST=elasticsearch
ELASTIC_PORT=9200
ELASTIC_PASSWORD=BrowsermonElasticAdmin
ELASTIC_USER_PASSWORD=BrowsermonElasticUser
ELASTIC_SCHEME=https
You can modify these directly if needed (though re-running the script may overwrite them).
Configuration Files
Depending on your Watchdog setup, you might need additional configuration files within deps/watchdog (which eventually lands in /opt/watchdog/watchdog):
1. watchdog.conf
2. ssl-config.ini
3. mapping.conf
4. browsermon-watchdog.conf
5. elasticsearch.yml (if elastic_mode=true and you want to override defaults)
Make sure to place these files in deps/watchdog before running the installer if you want them copied to /opt/watchdog/watchdog.
Service Configuration
- Kafka
  - Typically uses port 8092 (or whatever is in your docker-compose.kafka.yml).
  - Uses Kafka Connect to push data to MongoDB (or other sinks).
  - The init-kafka-connect.sh script is placed in /opt/watchdog, but you typically don't need to run it manually unless your setup requires it.
- MongoDB
  - Often deployed alongside Kafka (depending on your docker-compose.kafka.yml).
  - The sink connector is configured to push Watchdog data to MongoDB.
- Elasticsearch
  - Typically listens on 9200 for HTTP/HTTPS calls.
  - The default scheme is https (from the script prompt) but can be changed if you have a custom ES config.
  - If using elasticsearch.yml, it should be placed in deps/watchdog or your custom location and referenced by docker-compose.elastic.yml.
Updating the Installation
If you re-run the installer and /opt/watchdog is detected, the script enters Update Mode. In Update Mode:
- You are prompted only before overwriting files inside /opt/watchdog/watchdog.
- Other files (like init-kafka-connect.sh or connect-jars) are overwritten automatically.
- The script then re-runs Docker Compose to update the containers.
Example: if it sees an existing installation, you'll be asked:
Existing installation detected at /opt/watchdog
Do you want to proceed with the update? (y/n)
Uninstalling / Cleaning Up
To stop and remove the Watchdog containers (Kafka/Elasticsearch included), run:
This will:
1. Look for docker-compose.base.yml, docker-compose.kafka.yml, and docker-compose.elastic.yml in your current directory.
2. Run docker compose down -v with whichever files are found, removing containers and volumes.
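If you prefer to perform this teardown by hand, the equivalent Docker Compose invocation, assuming all three compose files are present in the current directory, is:

docker compose -f docker-compose.base.yml -f docker-compose.kafka.yml -f docker-compose.elastic.yml down -v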
Note: This does not delete /opt/watchdog or the data directories. If you want to remove them entirely, you can do so manually:
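For example (assuming the default paths created by the installer; this permanently deletes the Watchdog files and the Kafka/Elasticsearch data directories):

sudo rm -rf /opt/watchdog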
Troubleshooting
1. Checking Logs
View logs for a specific container with docker logs <container-name>. Examples:
- docker logs kafka-connect
- docker logs elasticsearch
2. Verifying Kafka Connect
Inside the Kafka Connect container:
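For example, assuming the container is named kafka-connect (as in the logs example above) and the Kafka Connect REST API is listening on its default port 8083 inside the container:

docker exec -it kafka-connect bash
curl -s http://localhost:8083/connectors/mongo-sink-connector/status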
Then check the connector status. A valid Mongo Sink Connector shows:
{
  "name": "mongo-sink-connector",
  "connector": {
    "state": "RUNNING",
    "worker_id": "connect-worker-1"
  },
  "tasks": [
    {
      "id": 0,
      "state": "RUNNING",
      "worker_id": "connect-worker-1"
    }
  ],
  "type": "sink"
}
3. Checking Elasticsearch
If Elasticsearch is running with HTTPS and basic auth:
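A minimal check, assuming Elasticsearch is reachable on the host at its default port 9200 and you authenticate as the built-in elastic user (both the user and the host are assumptions; substitute the credentials you set during installation):

curl -k -u elastic:<ELASTIC_PASSWORD> https://localhost:9200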
- -k ignores self-signed certificate errors.
- Adjust the user/password as you configured them during the installation prompts.
4. Internet Access
Important: For the Elasticsearch-based URL classification to function, the following domains must be reachable from the network where your watchdog is deployed.
- PhishTank: data.phishtank.com
- URLHaus: urlhaus.abuse.ch
Offline Image Deployment (Optional)
If you have Docker images saved locally (e.g., as .tar files) for offline deployment:
1. Load them:
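For example, for each saved image archive (the filename below is hypothetical):

docker load -i watchdog-images.tar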