Most VMware Aria Operations for Applications (formerly known as Tanzu Observability by Wavefront) customers use an automated proxy install:
- Option 1: Install the Wavefront proxy and the Telegraf agent when they set up an integration.
- Option 2: Perform a scripted installation of the Wavefront proxy and Telegraf agent.
In some environments, it’s necessary to perform a manual installation instead. This page gives guidance for manual installation. You can perform additional customization by using proxy configuration properties.
Proxy Install - Full Network Access
Follow these steps to install a proxy on a host with full network access (incoming and outgoing connections).
Prerequisites
- Networking: Test connectivity between the target proxy host and your Operations for Applications service.
- JRE: The Wavefront proxy is a Java JAR file and requires a JRE, for example, openjdk11. See the requirements in the Wavefront Proxy README file.
Note: Starting with Wavefront proxy 11.1, the proxy installation packages don’t include a JRE. Before you can install the proxy .rpm or .deb file, you must have the JRE in the execution path.
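For a quick check of both prerequisites, you can verify the JRE and the outbound HTTPS connection from the target host before installing anything. This is a minimal sketch; substitute your own service instance for the placeholder host name:
# Confirm that a JRE is on the execution path and reports a supported version
java -version
# Confirm that HTTPS traffic can reach your Operations for Applications service
curl -v https://<your_instance>.wavefront.com/api/ -o /dev/null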
Step 1: Download the Proxy
If your system accepts incoming traffic, you can download the proxy file as follows:
- Download the proxy .rpm or .deb file from packagecloud.io/wavefront/proxy.
- Run sudo rpm -U <name_of_file.rpm> or sudo dpkg -i <name_of_file.deb>.
Step 2: Determine Proxy Settings
Before you can customize the proxy configuration, gather the following values for your environment.
Parameter | Description | Example |
---|---|---|
server | URL of your Operations for Applications service instance. | https://try.wavefront.com/api/ |
token | A valid Operations for Applications API token associated with an active user or service account. The account must have the Proxies permission. | |
proxyname | Name of the running proxy. The proxyname is not used to tag your data. Rather, it’s used to tag data internal to the proxy, such as JVM statistics, per-proxy point rates, and so on. Alphanumeric characters and periods are allowed. | cust42Proxy |
enable graphite | Whether to enable the Graphite format. See the Graphite integration for details on Graphite configuration. | |
tlsPorts | Comma-separated list of ports to be used for incoming HTTPS connections. | |
privateKeyPath | Path to the PKCS#8 private key file in PEM format. Incoming HTTPS connections access this private key. | |
privateCertPath | Path to the X.509 certificate chain file in PEM format. Incoming HTTPS connections access this certificate. | |
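Putting these values together, the relevant wavefront.conf lines might look like the following sketch; the server URL, token, and proxy name are placeholders for illustration:
server=https://try.wavefront.com/api/
token=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
proxyname=cust42Proxy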
Step 3: Make Configuration Changes
You can make configuration changes by editing the config file or by running a script.
Option 1: Editing the Config File
If you want to edit the configuration file manually:
- Find, uncomment, and modify the configuration parameters, for example:
Change to Make | Config Parameters |
---|---|
Change the target. | server=, proxyname=, token= |
Use Graphite. | If you want to use Graphite, specify the Graphite configuration section, starting with graphitePorts=2003. |
- Start the Wavefront proxy service:
sudo service wavefront-proxy start
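To confirm that the proxy came up cleanly, you can check the service status and tail the proxy log. This assumes the default log location /var/log/wavefront/wavefront.log; adjust the path if your installation logs elsewhere:
sudo service wavefront-proxy status
tail -f /var/log/wavefront/wavefront.log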
Option 2: Running the autoconf Script
You can specify the settings by running bin/autoconf-wavefront-proxy.sh.
After the interactive configuration is complete:
- The Wavefront proxy configuration at /etc/wavefront/wavefront-proxy/wavefront.conf is updated with the input that you provided.
- The wavefront-proxy service is started.
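For example, assuming the proxy is installed under the default /opt/wavefront/wavefront-proxy directory, you would invoke the script as follows and answer the interactive prompts:
sudo /opt/wavefront/wavefront-proxy/bin/autoconf-wavefront-proxy.sh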
Proxy Install – Limited Network Access
In some cases, you might need to run the proxy on a host with limited network access.
Prerequisites
- Networking: The minimum requirement is an outbound HTTPS connection to your Operations for Applications service, so that the proxy can send metrics to the service. For metrics, the proxy uses port 2878 by default. You can change this port, and you can configure separate proxy ports for histograms and traces. You can use an HTTP proxy for the connection.
- JRE: The Wavefront proxy is a Java JAR file and requires a JRE, for example, openjdk11. See the requirements in the Wavefront Proxy README file.
Note: Starting with Wavefront proxy 11.1, the proxy installation packages don’t include a JRE. Before you can install the proxy .rpm or .deb file, you must have the JRE in the execution path.
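If outbound traffic must pass through an HTTP proxy, you can verify reachability from the host with curl before installing; the proxy address below is a hypothetical example:
curl -v --proxy http://proxy.example.com:8080 https://<your_instance>.wavefront.com/api/ -o /dev/null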
Installation and Configuration
Installation and configuration are similar to those for environments with full network access but might require additional work.
- Make sure all prerequisites are met, including an open outgoing HTTPS connection to your Operations for Applications service and JRE.
- Install the .rpm or .deb file.
- Update the settings, either by editing the configuration file or by running the autoconf script, as explained above.
- You might need to update the Wavefront proxy control file /etc/init.d/wavefront-proxy to use the following settings:
desc=${DESC:-Wavefront Proxy}
user="wavefront"
wavefront_dir="/opt/wavefront"
proxy_dir=${PROXY_DIR:-$wavefront_dir/wavefront-proxy}
config_dir=${CONFIG_DIR:-/etc/wavefront/wavefront-proxy}
proxy_jre_dir="$proxy_dir/proxy-jre"                 # set to the location of the currently installed JRE
export JAVA_HOME=${PROXY_JAVA_HOME:-$proxy_jre_dir}  # set to the location of the currently installed JRE
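For example, if openjdk11 is installed under /usr/lib/jvm (a hypothetical path; use the location of the JRE on your host), the last two lines might become:
proxy_jre_dir="/usr/lib/jvm/java-11-openjdk-amd64"
export JAVA_HOME=${PROXY_JAVA_HOME:-$proxy_jre_dir}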
Proxy Custom Install with Incoming TLS/SSL
By default, the Wavefront proxy can accept incoming TCP and HTTP requests on the port specified by pushListenerPorts. You can also configure the proxy to accept only connections with a certificate and key.
In that case:
- Specify that you want to open the port with the pushListenerPorts config parameter.
- Specify the tlsPorts, privateKeyPath, and privateCertPath parameters.
The following parameters support TLS/SSL. You can specify those parameters in the configuration file or by running bin/autoconf-wavefront-proxy.sh, as discussed above.
Parameter | Description |
---|---|
tlsPorts | Comma-separated list of ports to be used for incoming TLS/SSL connections. |
privateKeyPath | Path to PKCS#8 private key file in PEM format. Incoming TLS/SSL connections access this private key. |
privateCertPath | Path to X.509 certificate chain file in PEM format. Incoming TLS/SSL connections access this certificate. |
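For example, a wavefront.conf fragment that keeps plain connections on port 2878 and accepts TLS/SSL connections on port 2879 might look like the following sketch; the port numbers and file paths are illustrative:
pushListenerPorts=2878,2879
tlsPorts=2879
privateKeyPath=/etc/wavefront/wavefront-proxy/proxy-key.pem
privateCertPath=/etc/wavefront/wavefront-proxy/proxy-cert.pem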
Testing Proxy Host Connectivity
You can test connectivity from the proxy host to your service instance using curl.
Run this test before installing the proxy, and again after installing and configuring the proxy.
For example:
- Find the values for server and token:
- Click Integrations on the toolbar.
- Select Linux Host and click the Setup tab.
- Run the following command:
curl -v https://your-server.wavefront.com/api/daemon/test?token=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Here is an example of the expected return when you use the -v parameter (without -v, only HTTP(S) errors are reported):
* About to connect() to myhost.wavefront.com port 443 (#0)
* Trying NN.NN.NNN.NNN...
* Connected to myhost.wavefront.com (NN.NN.NNN.NNN) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
* Server certificate:
* subject: CN=*.wavefront.com,O="VMware, Inc",L=Palo Alto,ST=California,C=US
* start date: <date>
* expire date: <date>
* common name: *.wavefront.com
* issuer: <issuer details>
> GET /api/daemon/test?token=<mytoken> HTTP/1.1
> User-Agent: curl/7.29.0
> Host: myhost.wavefront.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx
< Date: Wed, 09 Jan 2019 20:41:45 GMT
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-Upstream: 10.15.N.NNN>:NNNNN
< X-Wavefront-Cluster: /services-<services>
< X-Frame-Options: SAMEORIGIN
<
* Connection #0 to host myhost.wavefront.com left intact
Testing Your Installation
After you have started the proxy you just configured, you can verify its status from the UI or with curl commands.
Testing From the UI
To check your proxy from the UI:
- Log in to your service instance.
- From the toolbar, select Browse > Proxies to view a list of all proxies.
If the list is long, enter the proxy name as defined in proxyname= in wavefront.conf to locate the proxy by name.
Testing Using curl
You can test your proxy using curl. Documentation for the following curl commands can be found directly on your service instance at https://<your_instance>.wavefront.com/api-docs/ui/#!/Proxy/getAllProxy.
You can run the commands directly from the API documentation. This is less error-prone than copying and pasting the token.
For this task, you first get the list of proxies for your Operations for Applications service, then you display information for just the proxy you installed.
Step 1: Get the list of proxies for your service instance:
curl -X GET --header "Accept: application/json" --header "Authorization: Bearer xxxxxxxxx-<your api token>-xxxxxxxxxxxx" "https://<your_instance>.wavefront.com/api/v2/proxy?offset=0&limit=100"
This command returns a JSON-formatted list of all proxies.
Step 2: Get the proxy ID. You can search the output using the proxy name configured in wavefront.conf, or find the proxy ID in the UI.
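If jq is available, you can extract the proxy ID from the Step 1 output without reading through the full JSON. This sketch assumes the list response nests the proxies under response.items and uses the example proxy name cust42Proxy:
curl -s -H "Accept: application/json" -H "Authorization: Bearer xxxxxxxxx-<your api token>-xxxxxxxxxxxx" "https://<your_instance>.wavefront.com/api/v2/proxy?offset=0&limit=100" | jq -r '.response.items[] | select(.proxyname == "cust42Proxy") | .id'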
Step 3: Return information for only this proxy:
curl -X GET --header "Accept: application/json" --header "Authorization: Bearer xxxxxxxxx-<your api token>-xxxxxxxxxxxx" "https://<your_instance>.wavefront.com/api/v2/proxy/443e5771-67c8-40fc-a0e2-674675d1e0a6"
Sample output for single proxy:
{
"status": {
"result": "OK",
"message": "",
"code": 200
},
"response": {
"customerStatus": "ACTIVE",
"version": "4.34",
"status": "ACTIVE",
"customerId": "mike",
"inTrash": false,
"proxyname": "mikeKubeH",
"id": "443e5771-67c8-40fc-a0e2-674675d1e0a6",
"lastCheckInTime": 1547069052859,
"timeDrift": -728,
"bytesLeftForBuffer": 7624290304,
"bytesPerMinuteForBuffer": 31817,
"localQueueSize": 0,
"sshAgent": false,
"ephemeral": false,
"deleted": false,
"statusCause": "",
"name": "Proxy on mikeKubeH"
}
}
Configure Wavefront Proxy with an HTTP/HTTPS Proxy
The Wavefront proxy initiates an HTTPS connection to your Operations for Applications service. The connection is made over the default HTTPS port 443.
Instead of sending traffic directly, you can send traffic from the Wavefront proxy to an HTTP or HTTPS proxy, which forwards to the Operations for Applications service. You set the connection parameters in the wavefront.conf
file (/etc/wavefront/wavefront-proxy/wavefront.conf
by default). See:
- The sample conf file on GitHub.
- The details on configuration options.
Note: You must set these connection parameters in wavefront.conf. The Wavefront proxy does not fully support the http_proxy environment variables.
Modify wavefront.conf to Use an HTTP/HTTPS Proxy
By default, the HTTP/HTTPS proxy section is commented out. Uncomment the section in wavefront.conf
if you want to use an HTTP/HTTPS proxy, and specify the following information:
## The following settings are used to connect to an Operations for Applications service instance through a HTTP proxy:
#proxyHost=<location of the HTTP/HTTPS proxy>
#proxyPort=<port for connecting with the HTTP/HTTPS proxy. Default is 8080>
## Optional: if http/https proxy supports username/password authentication
#proxyUser=proxy_user
#proxyPassword=proxy_password
#
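For example, with a hypothetical HTTP proxy reachable at proxy.example.com on port 8080 and no authentication, the uncommented settings would be:
proxyHost=proxy.example.com
proxyPort=8080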
Save your changes to wavefront.conf and restart the Wavefront proxy. If the HTTPS proxy requires certificates, see the next section.
Set up Wavefront Proxy to Use the CAcerts of the HTTPS Proxy
An HTTPS proxy requires that its clients use one or more site-specific CA-signed certificates. Those certificates (in PEM format) must be imported into the trust store of the Wavefront proxy.
- The HTTPS proxy includes the CA-signed certificates.
- The Wavefront proxy must have those certificates (PEM files) as well.
Use keytool to import the CA certificates into the Wavefront proxy trust store:
keytool -noprompt -cacerts -importcert -storepass changeit -file ${<filename>} -alias ${alias}
Here, filename is the name of the PEM file. If there’s more than one PEM file, run the command once per file.
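If several PEM files must be imported, a short shell loop avoids retyping the command. This sketch assumes the PEM files are collected in a local certs directory and derives each alias from the file name:
for pem in certs/*.pem; do
  keytool -noprompt -cacerts -importcert -storepass changeit -file "$pem" -alias "$(basename "$pem" .pem)"
done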
Installing Telegraf Manually
If the system has network access, follow our instructions for installing Telegraf. We include instructions for installing from the network by using .deb, .rpm, node, python, and gem.
If you’re in an environment with restricted network access:
- Download the appropriate Telegraf package from InfluxData and install as directed.
- Create a file called 10-wavefront.conf in /etc/telegraf/telegraf.d and enter the following snippet:
[[outputs.wavefront]]
  host = "WAVEFRONT_PROXY_ADDRESS"
  port = 2878
  metric_separator = "."
  source_override = ["proxyname", "agent_host", "node_host"]
  convert_paths = true
- Start the Telegraf agent:
sudo service telegraf start
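To confirm that Telegraf accepts the new output configuration, you can run it once in test mode. The --test flag gathers input metrics once and prints them to stdout, which verifies that the configuration files parse, but it does not exercise the Wavefront output:
telegraf --config /etc/telegraf/telegraf.conf --config-directory /etc/telegraf/telegraf.d --test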
How to Chain Proxies
Sometimes, the output from one Wavefront proxy needs to be sent to another Wavefront proxy (proxy chaining).
Common Use Cases for Chained Proxies
Common use cases include:
- Restrictions on outbound connections: In environments where no direct outbound connections to Operations for Applications are possible, you can use a Wavefront proxy that has outbound access to act as a relay and forward data received on its endpoint to the Operations for Applications service.
- Log data filtering: If you use a proxy to parse log data, you might need to perform filtering or tagging with proxy preprocessor rules. One proxy in a chain can have the job of altering or dropping certain strings before data is sent to Operations for Applications.
- Preprocessor rule consolidation: Proxy chaining can consolidate preprocessing rules to a central proxy. For example, proxies running in containers on a Kubernetes cluster could relay metrics to the chained proxy which has all required defined preprocessor rules.
Set Up the Configuration Files for Chaining
Let’s set up proxy chaining. Proxy A sends data to the relay proxy (Proxy B). Proxy B then sends data to the Operations for Applications service. Follow these steps:
- On the proxy that will act as the relay (Proxy B), open the proxy configuration file (wavefront.conf) for editing. See Proxy File Paths for the default location.
- Uncomment the pushRelayListenerPorts line so that the proxy listens for relay messages:
## This setting is a comma-separated list of ports. (Default: none)
pushRelayListenerPorts=2978
- On the proxy that will send its metrics to the relay proxy (Proxy A), open the proxy configuration file wavefront.conf for editing.
- Change the server address to the address of the relay (Proxy B). Here’s an example:
# The server should be either the primary Operations for Applications cloud server, or your custom VPC address.
# This will be provided to you by the Operations for Applications team.
server=http://192.168.xxx.xxx:2978/api/
The authentication token that is specified in the wavefront.conf of the relay proxy (Proxy B) is used to send the metrics to the Operations for Applications instance. An authentication token for Proxy A is not needed.
- After making the changes, restart both proxies and examine the wavefront.log file from the relay proxy (Proxy B). Look for points that are delivered on the relay listener port, as in the following example:
2021-02-04 17:11:50,201 INFO [AbstractReportableEntityHandler:printStats] [2978] Points received rate: 4 pps (1 min), 4 pps (5 min), 0 pps (current).
2021-02-04 17:11:50,201 INFO [AbstractReportableEntityHandler:printStats] [2978] Points delivered rate: 4 pps (1 min), 4 pps (5 min)
2021-02-04 17:12:00,200 INFO [AbstractReportableEntityHandler:printStats] [2978] Points received rate: 4 pps (1 min), 4 pps (5 min), 0 pps (current).
2021-02-04 17:12:00,201 INFO [AbstractReportableEntityHandler:printStats] [2978] Points delivered rate: 4 pps (1 min), 4 pps (5 min)
Test Proxy Host Connectivity
You can test connectivity from the originating proxy to the relay proxy. Pick one option:
- Use curl
- Send the points locally and verify that they are delivered by querying the metric in the GUI.
In both scenarios, the relay proxy uses its own token to authenticate. A separate token for the proxy that’s sending the metrics is not needed.
Test Connectivity Using curl
Run the following command from the proxy that is sending metrics to the relay proxy, where http://192.168.xxx.xxx:2978 is the address and port of the relay proxy:
curl -v 'http://192.168.xxx.xxx:2978' -X POST -d "test.metric 100 source=test.source"
Sample output:
* About to connect() to 192.168.xxx.xxx port 2978 (#0)
* Trying 192.168.xxx.xxx...
* Connected to 192.168.xxx.xxx (192.168.xxx.xxx) port 2978 (#0)
> POST / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 192.168.xxx.xxx:2978
> Accept: */*
> Content-Length: 35
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 35 out of 35 bytes
< HTTP/1.1 204 No Content
< content-type: text/plain
< connection: keep-alive
Test Connectivity by Sending Points
To send points directly:
- Run the following command locally on the proxy that is sending metrics to the relay proxy:
echo 'test.metric 300 source=test.source' | nc localhost 2878
- Verify that you can query the metric in the GUI.