Prometheus is a time series database. I am currently using Prometheus with Kafka, having the JMX agent expose MBeans for Prometheus to scrape, with Grafana for visualization. It's great, but temporary: Prometheus is designed to be an ephemeral cache and does not try to solve distributed data storage. To that end, Prometheus provides a "remote_write" configuration option that POSTs sampled data to an endpoint of your choosing for ingestion and persistence.

The protocol is a snappy-compressed protobuf message containing the sampled data.

Most of the ecosystem around this feature is Go based, to the point where Googling how to do this in other languages mostly turns up advice about stripping the gogo-specific options out of the protobuf spec files. I could not find a single example of doing this in another language, so here is an example in Java for posterity.

Get the .proto files Prometheus uses for the "remote_write" protobuf. They can be found here. Make sure you get the ones that match the version of the Prometheus server you are running.

remote.proto
types.proto

You will then need gogo.proto, which can be found here.

In summary, you now have remote, types, and gogo .proto files.

Compile these .proto files with protoc into your language of choice. For this tutorial I am going to use Java. You could build protoc from source, but I found it easier to just download the precompiled binaries, which can be found here. You want the protoc-3.11.2-PLATFORM.zip for your platform.

Directory layout for these instructions:

./
protoc/ //unzipped binaries from above
java_output/ //destination for our generated language files
imports/
    remote.proto
    types.proto
    gogoproto/
        gogo.proto

Generate Java code:

./protoc/bin/protoc --proto_path=./imports --java_out=./java_output/ imports/types.proto
./protoc/bin/protoc --proto_path=./imports --java_out=./java_output/ imports/remote.proto
./protoc/bin/protoc --proto_path=./imports --java_out=./java_output/ imports/gogoproto/gogo.proto

The order doesn't matter, but each file does need to be compiled explicitly — protoc only generates code for the files you pass on the command line; imported files are just used to resolve references, not generated.

In any event, you should then see the following:

ls -R java_output/
java_output/:
  com prometheus
java_output/com:
  google
java_output/com/google:
  protobuf
java_output/com/google/protobuf:
  GoGoProtos.java
java_output/prometheus:
  Remote.java Types.java
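
Before writing the server, it's worth knowing the shape of what was just generated: a WriteRequest is essentially a list of time series, each carrying a set of labels and a list of (value, timestamp) samples. As a quick compile-check of the generated classes, here's a rough throwaway snippet (class and package names assume the prometheus package from the listing above) that builds a one-sample request by hand:

import prometheus.Remote;
import prometheus.Types;

public class GeneratedCodeCheck {

    public static void main(String[] args) {
        // Prometheus carries the metric name as the reserved "__name__" label.
        Remote.WriteRequest writeRequest = Remote.WriteRequest.newBuilder()
                .addTimeseries(Types.TimeSeries.newBuilder()
                        .addLabels(Types.Label.newBuilder().setName("__name__").setValue("up"))
                        // A sample is a double value plus a millisecond epoch timestamp.
                        .addSamples(Types.Sample.newBuilder()
                                .setValue(1.0)
                                .setTimestamp(System.currentTimeMillis())))
                .build();

        System.out.println(writeRequest);
    }
}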

Now let's write a server to receive requests from Prometheus. Jetty is nice for a quick prototype.

You'll need the following additional dependencies for your project:

<!-- protobuf -->
<dependency>
   <groupId>com.google.protobuf</groupId>
   <artifactId>protobuf-java</artifactId>
   <version>3.11.1</version>
</dependency>
<dependency>
   <groupId>com.google.protobuf</groupId>
   <artifactId>protobuf-java-util</artifactId>
   <version>3.11.1</version>
</dependency>

<!-- snappy compression -->
<dependency>
   <groupId>org.xerial.snappy</groupId>
   <artifactId>snappy-java</artifactId>
   <version>1.1.7.3</version>
</dependency>
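
If Jetty and an SLF4J binding aren't already on your classpath, you'll also need something along these lines (the versions here are just what I'd reach for with a Jetty 9.x / javax.servlet setup — adjust to taste):

<!-- embedded Jetty (version is an assumption; any recent 9.4.x should do) -->
<dependency>
   <groupId>org.eclipse.jetty</groupId>
   <artifactId>jetty-server</artifactId>
   <version>9.4.26.v20200117</version>
</dependency>

<!-- SLF4J binding so the logger output actually goes somewhere -->
<dependency>
   <groupId>org.slf4j</groupId>
   <artifactId>slf4j-simple</artifactId>
   <version>1.7.30</version>
</dependency>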

Handler for our Prometheus metric protobuf data:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.handler.AbstractHandler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.xerial.snappy.Snappy;

import com.google.protobuf.util.JsonFormat;

import prometheus.Remote;

public class PrometheusHandler extends AbstractHandler {

    private static final Logger logger = LoggerFactory.getLogger(PrometheusHandler.class);

    private static final JsonFormat.Printer JSON_PRINTER = JsonFormat.printer();

    public PrometheusHandler() {
        super();
    }

    @Override
    public void handle(String target, Request baseRequest,
                       HttpServletRequest request, HttpServletResponse response) throws IOException {

        // Read the full (snappy-compressed) request body into memory.
        try (InputStream is = request.getInputStream()) {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            int nRead;
            byte[] data = new byte[1024];
            while ((nRead = is.read(data, 0, data.length)) != -1) {
                buffer.write(data, 0, nRead);
            }
            buffer.flush();

            // Decompress and parse the protobuf payload, then log it as JSON.
            Remote.WriteRequest writeRequest = Remote.WriteRequest.parseFrom(Snappy.uncompress(buffer.toByteArray()));
            String json = JSON_PRINTER.print(writeRequest);
            logger.info(json);
        }

        // Acknowledge the write so Prometheus doesn't retry it.
        response.setStatus(HttpServletResponse.SC_OK);
        baseRequest.setHandled(true);
    }
}
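
Dumping JSON is enough to prove the pipeline works, but in practice you'd walk the message instead. A rough sketch of what that could look like inside handle(), pulling out metric names and samples (again assuming the generated prometheus.Types classes from above, which would also need importing):

// requires: import prometheus.Types;
for (Types.TimeSeries ts : writeRequest.getTimeseriesList()) {
    // The metric name travels as the reserved "__name__" label.
    String metricName = ts.getLabelsList().stream()
            .filter(label -> "__name__".equals(label.getName()))
            .map(Types.Label::getValue)
            .findFirst()
            .orElse("unknown");

    for (Types.Sample sample : ts.getSamplesList()) {
        // Timestamps are milliseconds since the epoch; values are doubles.
        logger.info("{} = {} @ {}", metricName, sample.getValue(), sample.getTimestamp());
    }
}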

Main Server:

import org.eclipse.jetty.server.Server;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MetricsReporter {

    private static final Logger logger = LoggerFactory.getLogger(MetricsReporter.class);

    private static Server createServer(final int port) {
        Server server = new Server(port);
        server.setHandler(new PrometheusHandler());
        return server;
    }

    public static void main(String[] args) throws Exception {
        logger.info("Starting metrics reporting server");
        Server server = createServer(8000);
        server.start();
        server.join();
    }
}

Set the remote_write config section in prometheus.yml for your Prometheus server:

remote_write:
  - url: 'http://localhost:8000/receive'

Start the Jetty server and start/restart Prometheus. You should shortly see data coming in fairly frequently.
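
How frequently depends on how Prometheus batches its sends. If you want to play with that cadence, the remote_write queue can be tuned; the settings below are standard queue_config options (the values are purely illustrative — the defaults are fine for this experiment):

remote_write:
  - url: 'http://localhost:8000/receive'
    queue_config:
      capacity: 2500              # samples buffered per shard
      max_samples_per_send: 500   # upper bound on samples per POST
      batch_send_deadline: 5s     # flush a partial batch after this long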

The original documentation and suggestions indicated that this was not doable outside of Go, which seemed odd considering the whole point of an agnostic data format like protobuf is specifically to avoid ecosystem lock-in.

Note: This post is an amalgamation of two previous posts on the subject, outlining the methodology using Python 3 and then an implementation in Java.