How we build our gRPC APIs at Polar Signals using buf

Tools and strategies that we use to build APIs that we love at Polar Signals.

November 11, 2021

For a cloud service, there are two ways users interact with your product: through the user interface and through the API. It's important to get both of those right so that using your product is intuitive and easy. There are many books and blog posts about API design and what makes a good API; I don't claim to be an expert on the topic, so I'll leave that reading to you. Instead, what I want to talk about is how we build our APIs at Polar Signals and why that works for us. Like any technology choice, this isn't a one-size-fits-all solution and isn't meant to be prescriptive.

Define the API

To define our APIs we start with Protocol Buffers (protobufs). Protobufs allow us to define sets of structured data and couple those with API definitions. Protobufs have become widely used in our industry, and as such there is a wide array of plugins and support that make them incredibly powerful for developing APIs quickly.

One of the plugins that we use for our protobufs is grpc-gateway. grpc-gateway allows us to define RESTful endpoints alongside our gRPC endpoints, so that we can support both protocols while only implementing the API once.

You can view these examples in our open source continuous profiling project, Parca. Below is a snippet of the QueryService in Parca. It defines a set of APIs for querying and retrieving profiles from the database, with a grpc-gateway definition alongside each RPC definition.

// QueryService is the service that provides APIs to retrieve and inspect profiles
service QueryService {
  // QueryRange performs a profile query over a time range
  rpc QueryRange(QueryRangeRequest) returns (QueryRangeResponse) {
    option (google.api.http) = {
      get: "/profiles/query_range"
    };
  }
  // Query performs a profile query
  rpc Query(QueryRequest) returns (QueryResponse) {
    option (google.api.http) = {
      get: "/profiles/query"
    };
  }
  // Series is unimplemented
  rpc Series(SeriesRequest) returns (SeriesResponse) {
    option (google.api.http) = {
      get: "/profiles/series"
    };
  }
  // Labels returns the set of label names against a given matching string and time frame
  rpc Labels(LabelsRequest) returns (LabelsResponse) {
    option (google.api.http) = {
      get: "/profiles/labels"
    };
  }
  // Values returns the set of values that match a given label and time frame
  rpc Values(ValuesRequest) returns (ValuesResponse) {
    option (google.api.http) = {
      get: "/profiles/labels/{label_name}/values"
    };
  }
}

Generation

At Polar Signals, once we have a proto definition, we use Buf to generate code for us. We love using Buf because it's extremely fast at compiling protobufs and generating code, and it also handles all of our external protobuf dependencies. It even allows us to publish our proto definitions to its own registry, so that others in the community can consume these definitions and build other tools that interact with these APIs. The published definitions for Parca can be found here.

Below is the buf.yaml definition in Parca that declares our external dependencies. We no longer have to write bash scripts that download and vendor these files; Buf does that for us. Buf also handles linting of our protobuf files so that we have consistent definitions across our codebase.

version: v1
name: buf.build/parca-dev/parca
deps:
  - buf.build/googleapis/googleapis
  - buf.build/grpc-ecosystem/grpc-gateway
lint:
  use:
    - DEFAULT
    - COMMENTS

Buf generates a lot of code for us from a simple buf.gen.yaml file. Below is that file, where you can see we define generation of Go code, gRPC code, JavaScript clients, TypeScript clients, the grpc-gateway Go code, and even an OpenAPI definition.

version: v1
managed:
  enabled: true
  go_package_prefix:
    default: github.com/parca-dev/parca/gen/proto/go
    except:
      - buf.build/googleapis/googleapis
plugins:
  - name: go
    out: gen/proto/go
    opt: paths=source_relative
  - name: go-grpc
    out: gen/proto/go
    opt:
      - paths=source_relative
      - require_unimplemented_servers=false
  - name: js
    out: ui/packages/shared/client/src
    opt:
      - import_style=commonjs,binary
  - name: ts
    path: ./node_modules/.bin/protoc-gen-ts
    out: ui/packages/shared/client/src
    opt:
      - service=grpc-web
  - name: grpc-gateway
    out: gen/proto/go
    opt:
      - paths=source_relative
      - generate_unbound_methods=true
  - name: openapiv2
    out: gen/proto/swagger

Implementation

Once we've generated all our code, we only need to implement the API once. With that implementation in place, we start a gRPC server and register the implementation with it. Our server has a handler function that serves both our gRPC and our grpc-gateway endpoints on the same port; to decide which to dispatch to, it inspects the ProtoMajor version of the request and the Content-Type header, as sketched below.
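To make that concrete, here is a minimal sketch of the shared-port pattern in Go. This is not Parca's actual server code: the generated package import path, the queryService type, and the port are assumptions for illustration, and the Register* functions follow the standard naming that protoc-gen-go-grpc and grpc-gateway generate.

package main

import (
    "context"
    "log"
    "net/http"
    "strings"

    "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
    "golang.org/x/net/http2"
    "golang.org/x/net/http2/h2c"
    "google.golang.org/grpc"

    // Hypothetical import path for the Go code Buf generates from the QueryService proto.
    querypb "github.com/parca-dev/parca/gen/proto/go/query"
)

// queryService holds the single implementation of the generated
// QueryServiceServer interface; its methods are omitted in this sketch.
type queryService struct {
    querypb.UnimplementedQueryServiceServer
}

func main() {
    ctx := context.Background()
    srv := &queryService{}

    // Register the implementation once with a plain gRPC server.
    grpcServer := grpc.NewServer()
    querypb.RegisterQueryServiceServer(grpcServer, srv)

    // Register the same implementation with the grpc-gateway mux, which
    // serves the RESTful routes declared via the google.api.http options.
    gwMux := runtime.NewServeMux()
    if err := querypb.RegisterQueryServiceHandlerServer(ctx, gwMux, srv); err != nil {
        log.Fatal(err)
    }

    // One handler on one port: HTTP/2 requests with a gRPC Content-Type go
    // to the gRPC server, everything else goes to the grpc-gateway mux.
    handler := h2c.NewHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.ProtoMajor == 2 && strings.HasPrefix(r.Header.Get("Content-Type"), "application/grpc") {
            grpcServer.ServeHTTP(w, r)
            return
        }
        gwMux.ServeHTTP(w, r)
    }), &http2.Server{})

    log.Fatal(http.ListenAndServe(":7070", handler))
}

The h2c wrapper lets the gRPC server accept HTTP/2 without TLS on the shared listener; with TLS terminated at the server, a plain http.Server configured for HTTP/2 works the same way.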

And since we've generated JavaScript and TypeScript clients, when we implement a web user interface on top of our API, we only have to import those clients to interact with the server; none of that handling has to be written by hand. Since our APIs can be very data heavy, we generate our TypeScript clients using grpc-web so that payloads can be transferred in the proto wire format.

Protobufs give us a single API definition from which we generate a large amount of code that we use across many parts of our codebase. That lets us focus on the important functionality of our product and leverage all the work the community has already done to make building APIs easy.
