
Playing around with gRPC & Protobufs

Posted on April 14th, 2019

I’m so bored with REST APIs. Just 4 methods - GET, POST, PUT, DELETE. It feels a lot more restrictive than it needs to be. I need a system that makes it easier to make network calls without the redundant effort of defining data models on both the server and the client.

I recently stumbled upon gRPC, a modern open-source Remote Procedure Call framework developed by Google that can run in any environment. It makes things a lot easier by having one universal service definition with the help of Protocol Buffers (also known as protobufs).

OK, that’s a lot of buzzwords that make very little sense on their own, so let’s break it down and understand the concept better.

Back in the REST days, if you had to create a back-end server in Go and a client in NodeJS, you would first have to define the data structure in Go using Go-specific data types and then do the same thing again in NodeJS using NodeJS-specific types.

Once the data models were defined and the Golang back-end was up and running, you would then use an HTTP module in NodeJS - Axios or something similar - to make GET, POST, PUT, and DELETE calls to the back-end server.

Now with the help of Protobufs, you just define your data model along with the services in a ‘.proto’ file, which can be accessed by both the Go server and the Node client.

Let’s look at an example. I built a very simple gRPC service that takes a string in the request, splits it into an array, and returns the array in the response.

My file structure is shown below -

- server
    - main.go
- proto
    - service.proto
    - service.pb.go
- client
    - main.go
- client-js
    - server.js
    - package.json

The proto file below defines the request and response structures for this web service, along with the Split service itself -

syntax = "proto3";

package proto;

message Request {
    string s = 1;
}

message Response {
    repeated string items = 1;
}

service SplitService {
    rpc Split (Request) returns (Response);
}

Now with the help of a few configuration guidelines available for Golang here, I was able to convert the protobuf service definition into a Go stub that my Go server can consume.

To do so, I ran this command in the Terminal, which generated a Golang gRPC stub called ‘service.pb.go’ -

protoc -I proto/ proto/service.proto --go_out=plugins=grpc:proto
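
This assumes protoc itself and its Go plugin were already installed - at the time of writing, the plugin could be fetched with something like -

go get -u github.com/golang/protobuf/protoc-gen-go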

The service.pb.go file includes the generated types and helper functions that both the server and the client use to communicate with each other.
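
To give a rough idea of what ends up in that file, here’s an approximate sketch of the message types and their getters - this is not the exact generated output, which contains extra protobuf plumbing -

// Approximate sketch of the generated message types in service.pb.go.
type Request struct {
	S string
}

// GetS returns the string field, guarding against a nil receiver.
func (m *Request) GetS() string {
	if m != nil {
		return m.S
	}
	return ""
}

type Response struct {
	Items []string
}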

In my server/main.go, I coded up a simple back-end server to accept string requests, split them, and return the resulting array -

package main

import (
	"context"
	"net"
	"strings"
	pb "../proto"
	"google.golang.org/grpc"
)

type server struct{}

func main() {
	// Listen for incoming TCP connections on port 4040.
	lis, err := net.Listen("tcp", ":4040")
	if err != nil {
		panic(err)
	}

	// Create a gRPC server and register our SplitService implementation on it.
	s := grpc.NewServer()
	pb.RegisterSplitServiceServer(s, server{})
	err = s.Serve(lis)
	if err != nil {
		panic(err)
	}
}

// Split accepts the Request, splits the string into its individual characters,
// and returns them in the Response.
func (server) Split(ctx context.Context, request *pb.Request) (*pb.Response, error) {
	s := request.GetS()

	split := strings.Split(s, "")

	return &pb.Response{Items: split}, nil
}

The gRPC server is configured to listen on port 4040 for any incoming requests. The generated stub includes a function called RegisterSplitServiceServer which takes in the gRPC server instance along with our service implementation and registers it to serve the Split service as defined in the protobuf.

The Split method is created to accept the Request in the same format as defined in the protobuf. The server struct is assigned as the receiver for the Split method so that it satisfies the generated SplitServiceServer interface and can be registered with the gRPC server - the generated Go code defines that interface as having a single Split method.
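
Roughly speaking, the server-side pieces of service.pb.go look something like this - again a simplified excerpt rather than the exact generated code -

// Simplified excerpt of the server-side pieces in service.pb.go.
type SplitServiceServer interface {
	Split(context.Context, *Request) (*Response, error)
}

func RegisterSplitServiceServer(s *grpc.Server, srv SplitServiceServer) {
	// wires the given SplitService implementation into the gRPC server
}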

The Split method takes in the context along with the request and returns the response as defined in the protobuf. Since the request parameter is passed into the Split function as a pointer to the Request type defined in the protobuf, we can access the GetS() function generated in service.pb.go.

This gives us the raw string passed into the request, which we can split into an array, assign to the Response struct as defined in the protobuf, and return as a pointer to the Response struct, which is the return type configured for the Split service.

Now this Split service can be accessed by the Go client with the help of the same protobuf configuration. The code for the Go client is shown below -

package main

import (
	"context"
	"fmt"
	"os"
	"strings"
	pb "../proto"
	"google.golang.org/grpc"
)

func main() {
	// Dial the gRPC server; WithInsecure is needed because no TLS is set up locally.
	conn, err := grpc.Dial("localhost:4040", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Create a client stub from the connection using the generated constructor.
	client := pb.NewSplitServiceClient(conn)

	// Join the command-line arguments into a single string and wrap it in a Request.
	s := strings.Join(os.Args[1:], "")
	req := &pb.Request{S: s}

	// Call the remote Split method and print the returned array.
	response, err := client.Split(context.Background(), req)
	if err != nil {
		panic(err)
	}
	fmt.Println(response.Items)
}

Here I’ve created a simple CLI through which we can input a string which will be sent to the gRPC server and returned as an array.

To call the Split method, I simply had to dial the gRPC server on localhost:4040. I had to use the WithInsecure() option because the gRPC server is set up locally without a TLS certificate.

Once the gRPC server was successfully dialled, I could pass the connection to the NewSplitServiceClient function defined in the service.pb.go stub. This function creates a gRPC client stub that uses the connection to talk to the gRPC server.

Now I can access the Split function like any other function defined locally in my client-side package.
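
To see it end to end, I can start the server in one terminal and run the client with a string argument in another - the exact commands depend on your Go setup, but with the file structure above it’s roughly -

go run server/main.go
go run client/main.go hello

The client then prints [h e l l o], since the server splits the string into its individual characters.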

This entire workflow seemed quite simple and elegant - I was quite amazed at how little configuration it required compared to regular REST APIs. The names gRPC and Protocol Buffers sounded quite intimidating when I started off.

I wanted to see if the client-side code could be easily replicated in a different language, so I coded up another gRPC client in NodeJS -

const grpc = require('grpc');
const protoLoader = require('@grpc/proto-loader');

const PROTO_PATH = '../proto/service.proto'

// Load the same .proto file used by the Go server and client.
const packageDefinition = protoLoader.loadSync(PROTO_PATH, {
    keepCase: true,
    longs: String,
    enums: String,
    defaults: true,
    oneofs: true
});

const packageDescriptor = grpc.loadPackageDefinition(packageDefinition);

// 'proto' matches the package name declared in service.proto.
const split = packageDescriptor.proto;

// Create a client stub pointing at the Go gRPC server, again without TLS.
const stub = new split.SplitService(
    'localhost:4040',
    grpc.credentials.createInsecure()
);

const splitRequest = {
    s: 'hello',
};

// Call the Split RPC and log the response (or the error).
stub.Split(splitRequest, function(err, response){
    if(err) {
        console.log(err);
    } else {
        console.log(response);
    }
});

That’s all that was required. I hard-coded the ‘hello’ string in my code, though. Still, it was quite simple to get the gRPC client up and running on NodeJS - all I had to do was install the grpc and @grpc/proto-loader packages. The rest was a piece of cake.
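
For reference, those two dependencies can be pulled into the client-js folder with npm -

npm install grpc @grpc/proto-loader

and the client itself runs with a plain node server.js.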

After experimenting with this, I’m actually quite excited about using gRPC more extensively in my other projects. But first I’m going to dive deeper into how it makes a network call - I’ll have to check what happens in the ‘Network’ tab of my dev tools when I call this Split function from a browser.

But one thing is clear - gRPC and Protobufs are definitely here to stay. I’m glad RPCs are making a comeback, thanks to Google. Older RPC frameworks seemed quite outdated, but they were still a very effective way of making network calls, and gRPC and proto3 have greatly simplified the whole process of setting up a client and server.

This is definitely an interesting example to implement if you’re a Golang beginner interested in learning more about gRPC.

Tags: grpc, protobufs, API, HTTP, Networking, REST

About the author

Rohit Mundra

Based in India. Building a few helpful things. Find me here talking about anything and everything. Keeping a track of my ideas so they don't get lost in the void. Tweet at me if you need me.