
Generate a fully-working Go gRPC server in two minutes with Ent

ent + gRPC

Introduction#

Having entity schemas defined in a central, language-neutral format has many benefits as the scale of a software engineering organization increases. To do this, many organizations use Protocol Buffers as their interface definition language (IDL). In addition, gRPC, a Protobuf-based RPC framework modeled after Google's internal Stubby, is becoming increasingly popular due to its efficiency and code-generation capabilities.

As an RPC framework, gRPC does not prescribe any specific guidelines for implementing the data access layer, so implementations vary greatly. Ent is a natural candidate for building the data access layer in any Go application, so there is great potential in integrating the two technologies.

Today we are announcing an experimental version of entproto, a Go package and command-line tool that adds Protobuf and gRPC support for ent users. With entproto, developers can set up a fully-working CRUD gRPC server in a few minutes. In this post, we will show exactly how to do that.

Setting Up#

The final version of this tutorial is available on GitHub; you can clone it if you prefer following along that way.

Let's start by initializing a new Go module for our project:

mkdir ent-grpc-example
cd ent-grpc-example
go mod init ent-grpc-example

Next we use go run to invoke the ent code generator to initialize a schema:

go run -mod=mod entgo.io/ent/cmd/ent init User

Our directory should now look like:

.
├── ent
│   ├── generate.go
│   └── schema
│       └── user.go
├── go.mod
└── go.sum

Next, let's add the entproto package to our project:

go get -u entgo.io/contrib/entproto

Next, we will define the schema for the User entity. Open ent/schema/user.go and edit:

package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/field"
)

// User holds the schema definition for the User entity.
type User struct {
	ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("name").
			Unique(),
		field.String("email_address").
			Unique(),
	}
}

In this step, we added two unique fields to our User entity: name and email_address. The ent.Schema is just the definition of the schema; to create usable production code from it, we need to run Ent's code generation tool on it. Run:

go generate ./...

Notice that a bunch of new files were created from our schema definition:

├── ent
│   ├── client.go
│   ├── config.go
// .... many more
│   ├── user
│   ├── user.go
│   ├── user_create.go
│   ├── user_delete.go
│   ├── user_query.go
│   └── user_update.go
├── go.mod
└── go.sum

At this point, we can open a connection to a database, run a migration to create the users table, and start reading and writing data to it. This is covered in the Setup Tutorial, so let's cut to the chase and learn about generating Protobuf definitions and gRPC servers from our schema.
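For reference, a minimal sketch of that flow (assuming the github.com/mattn/go-sqlite3 driver, which we also use later in this post) looks roughly like this:

package main

import (
	"context"
	"log"

	"ent-grpc-example/ent"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// Open an ent client against an in-memory SQLite database.
	client, err := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
	if err != nil {
		log.Fatalf("failed opening connection to sqlite: %v", err)
	}
	defer client.Close()
	// Run the auto-migration to create the users table.
	if err := client.Schema.Create(context.Background()); err != nil {
		log.Fatalf("failed creating schema resources: %v", err)
	}
	// Write a user and print it back.
	u := client.User.Create().
		SetName("rotemtam").
		SetEmailAddress("rotemtam@example.com").
		SaveX(context.Background())
	log.Printf("created user: %v", u)
}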

Generating Go Protobufs with entproto#

As ent and Protobuf schemas are not identical, we must supply some annotations on our schema to help entproto figure out exactly how to generate Protobuf definitions (called "Messages" in protobuf lingo).

The first thing we need to do is add an entproto.Message() annotation. This is our opt-in to Protobuf schema generation: we don't necessarily want to generate proto messages or gRPC service definitions from all of our schema entities, and this annotation gives us that control. To add it, append to ent/schema/user.go:

func (User) Annotations() []schema.Annotation {
	return []schema.Annotation{
		entproto.Message(),
	}
}
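Note that for this to compile, the import block of ent/schema/user.go also needs the entproto and schema packages; at this point it would look roughly like:

import (
	"entgo.io/contrib/entproto"
	"entgo.io/ent"
	"entgo.io/ent/schema"
	"entgo.io/ent/schema/field"
)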

Next, we need to annotate each field and assign it a field number. Recall that when defining a protobuf message type, each field must be assigned a unique number. To do that, we add an entproto.Field annotation on each field. Update the Fields in ent/schema/user.go:

// Fields of the User.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("name").
			Unique().
			Annotations(
				entproto.Field(2),
			),
		field.String("email_address").
			Unique().
			Annotations(
				entproto.Field(3),
			),
	}
}

Notice that we did not start our field numbers from 1; this is because ent implicitly creates the ID field for the entity, and that field is automatically assigned the number 1. We can now generate our Protobuf message type definitions. To do that, we will add to ent/generate.go a go:generate directive that invokes the entproto command-line tool. The file should now look like this:

package ent

//go:generate go run -mod=mod entgo.io/ent/cmd/ent generate ./schema
//go:generate go run -mod=mod entgo.io/contrib/entproto/cmd/entproto -path ./schema

Let's re-generate our code:

go generate ./...

Observe that a new directory, ent/proto, was created; it will contain all Protobuf-related generated code. It now contains:

ent/proto
└── entpb
├── entpb.proto
└── generate.go

Two files were created. Let's look at their contents:

// Code generated by entproto. DO NOT EDIT.
syntax = "proto3";

package entpb;

option go_package = "ent-grpc-example/ent/proto/entpb";

message User {
  int32 id = 1;

  string name = 2;

  string email_address = 3;
}

Nice! A new .proto file containing a message type definition that maps to our User schema was created!

package entpb

//go:generate protoc -I=.. --go_out=.. --go-grpc_out=.. --go_opt=paths=source_relative --go-grpc_opt=paths=source_relative --entgrpc_out=.. --entgrpc_opt=paths=source_relative,schema_path=../../schema entpb/entpb.proto

A new generate.go file was created with an invocation to protoc, the Protobuf code generator, instructing it how to generate Go code from our .proto file. For this command to work, we must first install protoc as well as 3 Protobuf plugins: protoc-gen-go (which generates Go Protobuf structs), protoc-gen-go-grpc (which generates Go gRPC service interfaces and clients), and protoc-gen-entgrpc (which generates an implementation of the service interface). If you do not have these installed, please follow the installation instructions for protoc and for each of the plugins.
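Assuming protoc itself is already on your PATH and you are on Go 1.16 or newer, one way to install the three plugins is (the @latest versions are an assumption; pin whatever versions match your setup):

go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
go install entgo.io/contrib/entproto/cmd/protoc-gen-entgrpc@latest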

After installing these dependencies, we can re-run code-generation:

go generate ./...

Observe that a new file named ent/proto/entpb/entpb.pb.go was created which contains the generated Go structs for our entities.

Let's write a test that uses it to make sure everything is wired correctly. Create a new file named pb_test.go and write:

package main

import (
	"testing"

	"ent-grpc-example/ent/proto/entpb"
)

func TestUserProto(t *testing.T) {
	user := entpb.User{
		Name:         "rotemtam",
		EmailAddress: "rotemtam@example.com",
	}
	if user.GetName() != "rotemtam" {
		t.Fatal("expected user name to be rotemtam")
	}
	if user.GetEmailAddress() != "rotemtam@example.com" {
		t.Fatal("expected email address to be rotemtam@example.com")
	}
}

To run it:

go get -u ./... # install deps of the generated package
go test ./...

Hooray! The test passes. We have successfully generated working Go Protobuf structs from our Ent schema. Next, let's see how to automatically generate a working CRUD gRPC server from our schema.

Generating a Fully Working gRPC Server from our Schema#

Having Protobuf structs generated from our ent.Schema can be useful, but what we're really interested in is getting an actual server that can create, read, update, and delete entities from an actual database. To do that, we need to update just one line of code! When we annotate a schema with entproto.Service, we tell the entproto code-gen that we are interested in generating a gRPC service definition, from which protoc-gen-entgrpc will generate a service implementation. Edit ent/schema/user.go and modify the schema's Annotations:

func (User) Annotations() []schema.Annotation {
	return []schema.Annotation{
		entproto.Message(),
+		entproto.Service(), // <-- add this
	}
}

Now re-run code-generation:

go generate ./...

Observe some interesting changes in ent/proto/entpb:

ent/proto/entpb
├── entpb.pb.go
├── entpb.proto
├── entpb_grpc.pb.go
├── entpb_user_service.go
└── generate.go

First, entproto added a service definition to entpb.proto:

service UserService {
  rpc Create ( CreateUserRequest ) returns ( User );
  rpc Get ( GetUserRequest ) returns ( User );
  rpc Update ( UpdateUserRequest ) returns ( User );
  rpc Delete ( DeleteUserRequest ) returns ( google.protobuf.Empty );
}

In addition, two new files were created. The first, entpb_grpc.pb.go, contains the gRPC client stub and the interface definition. If you open the file, you will find in it (among many other things):

// UserServiceClient is the client API for UserService service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type UserServiceClient interface {
	Create(ctx context.Context, in *CreateUserRequest, opts ...grpc.CallOption) (*User, error)
	Get(ctx context.Context, in *GetUserRequest, opts ...grpc.CallOption) (*User, error)
	Update(ctx context.Context, in *UpdateUserRequest, opts ...grpc.CallOption) (*User, error)
	Delete(ctx context.Context, in *DeleteUserRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
}

The second file, entpb_user_service.go, contains a generated implementation of this interface. For example, here is the implementation of the Get method:

// Get implements UserServiceServer.Get
func (svc *UserService) Get(ctx context.Context, req *GetUserRequest) (*User, error) {
	get, err := svc.client.User.Get(ctx, int(req.GetId()))
	switch {
	case err == nil:
		return toProtoUser(get), nil
	case ent.IsNotFound(err):
		return nil, status.Errorf(codes.NotFound, "not found: %s", err)
	default:
		return nil, status.Errorf(codes.Internal, "internal error: %s", err)
	}
}

Not bad! Next, let's create a gRPC server that can serve requests to our service.

Creating the Server#

Create a new file cmd/server/main.go and write:

package main

import (
	"context"
	"log"
	"net"

	_ "github.com/mattn/go-sqlite3"

	"ent-grpc-example/ent"
	"ent-grpc-example/ent/proto/entpb"
	"google.golang.org/grpc"
)

func main() {
	// Initialize an ent client.
	client, err := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
	if err != nil {
		log.Fatalf("failed opening connection to sqlite: %v", err)
	}
	defer client.Close()

	// Run the migration tool (creating tables, etc).
	if err := client.Schema.Create(context.Background()); err != nil {
		log.Fatalf("failed creating schema resources: %v", err)
	}

	// Initialize the generated User service.
	svc := entpb.NewUserService(client)

	// Create a new gRPC server (you can wire multiple services to a single server).
	server := grpc.NewServer()

	// Register the User service with the server.
	entpb.RegisterUserServiceServer(server, svc)

	// Open port 5000 for listening to traffic.
	lis, err := net.Listen("tcp", ":5000")
	if err != nil {
		log.Fatalf("failed listening: %s", err)
	}

	// Listen for traffic indefinitely.
	if err := server.Serve(lis); err != nil {
		log.Fatalf("server ended: %s", err)
	}
}

Notice that we added an import of github.com/mattn/go-sqlite3, so we need to add it to our module:

go get -u github.com/mattn/go-sqlite3

Next, let's run the server while we write a client that will communicate with it:

go run -mod=mod ./cmd/server

Creating the Client#

Let's create a simple client that will make some calls to our server. Create a new file named cmd/client/main.go and write:

package main

import (
	"context"
	"fmt"
	"log"
	"math/rand"
	"time"

	"ent-grpc-example/ent/proto/entpb"
	"google.golang.org/grpc"
	"google.golang.org/grpc/status"
)

func main() {
	rand.Seed(time.Now().UnixNano())

	// Open a connection to the server.
	conn, err := grpc.Dial(":5000", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("failed connecting to server: %s", err)
	}
	defer conn.Close()

	// Create a User service Client on the connection.
	client := entpb.NewUserServiceClient(conn)

	// Ask the server to create a random User.
	ctx := context.Background()
	user := randomUser()
	created, err := client.Create(ctx, &entpb.CreateUserRequest{
		User: user,
	})
	if err != nil {
		se, _ := status.FromError(err)
		log.Fatalf("failed creating user: status=%s message=%s", se.Code(), se.Message())
	}
	log.Printf("user created with id: %d", created.Id)

	// On a separate RPC invocation, retrieve the user we saved previously.
	get, err := client.Get(ctx, &entpb.GetUserRequest{
		Id: created.Id,
	})
	if err != nil {
		se, _ := status.FromError(err)
		log.Fatalf("failed retrieving user: status=%s message=%s", se.Code(), se.Message())
	}
	log.Printf("retrieved user with id=%d: %v", get.Id, get)
}

func randomUser() *entpb.User {
	return &entpb.User{
		Name:         fmt.Sprintf("user_%d", rand.Int()),
		EmailAddress: fmt.Sprintf("user_%d@example.com", rand.Int()),
	}
}

Our client creates a connection to port 5000, where our server is listening, then issues a Create request to create a new user, and then issues a second Get request to retrieve it from the database. Let's run our client code:

go run ./cmd/client

Observe the output:

2021/03/18 10:42:58 user created with id: 1
2021/03/18 10:42:58 retrieved user with id=1: id:1 name:"user_730811260095307266" email_address:"user_7338662242574055998@example.com"

Amazing! With a few annotations on our schema, we used the super-powers of code generation to create a working gRPC server in no time!

Caveats and Limitations#

entproto is still at an experimental stage and lacks some basic functionality. For example, many applications will probably want a List or Find method on their service, but these are not yet supported. In addition, here are some other issues we plan to tackle in the near future:

  • Currently only "unique" edges are supported (O2O, O2M).
  • The generated "mutating" methods (Create/Update) currently set all fields, disregarding zero/null values and field nullability.
  • All fields are copied from the gRPC request to the ent client; support for configuring some fields to be unsettable via the service (by adding a field/edge annotation) is also planned.

Next Steps#

We believe that ent + gRPC can be a great way to build server applications in Go. For example, to set granular access control to the entities managed by our application, developers can already use Privacy Policies that work out-of-the-box with the gRPC integration. To run any arbitrary Go code on the different lifecycle events of entities, developers can utilize custom Hooks.
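For instance, a schema-level hook that logs every mutation on the User entity might look roughly like this (a sketch we added for illustration, assuming the context and log packages are imported in ent/schema/user.go):

// Hooks of the User. Logs every mutation before delegating to the next mutator.
func (User) Hooks() []ent.Hook {
	return []ent.Hook{
		func(next ent.Mutator) ent.Mutator {
			return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
				log.Printf("mutation: type=%s, op=%s", m.Type(), m.Op())
				return next.Mutate(ctx, m)
			})
		},
	}
}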

Do you want to build gRPC servers with ent? If you want some help setting up or want the integration to support your use case, please reach out to us via our Discussions Page on GitHub or in the #ent channel on the Gophers Slack.


Announcing Edge-field Support in v0.7.0

Over the past few months, there has been much discussion in the Ent project issues about adding support for the retrieval of the foreign key field when retrieving entities with One-to-One or One-to-Many edges. We are happy to announce that as of v0.7.0 ent supports this feature.

Before Edge-field Support#

Prior to this release, a user that wanted to retrieve the foreign-key field for an entity needed to use eager-loading. Suppose our schema looked like this:

// ent/schema/user.go:

// User holds the schema definition for the User entity.
type User struct {
	ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("name").
			Unique().
			NotEmpty(),
	}
}

// Edges of the User.
func (User) Edges() []ent.Edge {
	return []ent.Edge{
		edge.From("pets", Pet.Type).
			Ref("owner"),
	}
}

// ent/schema/pet.go

// Pet holds the schema definition for the Pet entity.
type Pet struct {
	ent.Schema
}

// Fields of the Pet.
func (Pet) Fields() []ent.Field {
	return []ent.Field{
		field.String("name").
			NotEmpty(),
	}
}

// Edges of the Pet.
func (Pet) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("owner", User.Type).
			Unique().
			Required(),
	}
}

The schema describes two related entities: User and Pet, with a One-to-Many edge between them: a user can own many pets and a pet can have one owner.

When retrieving pets from the data storage, it is common for developers to want to access the foreign-key field on the pet. However, because this field is created implicitly from the owner edge, it was not automatically accessible when retrieving an entity. To retrieve it from storage, a developer needed to do something like:

func Test(t *testing.T) {
	ctx := context.Background()
	c := enttest.Open(t, dialect.SQLite, "file:ent?mode=memory&cache=shared&_fk=1")
	defer c.Close()

	// Create the User.
	u := c.User.Create().
		SetName("rotem").
		SaveX(ctx)

	// Create the Pet.
	p := c.Pet.
		Create().
		SetOwner(u). // Associate with the user.
		SetName("donut").
		SaveX(ctx)

	petWithOwnerId := c.Pet.Query().
		Where(pet.ID(p.ID)).
		WithOwner(func(query *ent.UserQuery) {
			query.Select(user.FieldID)
		}).
		OnlyX(ctx)
	fmt.Println(petWithOwnerId.Edges.Owner.ID)
	// Output: 1
}

Aside from being very verbose, retrieving the pet with its owner this way is inefficient in terms of database queries. If we execute the query with the .Debug() modifier, we can see the DB queries ent generates to satisfy this call:

SELECT DISTINCT `pets`.`id`, `pets`.`name`, `pets`.`pet_owner` FROM `pets` WHERE `pets`.`id` = ? LIMIT 2
SELECT DISTINCT `users`.`id` FROM `users` WHERE `users`.`id` IN (?)

In this example, Ent first retrieves the Pet with an ID of 1, then redundantly fetches the id field from the users table for users with an ID of 1.
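For reference, the debug output above can be produced by running the same query through a debug client (a sketch, not part of the original test):

// Run the same query through a debug client to print the generated SQL.
petWithOwnerId := c.Debug().Pet.Query().
	Where(pet.ID(p.ID)).
	WithOwner(func(query *ent.UserQuery) {
		query.Select(user.FieldID)
	}).
	OnlyX(ctx)
_ = petWithOwnerId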

With Edge-field Support#

Edge-field support greatly simplifies this flow and improves its efficiency. With this feature, developers can define the foreign-key field as part of the schema's Fields(), and, by using the .Field(..) modifier on the edge definition, instruct Ent to expose and map the foreign column to this field. So, in our example schema, we would modify it to be:

// user.go stays the same.

// pet.go

// Fields of the Pet.
func (Pet) Fields() []ent.Field {
	return []ent.Field{
		field.String("name").
			NotEmpty(),
		field.Int("owner_id"), // <-- explicitly add the field we want to contain the FK
	}
}

// Edges of the Pet.
func (Pet) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("owner", User.Type).
			Field("owner_id"). // <-- tell ent which field holds the reference to the owner
			Unique().
			Required(),
	}
}

In order to update our client code we need to re-run code generation:

go generate ./...

We can now modify our query to be much simpler:

func Test(t *testing.T) {
	ctx := context.Background()
	c := enttest.Open(t, dialect.SQLite, "file:ent?mode=memory&cache=shared&_fk=1")
	defer c.Close()

	u := c.User.Create().
		SetName("rotem").
		SaveX(ctx)
	p := c.Pet.Create().
		SetOwner(u).
		SetName("donut").
		SaveX(ctx)

	petWithOwnerId := c.Pet.GetX(ctx, p.ID) // <-- Simply retrieve the Pet.
	fmt.Println(petWithOwnerId.OwnerID)
	// Output: 1
}

Running with the .Debug() modifier we can see that the DB queries make more sense now:

SELECT DISTINCT `pets`.`id`, `pets`.`name`, `pets`.`owner_id` FROM `pets` WHERE `pets`.`id` = ? LIMIT 2

Hooray 🎉!

Migrating Existing Schemas to Edge Fields#

If you are already using Ent with an existing schema, you may already have O2M relations whose foreign-key columns exist in your database. Depending on how you configured your schema, chances are that they are stored in a column with a different name than the field you are now adding. For instance, you may want to create an owner_id field, but Ent auto-created the foreign-key column as pet_owner.

To check what column name Ent is using for this field you can look in the ./ent/migrate/schema.go file:

PetsColumns = []*schema.Column{
	{Name: "id", Type: field.TypeInt, Increment: true},
	{Name: "name", Type: field.TypeString},
	{Name: "pet_owner", Type: field.TypeInt, Nullable: true}, // <-- this is our FK
}

To allow for a smooth migration, you must explicitly tell Ent to keep using the existing column name. You can do this by using the StorageKey modifier (either on the field or on the edge). For example:

// In schema/pet.go:

// Fields of the Pet.
func (Pet) Fields() []ent.Field {
	return []ent.Field{
		field.String("name").
			NotEmpty(),
		field.Int("owner_id").
			StorageKey("pet_owner"), // <-- explicitly set the column name
	}
}
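The StorageKey modifier can also live on the edge instead of the field; a rough sketch of that variant (our assumption of the equivalent edge-side configuration, so double-check it against the ent edge documentation before relying on it) would be:

// Edges of the Pet: keep the existing pet_owner column by setting the
// storage key on the edge rather than on the field.
func (Pet) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("owner", User.Type).
			Field("owner_id").
			StorageKey(edge.Column("pet_owner")). // <-- existing FK column name
			Unique().
			Required(),
	}
}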

In the near future we plan to implement Schema Versioning, which will store the history of schema changes alongside the code. Having this information will allow ent to support such migrations in an automatic and predictable way.

Wrapping Up#

Edge-field support is readily available and can be installed by running go get -u entgo.io/ent@v0.7.0.

Many thanks 🙏 to all the good people who took the time to give feedback and helped design this feature properly: Alex Snast, Ruben de Vries, Marwan Sulaiman, Andy Day, Sebastian Fekete and Joe Harvey.


Introducing ent

The State of Go at Facebook Connectivity Tel Aviv#

20 months ago, after around 5 years of working with Go at a few different companies, I joined the Facebook Connectivity (FBC) team in Tel Aviv. I joined a team that was starting a new project, and we needed to choose a language for it. We compared several languages and decided to go with Go.

Since then, Go has spread to other FBC projects and has been very successful, even with only about 15 Go engineers in Tel Aviv. New services are now written in Go.

The Motivation for Writing a New ORM in Go#

My 5 years at Facebook were spent mostly on infrastructure tooling and microservices, without much data-modeling work. One service that needed to interact with a SQL database used an existing open-source solution, but projects with complex data models were built in a different language with a mature ORM - for example, Python with SQLAlchemy.

At Facebook we like to think about our data model in graph concepts, and we have had a good experience using this model internally. Go did not have a proper graph-based ORM, so we decided to write one based on the following principles:

  • Schema as code - types, relations and constraints should be defined in Go code (not struct tags) and validated with a CLI tool. We have good internal experience with similar tools at Facebook.
  • Statically-typed, explicit API through code generation - using interface{} everywhere in an API hurts developer productivity, especially for newcomers to a project.
  • Queries, aggregations and graph traversals should be simple - developers don't want to deal with raw SQL queries or SQL terms.
  • Predicates should be statically typed - no strings everywhere (see the sketch after this list).
  • Full support for context.Context - this helps us get full visibility in our tracing and logging systems, and it is important for other features such as cancellation.
  • Storage agnostic - we keep the storage layer swappable using code-generation templates, so developers can start with Gremlin (AWS Neptune) for early development and later switch to MySQL.
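To illustrate the statically-typed predicates point, here is a rough sketch (the client, user package, and ctx are assumed to come from a hypothetical generated User schema, not from this post):

// Query all users whose name starts with "a" using generated,
// statically-typed predicates - no raw SQL strings involved.
users, err := client.User.
	Query().
	Where(user.NameHasPrefix("a")).
	All(ctx)
if err != nil {
	log.Fatalf("failed querying users: %v", err)
}
log.Println(users)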

Open-sourcing ent#

ent is an entity framework (ORM) for Go built around the principles above. ent makes it easy to define any data model or graph structure in Go code; the schema configuration is validated by entc (the ent codegen), which generates an idiomatic, statically-typed API that keeps developers productive and happy. It supports MySQL, MariaDB, PostgreSQL, SQLite, and Gremlin-based graph databases.

Today we are officially open-sourcing ent, and we invite you to get started: entgo.io/docs/getting-started