
· 3 min read

Migrating to a new ORM is not an easy process, and the transition cost can be prohibitive for many organizations. As much as we developers are enamoured with "Shiny New Things", the truth is that we rarely get a chance to work on a truly "green-field" project. For most of our careers, we operate in contexts where many technical and business constraints (a.k.a. legacy systems) dictate and limit our options for moving forward. Developers of new technologies that want to succeed must offer interoperability features and integration paths that help organizations seamlessly transition to a new way of solving an existing problem.

To help lower the cost of transitioning to Ent (or simply experimenting with it), we have started the "Schema Import Initiative" to support the many use cases for generating Ent schemas from external resources. The centrepiece of this effort is the schemast package (source code, docs), which enables developers to easily write programs that generate and manipulate Ent schemas. Using this package, developers can program against a high-level API, relieving them of the need to worry about code parsing and AST manipulation.
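To give a feel for the API, here is a minimal sketch that loads an existing schema directory, upserts a User type with a single field, and writes the result back to disk. It is based on the package's documented entry points (Load, Mutate, UpsertSchema, Print); exact signatures may differ between versions, so treat it as an illustration rather than canonical usage:

package main

import (
    "log"

    "entgo.io/contrib/schemast"
    "entgo.io/ent"
    "entgo.io/ent/schema/field"
)

func main() {
    // Load the existing schema directory into a schemast context.
    ctx, err := schemast.Load("./ent/schema")
    if err != nil {
        log.Fatalf("failed loading schema directory: %v", err)
    }
    // Create or update a "User" type with a single "name" field.
    if err := schemast.Mutate(ctx, &schemast.UpsertSchema{
        Name: "User",
        Fields: []ent.Field{
            field.String("name"),
        },
    }); err != nil {
        log.Fatalf("failed mutating schema: %v", err)
    }
    // Write the modified schema files back to disk.
    if err := ctx.Print("./ent/schema"); err != nil {
        log.Fatalf("failed printing schema: %v", err)
    }
}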

Protobuf Import Support

The first project to use this new API is protoc-gen-ent, a protoc plugin that generates Ent schemas from .proto files (docs). Organizations that have existing schemas defined in Protobuf can use this tool to generate Ent code automatically. For example, take a simple message definition:

syntax = "proto3";

package entpb;

option go_package = "github.com/yourorg/project/ent/proto/entpb";

message User {
  string name = 1;
  string email_address = 2;
}

And setting the ent.schema.gen option to true:

syntax = "proto3";

package entpb;

+import "options/opts.proto";

option go_package = "github.com/yourorg/project/ent/proto/entpb";

message User {
+  option (ent.schema).gen = true; // <-- tell protoc-gen-ent you want to generate a schema from this message
  string name = 1;
  string email_address = 2;
}

Developers can invoke the standard protoc (protobuf compiler) command to use this plugin:

protoc -I=proto/ --ent_out=. --ent_opt=schemadir=./schema proto/entpb/user.proto

To generate Ent schemas from these definitions:

package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/field"
)

type User struct {
    ent.Schema
}

func (User) Fields() []ent.Field {
    return []ent.Field{field.String("name"), field.String("email_address")}
}

func (User) Edges() []ent.Edge {
    return nil
}

To start using protoc-gen-ent today, and read about all of the different configuration options, head over to the documentation!

Join the Schema Import Initiative

Do you have schemas defined elsewhere that you would like to automatically import into Ent? With the schemast package, it is easier than ever to write the tool you need to do that. Not sure how to start? Want to collaborate with the community in planning and building out your idea? Reach out to our great community via our Discord server, Slack channel, or start a discussion on GitHub!

For more Ent news and updates:

· 14 min read

ent + gRPC

Introduction

As software engineering organizations grow, there are many benefits to defining entity schemas in a centralized, language-neutral format. In practice, many organizations use Protocol Buffers as their interface definition language (IDL). In addition, gRPC, a Protobuf-based RPC framework modeled after Stubby (used internally at Google), is becoming increasingly popular because of its efficiency and code-generation capabilities.

As an IDL, gRPC does not prescribe how to implement the data access layer, so implementations vary greatly. Ent is a natural choice for building the data access layer of Go applications, so integrating the two technologies holds a lot of potential.

Today we are announcing an experimental version of entproto, a Go package and command-line tool that adds Protobuf and gRPC support for ent users. With entproto, developers can set up a fully working CRUD gRPC server in a few minutes. In this post, we will show how to do exactly that.

Setting Up

The latest version of this tutorial is available on GitHub, and you can clone it if you prefer.

Let's start by initializing a new Go module for our project:

mkdir ent-grpc-example
cd ent-grpc-example
go mod init ent-grpc-example

Next, we use go run to invoke the code generator and initialize the schema:

go run -mod=mod entgo.io/ent/cmd/ent new User

Our directory structure should now look like this:

.
├── ent
│   ├── generate.go
│   └── schema
│   └── user.go
├── go.mod
└── go.sum

Next, let's add the entproto package to our project:

go get -u entgo.io/contrib/entproto

Next, we will define the schema for the User entity. Open ent/schema/user.go and write:

package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/field"
)

// User holds the schema definition for the User entity.
type User struct {
    ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
    return []ent.Field{
        field.String("name").
            Unique(),
        field.String("email_address").
            Unique(),
    }
}

In this step, we added two uniquely-constrained fields to the User entity: name and email_address. The ent.Schema is just the definition of the schema; to generate usable code from it, we need to run Ent's code-generation tool. Run:

go generate ./...

Notice that a bunch of files were generated from our schema definition:

├── ent
│   ├── client.go
│   ├── config.go
// .... many more
│   ├── user
│   ├── user.go
│   ├── user_create.go
│   ├── user_delete.go
│   ├── user_query.go
│   └── user_update.go
├── go.mod
└── go.sum

At this point, we could open a connection to a database, run the migration to create the User table, and start reading and writing data. That topic is covered in the Setup Tutorial, so let's skip it for now and learn how to generate Protobuf definitions and a gRPC server from our schema.

Generating Go Protobufs with entproto

Because ent and Protobuf schemas are not identical, we must supply some annotations on our schema to tell entproto how to generate Protobuf definitions (called "messages" in protobuf terminology).

The first thing we need to do is add an entproto.Message() annotation. This is how we opt in to Protobuf schema generation: we don't necessarily want to turn every schema entity into a proto message or gRPC service definition, and this annotation gives us that control. Add it to the end of ent/schema/user.go (this also requires importing entgo.io/contrib/entproto and entgo.io/ent/schema in that file):

func (User) Annotations() []schema.Annotation {
    return []schema.Annotation{
        entproto.Message(),
    }
}

Next, we need to annotate each field and assign it a field number. Recall that when defining a protobuf message type, every field must be assigned a unique number. To do that, we add an entproto.Field annotation to each field. Update the Fields in ent/schema/user.go:

// Fields of the User.
func (User) Fields() []ent.Field {
    return []ent.Field{
        field.String("name").
            Unique().
            Annotations(
                entproto.Field(2),
            ),
        field.String("email_address").
            Unique().
            Annotations(
                entproto.Field(3),
            ),
    }
}

Notice that we did not start our field numbers from 1. This is because ent implicitly creates the ID field for the entity, and that field is automatically assigned number 1. We can now generate our protobuf message type definitions. To do that, we will add a go:generate directive to ent/generate.go that invokes the entproto command-line tool. The file should now look like this:

package ent

//go:generate go run -mod=mod entgo.io/ent/cmd/ent generate ./schema
//go:generate go run -mod=mod entgo.io/contrib/entproto/cmd/entproto -path ./schema

Let's re-generate the code:

go generate ./...

Observe that a new directory, ent/proto, was created; it will contain all protobuf-related generated code. It currently contains:

ent/proto
└── entpb
├── entpb.proto
└── generate.go

Two files were created. Let's look at their contents:

// Code generated by entproto. DO NOT EDIT.
syntax = "proto3";

package entpb;

option go_package = "ent-grpc-example/ent/proto/entpb";

message User {
  int32 id = 1;

  string name = 2;

  string email_address = 3;
}

Not bad! A new .proto file was created containing a message type definition that maps to our User schema!

package entpb
//go:generate protoc -I=.. --go_out=.. --go-grpc_out=.. --go_opt=paths=source_relative --go-grpc_opt=paths=source_relative --entgrpc_out=.. --entgrpc_opt=paths=source_relative,schema_path=../../schema entpb/entpb.proto

A new generate.go file was created with an invocation to protoc, the protobuf code generator, instructing it how to generate Go code from our .proto file. For this command to work, we must first install protoc as well as three protobuf plugins: protoc-gen-go (which generates Go Protobuf structs), protoc-gen-go-grpc (which generates Go gRPC service interfaces and clients), and protoc-gen-entgrpc (which generates an implementation of the service interface). If you do not have these installed, install them before proceeding.
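A typical installation of the three plugins might look roughly like the following sketch; the module paths and versions are assumptions based on each project's documentation, and protoc itself is installed separately (for example, through your OS package manager or the Protobuf releases page):

go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
go install entgo.io/contrib/entproto/cmd/protoc-gen-entgrpc@latest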

After installing these dependencies, we can re-run code-generation:

go generate ./...

Observe that a new file named ent/proto/entpb/entpb.pb.go was created which contains the generated Go structs for our entities.

Let's write a test that uses it to make sure everything is wired correctly. Create a new file named pb_test.go and write:

package main

import (
    "testing"

    "ent-grpc-example/ent/proto/entpb"
)

func TestUserProto(t *testing.T) {
    user := entpb.User{
        Name:         "rotemtam",
        EmailAddress: "rotemtam@example.com",
    }
    if user.GetName() != "rotemtam" {
        t.Fatal("expected user name to be rotemtam")
    }
    if user.GetEmailAddress() != "rotemtam@example.com" {
        t.Fatal("expected email address to be rotemtam@example.com")
    }
}

To run it:

go get -u ./... # install deps of the generated package
go test ./...

Awesome! The test passes. We have successfully generated working Go Protobuf structs from our Ent schema. Next, let's see how to automatically generate a working CRUD gRPC server from our schema.

Generating a Fully Working gRPC Server from our Schema

Having Protobuf structs generated from our ent.Schema can be useful, but what we're really interested in is getting an actual server that can create, read, update, and delete entities from an actual database. To do that, we need to update just one line of code! When we annotate a schema with entproto.Service, we tell the entproto code-gen that we are interested in generating a gRPC service definition, and protoc-gen-entgrpc will read it and generate a service implementation. Edit ent/schema/user.go and modify the schema's Annotations:

func (User) Annotations() []schema.Annotation {
    return []schema.Annotation{
        entproto.Message(),
+       entproto.Service(), // <-- add this
    }
}

Now re-run code-generation:

go generate ./...

Observe some interesting changes in ent/proto/entpb:

ent/proto/entpb
├── entpb.pb.go
├── entpb.proto
├── entpb_grpc.pb.go
├── entpb_user_service.go
└── generate.go

First, entproto added a service definition to entpb.proto:

service UserService {
  rpc Create ( CreateUserRequest ) returns ( User );

  rpc Get ( GetUserRequest ) returns ( User );

  rpc Update ( UpdateUserRequest ) returns ( User );

  rpc Delete ( DeleteUserRequest ) returns ( google.protobuf.Empty );
}

In addition, two new files were created. The first, entpb_grpc.pb.go, contains the gRPC client stub and the interface definition. If you open the file, you will find in it (among many other things):

// UserServiceClient is the client API for UserService service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type UserServiceClient interface {
    Create(ctx context.Context, in *CreateUserRequest, opts ...grpc.CallOption) (*User, error)
    Get(ctx context.Context, in *GetUserRequest, opts ...grpc.CallOption) (*User, error)
    Update(ctx context.Context, in *UpdateUserRequest, opts ...grpc.CallOption) (*User, error)
    Delete(ctx context.Context, in *DeleteUserRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
}

The second file, entpb_user_service.go, contains a generated implementation of this interface. For example, here is the implementation of the Get method:

// Get implements UserServiceServer.Get
func (svc *UserService) Get(ctx context.Context, req *GetUserRequest) (*User, error) {
    get, err := svc.client.User.Get(ctx, int(req.GetId()))
    switch {
    case err == nil:
        return toProtoUser(get), nil
    case ent.IsNotFound(err):
        return nil, status.Errorf(codes.NotFound, "not found: %s", err)
    default:
        return nil, status.Errorf(codes.Internal, "internal error: %s", err)
    }
}

Not bad! Next, let's create a gRPC server that can serve requests to our service.

Creating the Server

Create a new file cmd/server/main.go and write:

package main

import (
    "context"
    "log"
    "net"

    _ "github.com/mattn/go-sqlite3"

    "ent-grpc-example/ent"
    "ent-grpc-example/ent/proto/entpb"
    "google.golang.org/grpc"
)

func main() {
    // Initialize an ent client.
    client, err := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
    if err != nil {
        log.Fatalf("failed opening connection to sqlite: %v", err)
    }
    defer client.Close()

    // Run the migration tool (creating tables, etc).
    if err := client.Schema.Create(context.Background()); err != nil {
        log.Fatalf("failed creating schema resources: %v", err)
    }

    // Initialize the generated User service.
    svc := entpb.NewUserService(client)

    // Create a new gRPC server (you can wire multiple services to a single server).
    server := grpc.NewServer()

    // Register the User service with the server.
    entpb.RegisterUserServiceServer(server, svc)

    // Open port 5000 for listening to traffic.
    lis, err := net.Listen("tcp", ":5000")
    if err != nil {
        log.Fatalf("failed listening: %s", err)
    }

    // Listen for traffic indefinitely.
    if err := server.Serve(lis); err != nil {
        log.Fatalf("server ended: %s", err)
    }
}

Notice that we added an import of github.com/mattn/go-sqlite3, so we need to add it to our module:

go get -u github.com/mattn/go-sqlite3

Next, let's run the server, while we write a client that will communicate with it:

go run -mod=mod ./cmd/server

Creating the Client

Let's create a simple client that will make some calls to our server. Create a new file named cmd/client/main.go and write:

package main

import (
    "context"
    "fmt"
    "log"
    "math/rand"
    "time"

    "ent-grpc-example/ent/proto/entpb"
    "google.golang.org/grpc"
    "google.golang.org/grpc/status"
)

func main() {
    rand.Seed(time.Now().UnixNano())

    // Open a connection to the server.
    conn, err := grpc.Dial(":5000", grpc.WithInsecure())
    if err != nil {
        log.Fatalf("failed connecting to server: %s", err)
    }
    defer conn.Close()

    // Create a User service Client on the connection.
    client := entpb.NewUserServiceClient(conn)

    // Ask the server to create a random User.
    ctx := context.Background()
    user := randomUser()
    created, err := client.Create(ctx, &entpb.CreateUserRequest{
        User: user,
    })
    if err != nil {
        se, _ := status.FromError(err)
        log.Fatalf("failed creating user: status=%s message=%s", se.Code(), se.Message())
    }
    log.Printf("user created with id: %d", created.Id)

    // On a separate RPC invocation, retrieve the user we saved previously.
    get, err := client.Get(ctx, &entpb.GetUserRequest{
        Id: created.Id,
    })
    if err != nil {
        se, _ := status.FromError(err)
        log.Fatalf("failed retrieving user: status=%s message=%s", se.Code(), se.Message())
    }
    log.Printf("retrieved user with id=%d: %v", get.Id, get)
}

func randomUser() *entpb.User {
    return &entpb.User{
        Name:         fmt.Sprintf("user_%d", rand.Int()),
        EmailAddress: fmt.Sprintf("user_%d@example.com", rand.Int()),
    }
}

Our client creates a connection to port 5000, where our server is listening, then issues a Create request to create a new user, and then issues a second Get request to retrieve it from the database. Let's run our client code:

go run ./cmd/client

Observe the output:

2021/03/18 10:42:58 user created with id: 1
2021/03/18 10:42:58 retrieved user with id=1: id:1 name:"user_730811260095307266" email_address:"user_7338662242574055998@example.com"

Amazing! With a few annotations on our schema, we used the super-powers of code generation to create a working gRPC server in no time!

Caveats and Limitations

entproto is still experimental and is missing some basic functionality. For example, many applications will probably need a List or Find method on their services, but these are not yet supported. In addition, we plan to tackle a few other issues in the near future:

  • Currently only "unique" edges (O2O, O2M) are supported.
  • The generated "mutating" methods (Create/Update) currently set all fields, without accounting for zero/null values or nullable fields.
  • All fields are copied from the gRPC request to the ent client; we plan to add field/edge annotations so that certain fields can be marked as not modifiable through the service.

Next Steps

We believe ent + gRPC can be a great way to build server applications in Go. For example, to enforce fine-grained access control over the entities managed by our application, developers can use privacy policies, which work with the gRPC integration out of the box. To run arbitrary Go code on the different lifecycle events of entities, developers can use custom hooks (a small sketch follows below).
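As an illustration only (this is not code generated by entproto, just a minimal sketch of Ent's schema hooks), a hook on the User schema that logs every mutation might look like this:

package schema

import (
    "context"
    "log"

    "entgo.io/ent"
)

// Hooks of the User. Each hook wraps the mutator chain that Ent invokes
// for every create/update/delete operation on User entities.
func (User) Hooks() []ent.Hook {
    return []ent.Hook{
        func(next ent.Mutator) ent.Mutator {
            return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
                log.Printf("User mutation: op=%v", m.Op())
                return next.Mutate(ctx, m)
            })
        },
    }
}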

Do you want to build a gRPC server with ent? If you want some help setting up or want the integration to support your use case, please reach out to us via our Discussions Page on GitHub, the #ent channel on the Gophers Slack, or our Discord server.

For more Ent news and updates:

· 5 min read

Over the past few months, there has been much discussion in the Ent project issues about adding support for the retrieval of the foreign key field when retrieving entities with One-to-One or One-to-Many edges. We are happy to announce that as of v0.7.0 ent supports this feature.

Before Edge-field Support

Prior to merging this branch, a user that wanted to retrieve the foreign-key field for an entity needed to use eager-loading. Suppose our schema looked like this:

// ent/schema/user.go:

// User holds the schema definition for the User entity.
type User struct {
    ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
    return []ent.Field{
        field.String("name").
            Unique().
            NotEmpty(),
    }
}

// Edges of the User.
func (User) Edges() []ent.Edge {
    return []ent.Edge{
        edge.From("pets", Pet.Type).
            Ref("owner"),
    }
}

// ent/schema/pet.go

// Pet holds the schema definition for the Pet entity.
type Pet struct {
    ent.Schema
}

// Fields of the Pet.
func (Pet) Fields() []ent.Field {
    return []ent.Field{
        field.String("name").
            NotEmpty(),
    }
}

// Edges of the Pet.
func (Pet) Edges() []ent.Edge {
    return []ent.Edge{
        edge.To("owner", User.Type).
            Unique().
            Required(),
    }
}

The schema describes two related entities: User and Pet, with a One-to-Many edge between them: a user can own many pets and a pet can have one owner.

When retrieving pets from the data storage, it is common for developers to want to access the foreign-key field on the pet. However, because this field is created implicitly from the owner edge, it was not automatically accessible when retrieving an entity. To retrieve it from storage, a developer needed to do something like this:

func Test(t *testing.T) {
    ctx := context.Background()
    c := enttest.Open(t, dialect.SQLite, "file:ent?mode=memory&cache=shared&_fk=1")
    defer c.Close()

    // Create the User
    u := c.User.Create().
        SetName("rotem").
        SaveX(ctx)

    // Create the Pet
    p := c.Pet.
        Create().
        SetOwner(u). // Associate with the user
        SetName("donut").
        SaveX(ctx)

    petWithOwnerId := c.Pet.Query().
        Where(pet.ID(p.ID)).
        WithOwner(func(query *ent.UserQuery) {
            query.Select(user.FieldID)
        }).
        OnlyX(ctx)
    fmt.Println(petWithOwnerId.Edges.Owner.ID)
    // Output: 1
}

Aside from being very verbose, retrieving the pet with its owner this way was inefficient in terms of database queries. If we execute the query with the .Debug() modifier, we can see the DB queries ent generates to satisfy this call:

SELECT DISTINCT `pets`.`id`, `pets`.`name`, `pets`.`pet_owner` FROM `pets` WHERE `pets`.`id` = ? LIMIT 2 
SELECT DISTINCT `users`.`id` FROM `users` WHERE `users`.`id` IN (?)

In this example, Ent first retrieves the Pet with an ID of 1, then redundantly fetches the id field from the users table for users with an ID of 1.
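For reference, routing the same query through a debug client is what prints these statements; a minimal sketch using the same test setup (the generated client's Debug() method returns a client that logs every SQL statement it executes):

petWithOwnerId := c.Debug().Pet.Query().
    Where(pet.ID(p.ID)).
    WithOwner(func(query *ent.UserQuery) {
        query.Select(user.FieldID)
    }).
    OnlyX(ctx)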

With Edge-field Support

Edge-field support greatly simplifies this flow and improves its efficiency. With this feature, developers can define the foreign-key field as part of the schema's Fields(), and by using the .Field(..) modifier on the edge definition, instruct Ent to expose and map the foreign column to this field. So, in our example schema, we would modify it to be:

// user.go stays the same

// pet.go
// Fields of the Pet.
func (Pet) Fields() []ent.Field {
    return []ent.Field{
        field.String("name").
            NotEmpty(),
        field.Int("owner_id"), // <-- explicitly add the field we want to contain the FK
    }
}

// Edges of the Pet.
func (Pet) Edges() []ent.Edge {
    return []ent.Edge{
        edge.To("owner", User.Type).
            Field("owner_id"). // <-- tell ent which field holds the reference to the owner
            Unique().
            Required(),
    }
}

In order to update our client code we need to re-run code generation:

go generate ./...

We can now modify our query to be much simpler:

func Test(t *testing.T) {
    ctx := context.Background()
    c := enttest.Open(t, dialect.SQLite, "file:ent?mode=memory&cache=shared&_fk=1")
    defer c.Close()

    u := c.User.Create().
        SetName("rotem").
        SaveX(ctx)

    p := c.Pet.Create().
        SetOwner(u).
        SetName("donut").
        SaveX(ctx)

    petWithOwnerId := c.Pet.GetX(ctx, p.ID) // <-- Simply retrieve the Pet

    fmt.Println(petWithOwnerId.OwnerID)
    // Output: 1
}

Running with the .Debug() modifier, we can see that the DB queries make more sense now:

SELECT DISTINCT `pets`.`id`, `pets`.`name`, `pets`.`owner_id` FROM `pets` WHERE `pets`.`id` = ? LIMIT 2

Hooray 🎉!

Migrating Existing Schemas to Edge Fields

If you are already using Ent with an existing schema, you may have O2M relations whose foreign-key columns already exist in your database. Depending on how you configured your schema, chances are that they are stored in a column with a different name than the field you are now adding. For instance, you may want to create an owner_id field, but Ent auto-created the foreign-key column as pet_owner.

To check what column name Ent is using for this field you can look in the ./ent/migrate/schema.go file:

PetsColumns = []*schema.Column{
    {Name: "id", Type: field.TypeInt, Increment: true},
    {Name: "name", Type: field.TypeString},
    {Name: "pet_owner", Type: field.TypeInt, Nullable: true}, // <-- this is our FK
}

To allow for a smooth migration, you must explicitly tell Ent to keep using the existing column name. You can do this by using the StorageKey modifier (either on the field or on the edge). For example:

// In schema/pet.go:

// Fields of the Pet.
func (Pet) Fields() []ent.Field {
    return []ent.Field{
        field.String("name").
            NotEmpty(),
        field.Int("owner_id").
            StorageKey("pet_owner"), // <-- explicitly set the column name
    }
}

In the near future we plan to implement Schema Versioning, which will store the history of schema changes alongside the code. Having this information will allow ent to support such migrations in an automatic and predictable way.

Wrapping Up

Edge-field support is readily available and can be installed by go get -u entgo.io/ent@v0.7.0.

Many thanks 🙏 to all the good people who took the time to give feedback and helped design this feature properly: Alex Snast, Ruben de Vries, Marwan Sulaiman, Andy Day, Sebastian Fekete and Joe Harvey.

For more Ent news and updates:

· 4 min read

The State of Go at Facebook Connectivity Tel Aviv

Twenty months ago, after about five years of Go development experience (some of it at companies), I joined the Facebook Connectivity (FBC) team in Tel Aviv. I joined a team that was starting a new project, and we needed to choose a language for the task. We compared several languages and ultimately decided to use Go.

Since then, Go has continued to spread to other FBC projects and has been a great success, with some 15 Go engineers in Tel Aviv alone. New services are now written in Go.

The Motivation for Writing a New ORM in Go

My five years at Facebook were spent mostly on infrastructure tooling and microservices, without much data-modeling work. Services that needed to interact with an SQL database used one of the existing open-source solutions, but projects with complicated data models were written in another language with a robust ORM, for example Python with SQLAlchemy.

At Facebook, we like to think about our data models in graph concepts, and we have had good experience with this model internally. Since Go did not have a proper graph-based ORM, we decided to write one, guided by the following principles:

  • Schema as code - defining types, relations and constraints should be done in Go code (not struct tags) and validated using a CLI tool. We have good internal experience with similar tools at Facebook.
  • A statically typed, explicit API via code generation - an API littered with interface{}s hurts developer productivity, especially for people new to a project (see the sketch after this list).
  • Queries, aggregations and graph traversals should be simple - developers don't want to deal with raw SQL queries or SQL terms.
  • Predicates should be statically typed - no strings everywhere.
  • Full support for context.Context - this helps us get full visibility in our tracing and logging systems, and it's important for other features such as cancellation.
  • Storage-agnostic - we tried to keep the storage layer dynamic using code-generation templates, so developers can start out with Gremlin (AWS Neptune) and later switch to MySQL.
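To make the last few principles concrete, here is a minimal sketch of what the generated, statically typed API looks like in practice; the entity, field, and import paths below are illustrative assumptions rather than code from a real project:

package main

import (
    "context"
    "log"

    "your-project/ent"      // hypothetical generated package
    "your-project/ent/user" // hypothetical generated predicates package
)

func listUsers(ctx context.Context, client *ent.Client) {
    // Predicates are typed functions, not strings, and every call takes a context.
    users, err := client.User.Query().
        Where(user.NameHasPrefix("a")).
        All(ctx)
    if err != nil {
        log.Fatalf("failed querying users: %v", err)
    }
    for _, u := range users {
        log.Println(u.Name)
    }
}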

Open-sourcing ent

ent is an entity framework (ORM) for Go built with these principles in mind. ent makes it possible to define any data model or graph structure easily in Go code; the schema configuration is verified by entc (the ent codegen), which generates an idiomatic, statically typed API that keeps developers productive and happy. It supports MySQL, MariaDB, PostgreSQL, SQLite, and Gremlin-based graph databases.

Today, we are officially open-sourcing ent, and we invite you to get started: entgo.io/docs/getting-starting