
· 1 min read

GraphQL is a query language for HTTP APIs, providing a statically-typed interface to conveniently represent today's complex data hierarchies. One way to use GraphQL is to import a library implementing a GraphQL server to which one registers custom resolvers implementing the database interface. An alternative way is to use a GraphQL cloud service to implement the GraphQL server and register serverless cloud functions as resolvers. Among the many benefits of cloud services, one of the biggest practical advantages is the resolvers' independence and composability. For example, we can write one resolver to a relational database and another to a search database.

We consider such a kind of setup using Amazon Web Services (AWS) in the following. In particular, we use AWS AppSync as the GraphQL cloud service and AWS Lambda to run a relational database resolver, which we implement using Go with Ent as the entity framework. Compared to Node.js, the most popular runtime for AWS Lambda, Go offers faster start times, higher performance, and, from my point of view, an improved developer experience. As an additional complement, Ent presents an innovative approach towards type-safe access to relational databases, which, in my opinion, is unmatched in the Go ecosystem. All in all, running Ent with AWS Lambda as AWS AppSync resolvers is an extremely powerful setup for facing today's demanding API requirements.

In the next sections, we set up GraphQL in AWS AppSync and the AWS Lambda function running Ent. Subsequently, we propose a Go implementation integrating Ent and the AWS Lambda event handler, followed by performing a quick test of the Ent function. Finally, we register it as a data source to our AWS AppSync API and configure the resolvers, which define the mapping from GraphQL requests to AWS Lambda events. Be aware that this tutorial requires an AWS account and the URL to a publicly-accessible Postgres database, which may incur costs.

Setting up AWS AppSync schema

To set up the GraphQL schema in AWS AppSync, sign in to your AWS account and select the AppSync service through the navbar. The landing page of the AppSync service should show a "Create API" button, which you may click to arrive at the "Getting Started" page:

Screenshot of getting started with AWS AppSync from scratch

Getting started from scratch with AWS AppSync

In the top panel reading "Customize your API or import from Amazon DynamoDB" select the option "Build from scratch" and click the "Start" button belonging to the panel. You should now see a form where you may insert the API name. For the present tutorial, we type "Todo", see the screenshot below, and click the "Create" button.

Screenshot of creating a new AWS AppSync API resource

Creating a new API resource in AWS AppSync

After creating the AppSync API, you should see a landing page showing a panel to define the schema, a panel to query the API, and a panel on integrating AppSync into your app as captured in the screenshot below.

Screenshot of the landing page of the AWS AppSync API

Landing page of the AWS AppSync API

Click the "Edit Schema" button in the first panel and replace the previous schema with the following GraphQL schema:

input AddTodoInput {
  title: String!
}

type AddTodoOutput {
  todo: Todo!
}

type Mutation {
  addTodo(input: AddTodoInput!): AddTodoOutput!
  removeTodo(input: RemoveTodoInput!): RemoveTodoOutput!
}

type Query {
  todos: [Todo!]!
  todo(id: ID!): Todo
}

input RemoveTodoInput {
  todoId: ID!
}

type RemoveTodoOutput {
  todo: Todo!
}

type Todo {
  id: ID!
  title: String!
}

schema {
  query: Query
  mutation: Mutation
}

After replacing the schema, a short validation runs and you should be able to click the "Save Schema" button on the top right corner and find yourself with the following view:

Screenshot AWS AppSync: Final GraphQL schema for AWS AppSync API

Final GraphQL schema of AWS AppSync API

If we sent GraphQL requests to our AppSync API, the API would return errors as no resolvers have been attached to the schema. We will configure the resolvers after deploying the Ent function via AWS Lambda.

Explaining the present GraphQL schema in detail is beyond the scope of this tutorial. In short, the GraphQL schema implements a list todos operation via Query.todos, a single read todo operation via Query.todo, a create todo operation via Mutation.addTodo, and a delete operation via Mutation.removeTodo. The GraphQL API is similar to a simple REST API design of a /todos resource, where we would use GET /todos, GET /todos/:id, POST /todos, and DELETE /todos/:id. For details on the GraphQL schema design, e.g., the arguments and return types of the Query and Mutation objects, I follow the practices from the GitHub GraphQL API.

Setting up AWS Lambda

With the AppSync API in place, our next stop is the AWS Lambda function to run Ent. For this, we navigate to the AWS Lambda service through the navbar, which leads us to the landing page of the AWS Lambda service listing our functions:

Screenshot of AWS Lambda landing page listing functions

AWS Lambda landing page showing functions.

We click the "Create function" button on the top right and select "Author from scratch" in the upper panel. Furthermore, we name the function "ent", set the runtime to "Go 1.x", and click the "Create function" button at the bottom. We should then find ourselves viewing the landing page of our "ent" function:

Screenshot of the AWS Lambda function overview of the Ent function

AWS Lambda function overview of the Ent function.

Before reviewing the Go code and uploading the compiled binary, we need to adjust some default settings of the "ent" function. First, we change the default handler name from hello to main, which equals the filename of the compiled Go binary:

Screenshot of the AWS Lambda runtime settings of the Ent function

AWS Lambda runtime settings of Ent function.

Second, we add an environment variable DATABASE_URL encoding the database connection parameters and credentials:

Screenshot of the AWS Lambda environment variables settings of the Ent function

AWS Lambda environment variables settings of the Ent function.

To open a connection to the database, pass in a DSN, e.g., postgres://username:password@hostname/dbname. By default, AWS Lambda encrypts the environment variables, making them a fast and safe mechanism to supply database connection parameters. Alternatively, one can use the AWS Secrets Manager service and dynamically request credentials during the Lambda function's cold start, which, among other things, allows rotating credentials. A third option is to use AWS IAM to handle the database authorization.

If you created your Postgres database in AWS RDS, the default username and database name are postgres. The password can be reset by modifying the AWS RDS instance.
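
For illustration, a complete DATABASE_URL for an RDS-hosted Postgres instance might look like the following; the host, password, and query options are placeholders and depend on your setup:

postgres://postgres:<your-password>@<instance-id>.<account-hash>.<region>.rds.amazonaws.com:5432/postgres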

Setting up Ent and deploying AWS Lambda

We now review, compile and deploy the database Go binary to the "ent" function. You can find the complete source code in bodokaiser/entgo-aws-appsync.

First, we create an empty directory to which we change:

mkdir entgo-aws-appsync
cd entgo-aws-appsync

Second, we initiate a new Go module to contain our project:

go mod init entgo-aws-appsync

Third, we create the Todo schema while pulling in the ent dependencies:

go run -mod=mod entgo.io/ent/cmd/ent init Todo

and add the title field:

ent/schema/todo.go
package schema

import (
"entgo.io/ent"
"entgo.io/ent/schema/field"
)

// Todo holds the schema definition for the Todo entity.
type Todo struct {
ent.Schema
}

// Fields of the Todo.
func (Todo) Fields() []ent.Field {
return []ent.Field{
field.String("title"),
}
}

// Edges of the Todo.
func (Todo) Edges() []ent.Edge {
return nil
}

Finally, we perform the Ent code generation:

go generate ./ent

Using Ent, we write a set of resolver functions, which implement the create, read, and delete operations on the todos:

internal/resolver/resolver.go
package resolver

import (
"context"
"fmt"
"strconv"

"entgo-aws-appsync/ent"
"entgo-aws-appsync/ent/todo"
)

// TodosInput is the input to the Todos query.
type TodosInput struct{}

// Todos queries all todos.
func Todos(ctx context.Context, client *ent.Client, input TodosInput) ([]*ent.Todo, error) {
return client.Todo.
Query().
All(ctx)
}

// TodoByIDInput is the input to the TodoByID query.
type TodoByIDInput struct {
ID string `json:"id"`
}

// TodoByID queries a single todo by its id.
func TodoByID(ctx context.Context, client *ent.Client, input TodoByIDInput) (*ent.Todo, error) {
tid, err := strconv.Atoi(input.ID)
if err != nil {
return nil, fmt.Errorf("failed parsing todo id: %w", err)
}
return client.Todo.
Query().
Where(todo.ID(tid)).
Only(ctx)
}

// AddTodoInput is the input to the AddTodo mutation.
type AddTodoInput struct {
Title string `json:"title"`
}

// AddTodoOutput is the output to the AddTodo mutation.
type AddTodoOutput struct {
Todo *ent.Todo `json:"todo"`
}

// AddTodo adds a todo and returns it.
func AddTodo(ctx context.Context, client *ent.Client, input AddTodoInput) (*AddTodoOutput, error) {
t, err := client.Todo.
Create().
SetTitle(input.Title).
Save(ctx)
if err != nil {
return nil, fmt.Errorf("failed creating todo: %w", err)
}
return &AddTodoOutput{Todo: t}, nil
}

// RemoveTodoInput is the input to the RemoveTodo mutation.
type RemoveTodoInput struct {
TodoID string `json:"todoId"`
}

// RemoveTodoOutput is the output to the RemoveTodo mutation.
type RemoveTodoOutput struct {
Todo *ent.Todo `json:"todo"`
}

// RemoveTodo removes a todo and returns it.
func RemoveTodo(ctx context.Context, client *ent.Client, input RemoveTodoInput) (*RemoveTodoOutput, error) {
t, err := TodoByID(ctx, client, TodoByIDInput{ID: input.TodoID})
if err != nil {
return nil, fmt.Errorf("failed querying todo with id %q: %w", input.TodoID, err)
}
err = client.Todo.
DeleteOne(t).
Exec(ctx)
if err != nil {
return nil, fmt.Errorf("failed deleting todo with id %q: %w", input.TodoID, err)
}
return &RemoveTodoOutput{Todo: t}, nil
}

Using input structs for the resolver functions allows for mapping the GraphQL request arguments. Using output structs allows for returning multiple objects for more complex operations.
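
As a quick sketch of how these resolver functions are meant to be called (assuming a context ctx, an initialized *ent.Client named client, and an example title), the event handler introduced next invokes them roughly like this:

out, err := resolver.AddTodo(ctx, client, resolver.AddTodoInput{Title: "buy milk"})
if err != nil {
	log.Fatalf("failed adding todo: %v", err)
}
log.Printf("created todo %d with title %q", out.Todo.ID, out.Todo.Title)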

To map the Lambda event to a resolver function, we implement a Handler, which performs the mapping according to an action field in the event:

internal/handler/handler.go
package handler

import (
"context"
"encoding/json"
"fmt"
"log"

"entgo-aws-appsync/ent"
"entgo-aws-appsync/internal/resolver"
)

// Action specifies the event type.
type Action string

// List of supported event actions.
const (
ActionMigrate Action = "migrate"

ActionTodos = "todos"
ActionTodoByID = "todoById"
ActionAddTodo = "addTodo"
ActionRemoveTodo = "removeTodo"
)

// Event is the argument of the event handler.
type Event struct {
Action Action `json:"action"`
Input json.RawMessage `json:"input"`
}

// Handler handles supported events.
type Handler struct {
client *ent.Client
}

// Returns a new event handler.
func New(c *ent.Client) *Handler {
return &Handler{
client: c,
}
}

// Handle implements the event handling by action.
func (h *Handler) Handle(ctx context.Context, e Event) (interface{}, error) {
log.Printf("action %s with payload %s\n", e.Action, e.Input)

switch e.Action {
case ActionMigrate:
return nil, h.client.Schema.Create(ctx)
case ActionTodos:
var input resolver.TodosInput
return resolver.Todos(ctx, h.client, input)
case ActionTodoByID:
var input resolver.TodoByIDInput
if err := json.Unmarshal(e.Input, &input); err != nil {
return nil, fmt.Errorf("failed parsing %s params: %w", ActionTodoByID, err)
}
return resolver.TodoByID(ctx, h.client, input)
case ActionAddTodo:
var input resolver.AddTodoInput
if err := json.Unmarshal(e.Input, &input); err != nil {
return nil, fmt.Errorf("failed parsing %s params: %w", ActionAddTodo, err)
}
return resolver.AddTodo(ctx, h.client, input)
case ActionRemoveTodo:
var input resolver.RemoveTodoInput
if err := json.Unmarshal(e.Input, &input); err != nil {
return nil, fmt.Errorf("failed parsing %s params: %w", ActionRemoveTodo, err)
}
return resolver.RemoveTodo(ctx, h.client, input)
}

return nil, fmt.Errorf("invalid action %q", e.Action)
}

In addition to the resolver actions, we also added a migration action, which is a convenient way to expose database migrations.
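
For reference, given the Event struct above, the JSON payloads the handler accepts look like this (the title and id values are just examples):

{"action": "migrate"}
{"action": "todos"}
{"action": "addTodo", "input": {"title": "foo"}}
{"action": "removeTodo", "input": {"todoId": "1"}}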

Finally, we need to register an instance of the Handler type to the AWS Lambda library.

lambda/main.go
package main

import (
	"database/sql"
	"log"
	"os"

	"entgo.io/ent/dialect"
	entsql "entgo.io/ent/dialect/sql"

	"github.com/aws/aws-lambda-go/lambda"
	_ "github.com/jackc/pgx/v4/stdlib"

	"entgo-aws-appsync/ent"
	"entgo-aws-appsync/internal/handler"
)

func main() {
	// open the database connection using the pgx driver
	db, err := sql.Open("pgx", os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatalf("failed opening database connection: %v", err)
	}

	// initiate the ent database client for the Postgres database
	client := ent.NewClient(ent.Driver(entsql.OpenDB(dialect.Postgres, db)))
	defer client.Close()

	// register our event handler to listen on Lambda events
	lambda.Start(handler.New(client).Handle)
}

The function body of main is executed whenever an AWS Lambda function performs a cold start. After the cold start, a Lambda function is considered "warm," with only the event handler code being executed, making Lambda executions very efficient.

To compile and deploy the Go code, we run:

GOOS=linux go build -o main ./lambda
zip function.zip main
aws lambda update-function-code --function-name ent --zip-file fileb://function.zip

The first command creates a compiled binary named main. The second command compresses the binary into a ZIP archive, as required by AWS Lambda. The third command replaces the function code of the AWS Lambda function named ent with the new ZIP archive. If you work with multiple AWS accounts, you may want to use the --profile <your aws profile> switch.

After you successfully deployed the AWS Lambda, open the "Test" tab of the "ent" function in the web console and invoke it with a "migrate" action:

Screenshot of invoking the Ent Lambda with a migrate action

Invoking Lambda with a "migrate" action

On success, you should see a green feedback box. Next, test the result of a "todos" action:

Screenshot of invoking the Ent Lambda with a todos action

Invoking Lambda with a "todos" action

In case the test executions fail, you most probably have an issue with your database connection.

Configuring AWS AppSync resolvers

With the "ent" function successfully deployed, we are left to register the ent Lambda as a data source to our AppSync API and configure the schema resolvers to map the AppSync requests to Lambda events. First, open our AWS AppSync API in the web console and move to "Data Sources", which you find in the navigation pane on the left.

Screenshot of the list of data sources registered to the AWS AppSync API

List of data sources registered to the AWS AppSync API

Click the "Create data source" button in the top right to start registering the "ent" function as data source:

Screenshot registering the ent Lambda as data source to the AWS AppSync API

Registering the ent Lambda as data source to the AWS AppSync API

Now, open the GraphQL schema of the AppSync API and search for the Query type in the sidebar to the right. Click the "Attach" button next to the todos field of the Query type:

Screenshot attaching a resolver to Query type in the AWS AppSync API

Attaching a resolver for the todos Query in the AWS AppSync API

In the resolver view for Query.todos, select the Lambda function as data source, enable the request mapping template option,

Screenshot configuring the resolver mapping for the todos Query in the AWS AppSync API

Configuring the resolver mapping for the todos Query in the AWS AppSync API

and copy the following template:

Query.todos
{
  "version" : "2017-02-28",
  "operation": "Invoke",
  "payload": {
    "action": "todos"
  }
}

Repeat the same procedure for the remaining Query and Mutation types:

Query.todo
{
  "version" : "2017-02-28",
  "operation": "Invoke",
  "payload": {
    "action": "todoById",
    "input": { "id": $util.toJson($context.args.id) }
  }
}

Mutation.addTodo
{
  "version" : "2017-02-28",
  "operation": "Invoke",
  "payload": {
    "action": "addTodo",
    "input": $util.toJson($context.args.input)
  }
}

Mutation.removeTodo
{
  "version" : "2017-02-28",
  "operation": "Invoke",
  "payload": {
    "action": "removeTodo",
    "input": $util.toJson($context.args.input)
  }
}

The request mapping templates let us construct the event objects with which we invoke the Lambda functions. Through the $context object, we have access to the GraphQL request and the authentication session. In addition, it is possible to arrange multiple resolvers sequentially and reference the respective outputs via the $context object. In principle, it is also possible to define response mapping templates. However, in most cases it is sufficient to return the response object "as is".
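
For example, a minimal response mapping template that forwards the Lambda result unchanged (assuming no extra error handling is needed) consists of a single line:

$util.toJson($context.result)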

Testing AppSync using the Query explorer

The easiest way to test the API is to use the Query Explorer in AWS AppSync. Alternatively, one can register an API key in the settings of their AppSync API and use any standard GraphQL client.
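
As a sketch of the second option, once an API key exists, any HTTP client can post GraphQL requests to the API endpoint; the URL and key below are placeholders:

curl https://<api-id>.appsync-api.<region>.amazonaws.com/graphql \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{"query": "query { todos { id title } }"}'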

Let us first create a todo with the title foo:

mutation MyMutation {
addTodo(input: {title: "foo"}) {
todo {
id
title
}
}
}
Screenshot of an executed addTodo Mutation using the AppSync Query Explorer

"addTodo" Mutation using the AppSync Query Explorer

Requesting a list of the todos should return a single todo with title foo:

query MyQuery {
todos {
title
id
}
}
Screenshot of an executed todos Query using the AppSync Query Explorer

"todos" Query using the AppSync Query Explorer

Requesting the foo todo by id should work too:

query MyQuery {
todo(id: "1") {
title
id
}
}
Screenshot of an executed todo Query using the AppSync Query Explorer

"todo" Query using the AppSync Query Explorer

Wrapping Up

We successfully deployed a serverless GraphQL API for managing simple todos using AWS AppSync, AWS Lambda, and Ent. In particular, we provided step-by-step instructions on configuring AWS AppSync and AWS Lambda through the web console. In addition, we discussed a proposal for how to structure our Go code.

We did not cover testing and setting up a database infrastructure in AWS. These aspects become more challenging in the serverless paradigm than in the traditional one. For example, when many Lambda functions are cold-started in parallel, we quickly exhaust the database's connection pool and need a database proxy. In addition, we need to rethink testing, as we only have access to local and end-to-end tests because we cannot easily run cloud services in isolation.

Nevertheless, the proposed GraphQL server scales well to the complex demands of real-world applications, benefiting from serverless infrastructure and Ent's pleasant developer experience.

Have questions? Need help with getting started? Feel free to join our Slack channel.


· 1 min read

I've been writing software for years, but, until recently, I didn't know what an ORM was. I learned many things obtaining my B.S. in Computer Engineering, but Object-Relational Mapping was not one of those; I was too focused on building things out of bits and bytes to be bothered with something that high-level. It shouldn't be too surprising then, that when I found myself tasked with helping to build a distributed web application, I ended up outside my comfort zone.

One of the difficulties with developing software for someone else is that you aren't able to see inside their head. The requirements aren't always clear and asking questions only helps you understand so much of what they are looking for. Sometimes, you just have to build a prototype and demonstrate it to get useful feedback.

The issue with this approach, of course, is that it takes time to develop prototypes, and you need to pivot frequently. If you were like me and didn't know what an ORM was, you would waste a lot of time doing simple, but time-consuming tasks:

  1. Re-define the data model with new customer feedback.
  2. Re-create the test database.
  3. Re-write the SQL statements for interfacing with the database.
  4. Re-define the gRPC interface between the backend and frontend services.
  5. Re-design the frontend and web interface.
  6. Demonstrate to the customer and get feedback.
  7. Repeat.

Hundreds of hours of work only to find out that everything needs to be re-written. So frustrating! I think you can imagine my relief (and also embarrassment), when a senior developer asked me why I wasn't using an ORM like Ent.

Discovering Ent

It only took one day to re-implement our current data model with Ent. I couldn't believe I had been doing all this work by hand when such a framework existed! The gRPC integration through entproto was the icing on the cake! I could perform basic CRUD operations over gRPC just by adding a few annotations to my schema. This allows me to skip all the steps between data model definition and re-designing the web interface! There was, however, just one problem for my use case: How do you get the details of entities over the gRPC interface if you don't know their IDs ahead of time? I see that Ent can query all, but where is the GetAll method for entproto?

Becoming an Open-Source Contributor

I was surprised to find it didn't exist! I could have added it to my project by implementing the feature in a separate service, but it seemed like a generic enough method to be generally useful. For years, I had wanted to find an open-source project that I could meaningfully contribute to; this seemed like the perfect opportunity!

So, after poking around entproto's source into the early morning hours, I managed to hack the feature in! Feeling accomplished, I opened a pull request and headed off to sleep, not realizing the learning experience I had just signed myself up for.

In the morning, I awoke to the disappointment of my pull request being closed by Rotem, but with an invitation to collaborate further to refine the idea. The reason for closing the request was obvious: my implementation of GetAll was dangerous. Returning an entire table's worth of data is only feasible if the table is small. Exposing this interface on a large table could have disastrous results!

Optional Service Method Generation

My solution was to make the GetAll method optional by passing an argument into entproto.Service(). This provides control over whether this feature is exposed. We decided that this was a desirable feature, but that it should be more generic. Why should GetAll get special treatment just because it was added last? It would be better if all methods could be optionally generated. Something like:

entproto.Service(entproto.Methods(entproto.Create | entproto.Get))

However, to keep everything backwards-compatible, an empty entproto.Service() annotation would also need to generate all methods. I'm not a Go expert, so the only way I knew of to do this was with a variadic function:

func Service(methods ...Method)

The problem with this approach is that you can only have one argument type that is variable length. What if we wanted to add additional options to the service annotation later on? This is where I was introduced to the powerful design pattern of functional options:

// ServiceOption configures the entproto.Service annotation.
type ServiceOption func(svc *service)

// Service annotates an ent.Schema to specify that protobuf service generation is required for it.
func Service(opts ...ServiceOption) schema.Annotation {
s := service{
Generate: true,
}
for _, apply := range opts {
apply(&s)
}
// Default to generating all methods
if s.Methods == 0 {
s.Methods = MethodAll
}
return s
}

This approach takes in a variable number of functions that are called to set options on a struct, in this case, our service annotation. With this approach, we can implement any number of other options functions aside from Methods. Very cool!
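
To make this concrete, an option such as Methods can be a small function that returns a ServiceOption closing over the desired value. This is only a sketch in the spirit of the snippet above; the actual entproto implementation may differ in details:

// Methods specifies the gRPC service methods to generate (combined as a bitmask).
func Methods(methods Method) ServiceOption {
	return func(s *service) {
		s.Methods = methods
	}
}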

List: The Superior GetAll

With optional method generation out of the way, we could return our focus to adding GetAll. How could we implement this method in a safe fashion? Rotem suggested we base the method off of Google's API Improvement Proposal (AIP) for List, AIP-132. This approach allows a client to retrieve all entities, but breaks the retrieval up into pages. As an added bonus, it also sounds better than "GetAll"!

List Request

With this design, a request message would look like:

message ListUserRequest {
int32 page_size = 1;

string page_token = 2;

View view = 3;

enum View {
VIEW_UNSPECIFIED = 0;

BASIC = 1;

WITH_EDGE_IDS = 2;
}
}

Page Size

The page_size field allows the client to specify the maximum number of entries they want to receive in the response message, subject to a maximum page size of 1000. This eliminates the issue of the initial GetAll implementation, which could return more results than the client can handle. Additionally, the maximum page size was implemented to prevent a client from overburdening the server.

Page Token

The page_token field is a base64-encoded string utilized by the server to determine where the next page begins. An empty token means that we want the first page.

View

The view field is used to specify whether the response should return the edge IDs associated with the entities.

List Response

The response message would look like:

message ListUserResponse {
repeated User user_list = 1;

string next_page_token = 2;
}

List

The user_list field contains the entities of the requested page.

Next Page Token

The next_page_token field is a base64-encoded string that can be utilized in another List request to retrieve the next page of entities. An empty token means that this response contains the last page of entities.
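
As a simplified illustration of the idea (not the exact encoding entproto uses), a page token can be as small as the base64-encoded id of the last entity on the current page:

// Encode the id of the last entity on the page as the next page token.
token := base64.StdEncoding.EncodeToString([]byte(strconv.Itoa(lastID)))

// Decode the token on the next request to know where the page starts
// (error handling omitted for brevity).
raw, _ := base64.StdEncoding.DecodeString(token)
startID, _ := strconv.Atoi(string(raw))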

Pagination

With the gRPC interface determined, the challenge of implementing it began. One of the most critical design decisions was how to implement the pagination. The naive approach would be to use LIMIT/OFFSET pagination to skip over the entries we've already seen. However, this approach has massive drawbacks; the most problematic being that the database has to fetch all the rows it is skipping to get the rows we want.

Keyset Pagination

Rotem proposed a much better approach: keyset pagination. This approach is slightly more complicated since it requires the use of a unique column (or combination of columns) to order the rows. But in exchange we gain a significant performance improvement. This is because we can take advantage of the sorted rows to select only entries with unique column(s) values that are greater (ascending order) or less (descending order) than / equal to the value(s) in the client-provided page token. Thus, the database doesn't have to fetch the rows we want to skip over, significantly speeding up queries on large tables!
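
The difference is easiest to see in SQL. The sketch below contrasts the two approaches for a page of 10 users ordered by descending id; table, column, and values are illustrative:

-- Offset pagination: the database still reads and discards the skipped rows.
SELECT * FROM users ORDER BY id DESC LIMIT 10 OFFSET 10000;

-- Keyset pagination: jump straight to the page using the id carried by the page token.
SELECT * FROM users WHERE id <= 1234 ORDER BY id DESC LIMIT 10;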

With keyset pagination selected, the next step was to determine how to order the entities. The most straightforward approach for Ent was to use the id field; every schema will have this, and it is guaranteed to be unique for the schema. This is the approach we chose to use for the initial implementation. Additionally, a decision needed to be made regarding whether ascending or descending order should be employed. Descending order was chosen for the initial release.

Usage

Let's take a look at how to actually use the new List feature:

package main

import (
	"context"
	"log"

	"ent-grpc-example/ent/proto/entpb"
	"google.golang.org/grpc"
	"google.golang.org/grpc/status"
)

func main() {
	// Open a connection to the server.
	conn, err := grpc.Dial(":5000", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("failed connecting to server: %s", err)
	}
	defer conn.Close()
	// Create a User service Client on the connection.
	client := entpb.NewUserServiceClient(conn)
	ctx := context.Background()
	// Initialize token for first page.
	pageToken := ""
	// Retrieve all pages of users.
	for {
		// Ask the server for the next page of users, limiting entries to 100.
		users, err := client.List(ctx, &entpb.ListUserRequest{
			PageSize:  100,
			PageToken: pageToken,
		})
		if err != nil {
			se, _ := status.FromError(err)
			log.Fatalf("failed retrieving user list: status=%s message=%s", se.Code(), se.Message())
		}
		log.Printf("users retrieved: %v", users)
		// Check if we've reached the last page of users.
		if users.NextPageToken == "" {
			break
		}
		// Update token for next request.
		pageToken = users.NextPageToken
	}
}

Looking Ahead

The current implementation of List has a few limitations that can be addressed in future revisions. First, sorting is limited to the id column. This makes List compatible with any schema, but it isn't very flexible. Ideally, the client should be able to specify what columns to sort by. Alternatively, the sort column(s) could be defined in the schema. Additionally, List is restricted to descending order. In the future, this could be an option specified in the request. Finally, List currently only works with schemas that use int32, uuid, or string type id fields. This is because a separate conversion method to/from the page token must be defined for each type that Ent supports in the code generation template (I'm only one person!).

Wrap-up

I was pretty nervous when I first embarked on my quest to contribute this functionality to entproto; as a newbie open-source contributor, I didn't know what to expect. I'm happy to share that working on the Ent project was a ton of fun! I got to work with awesome, knowledgeable people while helping out the open-source community. From functional options and keyset pagination to smaller insights gained through PR review, I learned so much about Go (and software development in general) in the process! I'd highly encourage anyone thinking they might want to contribute something to take that leap! You'll be surprised with how much you gain from the experience.

Have questions? Need help with getting started? Feel free to join our Slack channel.


· 1 min read

The OpenAPI Specification (OAS, formerly known as Swagger Specification) is a technical specification defining a standard, language-agnostic interface description for REST APIs. This allows both humans and automated tools to understand the described service without the actual source code or additional documentation. Combined with the Swagger Tooling you can generate both server and client boilerplate code for more than 20 languages, just by passing in the OAS document.

In a previous blogpost, we presented to you a new feature of the Ent extension elk: a fully compliant OpenAPI Specification document generator.

Today, we are very happy to announce that the specification generator is now an official extension to the Ent project and has been moved to the ent/contrib repository. In addition, we have listened to the feedback of the community and have made some changes to the generator that we hope you will like.

Getting Started

To use the entoas extension, use the entc (ent codegen) package as described here. First install the extension to your Go module:

go get entgo.io/contrib/entoas

Now follow the next two steps to enable it and to configure Ent to work with the entoas extension:

1. Create a new Go file named ent/entc.go and paste the following content:

// +build ignore

package main

import (
"log"

"entgo.io/contrib/entoas"
"entgo.io/ent/entc"
"entgo.io/ent/entc/gen"
)

func main() {
ex, err := entoas.NewExtension()
if err != nil {
log.Fatalf("creating entoas extension: %v", err)
}
err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
if err != nil {
log.Fatalf("running ent codegen: %v", err)
}
}

2. Edit the ent/generate.go file to execute the ent/entc.go file:

package ent

//go:generate go run -mod=mod entc.go

With these steps complete, all is set up for generating an OAS document from your schema! If you are new to Ent and want to learn more about it, how to connect to different types of databases, run migrations or work with entities, then head over to the Setup Tutorial.

Generate an OAS document

The first step on our way to the OAS document is to create an Ent schema graph. For the sake of brevity here is an example schema to use:

ent/schema/schema.go
package schema

import (
"entgo.io/ent"
"entgo.io/ent/schema/edge"
"entgo.io/ent/schema/field"
)

// Fridge holds the schema definition for the Fridge entity.
type Fridge struct {
ent.Schema
}

// Fields of the Fridge.
func (Fridge) Fields() []ent.Field {
return []ent.Field{
field.String("title"),
}
}

// Edges of the Fridge.
func (Fridge) Edges() []ent.Edge {
return []ent.Edge{
edge.To("compartments", Compartment.Type),
}
}

// Compartment holds the schema definition for the Compartment entity.
type Compartment struct {
ent.Schema
}

// Fields of the Compartment.
func (Compartment) Fields() []ent.Field {
return []ent.Field{
field.String("name"),
}
}

// Edges of the Compartment.
func (Compartment) Edges() []ent.Edge {
return []ent.Edge{
edge.From("fridge", Fridge.Type).
Ref("compartments").
Unique(),
edge.To("contents", Item.Type),
}
}

// Item holds the schema definition for the Item entity.
type Item struct {
ent.Schema
}

// Fields of the Item.
func (Item) Fields() []ent.Field {
return []ent.Field{
field.String("name"),
}
}

// Edges of the Item.
func (Item) Edges() []ent.Edge {
return []ent.Edge{
edge.From("compartment", Compartment.Type).
Ref("contents").
Unique(),
}
}

The code above is the Ent-way to describe a schema-graph. In this particular case we created three Entities: Fridge, Compartment and Item. Additionally, we added some edges to the graph: A Fridge can have many Compartments and a Compartment can contain many Items.

Now run the code generator:

go generate ./...

In addition to the files Ent normally generates, another file named ent/openapi.json has been created. Here is a sneak peek into the file:

ent/openapi.json
{
"info": {
"title": "Ent Schema API",
"description": "This is an auto generated API description made out of an Ent schema definition",
"termsOfService": "",
"contact": {},
"license": {
"name": ""
},
"version": "0.0.0"
},
"paths": {
"/compartments": {
"get": {
[...]

If you feel like it, copy its contents and paste them into the Swagger Editor. It should look like this:

Swagger Editor

Swagger Editor

Basic Configuration

The description of our API does not yet reflect what it does, but entoas lets you change that! Open up ent/entc.go and pass in the updated title and description of our Fridge API:

ent/entc.go
//go:build ignore
// +build ignore

package main

import (
"log"

"entgo.io/contrib/entoas"
"entgo.io/ent/entc"
"entgo.io/ent/entc/gen"
)

func main() {
ex, err := entoas.NewExtension(
entoas.SpecTitle("Fridge CMS"),
entoas.SpecDescription("API to manage fridges and their cooled contents. **ICY!**"),
entoas.SpecVersion("0.0.1"),
)
if err != nil {
log.Fatalf("creating entoas extension: %v", err)
}
err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
if err != nil {
log.Fatalf("running ent codegen: %v", err)
}
}

Rerunning the code generator will create an updated OAS document.

ent/openapi.json
{
"info": {
"title": "Fridge CMS",
"description": "API to manage fridges and their cooled contents. **ICY!**",
"termsOfService": "",
"contact": {},
"license": {
"name": ""
},
"version": "0.0.1"
},
"paths": {
"/compartments": {
"get": {
[...]

Operation configuration

There are times when you do not want to generate endpoints for every operation for every node. Fortunately, entoas lets us configure what endpoints to generate and which to ignore. entoas' default policy is to expose all routes. You can either change this behaviour to not expose any route but those explicitly asked for, or you can just tell entoas to exclude a specific operation by using an entoas.Annotation. Policies are used to enable / disable the generation of sub-resource operations as well:

ent/schema/fridge.go
// Edges of the Fridge.
func (Fridge) Edges() []ent.Edge {
return []ent.Edge{
edge.To("compartments", Compartment.Type).
// Do not generate an endpoint for POST /fridges/{id}/compartments
Annotation(
entoas.CreateOperation(
entoas.OperationPolicy(entoas.PolicyExclude),
),
),
}
}

// Annotations of the Fridge.
func (Fridge) Annotations() []schema.Annotation {
return []schema.Annotation{
// Do not generate an endpoint for DELETE /fridges/{id}
entoas.DeleteOperation(entoas.OperationPolicy(entoas.PolicyExclude)),
}
}

And voilà! The operations are gone.

For more information about how entoas' policies work and what you can do with them, have a look at the godoc.

Simple Models

By default entoas generates one response-schema per endpoint. To learn about the naming strategy have a look at the godoc.

One Schema per Endpoint

One Schema per Endpoint

Many users have requested to change this behaviour to simply map the Ent schema to the OAS document. Therefore, you now can configure entoas to do that:

ex, err := entoas.NewExtension(
entoas.SpecTitle("Fridge CMS"),
entoas.SpecDescription("API to manage fridges and their cooled contents. **ICY!**"),
entoas.SpecVersion("0.0.1"),
entoas.SimpleModels(),
)
Simple Schemas

Simple Schemas

Wrapping Up

In this post we announced entoas, the official integration of the former elk OpenAPI Specification generation into Ent. This feature connects Ent's code-generation capabilities with OpenAPI/Swagger's rich tooling ecosystem.

Have questions? Need help with getting started? Feel free to join our Slack channel.


· 1 min read

One of the common questions we get from the Ent community is how to synchronize objects or references between the database backing an Ent application (e.g. MySQL or PostgreSQL) with external services. For example, users would like to create or delete a record from within their CRM when a user is created or deleted in Ent, publish a message to a Pub/Sub system when an entity is updated, or verify references to blobs in object storage such as AWS S3 or Google Cloud Storage.

Ensuring consistency between two separate data systems is not a simple task. When we want to propagate, for example, the deletion of a record in one system to another, there is no obvious way to guarantee that the two systems will end up in a synchronized state, since one of them may fail, and the network link between them may be slow or down. Having said that, and especially with the prominence of microservice architectures, these problems have become more common, and distributed systems researchers have come up with patterns to solve them, such as the Saga Pattern.

The application of these patterns is usually complex and difficult, and so in many cases architects do not go after a "perfect" design, and instead go after simpler solutions that involve either the acceptance of some inconsistency between the systems or background reconciliation procedures.

In this post, we will not discuss how to solve distributed transactions or implement the Saga pattern with Ent. Instead, we will limit our scope to study how to hook into Ent mutations before and after they occur, and run our custom logic there.

Propagating Mutations to External Systems

In our example, we are going to create a simple User schema with 2 immutable string fields, "name" and "avatar_url". Let's run the ent init command for creating a skeleton schema for our User:

go run entgo.io/ent/cmd/ent init User

Then, add the name and the avatar_url fields and run go generate to generate the assets.

ent/schema/user.go
type User struct {
ent.Schema
}

func (User) Fields() []ent.Field {
return []ent.Field{
field.String("name").
Immutable(),
field.String("avatar_url").
Immutable(),
}
}

go generate ./ent

The Problem

The avatar_url field defines a URL to an image in a bucket on our object storage (e.g. AWS S3). For the purpose of this discussion we want to make sure that:

  • When a user is created, an image with the URL stored in "avatar_url" exists in our bucket.
  • Orphan images are deleted from the bucket. This means that when a user is deleted from our system, its avatar image is deleted as well.

For interacting with blobs, we will use the gocloud.dev/blob package. This package provides an abstraction for reading, writing, deleting and listing blobs in a bucket. Similar to the database/sql package, it allows interacting with a variety of object stores using the same API by configuring its driver URL. For example:

// Open an in-memory bucket.
bucket, err := blob.OpenBucket(ctx, "mem://photos/")
if err != nil {
	log.Fatal("failed opening in-memory bucket:", err)
}

// Open an S3 bucket named photos.
bucket, err = blob.OpenBucket(ctx, "s3://photos")
if err != nil {
	log.Fatal("failed opening s3 bucket:", err)
}

// Open a bucket named my-bucket in Google Cloud Storage.
bucket, err = blob.OpenBucket(ctx, "gs://my-bucket")
if err != nil {
	log.Fatal("failed opening gs bucket:", err)
}

Schema Hooks

Hooks are a powerful feature of Ent that allows adding custom logic before and after operations that mutate the graph.

Hooks can be either defined dynamically using client.Use (called "Runtime Hooks"), or explicitly on the schema (called "Schema Hooks") as follows:

// Hooks of the User.
func (User) Hooks() []ent.Hook {
return []ent.Hook{
EnsureImageExists(),
DeleteOrphans(),
}
}

As you can imagine, the EnsureImageExists hook will be responsible for ensuring that when a user is created, their avatar URL exists in the bucket, and the DeleteOrphans hook will ensure that orphan images are deleted. Let's start writing them.

ent/schema/hooks.go
func EnsureImageExists() ent.Hook {
hk := func(next ent.Mutator) ent.Mutator {
return hook.UserFunc(func(ctx context.Context, m *ent.UserMutation) (ent.Value, error) {
avatarURL, exists := m.AvatarURL()
if !exists {
return nil, errors.New("avatar field is missing")
}
// TODO:
// 1. Verify that "avatarURL" points to a real object in the bucket.
// 2. Otherwise, fail.
return next.Mutate(ctx, m)
})
}
// Limit the hook only to "Create" operations.
return hook.On(hk, ent.OpCreate)
}

func DeleteOrphans() ent.Hook {
hk := func(next ent.Mutator) ent.Mutator {
return hook.UserFunc(func(ctx context.Context, m *ent.UserMutation) (ent.Value, error) {
id, exists := m.ID()
if !exists {
return nil, errors.New("id field is missing")
}
// TODO:
// 1. Get the AvatarURL field of the deleted user.
// 2. Cascade the deletion to object storage.
return next.Mutate(ctx, m)
})
}
// Limit the hook only to "DeleteOne" operations.
return hook.On(hk, ent.OpDeleteOne)
}

Now, you may ask yourself, how do we access the blob client from the mutation hooks? You are going to find out in the next section.

Injecting Dependencies

The entc.Dependency option allows extending the generated builders with external dependencies as struct fields, and provides options for injecting them on client initialization.

To inject a blob.Bucket to be available inside our hooks, we can follow the tutorial about external dependencies on the website, and define the gocloud.dev/blob.Bucket as a dependency.

ent/entc.go
func main() {
opts := []entc.Option{
entc.Dependency(
entc.DependencyName("Bucket"),
entc.DependencyType(&blob.Bucket{}),
),
}
if err := entc.Generate("./schema", &gen.Config{}, opts...); err != nil {
log.Fatalf("running ent codegen: %v", err)
}
}

Next, re-run code generation:

go generate ./ent

We can now access the Bucket API from all generated builders. Let's finish the implementations of the above hooks.

ent/schema/hooks.go
// EnsureImageExists ensures the avatar_url points
// to a real object in the bucket.
func EnsureImageExists() ent.Hook {
hk := func(next ent.Mutator) ent.Mutator {
return hook.UserFunc(func(ctx context.Context, m *ent.UserMutation) (ent.Value, error) {
avatarURL, exists := m.AvatarURL()
if !exists {
return nil, errors.New("avatar field is missing")
}
switch exists, err := m.Bucket.Exists(ctx, avatarURL); {
case err != nil:
return nil, fmt.Errorf("check key existence: %w", err)
case !exists:
return nil, fmt.Errorf("key %q does not exist in the bucket", avatarURL)
default:
return next.Mutate(ctx, m)
}
})
}
return hook.On(hk, ent.OpCreate)
}

// DeleteOrphans cascades the user deletion to the bucket.
// Hence, when a user is deleted, its avatar image is deleted
// as well.
func DeleteOrphans() ent.Hook {
hk := func(next ent.Mutator) ent.Mutator {
return hook.UserFunc(func(ctx context.Context, m *ent.UserMutation) (ent.Value, error) {
id, exists := m.ID()
if !exists {
return nil, errors.New("id field is missing")
}
u, err := m.Client().User.Get(ctx, id)
if err != nil {
return nil, fmt.Errorf("getting deleted user: %w", err)
}
if err := m.Bucket.Delete(ctx, u.AvatarURL); err != nil {
return nil, fmt.Errorf("deleting user avatar from bucket: %w", err)
}
return next.Mutate(ctx, m)
})
}
return hook.On(hk, ent.OpDeleteOne)
}

Now, it's time to test our hooks! Let's write a testable example that verifies that our 2 hooks work as expected.

package main

import (
"context"
"fmt"
"log"

"github.com/a8m/ent-sync-example/ent"
_ "github.com/a8m/ent-sync-example/ent/runtime"

"entgo.io/ent/dialect"
_ "github.com/mattn/go-sqlite3"
"gocloud.dev/blob"
_ "gocloud.dev/blob/memblob"
)

func Example_SyncCreate() {
ctx := context.Background()
// Open an in-memory bucket.
bucket, err := blob.OpenBucket(ctx, "mem://photos/")
if err != nil {
log.Fatal("failed opening bucket:", err)
}
client, err := ent.Open(
dialect.SQLite,
"file:ent?mode=memory&cache=shared&_fk=1",
// Inject the blob.Bucket on client initialization.
ent.Bucket(bucket),
)
if err != nil {
log.Fatal("failed opening connection to sqlite:", err)
}
defer client.Close()
if err := client.Schema.Create(ctx); err != nil {
log.Fatal("failed creating schema resources:", err)
}
if err := client.User.Create().SetName("a8m").SetAvatarURL("a8m.png").Exec(ctx); err == nil {
log.Fatal("expect user creation to fail because the image does not exist in the bucket")
}
if err := bucket.WriteAll(ctx, "a8m.png", []byte{255, 255, 255}, nil); err != nil {
log.Fatalf("failed uploading image to the bucket: %v", err)
}
fmt.Printf("%q\n", keys(ctx, bucket))

// User creation should pass as image was uploaded to the bucket.
u := client.User.Create().SetName("a8m").SetAvatarURL("a8m.png").SaveX(ctx)

// Deleting a user, should delete also its image from the bucket.
client.User.DeleteOne(u).ExecX(ctx)
fmt.Printf("%q\n", keys(ctx, bucket))

// Output:
// ["a8m.png"]
// []
}
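
The example above calls a small keys helper that is not part of the generated code. A minimal sketch of it, using the bucket's list iterator and assuming an additional io import, could look like this:

// keys returns the keys of all objects currently stored in the bucket.
func keys(ctx context.Context, b *blob.Bucket) []string {
	var names []string
	it := b.List(nil)
	for {
		obj, err := it.Next(ctx)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal("listing bucket keys:", err)
		}
		names = append(names, obj.Key)
	}
	return names
}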

Wrapping Up

Great! We have configured Ent to extend our generated code and inject the blob.Bucket as an External Dependency. Next, we defined two mutation hooks and used the blob.Bucket API to ensure our product constraints are satisfied.

The code for this example is available at github.com/a8m/ent-sync-example.


· 1 min read

Ent is a powerful Entity framework that helps developers write neat code that is translated into (possibly complex) database queries. As the usage of your application grows, it doesn’t take long until you stumble upon performance issues with your database. Troubleshooting database performance issues is notoriously hard, especially when you’re not equipped with the right tools.

The following example shows how Ent query code is translated into an SQL query.

ent example 1

Example 1 - ent code is translated to SQL query

Traditionally, it has been very difficult to correlate between poorly performing database queries and the application code that is generating them. Database performance analysis tools could help point out slow queries by analyzing database server logs, but how could they be traced back to the application?

Sqlcommenter

Earlier this year, Google introduced Sqlcommenter. Sqlcommenter is

"an open source library that addresses the gap between the ORM libraries and understanding database performance. Sqlcommenter gives application developers visibility into which application code is generating slow queries and maps application traces to database query plans."

In other words, Sqlcommenter adds application context metadata to SQL queries. This information can then be used to provide meaningful insights. It does so by adding SQL comments (see https://en.wikipedia.org/wiki/SQL_syntax#Comments) to the query that carry metadata but are ignored by the database during query execution. For example, the following query contains a comment that carries metadata about the application that issued it (users-mgr), which controller and route triggered it (users and user_rename, respectively), and the database driver that was used (ent:v0.9.1):

update users set username = 'hedwigz' where id = 88
/*application='users-mgr',controller='users',route='user_rename',db_driver='ent:v0.9.1'*/

To get a taste of how the analysis of the metadata collected by Sqlcommenter can help us better understand performance issues in our application, consider the following example: Google Cloud recently launched Cloud SQL Insights, a cloud-based SQL performance analysis product. In the image below, we see a screenshot from the Cloud SQL Insights Dashboard that shows that the HTTP route 'api/users' is causing many locks on the database. We can also see that this query got called 16,067 times in the last 6 hours.

Cloud SQL insights

Screenshot from Cloud SQL Insights Dashboard

This is the power of SQL tags - they provide you correlation between your application-level information and your Database monitors.

sqlcomment

sqlcomment is an Ent driver that adds metadata to SQL queries using comments following the sqlcommenter specification. By wrapping an existing Ent driver with sqlcomment, users can leverage any tool that supports the standard to triage query performance issues. Without further ado, let’s see sqlcomment in action.

First, to install sqlcomment run:

go get ariga.io/sqlcomment

sqlcomment wraps an underlying SQL driver; therefore, we need to open our SQL connection using Ent's sql dialect package instead of the popular ent.Open helper.

Make sure to import entgo.io/ent/dialect/sql in the following snippet:
// Create db driver.
db, err := sql.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
if err != nil {
log.Fatalf("Failed to connect to database: %v", err)
}

// Create sqlcomment driver which wraps sqlite driver.
drv := sqlcomment.NewDriver(db,
sqlcomment.WithDriverVerTag(),
sqlcomment.WithTags(sqlcomment.Tags{
sqlcomment.KeyApplication: "my-app",
sqlcomment.KeyFramework: "net/http",
}),
)

// Create and configure ent client.
client := ent.NewClient(ent.Driver(drv))

Now, whenever we execute a query, sqlcomment will suffix our SQL query with the tags we set up. If we were to run the following query:

client.User.
Update().
Where(
user.Or(
user.AgeGT(30),
user.Name("bar"),
),
user.HasFollowers(),
).
SetName("foo").
Save()

Ent would output the following commented SQL query:

UPDATE `users`
SET `name` = ?
WHERE (
`users`.`age` > ?
OR `users`.`name` = ?
)
AND `users`.`id` IN (
SELECT `user_following`.`follower_id`
FROM `user_following`
)
/*application='my-app',db_driver='ent:v0.9.1',framework='net%2Fhttp'*/

As you can see, Ent outputted an SQL query with a comment at the end, containing all the relevant information associated with that query.

sqlcomment supports more tags, and has integrations with OpenTelemetry and OpenCensus. To see more examples and scenarios, please visit the github repo.

Wrapping-Up

In this post I showed how adding metadata to queries using SQL comments can help correlate source code with database queries. Next, I introduced sqlcomment - an Ent driver that adds SQL tags to all of your queries. Finally, we saw sqlcomment in action by installing and configuring it with Ent. If you like the code and/or want to contribute - feel free to check out the project on GitHub.

Have questions? Need help with getting started? Feel free to join our Slack channel.


· 1 min read

While working on Ariga's operational data graph query engine, we saw the opportunity to greatly improve the performance of many use cases by building a robust caching library. As heavy users of Ent, it was only natural for us to implement this layer as an extension to Ent. In this post, I will briefly explain what caches are, how they fit into software architectures, and present entcache - a cache driver for Ent.

Caching is a popular strategy for improving application performance. It is based on the observation that the speed of retrieving data from different types of media can vary by many orders of magnitude. Jeff Dean famously presented the following numbers in a lecture about "Software Engineering Advice from Building Large-Scale Distributed Systems":

cache numbers

These numbers show things that experienced software engineers know intuitively: reading from memory is faster than reading from disk, retrieving data from the same data center is faster than going out to the internet to fetch it. We add to that, that some calculations are expensive and slow, and that fetching a precomputed result can be much faster (and less expensive) than recomputing it every time.

The collective intelligence of Wikipedia tells us that a Cache is "a hardware or software component that stores data so that future requests for that data can be served faster". In other words, if we can store a query result in RAM, we can fulfill a request that depends on it much faster than if we need to go over the network to our database, have it read data from disk, run some computation on it, and only then send it back to us (over a network).

However, as software engineers, we should remember that caching is a notoriously complicated topic. As the phrase coined by early-day Netscape engineer Phil Karlton says: "There are only two hard things in Computer Science: cache invalidation and naming things". For instance, in systems that rely on strong consistency, a cache entry may be stale, therefore causing the system to behave incorrectly. For this reason, take great care and pay attention to detail when you are designing caches into your system architectures.

Presenting entcache

The entcache package provides its users with a new Ent driver that can wrap one of the existing SQL drivers available for Ent. On a high level, it decorates the Query method of the given driver, and for each call:

  1. Generates a cache key (i.e. hash) from its arguments (i.e. statement and parameters).

  2. Checks the cache to see if the results for this query are already available. If they are (this is called a cache-hit), the database is skipped and results are returned to the caller from memory.

  3. If the cache does not contain an entry for the query, the query is passed to the database.

  4. After the query is executed, the driver records the raw values of the returned rows (sql.Rows), and stores them in the cache with the generated cache key.

The package provides a variety of options to configure the TTL of the cache entries, control the hash function, provide custom and multi-level cache stores, evict and skip cache entries. See the full documentation in https://pkg.go.dev/ariga.io/entcache.

As we mentioned above, correctly configuring caching for an application is a delicate task, and so entcache provides developers with different caching levels that can be used with it:

  1. A context.Context-based cache. It is usually attached to a request and does not work with other cache levels. It is used to eliminate duplicate queries that are executed by the same request.

  2. A driver-level cache used by the ent.Client. An application usually creates a driver per database, and therefore, we treat it as a process-level cache.

  3. A remote cache. For example, a Redis database that provides a persistence layer for storing and sharing cache entries between multiple processes. A remote cache layer is resistant to application deployment changes or failures, and allows reducing the number of identical queries executed on the database by different processes.

  4. A cache hierarchy, or multi-level cache, allows structuring the cache in a hierarchical way. The hierarchy of cache stores is mostly based on access speeds and cache sizes. For example, a 2-level cache composed of an LRU cache in the application memory and a remote-level cache backed by a Redis database (see the sketch after this list).
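
As a rough sketch of what a two-level setup can look like, based on the options described in the package documentation (an in-process LRU in front of a Redis store, assuming an existing driver db and a go-redis client; exact option names are best verified against the entcache godoc):

// An in-process LRU of 256 entries in front of a shared Redis store,
// with cache entries expiring after ten seconds.
rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
drv := entcache.NewDriver(
	db,
	entcache.TTL(10*time.Second),
	entcache.Levels(
		entcache.NewLRU(256),
		entcache.NewRedis(rdb),
	),
)
client := ent.NewClient(ent.Driver(drv))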

Let's demonstrate this by explaining the context.Context based cache.

Context-Level Cache

The ContextLevel option configures the driver to work with a context.Context level cache. The context is usually attached to a request (e.g. *http.Request) and is not available in multi-level mode. When this option is used as a cache store, the attached context.Context carries an LRU cache (can be configured differently), and the driver stores and searches entries in the LRU cache when queries are executed.

This option is ideal for applications that require strong consistency, but still want to avoid executing duplicate database queries on the same request. For example, given the following GraphQL query:

query($ids: [ID!]!) {
  nodes(ids: $ids) {
    ... on User {
      id
      name
      todos {
        id
        owner {
          id
          name
        }
      }
    }
  }
}

A naive solution for resolving the above query will execute 1 query for getting N users, another N queries for getting the todos of each user, and a query for each todo item for getting its owner (read more about the N+1 Problem).

However, Ent provides a unique approach for resolving such queries (read more on the Ent website), and therefore only 3 queries will be executed in this case: 1 for getting N users, 1 for getting the todo items of all users, and 1 for getting the owners of all todo items.
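For reference, such a resolver typically relies on Ent's generated eager-loading API. The sketch below assumes a schema where User has a "todos" edge and Todo has an "owner" edge, matching the query above (the names are illustrative):

users, err := client.User.Query().
    Where(user.IDIn(ids...)).
    WithTodos(func(q *ent.TodoQuery) {
        q.WithOwner()
    }).
    All(ctx)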

With entcache, the number of queries may be reduced to 2, as the first and last queries are identical (see code example).

context-level-cache

The different levels are explained in depth in the repository README.

Getting Started

If you are not familiar with how to set up a new Ent project, complete the Ent Setting Up tutorial first.

First, go get the package using the following command.

go get ariga.io/entcache

After installing entcache, you can easily add it to your project with the snippet below:

// Open the database connection.
db, err := sql.Open(dialect.SQLite, "file:ent?mode=memory&cache=shared&_fk=1")
if err != nil {
    log.Fatal("opening database", err)
}
// Decorate the sql.Driver with entcache.Driver.
drv := entcache.NewDriver(db)
// Create an ent.Client.
client := ent.NewClient(ent.Driver(drv))

// Tell the entcache.Driver to skip the caching layer
// when running the schema migration.
if err := client.Schema.Create(entcache.Skip(ctx)); err != nil {
    log.Fatal("running schema migration", err)
}

// Run queries.
if _, err := client.User.Get(ctx, id); err != nil {
    log.Fatal("querying user", err)
}
// The identical query below is served from the cache.
if _, err := client.User.Get(ctx, id); err != nil {
    log.Fatal("querying user", err)
}

To see more advanced examples, head over to the repo's examples directory.

Wrapping Up

In this post, I presented entcache, a new cache driver for Ent that I developed while working on Ariga's Operational Data Graph query engine. We started the discussion by briefly mentioning the motivation for including caches in software systems. Following that, we described the features and capabilities of entcache, and concluded with a short example of how to set it up in your application.

There are a few features we are working on, and wish to work on, but need help from the community to design them properly (solving cache invalidation, anyone? ;)). If you are interested in contributing, reach out to me on the Ent Slack channel.


· 1 min read

A few months ago the Ent project announced the Schema Import Initiative, whose goal is to help support many use cases for generating Ent schemas from external resources. Today, I'm happy to share a project I’ve been working on: entimport - an importent (pun intended) command line tool designed to create Ent schemas from existing SQL databases. This is a feature that has been requested by the community for some time, so I hope many people find it useful. It can help ease the transition of an existing setup from another language or ORM to Ent. It can also help with use cases where you would like to access the same data from different platforms (such as automatically syncing between them).
The first version supports both MySQL and PostgreSQL databases, with some limitations described below. Support for other relational databases such as SQLite is in the works.

Getting Started

To give you an idea of how entimport works, I want to share a quick example of end-to-end usage with a MySQL database. On a high level, this is what we’re going to do:

  1. Create a Database and Schema - we want to show how entimport can generate an Ent schema for an existing database. We will first create a database, then define some tables in it that we can import into Ent.
  2. Initialize an Ent Project - we will use the Ent CLI to create the needed directory structure and an Ent schema generation script.
  3. Install entimport
  4. Run entimport against our demo database - next, we will import the database schema that we’ve created into our Ent project.
  5. Explain how to use Ent with our generated schemas.

Let's get started.

Create a Database

We’re going to start by creating a database. The way I prefer to do it is to use a Docker container. We will use a docker-compose file which will automatically pass all the needed parameters to the MySQL container.

Start the project in a new directory called entimport-example. Create a file named docker-compose.yaml and paste the following content inside:

version: "3.7"

services:

mysql8:
platform: linux/amd64
image: mysql
environment:
MYSQL_DATABASE: entimport
MYSQL_ROOT_PASSWORD: pass
healthcheck:
test: mysqladmin ping -ppass
ports:
- "3306:3306"

This file contains the service configuration for a MySQL docker container. Run it with the following command:

docker-compose up -d

Next, we will create a simple schema. For this example we will use a relation between two entities:

  • User
  • Car

Connect to the database using the MySQL shell; you can do it with the following command:

Make sure you run it from the root project directory

docker-compose exec mysql8 mysql --database=entimport -ppass
Inside the MySQL shell, create the users and cars tables:

create table users
(
    id        bigint auto_increment primary key,
    age       bigint not null,
    name      varchar(255) not null,
    last_name varchar(255) null comment 'surname'
);

create table cars
(
    id          bigint auto_increment primary key,
    model       varchar(255) not null,
    color       varchar(255) not null,
    engine_size mediumint not null,
    user_id     bigint null,
    constraint cars_owners foreign key (user_id) references users (id) on delete set null
);

Let's validate that we've created the tables mentioned above. In your MySQL shell, run:

show tables;
+---------------------+
| Tables_in_entimport |
+---------------------+
| cars |
| users |
+---------------------+

We should see two tables: users & cars.

Initialize Ent Project

Now that we've created our database and a baseline schema to demonstrate our example, we need to create a Go project with Ent. Since we would eventually like to use our imported schema, we need to create the Ent directory structure.

Initialize a new Go project inside a directory called entimport-example

go mod init entimport-example

Run Ent Init:

go run -mod=mod entgo.io/ent/cmd/ent init 

The project should look like this:

├── docker-compose.yaml
├── ent
│ ├── generate.go
│ └── schema
└── go.mod

Install entimport

OK, now the fun begins! We are finally ready to install entimport and see it in action.
Let’s start by running entimport:

go run -mod=mod ariga.io/entimport/cmd/entimport -h

entimport will be downloaded and the command will print:

Usage of entimport:
  -dialect string
        database dialect (default "mysql")
  -dsn string
        data source name (connection information)
  -schema-path string
        output path for ent schema (default "./ent/schema")
  -tables value
        comma-separated list of tables to inspect (all if empty)

Run entimport

We are now ready to import our MySQL schema to Ent!

We will do it with the following command:

This command will import all tables in our schema; you can also limit it to specific tables using the -tables flag.

go run ariga.io/entimport/cmd/entimport -dialect mysql -dsn "root:pass@tcp(localhost:3306)/entimport"
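For instance, limiting the import to just the two tables we created would look like this (an illustrative variation of the command above using the -tables flag):

go run ariga.io/entimport/cmd/entimport -dialect mysql -dsn "root:pass@tcp(localhost:3306)/entimport" -tables "users,cars"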

Like many Unix tools, entimport doesn't print anything on a successful run. To verify that it ran properly, we will check the file system, and more specifically the ent/schema directory.

├── docker-compose.yaml
├── ent
│ ├── generate.go
│ └── schema
│ ├── car.go
│ └── user.go
├── go.mod
└── go.sum

Let’s see what this gives us - remember that we had two tables, users and cars, with a one-to-many relationship between them. Let’s see how entimport performed.

entimport-example/ent/schema/user.go
type User struct {
    ent.Schema
}

func (User) Fields() []ent.Field {
    return []ent.Field{field.Int("id"), field.Int("age"), field.String("name"), field.String("last_name").Optional().Comment("surname")}
}
func (User) Edges() []ent.Edge {
    return []ent.Edge{edge.To("cars", Car.Type)}
}
func (User) Annotations() []schema.Annotation {
    return nil
}

entimport-example/ent/schema/car.go
type Car struct {
    ent.Schema
}

func (Car) Fields() []ent.Field {
    return []ent.Field{field.Int("id"), field.String("model"), field.String("color"), field.Int32("engine_size"), field.Int("user_id").Optional()}
}
func (Car) Edges() []ent.Edge {
    return []ent.Edge{edge.From("user", User.Type).Ref("cars").Unique().Field("user_id")}
}
func (Car) Annotations() []schema.Annotation {
    return nil
}

entimport successfully created entities and their relation!

So far so good. Now let’s actually try them out. First, we must run Ent's code generation, because Ent is a schema-first ORM that generates Go code for interacting with different databases.

To run the Ent code generation:

go generate ./ent

Let's see our ent directory:

...
├── ent
│ ├── car
│ │ ├── car.go
│ │ └── where.go
...
│ ├── schema
│ │ ├── car.go
│ │ └── user.go
...
│ ├── user
│ │ ├── user.go
│ │ └── where.go
...

Ent Example

Let’s run a quick example to verify that our schema works:

Create a file named example.go in the root of the project, with the following content:

This part of the example can be found here

entimport-example/example.go
package main

import (
    "context"
    "fmt"
    "log"

    "entimport-example/ent"

    "entgo.io/ent/dialect"
    _ "github.com/go-sql-driver/mysql"
)

func main() {
    client, err := ent.Open(dialect.MySQL, "root:pass@tcp(localhost:3306)/entimport?parseTime=True")
    if err != nil {
        log.Fatalf("failed opening connection to mysql: %v", err)
    }
    defer client.Close()
    ctx := context.Background()
    example(ctx, client)
}

Let's try to add a user; write the following code at the end of the file:

entimport-example/example.go
func example(ctx context.Context, client *ent.Client) {
    // Create a User.
    zeev := client.User.
        Create().
        SetAge(33).
        SetName("Zeev").
        SetLastName("Manilovich").
        SaveX(ctx)
    fmt.Println("User created:", zeev)
}

Then run:

go run example.go

This should output:

# User created: User(id=1, age=33, name=Zeev, last_name=Manilovich)

Let's check the database to see if the user was really added:

SELECT *
FROM users
WHERE name = 'Zeev';

+--+---+----+----------+
|id|age|name|last_name |
+--+---+----+----------+
|1 |33 |Zeev|Manilovich|
+--+---+----+----------+

Great! Now let's play a little more with Ent and add some relations; add the following code at the end of the example() func:

make sure you add "entimport-example/ent/user" to the import() declaration

entimport-example/example.go
// Create Car.
vw := client.Car.
    Create().
    SetModel("volkswagen").
    SetColor("blue").
    SetEngineSize(1400).
    SaveX(ctx)
fmt.Println("First car created:", vw)

// Update the user - add the car relation.
client.User.Update().Where(user.ID(zeev.ID)).AddCars(vw).SaveX(ctx)

// Query all cars that belong to the user.
cars := zeev.QueryCars().AllX(ctx)
fmt.Println("User cars:", cars)

// Create a second Car.
delorean := client.Car.
    Create().
    SetModel("delorean").
    SetColor("silver").
    SetEngineSize(9999).
    SaveX(ctx)
fmt.Println("Second car created:", delorean)

// Update the user - add another car relation.
client.User.Update().Where(user.ID(zeev.ID)).AddCars(delorean).SaveX(ctx)

// Traverse the sub-graph.
cars = delorean.
    QueryUser().
    QueryCars().
    AllX(ctx)
fmt.Println("User cars:", cars)

This part of the example can be found here

Now run go run example.go again.
After running the code above, the database should hold a user with 2 cars in an O2M relation.

SELECT *
FROM users;

+--+---+----+----------+
|id|age|name|last_name |
+--+---+----+----------+
|1 |33 |Zeev|Manilovich|
+--+---+----+----------+

SELECT *
FROM cars;

+--+----------+------+-----------+-------+
|id|model |color |engine_size|user_id|
+--+----------+------+-----------+-------+
|1 |volkswagen|blue |1400 |1 |
|2 |delorean |silver|9999 |1 |
+--+----------+------+-----------+-------+

Syncing DB changes

Since we want to keep the database in sync, we want entimport to be able to update the generated schema after the database is changed. Let's see how it works.

Run the following SQL code to add a phone column with a unique index to the users table:

alter table users
add phone varchar(255) null;

create unique index users_phone_uindex
on users (phone);

The table should look like this:

describe users;
+-----------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+--------------+------+-----+---------+----------------+
| id | bigint | NO | PRI | NULL | auto_increment |
| age | bigint | NO | | NULL | |
| name | varchar(255) | NO | | NULL | |
| last_name | varchar(255) | YES | | NULL | |
| phone | varchar(255) | YES | UNI | NULL | |
+-----------+--------------+------+-----+---------+----------------+

Now let's run entimport again to get the latest schema from our database:

go run -mod=mod ariga.io/entimport/cmd/entimport -dialect mysql -dsn "root:pass@tcp(localhost:3306)/entimport"

We can see that the user.go file was changed:

entimport-example/ent/schema/user.go
func (User) Fields() []ent.Field {
return []ent.Field{field.Int("id"), ..., field.String("phone").Optional().Unique()}
}

Now we can run go generate ./ent again and use the new schema to add a phone to the User entity.
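For example, after regenerating, the new field should be usable through the generated setter, roughly like this (a sketch appended to our example() func; the SetPhone name follows Ent's convention for a "phone" field, and the number is made up):

// Set the new phone field on the existing user.
zeev = client.User.
    UpdateOne(zeev).
    SetPhone("0000000000").
    SaveX(ctx)
fmt.Println("User updated:", zeev)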

Future Plans

As mentioned above, this initial version supports MySQL and PostgreSQL databases. It also supports all types of SQL relations. I have plans to further upgrade the tool and add features such as missing PostgreSQL fields, default values, and more.

Wrapping Up

In this post, I presented entimport, a tool that was anticipated and requested many times by the Ent community. I showed an example of how to use it with Ent. This tool is another addition to the Ent schema import tools, which are designed to make the integration of Ent even easier. For discussion and support, open an issue. The full example can be found here. I hope you found this blog post useful!


· 1 min read

In a previous blogpost, we presented elk - an extension to Ent enabling you to generate a fully-working Go CRUD HTTP API from your schema. In today's post I'd like to introduce a shiny new feature that recently made it into elk: a fully compliant OpenAPI Specification (OAS) generator.

OAS (formerly known as Swagger Specification) is a technical specification defining a standard, language-agnostic interface description for REST APIs. This allows both humans and automated tools to understand the described service without the actual source code or additional documentation. Combined with the Swagger Tooling you can generate both server and client boilerplate code for more than 20 languages, just by passing in the OAS file.

Getting Started

The first step is to add the elk package to your project:

go get github.com/masseelch/elk@latest

elk uses the Ent Extension API to integrate with Ent’s code-generation. This requires that we use the entc (ent codegen) package as described here to generate code for our project. Follow the next two steps to enable it and to configure Ent to work with the elk extension:

1. Create a new Go file named ent/entc.go and paste the following content:

// +build ignore

package main

import (
    "log"

    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
    "github.com/masseelch/elk"
)

func main() {
    ex, err := elk.NewExtension(
        elk.GenerateSpec("openapi.json"),
    )
    if err != nil {
        log.Fatalf("creating elk extension: %v", err)
    }
    err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
    if err != nil {
        log.Fatalf("running ent codegen: %v", err)
    }
}

2. Edit the ent/generate.go file to execute the ent/entc.go file:

package ent

//go:generate go run -mod=mod entc.go

With these steps complete, all is set up for generating an OAS file from your schema! If you are new to Ent and want to learn more about it, how to connect to different types of databases, run migrations or work with entities, then head over to the Setup Tutorial.

Generate an OAS file

The first step on our way to the OAS file is to create an Ent schema graph:

go run -mod=mod entgo.io/ent/cmd/ent init Fridge Compartment Item

To demonstrate elk's OAS generation capabilities, we will build together an example application. Suppose I have multiple fridges with multiple compartments, and my significant-other and I want to know its contents at all times. To supply ourselves with this incredibly useful information we will create a Go server with a RESTful API. To ease the creation of client applications that can communicate with our server, we will create an OpenAPI Specification file describing its API. Once we have that, we can build a frontend to manage fridges and contents in a language of our choice by using the Swagger Codegen! You can find an example that uses docker to generate a client here.

Let's create our schema:

ent/schema/fridge.go
package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/edge"
    "entgo.io/ent/schema/field"
)

// Fridge holds the schema definition for the Fridge entity.
type Fridge struct {
    ent.Schema
}

// Fields of the Fridge.
func (Fridge) Fields() []ent.Field {
    return []ent.Field{
        field.String("title"),
    }
}

// Edges of the Fridge.
func (Fridge) Edges() []ent.Edge {
    return []ent.Edge{
        edge.To("compartments", Compartment.Type),
    }
}
ent/schema/compartment.go
package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/edge"
    "entgo.io/ent/schema/field"
)

// Compartment holds the schema definition for the Compartment entity.
type Compartment struct {
    ent.Schema
}

// Fields of the Compartment.
func (Compartment) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
    }
}

// Edges of the Compartment.
func (Compartment) Edges() []ent.Edge {
    return []ent.Edge{
        edge.From("fridge", Fridge.Type).
            Ref("compartments").
            Unique(),
        edge.To("contents", Item.Type),
    }
}
ent/schema/item.go
package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/edge"
    "entgo.io/ent/schema/field"
)

// Item holds the schema definition for the Item entity.
type Item struct {
    ent.Schema
}

// Fields of the Item.
func (Item) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
    }
}

// Edges of the Item.
func (Item) Edges() []ent.Edge {
    return []ent.Edge{
        edge.From("compartment", Compartment.Type).
            Ref("contents").
            Unique(),
    }
}

Now, let's generate the Ent code and the OAS file.

go generate ./...

In addition to the files Ent normally generates, another file named openapi.json has been created. Copy its contents and paste them into the Swagger Editor. You should see three groups: Compartment, Item and Fridge.

Swagger Editor Example

If you happen to open up the POST operation tab in the Fridge group, you see a description of the expected request data and all the possible responses. Great!

POST operation on Fridge

Basic Configuration

The description of our API does not yet reflect what it does; let's change that! elk provides easy-to-use configuration builders to manipulate the generated OAS file. Open up ent/entc.go and pass in the updated title and description of our Fridge API:

ent/entc.go
//go:build ignore
// +build ignore

package main

import (
    "log"

    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
    "github.com/masseelch/elk"
)

func main() {
    ex, err := elk.NewExtension(
        elk.GenerateSpec(
            "openapi.json",
            // It is a Content-Management-System ...
            elk.SpecTitle("Fridge CMS"),
            // You can use CommonMark syntax (https://commonmark.org/).
            elk.SpecDescription("API to manage fridges and their cooled contents. **ICY!**"),
            elk.SpecVersion("0.0.1"),
        ),
    )
    if err != nil {
        log.Fatalf("creating elk extension: %v", err)
    }
    err = entc.Generate("./schema", &gen.Config{}, entc.Extensions(ex))
    if err != nil {
        log.Fatalf("running ent codegen: %v", err)
    }
}

Rerunning the code generator will create an updated OAS file you can copy-paste into the Swagger Editor.

Updated API Info

Operation configuration

We do not want to expose endpoints to delete a fridge (seriously, who would ever want that?!). Fortunately, elk lets us configure which endpoints to generate and which to ignore. elk's default policy is to expose all routes. You can either change this behaviour to expose no routes except those explicitly asked for, or you can just tell elk to exclude the DELETE operation on the Fridge by using an elk.SchemaAnnotation:

ent/schema/fridge.go
// Annotations of the Fridge.
func (Fridge) Annotations() []schema.Annotation {
    return []schema.Annotation{
        elk.DeletePolicy(elk.Exclude),
    }
}

And voilà! The DELETE operation is gone.

DELETE operation is gone

For more information about how elk's policies work and what you can do with them, have a look at the godoc.

Extend specification

The thing I am most interested in, in this example, is the current contents of a fridge. You can customize the generated OAS to any extent you like by using Hooks. However, this would exceed the scope of this post. An example of how to add an endpoint fridges/{id}/contents to the generated OAS file can be found here.

Generating an OAS-implementing server

I promised you in the beginning that we'd create a server behaving as described in the OAS. elk makes this easy; all you have to do is call elk.GenerateHandlers() when you configure the extension:

ent/entc.go
[...]
func main() {
ex, err := elk.NewExtension(
elk.GenerateSpec(
[...]
),
+ elk.GenerateHandlers(),
)
[...]
}

Next, re-run code generation:

go generate ./...

Observe that a new directory named ent/http was created.

» tree ent/http
ent/http
├── create.go
├── delete.go
├── easyjson.go
├── handler.go
├── list.go
├── read.go
├── relations.go
├── request.go
├── response.go
└── update.go

0 directories, 10 files

You can spin up the generated server with this very simple main.go:

package main

import (
    "context"
    "log"
    "net/http"

    "<your-project>/ent"
    elk "<your-project>/ent/http"

    _ "github.com/mattn/go-sqlite3"
    "go.uber.org/zap"
)

func main() {
    // Create the ent client.
    c, err := ent.Open("sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
    if err != nil {
        log.Fatalf("failed opening connection to sqlite: %v", err)
    }
    defer c.Close()
    // Run the auto migration tool.
    if err := c.Schema.Create(context.Background()); err != nil {
        log.Fatalf("failed creating schema resources: %v", err)
    }
    // Start listening to incoming requests.
    if err := http.ListenAndServe(":8080", elk.NewHandler(c, zap.NewExample())); err != nil {
        log.Fatal(err)
    }
}

Then start the server:

go run -mod=mod main.go

Our Fridge API server is up and running. With the generated OAS file and the Swagger Tooling you can now generate a client stub in any supported language and forget about writing a RESTful client ever again.
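For example, a TypeScript client could be produced with the Swagger Codegen CLI Docker image (the image name, target language, and paths below are illustrative assumptions; see the docker-based example linked earlier for the approach used in the elk repository):

docker run --rm -v "$PWD:/local" swaggerapi/swagger-codegen-cli-v3 generate \
  -i /local/openapi.json \
  -l typescript-fetch \
  -o /local/client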

Wrapping Up

In this post we introduced a new feature of elk - automatic OpenAPI Specification generation. This feature connects between Ent's code-generation capabilities and OpenAPI/Swagger's rich tooling ecosystem.

Have questions? Need help with getting started? Feel free to join our Slack channel.


· 1 min read

A few months ago, Ariel made a silent but highly-impactful contribution to Ent's core, the Extension API. While Ent has had extension capabilities (such as Code-gen Hooks, External Templates, and Annotations) for a long time, there wasn't a convenient way to bundle together all of these moving parts into a coherent, self-contained component. The Extension API which we discuss in the post does exactly that.

Many open-source ecosystems thrive specifically because they excel at providing developers an easy and structured way to extend a small, core system. Much criticism has been made of the Node.js ecosystem (even by its original creator Ryan Dahl) but it is very hard to argue that the ease of publishing and consuming new npm modules facilitated the explosion in its popularity. I've discussed on my personal blog how protoc's plugin system works and how that made the Protobuf ecosystem thrive. In short, ecosystems are only created under modular designs.

In our post today, we will explore Ent's Extension API by building a toy example.

Getting Started

The Extension API only works for projects that use Ent's code-generation as a Go package. To set that up, after initializing your project, create a new file named ent/entc.go:

ent/entc.go
// +build ignore

package main

import (
    "log"

    "entgo.io/ent/entc"
    "entgo.io/ent/entc/gen"
)

func main() {
    err := entc.Generate("./schema", &gen.Config{})
    if err != nil {
        log.Fatal("running ent codegen:", err)
    }
}

Next, modify ent/generate.go to invoke our entc file:

ent/generate.go
package ent

//go:generate go run entc.go

Creating our Extension

All extensions must implement the Extension interface:

type Extension interface {
    // Hooks holds an optional list of Hooks to apply
    // on the graph before/after the code-generation.
    Hooks() []gen.Hook
    // Annotations injects global annotations to the gen.Config object that
    // can be accessed globally in all templates. Unlike schema annotations,
    // being serializable to JSON raw value is not mandatory.
    //
    // {{- with $.Config.Annotations.GQL }}
    // {{/* Annotation usage goes here. */}}
    // {{- end }}
    //
    Annotations() []Annotation
    // Templates specifies a list of alternative templates
    // to execute or to override the default.
    Templates() []*gen.Template
    // Options specifies a list of entc.Options to evaluate on
    // the gen.Config before executing the code generation.
    Options() []Option
}

To simplify the development of new extensions, developers can embed entc.DefaultExtension to create extensions without implementing all methods. In entc.go, add:

ent/entc.go
// ...

// GreetExtension implements entc.Extension.
type GreetExtension struct {
    entc.DefaultExtension
}

Currently, our extension doesn't do anything. Next, let's connect it to our code-generation config. In entc.go, add our new extension to the entc.Generate invocation:

err := entc.Generate("./schema", &gen.Config{}, entc.Extensions(&GreetExtension{}))

Adding Templates

External templates can be bundled into extensions to enhance Ent's core code-generation functionality. With our toy example, our goal is to add to each entity a generated method named Greet that returns a greeting with the type's name when invoked. We're aiming for something like:

func (u *User) Greet() string {
    return "Greetings, User"
}

To do this, let's add a new external template file and place it in ent/templates/greet.tmpl:

ent/templates/greet.tmpl
{{ define "greet" }}

{{/* Add the base header for the generated file */}}
{{ $pkg := base $.Config.Package }}
{{ template "header" $ }}

{{/* Loop over all nodes and add the Greet method */}}
{{ range $n := $.Nodes }}
{{ $receiver := $n.Receiver }}
func ({{ $receiver }} *{{ $n.Name }}) Greet() string {
return "Greetings, {{ $n.Name }}"
}
{{ end }}
{{ end }}

Next, let's implement the Templates method:

ent/entc.go
func (*GreetExtension) Templates() []*gen.Template {
    return []*gen.Template{
        gen.MustParse(gen.NewTemplate("greet").ParseFiles("templates/greet.tmpl")),
    }
}

Next, let's kick the tires on our extension. Add a new schema for the User type in a file named ent/schema/user.go:

package schema

import (
"entgo.io/ent"
"entgo.io/ent/schema/field"
)

// User holds the schema definition for the User entity.
type User struct {
    ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
    return []ent.Field{
        field.String("email_address").
            Unique(),
    }
}

Next, run:

go generate ./...

Observe that a new file, ent/greet.go, was created. It contains:

ent/greet.go
// Code generated by entc, DO NOT EDIT.

package ent

func (u *User) Greet() string {
    return "Greetings, User"
}

Great! Our extension was invoked from Ent's code-generation and produced the code we wanted for our schema!

Adding Annotations

Annotations provide a way to supply users of our extension with an API to modify the behavior of code generation logic. To add annotations to our extension, implement the Annotations method. Suppose that for our GreetExtension we want to provide users with the ability to configure the greeting word in the generated code:

// GreetingWord implements entc.Annotation
type GreetingWord string

func (GreetingWord) Name() string {
return "GreetingWord"
}

Next, we add a word field to our GreetExtension struct:

type GreetExtension struct {
    entc.DefaultExtension
    Word GreetingWord
}

Next, implement the Annotations method:

func (s *GreetExtension) Annotations() []entc.Annotation {
    return []entc.Annotation{
        s.Word,
    }
}

Now, from within your templates you can access the GreetingWord annotation. Modify ent/templates/greet.tmpl to use our new annotation:

func ({{ $receiver }} *{{ $n.Name }}) Greet() string {
    return "{{ $.Annotations.GreetingWord }}, {{ $n.Name }}"
}

Next, modify the code-generation configuration to set the GreetingWord annotation:

"ent/entc.go
err := entc.Generate("./schema",
&gen.Config{},
entc.Extensions(&GreetExtension{
Word: GreetingWord("Shalom"),
}),
)

To see our annotation control the generated code, re-run:

go generate ./...

Finally, observe that the generated ent/greet.go was updated:

func (u *User) Greet() string {
    return "Shalom, User"
}

Hooray! We added an option to use an annotation to control the greeting word in the generated Greet method!

More Possibilities

In addition to templates and annotations, the Extension API allows developers to bundle gen.Hooks and entc.Options in extensions to further control the behavior of your code-generation. In this post we will not discuss these possibilities, but if you are interested in using them head over to the documentation.
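To give a flavor, a gen.Hook can wrap the code generation to run logic before or after it executes. Here is a minimal sketch of adding such a hook to our GreetExtension (the log statement is purely illustrative):

func (s *GreetExtension) Hooks() []gen.Hook {
    return []gen.Hook{
        func(next gen.Generator) gen.Generator {
            return gen.GenerateFunc(func(g *gen.Graph) error {
                // Runs before the default generator; g holds the schema graph.
                log.Printf("greet extension: generating code for %d schemas", len(g.Nodes))
                return next.Generate(g)
            })
        },
    }
}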

Wrapping Up

In this post we explored, via a toy example, how to use the Extension API to create new Ent code-generation extensions. As we've mentioned above, a modular design that allows anyone to extend the core functionality of software is critical to the success of any ecosystem. We're seeing this start to materialize in the Ent community; here's a list of some interesting projects that use the Extension API:

  • elk - an extension to generate REST endpoints from Ent schemas.
  • entgql - generate GraphQL servers from Ent schemas.
  • entviz - generate ER diagrams from Ent schemas.

And what about you? Do you have an idea for a useful Ent extension? I hope this post demonstrated that with the new Extension API, it is not a difficult task.

Have questions? Need help with getting started? Feel free to join our Slack channel.


· 1 min read

Dear community,

I’m really happy to share something that has been in the works for quite some time. Yesterday (August 31st), a press release was issued announcing that Ent is joining the Linux Foundation.

Ent was open-sourced while I was working on it with my peers at Facebook in 2019. Since then, our community has grown, and we’ve seen the adoption of Ent explode across many organizations of different sizes and sectors.

Our goal with moving under the governance of the Linux Foundation is to provide a corporate-neutral environment in which organizations can more easily contribute code, as we’ve seen with other successful OSS projects such as Kubernetes and GraphQL. In addition, the move under the governance of the Linux Foundation positions Ent where we would like it to be, a core, infrastructure technology that organizations can trust because it is guaranteed to be here for a long time.

In terms of our community, nothing in particular changes: the repository already moved to github.com/ent/ent a few months ago, the license remains Apache 2.0, and we are all 100% committed to the success of the project. We’re sure that the Linux Foundation’s strong brand and organizational capabilities will help build even more confidence in Ent and further foster its adoption in the industry.

I wanted to express my deep gratitude to the amazing folks at Facebook and the Linux Foundation that have worked hard on making this change possible and showing trust in our community to keep pushing the state-of-the-art in data access frameworks. This is a big achievement for our community, and so I want to take a moment to thank all of you for your contributions, support, and trust in this project.

On a personal note, I wanted to share that Rotem (a core contributor to Ent) and I have founded a new company, Ariga. We’re on a mission to build something that we call an “operational data graph” that is heavily built using Ent, we will be sharing more details on that in the near future. You can expect to see many new exciting features contributed to the framework by our team. In addition, Ariga employees will dedicate time and resources to support and foster this wonderful community.

If you have any questions about this change or have any ideas on how to make it even better, please don’t hesitate to reach out to me on our Slack channel.

Ariel ❤️